Understand Spine Leaf Network Architecture: Packet Networking Summer Camp Q&A

Document created by hausmus Employee on Aug 6, 2018

On July 24th, Wayne Hickey captained the Ground Control: Understand Spine Leaf Network Architecture webinar, a beginner's guide to Spine Leaf, as part of the Packet Networking Summer Camp - Space Camp series.

 

The following is a summary of the questions received during the webinar and answered by Wayne to help you learn even more about the latest trend in network topology. We welcome your additional questions in the comments below (login or register), and we invite you to register and watch the webinar here.

 

 

What security options are available for Data Centers?

There are several options available for security. The two most popular are MACsec (L2) encryption, which encrypts the payload and TCP header, and L1 encryption at the physical layer, which additionally encrypts the MAC header. In some very secure environments, L0 (optical) encryption is used to protect all message payload and addressing information.


What is Data Center Interconnect (DCI), and how does it relate to Spine Leaf?

DCI is when two or more data centers are physically connected so they can work together, share resources, and pass workloads to each other. An example of DCI is our Waveserver Ai, where large data centers are optically connected using 100G-400G coherent DWDM wavelengths.

 

What is a Hyperscale datacenter?

Hyperscale data centers (HDCs) have a minimum of 5,000 servers and 10,000 square feet of space. There are approximately 400 HDCs worldwide, of which 44% operate in the US. The largest operators include Google, Amazon, IBM, Oracle, SAP, and Tencent. To give some perspective, the rapidly growing HDC market reached $75B in 2017, and the top five accounted for approximately 70% of the $13B spent in Q4 FY17, according to a recent online Synergy Research Group report.

 

What do you mean by east to west flow?

It refers to the direction of traffic flow: most machine-to-machine communication flows east-west, between machines in the data centers, rather than south-north toward the Core.

 

Traditionally I've heard that a flat network is bad, what makes the Leaf/Spine better or would it not be considered flat? I could have my terms mixed up.

These are different topics. Spine Leaf is a full mesh topology with two stages: at least two Leaf (stage 1) L2/L3 gateways are connected to an L3 Spine (stage 2). Both vertical (South-North) and horizontal (East-West) traffic flows are effective, as the number of hops is known. Also, L3 routing protocols can be used.
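
To make the full mesh concrete, here is a minimal sketch in Python (switch counts are hypothetical, not from the webinar) of a two-stage fabric in which every Leaf has an uplink to every Spine:

```python
# Minimal sketch of a two-stage Spine Leaf fabric (hypothetical sizes).
# Every Leaf connects to every Spine, forming a full mesh between stages.

SPINES = [f"spine-{i}" for i in range(1, 5)]   # stage 2: 4 Spine switches
LEAVES = [f"leaf-{i}" for i in range(1, 9)]    # stage 1: 8 Leaf L2/L3 gateways

# Build the full mesh: one link per (Leaf, Spine) pair.
links = {(leaf, spine) for leaf in LEAVES for spine in SPINES}

# Any Leaf can reach any other Leaf through any Spine, so East-West
# hop count is the same no matter which Spine a flow takes.
for spine in SPINES:
    assert all((leaf, spine) in links for leaf in LEAVES)

print(f"{len(LEAVES)} leaves x {len(SPINES)} spines = {len(links)} fabric links")
```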

 

A flat network removes all subnets and uses a single subnet for everything, with a redundant DHCP server for all WAN connections. The advantage of a flat network is simplicity: you can simply plug in anywhere. The disadvantages are loss of routing control and an increase in network noise. Depending on size, flat networks can make problems messy to figure out, whereas routed traffic affects only its own subnet and not the others.

 

I know this isn't a sales pitch, but what types of Ciena equipment would you visualize in a Leaf/Spine set up?
Ciena’s 8180 Coherent Network Platform and 5170 service aggregation switch are Spine Leaf capable.

 

What do you mean by Clos?

A Clos network is a multistage circuit-switching network developed by Charles Clos in 1952. In Spine Leaf, each server is three hops away from other servers: server 1 (the source) is connected to a Leaf, or stage 1 of the Clos, routed to stage 2 of the Clos (the Spine), and forwarded to stage 3 of the Clos, the destination Leaf, for server/device connection. A Clos can be expanded to a 5-stage Clos by virtualizing the 3-stage design, or by dividing the topology into clusters and adding another top spine layer, sometimes referred to as a super-spine layer.
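
A small sketch of the three-stage path described above (device names are hypothetical): traffic enters at a stage-1 Leaf, crosses a stage-2 Spine, and exits at a stage-3 Leaf, so every server-to-server flow traverses the same three switches.

```python
# Sketch of a server-to-server path through a 3-stage Clos (hypothetical names).
def clos_path(src_server, ingress_leaf, spine, egress_leaf, dst_server):
    """Stage 1 = ingress Leaf, stage 2 = Spine, stage 3 = egress Leaf."""
    return [src_server, ingress_leaf, spine, egress_leaf, dst_server]

path = clos_path("server-1", "leaf-1", "spine-2", "leaf-7", "server-42")
switch_hops = len(path) - 2  # switches crossed between the two servers
print(" -> ".join(path))
print(f"Switch hops between servers: {switch_hops}")  # always 3 in a 3-stage Clos
```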

 

Is Leaf Spine appropriate for connecting a proprietary DC to a hyper-scale environment like Amazon, Microsoft, and Google?
Data centers outside of the hyperscale environment use the Core (Internet/Cloud) to connect to that environment.

 

Is there a hybrid approach between 3 tier and Leaf Spine to enable North South traffic to integrate into east west data flows?

Traffic between a 3-Tier network and a Spine Leaf network flows through the Core.

 

Will OTN be used to support 100G (+) switching via long haul networks?
OTN can be used to support 100Gbps traffic and higher-speed connections. Coherent DWDM Packet Optical can be used as well.

 

Why do we need a Spine switch?
A Spine switch connects the Leaf switches and the Core, and provides L3 routing of traffic flows between data centers.

 

Would there be an advantage to connecting the Leaf switches/routers for further redundancy? Same with the Spine level?
No; every Leaf is connected to every Spine, providing a full mesh. The same answer applies at the Spine level.

 

Are there ever situations where Spine switches are linked directly to each other?
Not really, as Spine Leaf is a full mesh. However, Spine connections to the Core may warrant MC-LAG or some other chassis-to-chassis protocol, though typically they don't.

 

What's the trend in connecting L2 data centers (OTV versus VXLAN or TRILL)?
VXLAN is an overlay (encapsulation) that extends L2 over L3 and scales beyond the 4096 L2 VLAN limit; OTV is a Cisco-proprietary protocol that runs over any IP transport; TRILL is used by Cisco's FabricPath to overcome the limits of STP in the data center. Each has its advantages and disadvantages. The latest trend is Segment Routing, as it can provide strict network performance guarantees and use resources efficiently.
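
For a sense of the scale difference VXLAN brings, a quick back-of-the-envelope sketch: the 802.1Q VLAN ID is 12 bits, while the VXLAN Network Identifier (VNI) is 24 bits.

```python
# Why VXLAN scales beyond classic VLANs: segment-ID width comparison.
VLAN_ID_BITS = 12    # 802.1Q VLAN ID
VXLAN_VNI_BITS = 24  # VXLAN Network Identifier

vlan_segments = 2 ** VLAN_ID_BITS     # 4,096
vxlan_segments = 2 ** VXLAN_VNI_BITS  # 16,777,216

print(f"802.1Q VLANs: {vlan_segments:,}")
print(f"VXLAN VNIs:   {vxlan_segments:,}")
print(f"Scale factor: {vxlan_segments // vlan_segments:,}x")
```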

 

In the example given, I assume this is currently the total capacity possible on the Leaf switches - 800G? Is there an upper limit on Leaf and Spine switch architectures?
Upper limits are determined by the number of Leaf uplink ports and Spine downlink ports. In the example there are 4x 100G uplinks, meaning only 400G of traffic can be sent to the Spine layer (100G to each of 4 Spine switches). With 32x 100G Spine downlinks, up to 32 Leaf switches can be added.
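
Worked through as a quick sketch using the numbers from the example above:

```python
# Fabric sizing arithmetic from the example above.
leaf_uplinks = 4          # uplink ports per Leaf
uplink_speed_gbps = 100   # each uplink is 100G
spine_downlinks = 32      # downlink ports per Spine

# Each Leaf can send at most uplinks x speed toward the Spine layer.
leaf_uplink_capacity = leaf_uplinks * uplink_speed_gbps   # 400G
# Each Leaf needs one uplink per Spine, so 4 uplinks -> at most 4 Spines.
max_spines = leaf_uplinks
# Each Spine needs one downlink per Leaf, so 32 downlinks -> at most 32 Leaves.
max_leaves = spine_downlinks

print(f"Per-Leaf capacity toward Spine: {leaf_uplink_capacity}G")
print(f"Max Spine switches: {max_spines}, max Leaf switches: {max_leaves}")
```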

 

Can you please compare the three-layer architecture with Leaf and Spine?
3-Tier has three layers (Access, Aggregation/Distribution, and Core), while Spine Leaf has two layers (Spine and Leaf). Being pod-based, 3-Tier traffic flows and scales South-North; when capacity is built out, a new pod is created, which does not scale well for East-West traffic. Spine Leaf is mesh-based and suits both North-South and East-West traffic flows, as all traffic flows through the Spine layer with the same number of hops. Latency can be a concern for 3-Tier, as the number of hops can differ and traffic can take different routes; for Spine Leaf, latency is deterministic with the same number of hops.

 

What happens to the data after the Spine switch (coming from the Leaf switches and servers)?

Spine switches also connect to the core, allowing traffic to flow outside the Data Center.

 

Is in-band or out-of-band network management design preferred with Leaf-Spine design?
Out-of-band management requires more cabling but can be more secure, using trust boundaries independent of the in-band network. In-band requires upfront configuration and setup. Both can be used, or combined, to suit your requirements. For some, the preference is both, enabling the highest level of access; others choose out-of-band for security; and some avoid the additional cabling effort by managing in-band.

 

What model Ciena type equipment is used at Spine, Leaf?
Ciena’s 8180 and 5170 are Spine Leaf capable.

 

What typical fiber distances are seen between Spine and Leaf switch for DC infrastructures?
Greater than 80km.

 

What is the typical oversubscription ratio at Spine switch?
Spine switches are not oversubscribed; Leaf switches are, typically at 4:1, but it depends on traffic volume and your choice/cost of Leaf switch uplink versus downlink (device connection) capacity.
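
As a worked example (port counts are hypothetical, not from the webinar), the ratio is simply server-facing capacity divided by uplink capacity:

```python
# Leaf oversubscription = downlink (server-facing) capacity / uplink capacity.
# Port counts below are hypothetical, for illustration only.
def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    return (down_ports * down_gbps) / (up_ports * up_gbps)

print(oversubscription(48, 25, 4, 100))  # 1200G / 400G = 3.0 -> 3:1
print(oversubscription(64, 25, 4, 100))  # 1600G / 400G = 4.0 -> 4:1
```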

 

Is it safe to say that the reason Spine Leaf architecture is an option today is because of the availability of OTN networks as opposed to more traditional network speeds?
Spine Leaf architecture is primarily driven by hyperscale data center and cloud traffic volumes.

 

Should all Spine devices connect to the core?
Yes, having core access provides alternative core routes as well as scale to the core.
