
Scaling Cisco Unified Computing System (UCS)

Posted on Saturday, June 27, 2009 | 4 Comments

So Rodos, how much can I scale my Cisco Unified Computing System (UCS)? Great question. For those with a short attention span, scroll to the table at the end; otherwise keep reading.


UCS is built for scale; when you look at the numbers it's impressive. Yet the devil is in the detail when it comes to scaling it out.

At first glance you can look at the datasheet for the Fabric Interconnects and come up with some figures. There are the 6120 and the 6140 with 20 and 40 fixed ports respectively, so with a pair of them for redundancy and a single link from each chassis you could run 20 or 40 chassis, each holding 8 B200 blades. Thinking this way is theoretically right, but that's not going to be a real world case.

Let's dive into what you would really need to do to hook your UCS environment together, and do some real world calculations.

First, there are five different interfacing requirements that need to be provisioned for:
  • Some Ethernet uplinks into the rest of the datacenter
  • Some Ethernet downlinks to the chassis
  • Some Fibre Channel links toward your Storage Fabric
  • The Ethernet link for the management system
  • The Ethernet links for the high availability of the UCS managers
The following picture shows where we can take each of these from.

[Image: the five connection points on the 6100 Fabric Interconnect]
Let's look at each one in turn.

One. Cluster Ports
There are 4 ports here. Two of these are the dual 10/100/1000 Ethernet clustering ports, which are used for connecting two 6120/40s together; they carry the sync and heartbeat traffic. You direct connect these with a standard Ethernet cable. The other two ports are reserved for future use. All of these ports are dedicated and you cannot use them for any other purpose.

Two. Management port.
This is a dedicated 10/100/1000-Mbps Ethernet management port for out-of-band management.

Three & Four. SFP+ ports
The SFP+ ports take a number of cable types (copper or fiber) of varying lengths. They may be used to connect to the 2100 Fabric Extender (FeX) modules inside the 5100 chassis (which contains the blades). They may also be used to connect up to your data center switching core or aggregation point. We are going to come back to these two in some more detail.

Five. Expansion modules.
The expansion modules are used to provide further external connectivity. There are three types available.
  • Ethernet module that provides 6 ports of 10 Gigabit Ethernet using the SFP+ interface
  • Fibre Channel plus Ethernet module that provides 4 ports of 10 Gigabit Ethernet using the SFP+ interface; and 4 ports of 1/2/4-Gbps native Fibre Channel connectivity using the SFP interface
  • Fibre Channel module that provides 8 ports of 1/2/4-Gbps native Fibre Channel using the SFP interface for transparent connectivity with existing Fibre Channel networks
Most people are probably going to go with the 8 port FC one.

Okay, now that we have gotten all of that background out of the way (this is turning into a Chad diatribe post!) we can get to the interesting bit.

To provide bandwidth and redundancy you are going to consume ports.

If we go back to those uplinks to your aggregation switches, say a pair of Nexus 7000s, you are at least going to need some redundancy and bandwidth. As most of the switching will occur in the 6100s, you probably don't need a massive amount of bandwidth out. I think a safe bet initially is two 10G links out of each 6100, or at a pinch one out of each.

The real issue is the links from the 2100 FeX units in the 5100 chassis back up to the 6100s. Here is what they look like.

[Image: a 2100 Fabric Extender (FeX) module]
Here is where they sit in the back of the 5100 chassis.

[Image: the two FeX modules in the back of the 5100 chassis]
Now you are going to have two FeX for redundancy. That means you are going to consume a minimum of one port on each 6100. But is that enough? If a FeX (or its link) were to fail, you would be down to 10G of bandwidth, and all of the storage and networking traffic for all eight blades would be going over that single link. Also remember there is NO internal switching inside the chassis; all inter-blade traffic has to go up to the 6100 to be switched. Therefore I think the real world approach is to provision two ports from each FeX, which halves the number of chassis you can connect into each 6100.
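
To put some rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The 8 blades per chassis and 10Gb per link figures are from above; the function and its name are just mine for illustration.

```python
# Back-of-the-envelope bandwidth per blade for a single chassis.
BLADES_PER_CHASSIS = 8   # B200 blades in a 5100 chassis
LINK_GBPS = 10           # each FeX-to-6100 link is 10Gb

def gbps_per_blade(links_per_fex, fex_in_service=2):
    """Aggregate chassis uplink bandwidth shared across all eight blades."""
    total = links_per_fex * LINK_GBPS * fex_in_service
    return total / BLADES_PER_CHASSIS

# One link per FeX: 2.5Gb per blade, dropping to 1.25Gb if a FeX (or its link) fails.
print(gbps_per_blade(1), gbps_per_blade(1, fex_in_service=1))
# Two links per FeX: 5Gb per blade, still 2.5Gb per blade after a failure.
print(gbps_per_blade(2), gbps_per_blade(2, fex_in_service=1))
```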

So here is a table that does some calculations based on how many uplink ports you want to your aggregation switches and how many ports you want to run from each chassis. It also shows how many blades this would give you and how many racks you would consume, given two chassis per rack (you are going to need a lot of power if you go more than two).

[Table: chassis, blade and rack counts for the 6120 and 6140 by uplink count and links per chassis]
You can see that with 4 ports from each chassis and 4 links to the aggregation switches you are looking at either 9 (on the 6120) or 19 (on the 6140) chassis. That's either 72 or 152 blades, which is a LOT.
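
If you want to play with the numbers yourself, here is a rough Python sketch of the maths behind the table. The assumptions are mine: uplinks and chassis links are split evenly across the two Fabric Interconnects, they all come out of the fixed ports, and the Fibre Channel connectivity sits on the expansion module so it doesn't consume any fixed ports.

```python
BLADES_PER_CHASSIS = 8
CHASSIS_PER_RACK = 2

def ucs_scale(fi_ports, uplinks_total, links_per_chassis):
    """Chassis/blade/rack/cable counts for a redundant pair of Fabric Interconnects.

    fi_ports          -- fixed ports on each FI (20 for a 6120, 40 for a 6140)
    uplinks_total     -- 10G uplinks to the aggregation layer across both FIs
    links_per_chassis -- FeX links per chassis across both FIs (2 or 4)
    """
    ports_left = fi_ports - uplinks_total // 2         # fixed ports left on each FI
    chassis = ports_left // (links_per_chassis // 2)   # each chassis uses half its links per FI
    blades = chassis * BLADES_PER_CHASSIS
    racks = -(-chassis // CHASSIS_PER_RACK)             # ceiling division
    cables = chassis * links_per_chassis + uplinks_total
    return chassis, blades, racks, cables

print(ucs_scale(20, uplinks_total=4, links_per_chassis=4))  # -> (9, 72, 5, 40)
print(ucs_scale(40, uplinks_total=4, links_per_chassis=4))  # -> (19, 152, 10, 80)
```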

Out of interest I did some quick calculations on the number of racks and possible VMs. If you put 48G of RAM in each blade, which is optimal price wise, you could safely estimate 48 VMs per blade for a low RAM environment (1G per VM and a vCPU-to-core ratio of 6:1) or 24 VMs per blade for a high RAM environment (2G per VM and a ratio of 3:1).

So for a 6120 a realistic figure is five racks housing 9 chassis, 72 blades and somewhere between 1,700 and 3,400 VMs. That's not bad for a total of 40 cables!
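
And here is the VM maths as the same sort of sketch. The 8 cores per blade (two quad-core sockets in a B200) and single-vCPU VMs are my assumptions; the RAM sizes and consolidation ratios are the ones above.

```python
# Rough VM-count check for the 9 chassis / 72 blade 6120 case.
CORES_PER_BLADE = 8       # assumed: two quad-core sockets per B200
RAM_PER_BLADE_GB = 48

def vms_per_blade(ram_per_vm_gb, vcpu_per_core):
    # Whichever of RAM or CPU runs out first caps the count (single-vCPU VMs assumed).
    return min(RAM_PER_BLADE_GB // ram_per_vm_gb, CORES_PER_BLADE * vcpu_per_core)

blades = 72
print(blades * vms_per_blade(ram_per_vm_gb=1, vcpu_per_core=6))  # 3456, the ~3,400 figure
print(blades * vms_per_blade(ram_per_vm_gb=2, vcpu_per_core=3))  # 1728, the ~1,700 figure
```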

Of course, playing around with things, there are a few ways of tweaking at the edges of this, but I think you get the idea.

Rodos

Comments:4

  1. great post, Rodos - I'm on the UCS training next week so I expect to be able to join in this discussion more.

    I think there is an interesting point about redundant capacity here, when you "lose" a FeX port to the 6100 - I need a diagram and a capacity plan to see the impact on the design, but I get your point.

    Great post! Keep 'em coming, sleep is for wimps :-)

    Steve

2. A quick update on some further details. I heard today that when connecting your FeX up to the Interconnects, don't cross between the two: one FeX should connect up to only one Interconnect, even if it's with two or more interfaces/cables.

This means though that if you lose a FeX, you can't afford to also lose the Interconnect that the other FeX is connected to. You don't have double redundancy. So unlike your SAN fabrics, where you cross-connect, here you can't. Hopefully this will change down the track.

  3. Hi, Guru:

Can you help clarify the following, as it contradicts what I am reading at https://supportforums.cisco.com/docs/DOC-6158?

    EXCERPT from the above URL:
    1. A passive midplane provides up to 20 Gbps of I/O bandwidth per server slot and up to 40 Gbps of I/O bandwidth for two slots.

So in short, if I have lost 1 FEX, I will only have 20Gb of bandwidth, "not" 10Gb as in your blog, right?

    Thanks.
    Bennie

4. Bennie, the excerpt is referring to the connections to the blade, i.e. how many 10Gb ports it can see per slot. A single slot blade (a B200) can see an A side and a B side 10Gb port, so that's 20Gb. A dual slot blade (a B250) can see two 10Gb ports on each of the A and B sides, so that's a total of 40Gb. But these ports are mapped back to the IO Mux (the FeX) in the chassis, which combines the bandwidth from all the blades on its side of the fabric (A or B) up to the F-I. How many uplinks you have then determines the available bandwidth that is shared across those blades.

    Hope that helps.

    Rodos

