Scaling Cisco Unified Computing System (UCS)
So Rodos, how much can I scale my Cisco Unified Computing System (UCS)? Great question. For those with a short attention span, scroll to the table at the end; otherwise, keep reading.
Let's dive into what you would really need to do to hook your UCS environment together, with some real-world calculations.
First, there are five different interfacing requirements that need to be provisioned for:
- Some Ethernet uplinks into the rest of the datacenter
- Some Ethernet downlinks to the chassis
- Some Fibre Channel links toward your Storage Fabric
- The Ethernet link for the management system
- The Ethernet links for the high availability of the UCS managers
Let's look at each one in turn.
One. Cluster Ports
There are four ports here. Two of these are the dual 10/100/1000 Ethernet clustering ports, which are used for connecting two 6120/40s together; they carry sync and heartbeat traffic. You direct-connect these with a standard Ethernet cable. The other two ports are reserved for future use. All of these ports are dedicated and cannot be used for any other purpose.
Two. Management Port
This is a dedicated 10/100/1000-Mbps Ethernet management port for out-of-band management.
Three & Four. SFP+ ports
The SFP+ ports take a number of cable types (copper or fiber) of varying lengths. They may be used to connect to the 2100 Fabric Extender (FeX) modules inside the 5100 chassis (which contains the blades). They may also be used to connect up to your data center switching core or aggregation point. We are going to come back to these two uses in some more detail.
Five. Expansion modules.
The expansion modules are used to provide further external connectivity. There are three types available.
- Ethernet module that provides 6 ports of 10 Gigabit Ethernet using the SFP+ interface
- Fibre Channel plus Ethernet module that provides 4 ports of 10 Gigabit Ethernet using the SFP+ interface and 4 ports of 1/2/4-Gbps native Fibre Channel connectivity using the SFP interface
- Fibre Channel module that provides 8 ports of 1/2/4-Gbps native Fibre Channel using the SFP interface for transparent connectivity with existing Fibre Channel networks
Most people are probably going to go with the 8-port Fibre Channel module.
To provide bandwidth and redundancy you are going to consume ports.
If we go back to those uplinks to your aggregation switches, say a pair of Nexus 7000s, you are at least going to need some redundancy and bandwidth. As most of the switching will occur in the 6100s, you probably don't need a massive amount of bandwidth out. I think a safe bet initially is two 10G links out of each 6100, or at a pinch one out of each.
The real issue is around the links between the 2100 FeX units in the 5100 chassis back up to the 6100s. The FeX modules sit in the back of the 5100 chassis.
Now, you are going to have two FeX per chassis for redundancy. That means you are going to consume a minimum of one port on each 6100. But is that enough? If one link were to fail, you would have only 10G of bandwidth, and all of the storage and networking traffic for all eight blades would be going over that single link. Also remember there is NO internal switching inside the chassis; all inter-blade traffic has to go up to the 6100 to be switched. Therefore I think the real-world approach is to provision two ports from each FeX, which halves the number of chassis you can connect to each 6100.
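A quick back-of-the-envelope sketch of that trade-off (using the figures from above: 10G per link, two FeX per chassis, eight blades per chassis) shows why two links per FeX is the safer choice:

```python
LINK_GBPS = 10          # each SFP+ link is 10 Gigabit Ethernet
BLADES_PER_CHASSIS = 8  # a 5100 chassis holds eight blades

def per_blade_bandwidth(links_per_fex, fex_count=2, failed_fex=0):
    """Aggregate chassis uplink bandwidth per blade, Gbps, with optional FeX failures."""
    live_links = links_per_fex * (fex_count - failed_fex)
    return live_links * LINK_GBPS / BLADES_PER_CHASSIS

# One link per FeX: a FeX failure leaves 10G shared by all eight blades.
print(per_blade_bandwidth(1))                 # 2.5 Gbps per blade, both FeX up
print(per_blade_bandwidth(1, failed_fex=1))   # 1.25 Gbps per blade after a failure
# Two links per FeX: the surviving FeX still carries 20G.
print(per_blade_bandwidth(2, failed_fex=1))   # 2.5 Gbps per blade after a failure
```

With two links per FeX, losing an entire FeX leaves each blade with the same bandwidth it would have had in the healthy one-link design.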
So here is a table that does some calculations based on how many uplink ports you want to your aggregation switches and how many ports you want to run from each chassis. It also shows how many blades this would give you and how many racks you would consume, assuming two chassis per rack (you are going to need a lot of power if you go beyond two).
You can see that with 4 ports from each chassis and 4 links to the aggregation switches you are looking at either 9 chassis (on a 6120) or 19 chassis (on a 6140). That's either 72 or 152 blades, which is a LOT.
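The arithmetic behind those figures can be sketched as follows. This assumes the fixed port counts of the two fabric interconnects (20 on the 6120, 40 on the 6140) and that the "4 ports from each chassis" and "4 uplinks" are split evenly across the pair of 6100s, so each individual 6100 sees 2 chassis links per chassis and spends 2 ports on uplinks:

```python
import math

BLADES_PER_CHASSIS = 8
CHASSIS_PER_RACK = 2

def scale(fixed_ports, uplinks_per_fi, links_per_fex):
    """Chassis, blade, and rack counts for a pair of fabric interconnects.

    Each chassis runs links_per_fex ports from FeX A to fabric interconnect A
    and the same from FeX B to B, so one FI sees links_per_fex ports per chassis.
    """
    chassis = (fixed_ports - uplinks_per_fi) // links_per_fex
    blades = chassis * BLADES_PER_CHASSIS
    racks = math.ceil(chassis / CHASSIS_PER_RACK)
    return chassis, blades, racks

print(scale(20, 2, 2))  # 6120: (9, 72, 5)
print(scale(40, 2, 2))  # 6140: (19, 152, 10)
```

Dropping to one link per FeX or fewer uplinks roughly doubles the chassis count, which is the tweaking around the edges mentioned at the end.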
Out of interest, I did some quick calculations on the number of racks and possible VMs. If you put 48G of RAM in each blade, which is optimal price-wise, you could safely estimate 48 VMs per blade for a low-RAM environment (1G per VM and a core ratio of 6:1) or 24 VMs per blade for a high-RAM environment (2G per VM and a core ratio of 3:1).
So for a 6120, a realistic figure is five racks housing 9 chassis and 72 blades, with somewhere between roughly 1,700 and 3,400 VMs. That's not bad for a total of 40 cables!
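Those last estimates are straightforward to reproduce (again a sketch, using the 48G-per-blade assumption and the 9-chassis 6120 layout from above):

```python
RAM_PER_BLADE_GB = 48  # price-optimal blade configuration from the text

def vm_estimate(blades, gb_per_vm):
    """RAM-bound VM count across a set of blades."""
    return blades * RAM_PER_BLADE_GB // gb_per_vm

print(vm_estimate(72, 2))  # high-RAM environment: 1728 VMs (~1,700)
print(vm_estimate(72, 1))  # low-RAM environment: 3456 VMs (~3,400)

# Cabling: 4 chassis links per chassis across 9 chassis, plus 4 aggregation uplinks.
print(9 * 4 + 4)  # 40 cables
```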
Of course, there would be a few ways of tweaking around the edges of this, but I think you get the idea.