Whilst at Cisco Networkers in Brisbane this week I caught up with Robert Burns, who leads the support team for Server Virtualization & Data Center Networking at the Sydney TAC. Cisco runs a follow-the-sun program, so Rob and his team cover UCS support for the globe at certain times of the day. As the TAC team gets first access to the hardware, they also give great feedback to the BU on the technology.
The video below is an interview I did with Rob on the role the TAC plays for UCS support and what he thinks of the technology.
It's great to know that there is a capable bunch of people who really know the kit well, ready to support any potential issues that may arise. After chatting to Rob I can tell you, he knows his UCS.
Whilst at Cisco Networkers in Brisbane 2009 I prepared a blitz of social media. One of the things I did was use Twitter to get the message out on Cisco UCS. The conference ran for 3 days, and every 30 minutes between 9:00am and 5:00pm I sent a new UCS fact or tip. The purpose was to get discussion and interest going on this great technology. Hopefully it would get a few people coming to my employer's stand to talk UCS with me (which it did).
People who were not at the show found it helpful as well; there were many retweets.
Here is the list in all its glory. It's hard to say much in 140 characters!
Unless they are manually pinned, FLOGIs are assigned to the FC uplinks in the appropriate VSAN in a round-robin fashion.
Menlo is the code name for one of the Mezz adapters. A CNA with 2 10GE (with failover) and 2 FC ports.
Oplin is the code name for one of the Mezz adapters. 2 10GE ports only, no failover functionality.
Palo is the code name for one of the future Mezz adapters. Provides multiple Eth vNICs or FC vHBAs (limits apply).
The minimum between rails for chassis mounting is 74cm. Overall depth of the chassis is 91cm if you include power cables.
Understand those UCS acronyms in the UCS Dictionary of terms. http://rodos.haywood.org/2009/08/cisco-ucs-dictionary.html
If an F-I fails, the data plane failover depends on how you set up HA. Control plane (UCSM) failover takes approx 100 sec.
The IOM multiplexes IO ports & BMC from blades along with CMS and CMC to the 10Gb ports (1, 2 or 4) going to the F-I.
For only one Fabric-Interconnect (lab use maybe) you must place the IOM in the left slot, which is Fabric A.
Allow 2kW per chassis of 8 blades. My testing shows at 50% CPU load, 1600 watts consumed (worked example after the list).
Only the first 8 ports of the Fabric-Interconnect are licensed up front. Add port licenses to enable more ports.
Create Pin Groups & apply to multiple service profiles to do manual pinning to North uplinks, else round robin applies.
The half width B200-M1 has 12 DIMM slots, the full width B250-M1 has 48 (but it's not available yet).
@stevie_chambers writes great information on operational practices with UCS. http://viewyonder.com/
Default F-I mode is end-host; North traffic is not switched, rather each vNIC is pinned to an uplink port or port channel.
If you need grid redundancy for your power, ensure you order 4 PSUs as 3 only provides N+1 redundancy.
Smart Call Home is valid for Support Service or Mission Critical but NOT Warranty or Warranty Plus contracts.
You cannot use 3 uplinks from an IOM to its Fabric-Interconnect, only 1, 2 or 4.
LDAP for RBAC uses the mgmt port IPs on the F-I as the source of requests, NOT the shared virtual IP address.
Server pools can auto populate based on qualification of mezz adapter, RAM, CPU or disk.
A helpful list of UCS links and resources can be found at http://haywood.org/ucs/
The F-Is store their data on 256Gb of internal flash. Backup can be done from the GUI or CLI to a remote SFTP location.
Templates can be either initial or updating. Modifying an updating template updates existing instances too.
UCS supports a maximum of 242 VLANs. Remember that VLANs 3968 to 4048 are reserved and cannot be used.
Within RBAC the privileges are for updating; everyone can view the UCSM configuration.
A serial EEPROM contained in the chassis mid-plane helps resolve F-I split brain; each half is maintained by an IOM.
Warning. Even though the 61x0 Fabric-Interconnects are based on the Nexus 5000, they are not the same, so don't compare between them.
All uplinks from an IOM must go to the same Fabric-Interconnect.
Only the Menlo card does internal failover for Eth when an IOM loses an uplink. All others require host multipathing software.
KVM virtual media travels over the CMS network inside the IOM and therefore only runs at 100Mb.
There is a limit of 48 local users within UCSM; for more, interface to RADIUS, LDAP or TACACS+.
UCSuOS, the UCS Utility Operating System, is the "pre-OS configuration agent" for the blade, previously named PNuOS.
Fabric-Interconnect backup can be performed to FTP, TFTP, SCP or SFTP destinations (example after the list).
The CLI is organized into a hierarchy of command modes; use "scope" and the mode name to move down modes (example after the list).
@bradhedlund writes great technical information on UCS. http://www.internetworkexpert.org/
Each blade and chassis contains a locator beacon which flashes blue when enabled via the GUI, CLI or manually.
The F-I runs in NPV end-host mode, not switch mode. You must connect to external FC storage via the expansion modules with FC.
UCSM split brains may be due to a partition in space or a partition in time.
An amber power light on the blade indicates standby state, green means powered on so check before removing it!
If there is a "*" at the end of the scope in the CLI, don't forget to execute "commit-buffer"! (Example after the list.)
Removing a blade will generate an event and set a presence of "missing". The blade needs to be decommissioned from the inventory.
UCSM lets you configure >2 vNICs/vHBAs. Attempt to associate it and you receive a major fault due to insufficient resources. Wait for Palo.
The "show tech-support" command details the config and state of your environment. Use liberally.
UCSM can pull stats at a collection interval of 30 sec, 1, 2 or 5 minutes. Modify via the collection policy.
Connect each Chassis IOM to its F-I via low cost copper Twinax up to 5m, otherwise fiber with the appropriate SFP+ transceiver.
Visio icons for UCS can be downloaded from http://www.cisco.com/en/US/products/prod_visio_icon_list.html
Cisco NetPro Forums has a Unified Computing section so learn, share, support. http://short.to/rv07
For further insights into UCS after Networkers follow my UCS feed for updates http://rodos.haywood.org/search/label/UCS
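A few of those tips are easier to show than to squeeze into 140 characters, so here they are expanded. First the power maths: a rough worked example using only the figures from the tweets above, and assuming two PSUs are enough to carry a fully loaded chassis:

    Budget per chassis of 8 blades:  ~2 kW
    N+1 (3 PSUs, one feed):          2 PSUs carry the load, 1 is a spare;
                                     lose the whole feed and the chassis is down
    Grid (4 PSUs, two feeds):        2 PSUs on feed A, 2 on feed B;
                                     lose an entire feed and the remaining
                                     pair still carries the ~2 kW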
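Next, the CLI hierarchy and the commit buffer. This is a minimal sketch of an SSH session to UCSM; the service profile name is made up for illustration, and "up" and "top" move you back up the mode tree:

    UCS-A# scope org /
    UCS-A /org # create service-profile demo-sp instance
    UCS-A /org/service-profile* # commit-buffer    <- the * marked uncommitted changes
    UCS-A /org/service-profile # up                <- move up one mode, back to /org
    UCS-A /org # top                               <- jump back to the root
    UCS-A#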
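And finally a backup from the CLI. A sketch only: the host, user and path are placeholders, and the sftp:// URL could equally be ftp://, tftp:// or scp:// per the tweet above:

    UCS-A# scope system
    UCS-A /system # create backup sftp://user@192.0.2.10/backups/ucs-full.bak full-state enabled
    Password:
    UCS-A /system* # commit-buffer
    UCS-A /system #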
Whilst at Cisco Networkers I caught up with Brad Wong. Brad is the Product Manager for Nexus and the Unified Computing System (UCS) in the Server Access Virtualisation Business Unit (SAVBU) at Cisco Systems. I have met with Brad a few times before, and he was very gracious to give his time for me to ask him a few questions around FCoE.
I certainly think that FCoE is important for Data Centers over the next few years, yet there is confusion around how to use it today and where it's going. So I was very keen to get Brad's take on it; after all, he drives the products where most of this lives.
Given a bit of time I will post some deeper details of the things that Brad mentions, along with a series of links.
At the Customer Appreciation Party at Cisco Networkers 2009 in Brisbane Australia I was fortunate enough to be introduced to Tommi Salli (thanks Andrew White from Cisco).
Tommi is a Senior Technical Marketing Engineer for the Unified Computing System (UCS) within the Server Access Virtualisation Business Unit (SAVBU) at Cisco Systems. Tommi was one of the co-authors of the original UCS book, "Project California: a Data Center Virtualization Server - UCS (Unified Computing System)" by Silvano Gai, Tommi Salli and Roger Andersson, which can be purchased through Lulu. I ordered my copy within hours of it becoming available and it is now dog-eared and covered in highlighter. The book is an introduction to the technology, so I don't really use it any more unless I am after some great words when writing up prose on a particular topic.
We discussed lots of areas of UCS together and I thought it would be good to do a quick video, which Tommi was gracious enough to do. Thanks mate! Hope you enjoy watching it as much as I enjoyed doing it.
With over 20 years working in the IT industry I have had varied sub-careers. My first decade was as a programmer, developing applications whilst working and living in Asia. There was the obligatory dotcom involvement in a fun start-up. Working in the SI space I loved integrating many different technologies and solving a wide variety of IT problems.
Falling in love with server virtualization led me into Cloud Computing, which became a great passion due to how much it could help IT do greater things.
Today I spend my time assisting a large team of Solutions Architects across A/NZ at Amazon Web Services. Just like everyone at Amazon I enjoy working hard, try to have some fun and hope to be a small part of making history.