I am not known for being a tin fan; give me some half-decent hardware to run VMware and I am usually happy. However, with Nehalem (5500) I have started to become interested. After all, the 5500 is being touted as great for virtualisation for many reasons.
Aaron Delp did an introduction to memory on Nehalem on Scott Lowe's blog, which is a great read. Aaron does a good job of helping to understand the decisions around memory selection and memory speed.
What I wanted to add was some details about how the situation is different on the Cisco UCS blades, in particular the expanded memory blade, the UCS B250-M1. This blade has the Cisco ASIC memory extension architecture that lets it address up to 4 times the memory of a standard Nehalem processor. This ASIC is called Catalina.
What Catalina does is expand the number of memory sockets that can be connected to each memory bus. The ASIC sits between the processor and the DIMMs on the memory bus, minimizing the electrical load and thus bypassing the control-signal limitations of the Nehalem CPU design. Because this is done at the electrical level, it is completely transparent to the OS. The BIOS is extended to initialize and monitor the ASIC and to perform error reporting.
In order to increase the number of memory sockets without sacrificing memory bus clock speed, the ASIC adds a small amount of latency to the first word of data fetched. Subsequent data words arrive at the full memory bus speed with no additional delay. The first-word delay is on the order of 10%, but I have heard from some spies that testing shows it is looking like a non-issue. It's especially a non-issue compared to the constant 10% latency hit and 28% drop in bandwidth you would get if you fully populated the channels in the normal Nehalem way.
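To see why a first-word-only penalty beats a constant bandwidth drop, here is a back-of-envelope sketch. The burst length and the baseline latency and per-word timings are my own illustrative assumptions, not measured Cisco or Intel figures; only the 10% and 28% penalty percentages come from the discussion above.

```python
# Back-of-envelope burst fetch comparison. BASE_FIRST_WORD, WORD_TIME
# and BURST_WORDS are assumed illustrative values, not vendor data.
BASE_FIRST_WORD = 60.0   # ns: assumed first-word (idle) latency
WORD_TIME = 0.75         # ns: assumed time per subsequent word at full bus speed
BURST_WORDS = 8          # typical DDR3 burst length

def burst_time(first_word_penalty=0.0, bandwidth_drop=0.0):
    """Total time to fetch one burst, given a fractional first-word
    latency penalty and a fractional bandwidth reduction."""
    first = BASE_FIRST_WORD * (1 + first_word_penalty)
    per_word = WORD_TIME / (1 - bandwidth_drop)
    return first + (BURST_WORDS - 1) * per_word

baseline = burst_time()                                   # unloaded channels
catalina = burst_time(first_word_penalty=0.10)            # ~10% first-word delay only
downclocked = burst_time(first_word_penalty=0.10,         # fully populated channels:
                         bandwidth_drop=0.28)             # slower bus on every word

print(f"baseline:    {baseline:.2f} ns")
print(f"Catalina:    {catalina:.2f} ns")
print(f"downclocked: {downclocked:.2f} ns")
```

The point of the sketch: the Catalina-style penalty is paid once per burst, while the down-clocked bus pays on every word, and the gap widens as transfers get longer.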
What this means is that with the B250-M1 you can get the best price/performance ratio, whether you want the largest amount of RAM possible using expensive high-density DIMMs, or a large/medium memory configuration using inexpensive DIMMs.
If you have been watching the UCS space you will have noticed that Cisco rack servers were recently announced. Lo and behold, the UCS C250-M1 has the extended memory Catalina ASICs too.
To think of all that talk that UCS was just a bit of tin with some networking hidden inside.
If you want more details on Catalina see the Cisco Extended Memory Whitepaper.
If you have any insights (maybe you are a tin person), drop a note in the comments. These will certainly make some sweet ESX hosts!
Rodos
Hey! Scott didn't write that, I did! hehe. Just messing with you. Great read!
Aaron Delp
Aaron, my apologies, it was indeed you who wrote that; I forgot that Scott had contributors now. I updated the article. Thanks for pointing it out.
Rodos
Great blog, Rodney! I knew the UCS was packed with memory, but nobody explained it to me this well. (I'm not very smart but I can lift heavy things)
Tim
Hey Rodney! No sweat at all. You didn't need to update it by any means but thank you for doing it! I'm just glad you found it helpful!
Aaron Delp
This is a succinct explanation of the Catalina ASIC and why it gives Cisco an edge. I do not think competing blade servers can match Cisco on the amount of memory per core. Converting this memory advantage into customer success, that is the real challenge. If Cisco could offer collateral or a POC that shows the benefits of the extra memory, that would settle the argument about the significance of Catalina. I have some very good ideas, and want to hear from others. I am a SOA expert, and my blog is at:
http://soarealworld.wordpress.com
You can email me at:
technicalarchitect2007 @ gmail.com