
UCS local disk policy + some vBlock

Posted on Friday, February 05, 2010 | 5 Comments

I have been reading through all of the VCE vBlock reference documents that were recently published, as announced by Chad. The last thing we want is for our implementation to be forked away from the blessed best practices. [Jump to the end for brief comments on the guides; this post is about something else.]

The deployment guide details various UCS Manager policies that should be created, and I noticed that it specifies creating a "Local Disk Configuration Policy" set to "No Local Storage". The default is Any Configuration.


Sidebar - Local disk configuration policy explained.
What the Local Disk Configuration Policy does is configure the installed disks in your blades as the service profile is deployed to them. Forget going into the BIOS and setting things up; this is virtual hardware and stateless computing, people. You just pick a policy, say RAID Mirrored, and when your service profile is applied to the blade it configures the RAID controller automatically. As an aside, you can also use local storage qualifications to specify things like what size disk you want, so you can deploy your service profile asking it to find a spare blade that matches your requirements.
The reason I noticed it is that this caught me out during deployment/testing. When it says "No Local Storage" it really means no disk. We started with this exact "No Local Storage" policy. During some deployment we noticed that no spare blades could be found. After a short time of head scratching we realised that the only blades left were ones that had local disks. It's a true testament to stateless computing when you start to forget what hardware you have and where it is, just letting the system consume it for you. A quick change to a policy of Any Configuration and it was off and deploying again.
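Sidebar to the sidebar: if you ever want a quick list of which blades actually have spindles in them (rather than scratching your head like we did), you can pull it straight from the UCS XML API. The sketch below is purely illustrative; the UCSM address and credentials are placeholders, and I am going from memory on the class and method names, so check it against the XML API documentation for your UCSM version before relying on it.

```python
# Hedged sketch: list every local disk UCSM knows about, so you can see which
# blades still have spindles in them. Endpoint, class and attribute names are
# from memory -- verify against the UCSM XML API docs before relying on this.
import re
import ssl
import urllib.request

UCSM_URL = "https://ucsm.example.com/nuova"  # placeholder UCSM address

def post_xml(body):
    # UCSM typically presents a self-signed certificate, hence the lax context.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = urllib.request.Request(UCSM_URL, data=body.encode(),
                                 headers={"Content-Type": "application/xml"})
    return urllib.request.urlopen(req, context=ctx).read().decode()

# Log in and pull the session cookie out of the outCookie attribute.
login = post_xml('<aaaLogin inName="admin" inPassword="password" />')
cookie = login.split('outCookie="')[1].split('"')[0]

# Resolve every instance of the storageLocalDisk class; each returned dn names
# the chassis and blade the disk lives in (sys/chassis-N/blade-N/...).
out = post_xml('<configResolveClass cookie="%s" classId="storageLocalDisk" '
               'inHierarchical="false" />' % cookie)
for dn in re.findall(r'dn="([^"]+)"', out):
    print(dn)

# Close the session rather than leaving it to time out.
post_xml('<aaaLogout inCookie="%s" />' % cookie)
```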

Of course I am going to put the policy back the way it was eventually (when we pull the drives out of that set of machines). Here is why:
  • Security - To do stateless computing you are booting from SAN, and local disks are usually not required; the only case would be a transient local scratch disk. You don't want to be writing data to local storage and then, for some reason, redeploy your service profile onto another blade, leaving that data behind. Bad security move.
  • Scrub Policy - Those who know a bit about UCS may say, "Rodos, just create a Scrub Policy". A Scrub Policy scrubs the disk so that a subsequent service profile gets clean disks. The problem is that it's not effective. Not being one to trust anything, I dug into how it scrubs: all it does is overwrite the start of the disk with some zeros; it does not scrub the whole disk with multiple passes. Making it a more secure scrub is a future function, but as it stands I bet you could somehow get at that data.
So my recommendation, which concurs with the vBlock guidelines, is: boot from SAN, set a No Local Storage policy, and let the automation of UCS stateless computing take care of things for you.
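If you would rather script that policy than click through UCSM, it can also be pushed at the XML API. Again, treat this as a rough sketch; the DN format, mode strings, policy name and credentials are assumptions on my part rather than something lifted from the vBlock guides.

```python
# Hedged sketch: create a Local Disk Configuration Policy set to No Local
# Storage over the UCSM XML API. DN format, class and attribute names are from
# memory -- verify against the XML API docs for your UCSM version.
import ssl
import urllib.request

UCSM_URL = "https://ucsm.example.com/nuova"  # placeholder UCSM address

def post_xml(body):
    # Same helper as the earlier sketch; UCSM certs are usually self-signed.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = urllib.request.Request(UCSM_URL, data=body.encode(),
                                 headers={"Content-Type": "application/xml"})
    return urllib.request.urlopen(req, context=ctx).read().decode()

login = post_xml('<aaaLogin inName="admin" inPassword="password" />')
cookie = login.split('outCookie="')[1].split('"')[0]

# Create (or update) a policy called "no-local" with mode no-local-storage.
# Other modes include any-configuration, raid-mirrored and raid-striped.
post_xml(
    '<configConfMo cookie="%s" dn="org-root/local-disk-config-no-local" '
    'inHierarchical="false"><inConfig>'
    '<storageLocalDiskConfigPolicy dn="org-root/local-disk-config-no-local" '
    'name="no-local" mode="no-local-storage" />'
    '</inConfig></configConfMo>' % cookie
)

post_xml('<aaaLogout inCookie="%s" />' % cookie)
```

The policy then just gets referenced from your service profile template like any other UCSM policy.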

Rodos

P.S.

My thoughts on the VCE vBlock guides themselves. I have skimmed through them all, so these are initial impressions. Of course I will send some notes to those inside the VCE organisation through channels, but I figured people would be interested. Reading the VCN (NetApp) document is on my list too; it will be interesting to compare.
  • Don't think these will do your work for you. They leave more as an "exercise for the reader" than you might expect. It's not a design of your system, and you are going to have to do some significant work to create a solution. I know, I have just done it.
  • There is a lot of detailed information in the deployment guide about UCS and UCSM, very detailed. There is a bit about the EMC storage and a token amount on VMware. Sure, it is not a very fair comparison, because it's easy to describe and detail how to build up the UCS system, whereas in contrast it's not like you can describe laying out a VMax in 20 pages. Also, the VMax design and implementation service comes with the hardware anyway. The VMware component consists of how to install ESX, with not a mention of vCenter Server. Nothing about setting up the N1K and its VSMs, or PowerPath/VE etc., even though they are a requirement of the architecture. I am not saying all of that should be there in detail, but you are not deployed without it and it's not even mentioned. Contrast this to the UCS blade details, which include every screenshot on how to check that boot from SAN has been assigned correctly in the BIOS.
  • My gut feeling is that no one from VMware really contributed to this; a Cisco person did the VMware bits and EMC did theirs.

Comments (5)

  1. The issue I have with a no disk policy is that it rules out ESXi! There's no USB/Flash option for UCS and boot from SAN for ESXi is not supported.

  2. Duncan, that is a VERY good point. I will decline to make any comment or your lawyers will be after me. However, I will state that we have deployed ESXi knowing the current "experimental" status of boot from SAN for ESXi. Move on people, nothing to see here.

    PXE boot, that's a whole new ball game. ;)

  3. PXE boot, been there... done that. Works great, but again experimental support. Not really an option for an Enterprise environment.

  4. You might not easily get PXE enabled in a production environment due to security tie-downs in the production DC. Maybe Cisco should consider supporting the flash card in the next release for ESXi integration.

  5. Rodney -

    As I understand it, the *original* intent of the "No Local Storage" policy was to allow for the automated disabling of the onboard disk if present, not just "any blade without disks". Don't be surprised if that functionality appears at some point.


    Just as a quick note, the "disk scrub" part of the scrub policy isn't intended to be a security feature. It's just a convenience feature to be sure that partition tables are removed so that the next service profile to be applied won't have to remove any left over partitions. If any organization has specific security policies requiring a true disk cleanse (such as multiple pass overwrites, etc), that should be done before disassociation.

    As far as using local storage in the blades, this removes the statelessness of the blade - the local storage has state, as does the configuration of the OS contained therein. This breaks the whole purpose of the UCS model. True, you can use it that way, but why?

    That said, I realize a lot of customers will want to boot ESXi locally and then access their shared remote storage. There's not currently a flash/USB way to do this easily on UCS. What would be *really* interesting to me is if Cisco provided some integration with UCS Manager to make the ESXi configuration (since there isn't much of it) part of a service profile, and just deploy the ESXi image and configuration along with the service profile. Now that would be really cool.

    - Dave

