
March 2010

Virtumania Interview

Wednesday, March 31, 2010

What's the scoop behind this new Virtumania podcast that Rich Brambley has started? Well, watch the video for the rundown.



I am a fan of the Infosmack podcasts, as I have previously mentioned. Mark and Greg are helping Rich out, and people like Rick Vanover are regular guests. I talk to Rich about the formation of and reasons behind this new podcast, where they got the name from, and some details about upcoming topics.

The thing that has really impressed me from talking to Rich and Rick over the last few days at HP Storage Day is how much effort goes into creating such a quality podcast. The actual recording goes much longer than what you hear. There is quite a bit of post-processing and draft/review work that goes into removing talk-overs, dead spots and stuff that does not work. This means that what you hear is concise and polished. Maybe that's what makes both these podcasts must-listens for the virtualisation community at the moment!

Thanks to Rich who put up with squinting into the sun so I could get good lighting for the recording!

Rodos

HP on Virtualisation

Tuesday, March 30, 2010

Today was spent at HP Storage Tech Day (http://www.hp.com/go/techday2010) at HP in Houston, TX.

One of the speakers was Mike Koponen, who is the HP Solutions Marketing Manager for Virtualisation. I guess you could think of Mike as the HP version of Chad or Vaughn. In his role Mike covers the relationships with, and guides the go-to-market activities around, VMware, Microsoft and Citrix.

Here is a video interview with Mike where I asked about their strategy.



Here are some more details from today's event regarding HP and VMware.

Today HP detailed the release of the vCenter plugin for the MSA, P4000, EVA and XP arrays, which enables you to see the underlying storage configuration and attributes from within vCenter. We noticed in the later demo that this is a read-only view; it is not possible to make any changes from within the plugin. This is a free tool. It may also be used with HP Insight Control integration for servers, so you can see both server and storage details from a single pane within vCenter. Insight Control is a licensed element though. There is no documentation up yet and we are waiting for the slide deck, so we will need to wait a little bit to dig into the details of the features. But this is good news for HP customers, who I think have been waiting a while for this level of integration.

During the talk there were numerous references to the reference architectures, or "solution blocks", which HP have for building out the stack. However we don't hear a lot from HP about these, certainly not like the hype around the vBlock Reference Architecture or the Netapp Secure Multi-Tenant Reference Architecture.

It's interesting to remember that HP's relationship with VMware goes all the way to the OEM level, which gives them a different kind of relationship, quoted as providing "access and opportunities for collaboration and co-development". It was stated that HP is VMware's largest OEM partner.

Lastly, an interesting statistic: of the customers who are going to make server purchases over the next 12 months, 57% of those x86 servers will have virtualisation on them.

So here is my take. I think HP have a lot more to say and engage with about virtualisation, and VMware in particular; the thing is, they are just so quiet about it that no one really knows. The other vendors do a great job of making sure the community of partners and end-users knows all about their integration and features, so that's what people talk about. HP, if you do have lots of great information, functionality and support, then do yourself and your customers a big favour and get out there and start engaging with the community where they are at.

Rodos

HP Storage Day Agenda

Saturday, March 27, 2010

HP have put up a page with all the resources for Storage Day, www.hp.com/go/techday2010.


The agenda is now listed. Looks like some interesting sessions. Interesting that there is a customer presentation; what's that about? Some EVA and some scale-out NAS (X9000). I notice that Chris Evans (@chrismevans) has already done a write-up on the P2000 and P4000, so I will have to ensure I read that and talk to Chris before those sessions.

I hear they are giving away some storage at the Tweetup on Monday night, so if you are in town, come and join us!


Agenda March 29-30, 2010

Monday March 29 - all times are CDT (Central Daylight Time)

8:00am - 8:30am | Lobby | Sessions begin in Commons, room: Ontario South
8:30am - 9:00am | Introduction | Calvin Zito, Kyle Fitze, Thomas Rush
9:00am - 10:00am | Session 1 | Opening Session: Storage and Converged Infrastructure: Tom Joyce
10:00am - 11:15am | Session 2 | Storage technology and Converged Infrastructure: Paul Perez, VP and Chief Technologist, StorageWorks Division
11:15am - 11:30am | Break |
11:30am - 12:00pm | Session 3 | Customer presentation
12:00pm - 1:00pm | Lunch | Luby's
1:00pm - 1:30pm | Session 4 | The market is coming to HP: Andrew Manners, Marketing StorageWorks
1:30pm - 2:00pm | Session 5 | Solutions: Ian Selway, Mike Koponen
2:00pm - 2:15pm | Break | (Move to M4.2-311)
2:15pm - 2:55pm | Session 6 | EVA update and demo: Kyle Fitze, Director Storage Platform
2:55pm - 3:35pm | Session 7 | X9000 overview and demo: Efren Molina
3:35pm - 4:15pm | Session 8 | What's new with P2000, and demo: Norman Morales
4:15pm - 4:55pm | Session 9 | What's new with P4000, and demo: Chris Hornauer
6:00pm - 7:00pm | Tweet-up! | Brix Wine Cellar - please RSVP on Twtvite: http://twtvite.com/HPHouStorage

Tuesday March 30 - all times are CDT (Central Daylight Time)

8:45am - 9:00am | Lobby | Session begins in M4, room: M4.2.101
9:00am - 9:30am | Session 10 | Previous day recap / Q&A / transition to Halo
9:30am - 10:15am | Session 11 | ProCurve via Halo (CCM31510 and CCA52416): Lin Nease
10:15am - 10:30am | Break | (Move to lab in M4)
10:30am - 12:00pm | Lab tour | SWD lab tour in M4.2-311 - Events Lab: Roger Javens; EBS: Phil Lang; MSA: Dave Sheffield
12:00pm | Lunch/End | Offsite lunch, return bloggers to hotel

Well, my flight to San Francisco is about to board.

Rodos

Top 5 Best Practices for Architecting "Existing Workloads" for VMware Cloud

Thursday, March 25, 2010

Inspired by Steve Jin's "Top 10 Best Practices Architecting Applications for VMware Cloud", here is a list of five best practices for preparing your "existing workloads" for VMware vCloud. After all, it's going to take time and money to re-architect your applications, and you may want to gain some of the benefits of Cloud from your existing systems.


Here is what you can do.

Virtualise
Yes, your migration is going to be easier if it's already a virtual machine. A V2C (Virtual to Cloud) is going to be a simpler transition than a P2C (Physical to Cloud). The figures vary, but a recent Gartner report (Oct 2009) indicated that only 16% of workloads are running as a virtual machine today. How is work on your virtual-first policy going? What is your current virtualisation level and when will you be at the 90% level?
Centralise
What is the dispersion of your current workloads? Do you have a ROBO (Remote Office / Branch Office) or a centralised model? Are all of your workloads in close proximity to the accessing users or systems? How many data centres do you have? Once you start to move to a Cloud model your workloads are going to be accessed via a network, whether that be your own, a private network with your service provider, or the Internet. The proximity of the applications is going to change and this may have unexpected effects.
The more work you can do beforehand in educating and training your users, the less you will be taking on as you move to a Cloud model. Likewise any associated application issues can be dealt with in preparation. Do you need to change some of your printing environment given a more centralised architecture?
Centralising is a good pre-emptive strike to flush out any application issues before you start moving workloads to the Cloud.
Network
How tidy and efficient is your wide area and local area networking? Is your IP address register up to date? Would you easily be able to break down your subnets based on associated workloads, or are the servers intermixed with the addressing of your other infrastructure such as networking devices, printers or even desktops? Do you fear changing an IP address on a server because you know it will probably bring down your core application, or who knows what? Is your WAN routing a mess, with lots of old entries and a convoluted mix of static routes with no dynamic routing protocols? Have you implemented Quality of Service (QoS) to ensure that important traffic is protected and given priority? How long will it take for you to adjust the bandwidth around your network if needed?
A key element of Cloud is that you are connecting to remote services over a network, and to be prepared for Cloud you want to ensure that your network is going to be up to the task.
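As an aside, that subnet audit is easy to script. Here is a minimal sketch, assuming a hypothetical register exported as a CSV with ip, hostname and role columns and a /24 addressing scheme; both are assumptions to adjust for your own environment.

```python
import csv
import ipaddress
from collections import defaultdict

# Hypothetical register format: ip,hostname,role (server/printer/switch/...)
def audit_register(path):
    roles_by_subnet = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ip = ipaddress.ip_address(row["ip"])
            # Assumes a /24 scheme; change to match your addressing plan
            net = ipaddress.ip_network(f"{ip}/24", strict=False)
            roles_by_subnet[net].add(row["role"])
    # A subnet carrying more than one role is a candidate for clean-up
    # before you start moving workloads to the Cloud
    for net, roles in sorted(roles_by_subnet.items()):
        flag = "MIXED" if len(roles) > 1 else "ok"
        print(f"{net}  roles={sorted(roles)}  {flag}")

if __name__ == "__main__":
    audit_register("ip_register.csv")
```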
Automation
How much of the configuration and operation of your workloads is automated? Is everything efficient, or are people always running into the server room to remediate and repair? Are new machines deployed from a number of templates, or is everything built from scratch by popping a CD into a tray or mounting an ISO? The more automation you can achieve, the better for continuing the same practices once you have moved to a Cloud service. It will be easier to add a few additional orchestration features (such as some vCloud API calls) to your deployment method rather than scratching your head whilst waiting to transfer new images to the Cloud every time you need a new machine. How is your server and application patching? Patchy at best?
A good activity here is to start to utilise the vApp features of vSphere, packaging workloads into containers that hold one or more virtual machines. Can you extend the networking of your vApps to be more automated through the use of IP pools?
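For illustration, here is a rough sketch of what folding such an orchestration call into your deployment method might look like. The endpoint path, header name and XML payload below are assumptions modelled loosely on the vCloud REST style rather than the documented API, so treat it as the shape of the idea, not a recipe.

```python
import requests

BASE = "https://cloud.example.com/api"  # hypothetical provider endpoint

def login(user, password):
    # Header and path names here are assumptions, not the documented API
    r = requests.post(f"{BASE}/login", auth=(user, password))
    r.raise_for_status()
    return r.headers["x-vcloud-authorization"]  # assumed session token

def instantiate_vapp(token, vdc_url, template_url, name):
    # vCloud-style APIs take an XML document describing the vApp to build
    body = (
        f'<InstantiateVAppTemplateParams name="{name}" '
        f'xmlns="http://www.vmware.com/vcloud/v1">'
        f'<Source href="{template_url}"/>'
        f'</InstantiateVAppTemplateParams>'
    )
    r = requests.post(
        f"{vdc_url}/action/instantiateVAppTemplate",
        data=body,
        headers={"x-vcloud-authorization": token,
                 "Content-Type": "application/xml"},
    )
    r.raise_for_status()
    return r.text  # task/vApp entity describing the deployment

# The win: "I need a new machine" becomes one scripted call from your
# existing deployment tooling, not a manual image transfer.
```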
Cost Model
What does it cost, in terms of dollars and kilowatts, to run a workload in your environment today? Not how much the physical server running ESX cost you when you purchased it last year, but how much it costs to run a workload. If you do not know your internal costs, regardless of whether you are passing those costs back to internal departments, it's going to be much harder for you to do a business case for moving some workloads into a Cloud. When the Cloud provider quotes you a dollar figure per GB for storage, what does it cost you to operate and maintain storage per GB now? Simply taking the purchase price of your SAN and dividing it by the total capacity of the raw disks is not an acceptable model. How much do you pay for electricity and how much does that SAN consume?
Understanding your current internal costs is an important preparation for moving to Cloud.
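To make that concrete, here is a worked sketch of a fuller cost-per-GB model. Every figure is hypothetical, and a complete model would also fold in items this one leaves out, such as admin time, floor space and cooling.

```python
# All figures are hypothetical, purely to show the shape of the model.
purchase_price = 250_000.0   # SAN purchase price ($)
annual_support = 30_000.0    # maintenance/support contract ($/year)
lifetime_years = 4           # depreciation period
power_kw = 5.0               # average draw of the array (kW)
power_cost_kwh = 0.15        # electricity price ($/kWh)
raw_tb = 100.0               # raw disk capacity (TB)
usable_ratio = 0.6           # after RAID, spares and formatting overhead

annual_capital = purchase_price / lifetime_years
annual_power = power_kw * 24 * 365 * power_cost_kwh
annual_total = annual_capital + annual_support + annual_power

usable_gb = raw_tb * 1000 * usable_ratio
cost_per_gb_month = annual_total / 12 / usable_gb

print(f"Usable capacity: {usable_gb:,.0f} GB")             # 60,000 GB
print(f"Cost per GB per month: ${cost_per_gb_month:.4f}")  # ~$0.1376
```

Note that dividing by usable rather than raw capacity, and including power and support, already roughly doubles the naive "purchase price over raw disk" number.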
There are your five steps for architecting your existing workloads for VMware vCloud. Of course there are many more things that could be done, but I think these are the big ones, and I have seen them validated time and time again since developing them over a year ago.

Now here is the really amazing thing about these five components: each of them will drive great returns, and potentially cost savings, for your business right now. Virtualising more of your workloads, improving your network, using your vSphere infrastructure efficiently and automating more. Even something like understanding what your real internal costs are may drive improvements: "Wow, we analysed our storage and realised that our consumption is really high; if we started to use thin provisioning we could reduce consumption and delay that storage upgrade".
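To put hypothetical numbers on that last quote, the thin provisioning arithmetic in miniature:

```python
# Hypothetical figures to illustrate the thin provisioning remark above.
allocated_gb = 40_000   # capacity provisioned to VMs (thick)
written_gb = 16_000     # blocks actually written

saved = allocated_gb - written_gb
print(f"Reclaimable: {saved:,} GB ({saved / allocated_gb:.0%})")
# -> Reclaimable: 24,000 GB (60%)
```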

It is probably worth mentioning that I am not saying all of these are mandatory before moving to Cloud consumption, just that they will be very beneficial and will ease that journey. I am also talking in regard to IaaS and utilising a Cloud service provider with vCloud Service Director (Redwood) or potentially vCloud Express. However, these may also be important if your goal is based around developing ITaaS with an internal Cloud first, and then federating in the future.

As always, I appreciate your thoughts in the comments.

Rodos

The Granularity of On-Demand Cloud - Today vs Tomorrow

Tuesday, March 23, 2010

Cloud is often characterised by a payment model of resource consumption: pay-as-you-go, only pay for what you use, CPU by the hour.

Over recent weeks I have been doing some thinking around cost and payment models for Cloud, and two blog posts really caught my attention (as happens when you are mulling things over).


The first was a series by Steve Jin from VMware, who posted on Doublecloud.org detailing 10 best practices for architecting applications for VMware vCloud. Steve reminded me in a clear way of the important distinctions between traditional applications and applications built for the Cloud.
Most of the applications running on virtual machines are just converted as part of physical machines to virtual machines, or installed and run in a way just as before. Essentially they are not much different from counterparts running on physical machines. We call these applications as “Application In the Cloud” (AIC).

Cloud environment brings in new opportunities and challenges for application development. Modern applications can, and should, be designed or re-factored to fully leverage cloud infrastructure. When that happens, we call these applications “Applications For the Cloud” (AFCs), versus AICs as described above.
It's a great summary: Applications In the Cloud compared to Applications For the Cloud. AFCs are built to really take on the dynamic characteristics that Cloud can bring, such as rapid provisioning, statelessness and JeOS (Just enough Operating System).

The second element was a comment by Lori MacVittie inside a post entitled "The Three Reasons Why Hybrid Clouds will Dominate".
Amidst all the disconnect at CloudConnect regarding standards and where “cloud” is going was an undercurrent of adoption of what most have come to refer to as a “hybrid cloud computing” model. This model essentially “extends” the data center into “the cloud” and takes advantage of less expensive compute resources on-demand. What’s interesting is that the use of this cheaper compute is the granularity of on-demand. The time interval for which resources are utilized is measured more in project timelines than in minutes or even hours. Organizations need additional compute for lab and quality assurance efforts, for certification testing, for production applications for which budget is limited. These are not snap decisions but rather methodically planned steps along the project management lifecycle. It is on-demand in the sense that it’s “when the organization needs it”, and in the sense that it’s certainly faster than the traditional compute resource acquisition process, which can take weeks or even months.
The granularity of demand is a real insight, I believe, but only when you combine it with the concept of AIC versus AFC.

As I see it there are two major types of workloads that an organisation is going to want to run on an external Cloud. The first is the new breed, built for the new Cloud model. These will scale out, be lightweight and stateless, and fit perfectly with a consumption based model. Run a few small VMs with low RAM and CPU consumption; as demand increases, the application dynamically scales itself out to meet it, whether that demand is user generated or application generated, such as a month end processing run. Paying for cycles in this model is great, as long as you can have some confidence that the resource will be available when you want it.
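To picture that behaviour, here is a toy sketch of the scale-out loop an AFC relies on; the capacity figure and the provision/terminate hooks are hypothetical placeholders for whatever your platform actually provides.

```python
import math

def desired_instances(requests_per_sec, capacity_per_instance=100,
                      minimum=2, maximum=50):
    """Target fleet size for the current demand signal (all figures toy)."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(minimum, min(maximum, needed))

def reconcile(current, target, provision, terminate):
    """Scale out or in; provision/terminate stand in for cloud API calls."""
    for _ in range(max(0, target - current)):
        provision()   # small, stateless instance built from a template
    for _ in range(max(0, current - target)):
        terminate()   # safe to retire because instances hold no state

# A month-end run pushes demand from 150 to 2,400 requests per second:
print(desired_instances(150))    # -> 2
print(desired_instances(2400))   # -> 24
```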

The challenge is that most organisations do not have applications which behave this way. There might be a few, but it's not the majority. Certainly newer applications are being created this way and people such as Steve Jin are promoting it, yet organisations take years to go through the expense and time of re-architecting their systems. Likewise, ISVs take time to re-architect the buy-vs-build applications which are much more attractive to organisations attempting to avoid internal development costs.

The Applications For the Cloud are the longer term game, and it's in these that the granularity of consumption is key, by the hour.

Yet the second type of workload, the traditional application, that is, Applications In the Cloud, is the most predominant today. These applications do not scale out well and they are typically very poor at being dynamic. Reconfiguring them for multiple instances is often a complex manual process, involving application and networking reconfiguration. Just look at Exchange; you don't just go and scale it out on the fly for a day. Organisations will look to Cloud in the short term to provide them some savings with traditional applications, and here the granularity of demand is much less. Changes are scheduled, expected, longer.

As Lori MacVittie has suggested, the workloads will be in project-scale terms rather than by the minute. We need resource for the duration of our SAP upgrade, our merger and acquisition integration project, our Christmas period, for a semester of scheduled classes. Further, because these applications do not scale out easily, the resources need to be reserved or guaranteed to deliver consistent performance results.

It's my view that larger, bulk reservations which are more project based for AIC are going to be predominant in 2010/11. Then, as the weight shifts to AFC in the longer term, much greater granularity in costing models will become dominant.
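A quick break-even sketch, with entirely hypothetical prices, shows the point: a traditional workload that runs around the clock for a project term crosses the line where a bulk reservation beats hourly billing.

```python
# Entirely hypothetical prices for one VM-sized resource unit.
hourly_rate = 0.40        # $/hour, pay-as-you-go
monthly_reserved = 150.0  # $/month, project-style bulk reservation

hours_in_month = 24 * 30
break_even_hours = monthly_reserved / hourly_rate
utilisation = break_even_hours / hours_in_month

print(f"Break-even: {break_even_hours:.0f} hours "
      f"({utilisation:.0%} of the month)")
# -> Break-even: 375 hours (52% of the month)
# An always-on traditional application clears this easily, so reserving
# for the project term wins; a bursty AFC workload is cheaper by the hour.
```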

We must remember, not everyone has the luxury of a newly developed application which can burst dynamically into Amazon EC2.

Would love to hear people's thoughts on this, so post in the comments.

Rodos

P.S. It's interesting to see the areas where VMware are expanding their portfolio and developing: lots around the AIC space with AppSpeed, SpringSource, Hyperic, Zimbra and Redis.

HP Storage Tech Day

Wednesday, March 17, 2010

In a few weeks I will be attending the HP Storage Tech Day in Houston, TX. Thanks to Calvin Zito (@HPStorageGuy / http://www.hp.com/storage/blog) for the invite.


This is a follow-on from the one held last year and sits alongside the HP Infrastructure Software and Blade Day which occurred last month. The first event was the impetus for the Gestalt IT Tech Field Days.

The agenda has not been finalised but here is what is currently proposed.
Topics such as:
  • HP Converged Infrastructure, and how HP StorageWorks fits into it
  • Updates on HP’s storage platforms, and
  • How HP is competing with the other major storage vendors
You will hear from HP executives such as Paul Perez – VP and Chief Technologist of HP StorageWorks, Andrew Manners - StorageWorks Marketing VP, and several other key executives. You’ll also see demos of specific HP StorageWorks products as well as have a chance to tour some of the HP StorageWorks facilities.
I have done a bit with HP EVAs in the past and always liked aspects of their technology.

There has been so much going on in the storage world in the last year, but I have not heard a lot from HP on storage. The biggest news was David Donatelli moving over from EMC and the subsequent lawsuit. We have had the rise of VCE and Acadia, and Netapp have produced their Secure Multi-Tenant Cloud architecture. On the blade front we have HP bashing heads with Cisco around UCS; after all, the now infamous Tolly report came out at the Blade Day.

So what's the ongoing strategy for storage at HP? They went and picked up LeftHand, they picked up Donatelli. At the high end they have the XP, which is OEM'd from Hitachi, and we have just seen Oracle drop Hitachi in preference for their new step-child, Sun.

I am sure there will be lots of technical speeds and feeds, which I look forward to, but here are some of the high level questions I hope I can get a feel for as well. If I am thinking about these things then I am sure Enterprise customers are thinking about them too.
  • Where does the product line sit? The products from LeftHand are great; I think they may have been the first cab off the rank with a VSA. So what's its future and what do we compare it with? Is it the HP competitor to Dell EqualLogic? EVA and XP, where are things at?
  • What about automated storage tiering at the sub-LUN level? We know EMC are talking up FAST2, 3PAR are releasing it and Compellent have had it for a while. What are HP's thoughts on these areas?
  • Stack integration looks to be the flavour of the month; EMC (with Cisco and VMware) as well as Netapp are pushing it hard. So what is HP's take on integrating their own stack of blades, ProCurve/3Com and storage?
  • Cloud. If there was another flavour of the month it's Cloud. What are HP's thoughts around storage for Cloud, whether internal or at a service provider?
It all starts on Monday the 29th of March. I will be using Twitter and some blog posts to share my thoughts. I am sure many more questions will come to the surface before then, but if there is anything in particular you would like to grill HP about regarding their storage, post in the comments or DM me on Twitter (@rodos).

Should be some real geek fun.

Rodos

P.S. For the long disclaimer bit: HP will be paying my travel and accommodation expenses for this event. I can and will write what I feel like, good or bad for HP. In my day job I work for a company that is a partner of just about every vendor. We have an awesome HP BPSA on staff (Rodos waves at Jakes), plus we are an HDS and Netapp partner. We resell UCS and I am a bit of a UCS fanboy. It is what it is; this is my personal blog. Wearing two hats keeps my head warm in winter.
