
October 2008

Paul Maritz "totally gets" it

Friday, October 31, 2008

Tim O'Reilly, the man who coined the phrase Web 2.0, knows something about clouds. But according to Tim, so does Paul Maritz. In a blog post a few days ago Tim said of Paul:

Paul Maritz (CEO of VMware, who, by the way totally gets what I'm talking about here)
It's good to know that the man running the ship is held in high esteem by his peers when it comes to cloud!

Microsoft may cut up the Paul Maritz "ruby"

Thursday, October 30, 2008

How many times have you heard Paul Maritz, CEO of VMware, say "Ruby on Rails"? I know it has been a lot lately. A quick Google search shows nearly 600 references.

Why does Maritz mention Ruby so often? Because he believes that applications need to be decoupled from the current traditional operating systems. Take, for example, this quote from a recent interview:

There's a tremendous amount of ferment in how applications are being developed today. They're largely being developed around new frameworks such as Python, Java and Ruby on Rails.

We think it's risky to tie all your applications of the future to Windows or Linux or any other flavor.

Now move to the new vCloud initiative. A key component of the VMware play here is that, by performing compute virtualisation, VDC-OS can take the workloads of today and tomorrow. If ISVs develop on these platforms, their applications can be packaged up into Just enough Operating System (JeOS) machines with something like rPath and deployed onto your internal or external clouds.

Then this week Microsoft announced their cloud initiative, Azure. One of the key elements here is that it will not provide compute virtualisation like vCloud or EC2, but rather requires you to write your applications to their framework, based on .NET.

So here is the reason for this post, in case you were wondering how all this ties together. A Software Development Kit has just been released as a Tech Preview that allows Ruby applications to run on the Microsoft Azure cloud.

The purpose of this project is to provide an interoperable open source Ruby Software Development Kit (SDK) - set of libraries, tools, prescriptive patterns & guidance & real world sample applications that will enhance productivity for developers. Developers will be able to leverage the .NET Services to extend their Java applications by using the Microsoft Azure services platform to build, deploy and manage reliable, internet-scale applications.

It's not game changing, but it makes you think. The key thing to remember here is that Azure is a closed shop for Microsoft's data center only; you can't run it yourself. I think VMware have really captured the right model here: providing the VDC-OS to run your own cloud, and then providing the API between internal and external clouds for federation. Of course I am focusing on the enterprise market here. ISVs building SaaS products may think it's great to run everything on Google App Engine or Microsoft Azure, but wouldn't you like the option to run it on your own cloud as well, just in case things change, or you want to run internal test and dev on the same environment? Microsoft may have removed the lock-in from an OS perspective, but with Azure they have just swapped it for data center lock-in.

Interesting days for clouds and software development. Expect Paul Maritz to keep his "Ruby on Rails" mantra for a while yet.

O'Reilly interviews Bill, and rPath rates a look

Tuesday, October 28, 2008

Andy Oram is an editor over at O'Reilly. A few weeks ago, when vCloud was announced, he spoke to Bill Shelton, Director of Cloud Computing at VMware.

Andy's details of the interview and his analysis can be found in the posting vCloud: VMware adapts to cloud computing.

Much of the information will be familiar from the recent VMTN podcast on cloud last week.

VMware's announcements in regard to cloud computing a couple weeks ago represent an important industry shift and deserve attention without trivializing.
Oram outlines the API under development (moving from SOAP-based to REST-based), the new vServices such as chargeback and SLAs (he does not refer to them as Beehive), vApp and OVF.

One interesting element is the following:
Partnering is critical to vCloud deployment. On the vendor side, VMware is working with a large collection of partners--the vast majority of whom have shown interest in vCloud--to offer services over it. On the format side, by supporting a standard XML format, VMware can interoperate with other management services such as rPath.

There have been a few press-type releases about rPath over the last month, but I have not seen much about it in the VMware community.

rPath have tools to package applications for running on various cloud providers. They do this by combining your application with just enough operating system (JeOS) for it to run on a hypervisor as a virtual appliance. From the same build they can produce an image to run on multiple hypervisors, such as ESX, or on EC2; you can run them internally or out on a cloud provider. They don't support Google AppEngine as it's not based on a hypervisor, as discussed by rPath's founder (see the April 17 entry at Bill's blog). Given this, supporting Microsoft's announced Azure cloud is going to be difficult for rPath as well. Still, if ISVs go down the rPath route they have flexible appliances that can in effect be rolled into whatever flavour of cloud is right for their business model. This is good for VMware, as they want to be able to run a wide variety of use cases for cloud computing.
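
To make the one-build, many-targets idea a bit more concrete, here is a minimal sketch of how a single JeOS appliance definition might be rendered into images for different hypervisors and clouds. This is not rPath's actual tooling; the appliance definition, the targets and the build_image function are all hypothetical, purely for illustration.

    # Hypothetical sketch: one appliance definition, many image targets.
    # None of these names come from rPath's real tooling.

    APPLIANCE = {
        "name": "crm-app",
        "packages": ["crm-server", "postgresql"],  # the app plus just enough OS
        "disk_gb": 8,
    }

    TARGETS = ["vmware-esx", "amazon-ec2", "xen"]  # hypervisor/cloud image formats

    def build_image(appliance, target):
        """Pretend build step: combine the app and JeOS into a target-specific image."""
        image_name = f"{appliance['name']}-{target}.img"
        print(f"building {image_name} with packages {appliance['packages']}")
        return image_name

    if __name__ == "__main__":
        # The same definition yields an appliance per target, ready for an
        # internal VDC-OS cloud or an external provider.
        images = [build_image(APPLIANCE, t) for t in TARGETS]
        print(images)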

All interesting stuff. However, let's make sure that our data centers are running VDC-OS well and reaping its benefits. Then, once the vCloud API gets implemented, we can take advantage of internal and external clouds for both current and future workloads.

What's in a virtual data center operating system?

Monday, October 27, 2008

Episode 20 of the VMTN podcast was with Leena Joshi, who is, I think, the marketing manager for VDC-OS.

Here is my summary of the key points.

  • VMware have come up with the term VDC-OS to describe what it is that they provide in terms of software. How do you describe what ESX, in its various flavours, actually is to a data center? Combine into that all of the features and functions, let's call them vServices, such as HA, DRS, SRM, etc., which make it an even larger entity. Well, the answer is that all of these things can be described as a Virtual Data Center - Operating System.
  • VDC-OS is the category of software which VMware provides: it aggregates hardware and provides services that are unique, such as availability, security and scalability. This is not a name change for ESX, rather a categorisation of what it is. VMware is more than ESX, and the solution set is actually more than a hypervisor.
  • Why call it an OS? This is because VMware provide services in a distributed fashion that would otherwise be provided by the server OS.
  • Just as every machine needs an operating system, so every data center needs a data center operating system.
  • With VDC-OS you can take many physical machines and turn them into one giant computer, adding machines in and pulling them out as required. This giant machine can provide many vServices to your workloads. The key is that it provides a unified abstraction layer that does not require modification of any of your existing x86 applications for them to run.
  • This VDC-OS has an expanding API so that you can use it along with these vServices. Vendors can come and expand or enhance the base functionality using these APIs. An example of this is the new storage API to allow improved snapshotting, so that the storage array can deal at the virtual machine level rather than the LUN level (see the sketch after this list). Another example is being able to plug in a specialist switch from Cisco, the Nexus 1000V, to provide greater functionality and integration with the physical data center switching. This is not a closed platform; you can choose your vendors for the different components.
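
As a thought experiment on that storage API point, here is a minimal sketch of the difference between asking an array for a LUN-level snapshot and asking it for a snapshot scoped to a single virtual machine's disks. The class and method names are entirely hypothetical; this is not VMware's actual storage API, just an illustration of why VM-level granularity matters.

    # Hypothetical sketch only; these classes and calls are not a real VMware API.

    class ArraySnapshotService:
        """Pretend interface a storage vendor might expose through a VDC-OS style API."""

        def snapshot_lun(self, lun_id):
            # Traditional model: the whole LUN, and every VM stored on it, is snapshotted.
            print(f"snapshot of entire LUN {lun_id}")

        def snapshot_vm(self, vm_name, vmdk_paths):
            # VM-aware model: only the disks belonging to one virtual machine.
            for path in vmdk_paths:
                print(f"snapshot of {path} for VM {vm_name}")

    svc = ArraySnapshotService()
    svc.snapshot_lun("lun-42")                                      # coarse, LUN level
    svc.snapshot_vm("crm-db", ["[datastore1] crm-db/crm-db.vmdk"])  # fine, VM level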

I think VMware have done a good job here of creating a collective descriptor for what it is that they do for the data center. It certainly puts the vCloud initiative into better perspective; more on that in a future blog post.

The question that this raises for me, though, as much as I support the ever-expanding reach of VMware and virtualisation into the data center, is this: if VMware want to be the OS for the data center, what happens to the workloads in the data center that are not virtualised? I know this is stating the obvious, but there is currently a lot more in the data center than the VMware environment. Should a VDC-OS manage the storage and the networking as well, rather than just abstracting them once they are set up and plugged in?

Can VMware be the OS for the data center, or is it really a new layer between the machine workloads and the resources they use: RAM, CPU, storage and network? It's a way of building a large grid computer, your own cloud, whilst being able to use your existing systems without change, and wrapping new features and functions around all of them (mobility, security, scale, availability). I wonder if they considered VDC-BIOS as well as OS. In the early days of x86 it was the BIOS that handled the I/O (things have moved on from those days, but let's not let reality get in the way of an analogy). Are we looking at the BIOS of a large grid computer rather than an OS? Same thing really; the BIOS is just the boot loader these days. Maybe the hypervisor is the BIOS for the VDC-OS, which manages the whole entity or collection.

At the end of the day I think VDC-OS is a great name, and one that will make it easier to understand what VMware are doing in terms of a category. The remaining question, though, is what happens to the remaining physical servers and infrastructure. One starts to think about Cisco VFrame and other service orchestration technologies. Are vServices going to need to develop to cover many more areas within the data center?

Today and in the coming years VMware is certainly going to be "a", or "one of the", data center operating systems in your environment. Yet if we look into the crystal ball, many workload types which cannot be virtualised now should be virtualisable in the future. Once everything is consumed into a virtual machine, we may well then have "the" Virtual Data Center - Operating System.

Transcript of "All about the vCloud" podcast from VMTN

Sunday, October 26, 2008

This is the transcript of the latest VMTN Communities Roundtable podcast on vCloud. There was so much important and useful information covered for people thinking about vCloud that I figured it was worth doing a transcript, so more people could have access to the information; not everyone is going to listen to an hour of audio.

Also, having the discussion as text allows greater commentary and analysis, as people can quote and refer to specific elements.

After the large effort of transcribing, I will hopefully in the next day or so post my own analysis and summary of what I see as the interesting components.

Episode Number 22 with a topic of vCloud.

All about vCloud - Communities Roundtable podcast #22

http://blogs.vmware.com/vmtn/2008/10/all-about-vclou.html

John Troyer is the host.

The guest is Bill Shelton, who runs the vCloud initiative and the virtual appliance marketplace.

There was a previous session on VDC-OS.

John reflected on the confusion of the word cloud with SaaS and other things.

John: So what is vCloud? Is it an API, a feature, a product or an ecosystem of service providers?

Bill: Agreed, the hype is deafening.

At VMworld there was the announcement about the vCloud initiative. There are 4 components.

First, there is a set of new technologies VMware are bringing to market which are above and beyond the current infrastructure stack. Some of these are the same functionality but made to scale in a better way. Some of them are new components. These fill the gap between the compute clouds out there and the virtualisation stack that VMware delivers. Some examples are enabling technologies which are below the water line, such as chargeback components or console proxies so you can get a console out on the public internet more reliably and securely. It's also above the water line, where VMware are investing in some new services that would run on top of this infrastructure. Examples such as data center as a service, so you can use easier web-based interfaces to provision and manage hardware, and even some hooks in the existing software stack that connect directly into a vCloud service.

Second, where is this technology going to be instantiated? VMware's approach is not to build their own data centers. Instead they are going to work with the service provider community. We have seen some large names, some service providers, some telcos, some Web 2.0 players such as Engine Yard, all signing up at a directional-intent level to participate in this ecosystem as service providers that would stand up vCloud services.

Third, no platform is better than the content that runs on it. VMware are interested in running a very broad base of software on this cloud platform, instantiated across this ecosystem. vApp is the next generation of application container, a new and improved way to describe applications. It's OVF-based but has some additional hooks and policies that VMware think are important to make it so that applications can be moved, potentially between parties and into or out of a cloud service.

The fourth part is the customer base. Customers participate by using software that has hooks into these vCloud services, as well as writing their own integration level solutions on the API that we are putting out as part of the technology stack that the service providers will have out there.

These are the four components. It's a tough story as there are a lot of moving parts, partner ecosystems and technologies. It's hard to compress all of this into a press release. So VMware grabbed one use case to highlight the vision, and that was the case of flexible capacity: you can take an external service provider that has spare capacity, take an alert on an on-premise workload, and provision that workload onto an external piece of capacity to balance the load. This is sometimes called cloud bursting or flex capacity. This is what was highlighted on stage and in the press release, just to make it a bit more tangible, and it's one of the use cases that will come out of this vision.

John: Raised a question based on some of the discussion on Rod's blog relating to Nick Carr and Gartner. It's not really pieces and parts, but more an extension of the enterprise things we are already doing, extending them and improving their reliability and mobility. Is this how people are going to start, or should we be thinking about it in a different way?

Bill: This is a good point. Sifting between the hype and the reality, there are some substantial blockers to cloud computing. What VMware are doing with their offering is starting to think about some of those blockers. One of them: if we look at some of the clouds aimed at the Web 2.0 community, with great elastic characteristics and a lightweight entry, a disposable infrastructure model, billed by a credit card, there have been some service-level interruptions that have paused people from diving in a little deeper than they would otherwise. And then even when they are up and running in continuous form there are some intermittent service interruptions that don't get all the press, but this needs to be addressed before we are going to see more critical workloads move from the enterprise into these compute clouds. We have made investments in that, and that's one of the reasons why we do very well in business-oriented virtualisation computing: because we have a very reliable platform. We would like to bring this into the compute cloud. What enables this are some of the investments in app monitoring and being able to report back on application performance. There are various ways to measure this; at the very least give someone really good visibility into the performance characteristics and uptime of their application whilst it's in a compute cloud in a predictable way, so they have the trust that they know what's going on. Another blocker VMware are working on is the app compatibility story. If you look at App Engine from Google, it's a very interesting model, obviously at a price point that's hard to compete with, but they have limited themselves to people building new apps for that architecture. Still a problem, but less so, with EC2. We are looking to build much more of a bridge where you can take existing applications and put them into compute clouds. So when we look at vCloud and the technologies and the API which will be coming out in the first half of next year, we should be assuming we are working with the same storage semantics that people are used to working with, file systems and things of this nature.

John: Mentioning the APIs, let's take this down to a more concrete level. What are the vCloud APIs, what can we see now and what will we be seeing?

Bill: One of the specific promises made at the announcement is that in the first half of next year we are publishing the vCloud API. Let's talk about what it is not: since most people will be familiar with the VIM APIs, this is not simply proxying of VIM APIs. This is a very different approach. First of all it's a RESTful [http://en.wikipedia.org/wiki/Representational_State_Transfer] implementation. For those used to some of the other compute clouds out there that make programmatic control of the infrastructure available, REST is, we think, the best implementation to enable a really easy, low bar to entry for manipulating objects and intuitively dealing with things. It deals with URIs, and there is endless support across all different languages to manage URIs and make connections; we think it's a good way to engage with the backend services. So that's the choice of the actual technology. From a design perspective our intent is to be far more abstracted and far simpler to deal with. Compared to the VIM APIs, if you wanted to do any type of rich operation, let's say provision a new machine, that could have four or five discrete steps to it, some of them synchronous, some asynchronous; you would have to cobble together the workflow of things, map out the dependencies and trap the error conditions in case one of them didn't come through. What we have done is boil these things up to much simpler coarse-grained operations, so you can provision a machine through one call, sit on your response code and deal with an error code due to something such as a lack of billing information to make that provision, or whatever else might be the case. We are definitely trying to make it a much simpler way to very immediately pull together services that would sit on top of that infrastructure. The scope of the API at this point is making sure we cover all your basic infrastructure operations: provisioning of machines, all your basic state transitions, capturing inventory of what you have. We have added a couple of new containers that we think are helpful for people managing infrastructure, so that people can take a larger pool of infrastructure and chop it up; so if you are a large company and you wanted to make a volume purchase of capacity, you could then chunk it up, hand it off to business units for individual projects and then manage it in those individual containers. We will be pushing out the documentation for everybody to start engaging with and providing feedback at the end of the first quarter next year.
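
As an illustration of what such a single, coarse-grained provisioning call might look like, here is a sketch in Python using the requests library. The endpoint, payload fields, headers and response handling are all invented for illustration; the real vCloud API had not been published at the time of this podcast.

    # Hypothetical sketch of a coarse-grained RESTful provisioning call.
    # The URL, payload and status handling are made up; this is not the real vCloud API.
    import requests

    payload = {
        "name": "web-01",
        "template": "rhel5-base",
        "cpu": 2,
        "memoryMB": 4096,
    }

    resp = requests.post(
        "https://cloud.example.com/api/org/acme/vms",   # hypothetical endpoint
        json=payload,
        headers={"Authorization": "Bearer <token>"},    # placeholder credential
    )

    if resp.status_code == 202:
        # One call kicks off the whole provisioning workflow; poll the returned task URI.
        print("provisioning started:", resp.headers.get("Location"))
    else:
        # A single error response instead of trapping failures across many steps,
        # e.g. missing billing information on the account.
        print("provisioning failed:", resp.status_code, resp.text)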

John: And that’s something that I could be interacting with in my own data center or at a service provider.

Bill: Correct.

John: On chat, Rod asks: is there anything in the APIs on chargeback?

Bill: There is. There is an acknowledgement of chargeback, and there is a model for one of the underlying pieces of plumbing with an ability to configure the service provider, which could be an internal IT organisation standing up a service for a business unit, or an external internet-ready service provider. There is a new component which is part of the VDC-OS which is for chargeback, and you have a connection via a construct we call an account that then maps to pre-allocated resources. We don't get into the details of describing how someone is going to charge, but we definitely give you the plumbing and relational mappings so that you can turn things over to people to provision their own things, and then let someone turn around and capture, either in a pre-allocated lease type arrangement or in an after-the-fact usage-based model, and pin costs to specific accounts and even, within that account, sub-domains. So you can have a larger account which then subdivides into different projects and understands how the resources, and the uses of those resources, map back to the different projects.
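
To picture the account and sub-account plumbing Bill describes, here is a minimal sketch of rolling usage records up to project sub-accounts under a parent account. The data model and the rate are invented for illustration; this is not the actual chargeback component.

    # Hypothetical chargeback plumbing: usage records roll up to project
    # sub-accounts and then to a parent account. Not VMware's actual data model.
    from collections import defaultdict

    usage_records = [
        {"account": "acme", "project": "hr-portal", "vm": "web-01", "cpu_hours": 120},
        {"account": "acme", "project": "hr-portal", "vm": "db-01", "cpu_hours": 300},
        {"account": "acme", "project": "test-dev", "vm": "build-01", "cpu_hours": 80},
    ]

    RATE_PER_CPU_HOUR = 0.05  # made-up usage-based rate

    def roll_up(records):
        """Pin costs to each project, ready to roll the projects up to the account."""
        projects = defaultdict(float)
        for rec in records:
            projects[(rec["account"], rec["project"])] += rec["cpu_hours"] * RATE_PER_CPU_HOUR
        return dict(projects)

    for (account, project), cost in roll_up(usage_records).items():
        print(f"{account}/{project}: ${cost:.2f}")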

John: Will we be able to apply this to our current VI3 infrastructure or will this be for a later generation?

Bill: A pragmatic answer: there is nothing about the API that would, to my knowledge, specifically express something that could not be expressed in your current VI. Now, as we all know, there is an API and there is the implemented API, and for what we are planning to hand folks as an implementation of that API, to accelerate time to value and the use of that API, there are going to be some dependencies on the latest components that are part of the next version of our platform. So there is some option value there. It probably depends on what calls we are making to the API. It's definitely our intent to make sure that someone can pick up the API and very quickly implement it and make use of it. In that use case our latest version would probably be very helpful, or may be required, depending on the calls being made. But if there is some super-hardened backward-compatibility reason, with maybe a little more elbow grease that should be possible, but don't quote me on that until we launch the final version of the API, at which point we will do the dependency graphing to make it clear for folks what would possibly not run in the current version.

John: We have talked about provisioning. That immediately brings to mind use cases like DR and SRM and how storage is moving around. We have Storage VMotion internally; how does this vCloud API and provisioning relate to these kinds of use cases: DRS, SRM, moving storage around?

Bill: You know, we definitely see them as complementary, and that's one of the interesting use cases. If we look at the disaster recovery ecosystem out there, both from the service provider side of the camp as well as from the end customer and the software developer side, we definitely see an opportunity to move to more shared-infrastructure types of back ends for DR. Right now we are seeing a lot of that in backups. If we look at the adoption of, say, S3 as a backup target for the ecosystem right now, that has really validated that backup is a relatively good workload to terminate at a really nice shared-infrastructure storage cloud. If we apply that to the compute cloud and disaster recovery, we are bullish on that being a place where we can probably drive DR to some even more aggressive price points and maybe enable some lower-end markets that currently aren't able to afford the fixed target and source hardware and dedicated links and all this business. There might be more of an opportunity to do a poor man's DR where you are terminating VMs into a shared storage pool. It's less of a scenario where you have the big red button that fails everything over, as SRM enables you to do with great precision; it's a little bit more DR with backup kinds of workflows, where you have to recover the VM and provision it. So we see them as intertwined, and we are working with the SRM team right now on how the roadmaps of these initiatives complement each other and at what point. We are pretty excited about that; we are just not ready to lay out how and when those things come together and for what audiences.

Matthew: How do applications work on the cloud? We need to find a way to cut the underlying operating systems out of there so that it runs on one product instead of two products, that’s my base question.

Bill: It's a good point. If we look at the space out there right now there is an enormous amount of affinity between the application stack and application architecture and the cloud platforms themselves, and in many cases they are kind of vertically integrated. So our first take is that we think we can enable a lot more by not going down that path and sticking to our roots, which if we look at our heritage it was about ... one approach we are not going down, to be clear, is that we create this new cloud computing platform and as soon as the ISV community adopts it and goes and reworks all their apps then everybody is going to capture that value. I think we have all seen that can be a pretty levered play, but the challenge of actually getting it through can be difficult. So we are definitely going to start with a low barrier to entry: if you have got a VM you should be ready to start to participate and have a point of entry into these compute clouds. And that's a VM that uses very standard storage semantics; it's not just that you have packaged up a runtime in a VM but then the application architecture assumes totally different, very proprietary external components. Instead, relatively standard components should be able to run, and our intention is that those will be directly transferable. But at the same time you are right, there is a lot of ... if we stop there that's a great app compat story, but it does not capture a lot of the higher-end things that can be done in a compute cloud around things like optimising the OS, and for that reason you will see a lot more investment in our virtual appliance initiative and in this vApp container, so that you can tune that application more and more for the vCloud. So here is an example; we will look at this and apply a more object-oriented mindset to it. It's our belief that the more the application is a self-contained object, the more you are going to be able to make use of that cloud infrastructure. An example would be one that we are working on right now that we call late-bound infrastructure services. Let's do a compare and contrast: say you had some real basic backup policies on a workload in an enterprise. A lot of those would be an agent in that guest, and a lot of the policies would be sitting off in a backup server, and when you picked up that workload and moved it into a compute cloud nothing would resolve: the agent would not know who to talk to, the policies would be lost and all that. So the idea here, when we talk about policies being part of the solution, is that in the actual application, in the metadata which is the XML wrapper of a vApp, we have given a vocabulary to express the data protection policies, so that when the application gets picked up and moved into the vCloud that wrapper is interpreted, and then depending on the policy that is set, when it's instantiated, without any pre-authoring knowledge, the backup service is bound to that application to then honour that policy at some level. We're not going to get too far ahead of ourselves here, getting lost in the weeds of policies around workloads; we want to just deal with the real basic policies (what are the data protection policies, does it have a requirement for high availability), basic building blocks like that, so we can then bind the resources that already exist in our platform to that workload.
When it actually arrives there is a mechanism where the service provider can even calibrate their offering and have different price points on the services that then get bound when that workload is passed in.
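
To make the late-bound policy idea a little more concrete, here is a minimal sketch of a vApp-style XML wrapper carrying a data protection policy, built with Python. The element names and attributes are invented for illustration; the real vApp/OVF policy vocabulary Bill describes had not been published at the time.

    # Hypothetical sketch of a vApp-style metadata wrapper carrying policies.
    # Element and attribute names are invented; they are not the published OVF
    # or vApp vocabulary.
    import xml.etree.ElementTree as ET

    vapp = ET.Element("VApp", name="hr-app")
    policies = ET.SubElement(vapp, "PolicySection")
    ET.SubElement(policies, "DataProtection", schedule="daily", retentionDays="30")
    ET.SubElement(policies, "Availability", level="high")

    # A provider-side service could read the wrapper on arrival and bind its own
    # backup and HA services to the workload, with no pre-authored agent config.
    for policy in policies:
        print(policy.tag, policy.attrib)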

John: Is there a place where we can get more information on vApp or OVF? The standard has now been released, correct?

Bill: OVF as a standard is out there, and you will find developer tools. You will find some pointers from the virtual appliance area of the VMware website, and also at the DMTF, as it's an open standard. Now, some of the gap between the vApp and just a regular OVF container is some of the policies we are talking about, and at this point I would say you are not going to find the hardened documentation on that. That's something you should be looking at to come to market in the first half of next year, in conjunction with the APIs, because obviously, as we are describing them, these things are interplayed. One thing that should be noted from the vApp side of the camp: there are four ways you could arrive at a vApp, with the emphasis being choose the one that works for you; hopefully all of them are intended to be very lightweight. We have a new authoring environment called Studio which currently produces virtual appliances, and in the next wave of our platform coming to market it will be upgraded to also author vApps. Then, straight within the VI Client, in the next release of our platform, you have the ability to do a real quick drag-and-drop assembly of a vApp. It should be noted that one of the key attributes of a vApp here, which is again one of these basic policies, is just the way to assemble multiple VMs under one container. So instead of having to operate at the VM level you can say I have the HR app, the HR app is made up of a three-tier LAMP stack, and you can put all of those three VMs together and then operate them as one unit, start and stop it and manage it at that level versus the independent VM level. So, drag and drop in the VI Client. In the end we are talking about an XML metadata descriptor here, so pop open vi or emacs, your text editor, and operate on that. And then finally we are working on some plugins to the popular developer environments so that you have the ability to extend your developer environment, whether it be Eclipse or some of the Visual Studio tools from Microsoft, and I would not be surprised if we saw some third-party authoring solutions that currently are focused on the virtual appliance space roll over and embrace and expand their offerings to support vApps too.

Newt: Going down a bit lower, in getting ready for the vCloud, what should we do from a network perspective? If we are setting up systems, like an N-tiered application, that we wanted to prepare for the cloud next year, what are the things we need to consider now, what do we need to prepare for?

Bill: Are you thinking about it as "I am developing applications, what could I do to make these applications so as to possibly reduce my switching cost for when this is available", or are you thinking more from a service provider perspective of putting infrastructure in place, or both?

Newt: A bit of both.

Bill: From an infrastructure perspective we do not have hard guidance on reference architectures at this point, you know, "here is your reference architecture to set up a vCloud instance". We are just not in a position yet to do scale testing of a reference implementation and then start to feed that back. But I will say that with a lot of the large hardware vendors, both on the networking side and on the server side, that work has just kicked in and is progressing pretty well. Ideally, you could imagine that the list of vendors here are very excited about making sure that, as we are unrolling the software stack in the first half of next year, they're ready to step in with reference guidance on hardware implementations. So I can't point you to anything right now from us that's going to say, hey, say you are a big Cisco fan, here is your reference implementation using Cisco. But I can tell you that we plan, in the first half of next year, in concert with the specification for the APIs becoming available, to have more of that kind of guidance available.

Newt: Sounds like there is a lot more work on VMware's side to put together the details that we may need, but it's definitely getting there and it sounds like you guys are on the right path.

John: What do service providers, whether that's 3rd-party providers or me as the data center guy, have to do in terms of hardware? What about common chipset issues in the cloud? Most providers are at this point just buying whatever is cheapest, or who knows what they may have: Intel and AMD chips in different families, and they may have non-x86. We paper over a lot of the hardware differences with Enhanced VMotion Compatibility and we paper over some others, but how far away are we from getting away from having to have the exact same hardware configuration everywhere?

Bill: That's a good point. There are two dimensions from which to view this problem. The first is what the required commonality is between environments; the mobility of workloads is the big theme that we emphasise in our cloud space, so it depends on whether the entire lifetime of the VM workload that you instantiate is within that cloud, or whether the cloud represents only one phase of its lifetime. But let's assume that it's the more complex model, where the cloud needs to be somewhat compatible with an external environment. Right now the current implementation is not truly what you would call a wide-area VMotion, which would bring with it a bunch of hardware compatibility requirements. Our model of taking a workload and moving it into the cloud is that we are looking at some replication technology and some push-ahead-of-the-migration technology approaches, such that we are dealing at the raw VM level, and you don't have a situation where you'd actually need compatibility between the two chipsets; you should be pretty safe in that regard. And that goes for both the storage and the server hardware. Now if we look within a compute cloud you have a different and interesting problem, which is that compute clouds become very large, and by definition you are trying to abstract and roll up massive chunks of infrastructure under common interfaces and get rid of a lot of the plumbing details. And obviously plumbing details include pockets of new servers vs old servers and things of this nature. We have the ability, what we call the bin packing of workloads, to segment gear such that the policies get applied to where the workload actually lands. Of course the model here for placing workloads is not as it might be in the traditional data center, where the admin is really choosing, to a very specific level, where they are placing that workload. Instead this is all taking place through workflows we are implementing, which you can then customise, but we do give you a way to set it up so you can have some heterogeneous gear back there and make sure that common workloads that need to co-exist on the same gear make it to those buckets.
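
As an illustration of the bin-packing idea, here is a minimal first-fit placement sketch. The policy tags, hosts and capacity numbers are all made up; this is not how VMware's placement actually works, just the general technique being described.

    # Hypothetical first-fit bin packing of workloads onto heterogeneous hosts.
    # Hosts and workloads carry a policy tag so that workloads needing to
    # co-exist land on compatible gear.

    hosts = [
        {"name": "old-rack-01", "free_ghz": 8.0, "tag": "general"},
        {"name": "new-rack-01", "free_ghz": 24.0, "tag": "high-perf"},
    ]

    workloads = [
        {"name": "web-01", "ghz": 2.0, "tag": "general"},
        {"name": "db-01", "ghz": 6.0, "tag": "high-perf"},
    ]

    def place(workload, hosts):
        """First fit: pick the first host with a matching tag and enough headroom."""
        for host in hosts:
            if host["tag"] == workload["tag"] and host["free_ghz"] >= workload["ghz"]:
                host["free_ghz"] -= workload["ghz"]
                return host["name"]
        return None  # no compatible capacity left

    for wl in workloads:
        print(wl["name"], "->", place(wl, hosts))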

John: That works for me. People are familiar with the situation; people have homogeneous pools of machines as well, so I think it's the same kind of situation. At least we are familiar with the way it works today. Rod has a question about management, internal vs external, cloud vs VC.

Rod: One of the issues that everyone is experiencing in large installations today is getting a unified view of all of their machines across their Virtual Centers. We are seeing that both VMware and third parties are bringing out technologies to show a unified view, and in version 4 we are going to have federated Virtual Centers. So one thing that customers are going to be really interested in is: once I have some of my workload in the cloud, how do I manage and view that? Bill, do you envision that in my Virtual Center I will be able to see my virtual machines, identified as some in the cloud and some out of the cloud? Will I be able, through Virtual Center and the API you are producing, to restart machines, suspend them, initiate snapshots and have a unified view?

Bill: Let me give you the roadmap for how we are planning on implementing that. This brings us to a couple of different investments on the technology side, so let's place this. We talked about there being under-the-water plumbing and above-the-water-line services, and obviously one of those above-the-water-line services is the way to capture the view of workloads that are running in a vCloud instance. In the short term, safe to say within the next year, we will see that as a plug-in that you can add to VC via the extensibility mechanisms that currently exist in VC. This gives us the ability to bring together, under one pane of glass, one click away, infrastructure at different service providers along with your internal infrastructure, and to give a common vocabulary to look at things like performance and uptime and all that business. That's kind of version one. The next version, the next generation of that, and it would be premature to give timing around it, is that we intend to take the next step and put those resources that are off in the cloud into the native inventory, so that it's not just a web page that is an extension, or a new tab if you will; it is fully integrated into the native inventory of VC.

Rod: That sounds good. Management in large VMware installations is already a challenge and we don't want to add to that, so it's good you are thinking through those things.

Bill: Let me highlight that. This is a super point, and let me take it in another direction that I am sure is on some people's minds, which is VC scalability. Is anything we are doing here scoped to the current scale capabilities of VC? The answer is no. This is part of what we talk about when we talk about the under-the-water plumbing that we have. We know that there is much more scale coming from VC, so I won't dismiss that, but it's not the type of scale that's going to get you to the 50,000 physical box layer. And that's where we have a story in the architecture that can get you up to these super-high-end sizes. It's an architecture that is very common out there in cloud computing today: we implement a message bus, a pub/sub architecture, and we take VC and various other components and put them into highly reproducible cells that you can scale out in a horizontal way. They just sit and wait on the message queue, and as the message queue starts to back up more of these cells get created, and as the message queue starts to empty you collapse the cells. You have really taken VC and some of the other components, that previously were the end-all node, and made them elements in an architecture that can scale out, with a very simple directory that then maps a given VM to a VC. So depending on your point of access you can always very quickly track down which VC, and if you need the richer data that VC holds you can quickly go and talk to that VC directly. There has been a fair amount of thinking about how we fundamentally change some of the limitations that might have existed previously in that architecture.
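
Here is a minimal sketch of the scale-out pattern Bill describes: worker cells that watch a message queue and are spawned or collapsed based on queue depth. The in-memory queue, thresholds and drain rate are stand-ins; a real implementation would sit on an actual message bus.

    # Hypothetical sketch of queue-depth driven cell scaling (the pub/sub pattern
    # described above). The queue, thresholds and "cells" are all stand-ins.
    from collections import deque

    queue = deque(f"msg-{i}" for i in range(250))  # pretend message bus backlog
    cells = 1                                      # worker cells currently running

    SCALE_UP_DEPTH = 100    # spawn another cell above this backlog
    SCALE_DOWN_DEPTH = 10   # collapse a cell below this backlog

    def rebalance(depth, cells):
        """Grow or shrink the pool of cells based on how deep the queue is."""
        if depth > SCALE_UP_DEPTH:
            return cells + 1
        if depth < SCALE_DOWN_DEPTH and cells > 1:
            return cells - 1
        return cells

    # A few control-loop ticks: each cell drains some work, then the pool resizes.
    for _ in range(10):
        for _ in range(min(len(queue), cells * 20)):
            queue.popleft()
        cells = rebalance(len(queue), cells)
        print(f"backlog={len(queue)} cells={cells}")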

John: Wow, that's pretty interesting. You heard it here first. I don't want to get too far into virtual appliances and vApps and how you trust them; that's kind of a whole separate conversation, which we have been having for a couple of years and which is probably worth having at some point on this podcast. But Godfree does ask, especially with 3rd-party virtual appliances and vApps, and maybe you can talk briefly about what we are doing with the virtual appliance marketplace, how do we trust the workload bundles that are coming into our clouds?

Bill: It's a great question. Let me give a couple of different areas of investment, and I can dive into any one of them in more detail. The first one is meaningful, and customers have definitely told us it is meaningful: if you go to our virtual appliance marketplace you have a whole bunch of different things up there. Some are very well thought out, ready for production use in very mission-critical ways, and some of them are kind of projects that people have thrown together; we purposely encourage both, and we love the fact that the market can evolve those as it needs. But what we did do is add a tiering mechanism we called certified virtual appliances, and now it's called VMware Ready virtual appliances. You will find on the external website the criteria for what it is to be VMware Ready. This is not just a lightweight rubber-stamp mechanism. We have a vetting process where the ISV submits the virtual appliance to us and we have a team that actually tears it down and does everything from inspecting it for any residual SSH keys that might have been left around, correct use of disk configuration, correct versioning of VMX files, permissions on critical resources, whether that be VNC passwords and things of this nature, through to (I am not going to say we address everything, but) everything we have ever learned about that could pose an issue to someone; we make sure we pull that back. It's a closed-loop system, so as the ecosystem learns about building virtual appliances and securing them and making them more reliable, we fold it into that program. Customers seem to like that, Monica, and the ISVs find a tonne of value; they don't view it as a burden, they view it as "you are making me a much smarter virtualised vendor". So that's one layer of involvement that people can look to, depending on whether you are buying or making software, that we think is useful, and we will continue to evolve it and we definitely appreciate feedback in that regard. If you get down to some of the more technical, tactical elements of trust, the first key to security is really not to trust anything, so what is the perfect level of trust? For example, some of the things we do: if you hand over a VMX file to us to load in, obviously a VMX file is not a super-trusted element; someone can readdress an element to go to some other point of storage, so we are obviously translating that, scouring that off, making sure that whatever resources someone is pointing to in their VMX layer they have the ability to touch, and we have some isolation and some constraints thereon. We go through about 50 different initiatives where we trap these things, but the principle here is that we don't really trust anything that comes into the cloud. On another level, network isolation. Within the vCloud we talked about these different containers that we are enabling; one of the key benefits of those containers, we talked about them from a chargeback perspective, but another is that they give you a container to create network isolation, such that there is isolation within these virtual data centers so that someone is not off listening on a TCP/IP stack they should not be. There are firewalls that have been integrated in, some of which will be visible to customers and which they can configure in a managed way, but elsewhere we just have firewalls that you don't even know are there, but they create the needed isolation.
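
As a toy illustration of the kind of VMX scrubbing Bill mentions, here is a sketch that flags disk references pointing outside an allowed storage location. The allowed prefix and the check itself are simplified examples for illustration, not VMware's actual vetting process.

    # Hypothetical, simplified VMX scrub: flag disk references that point outside
    # the storage the appliance is allowed to use. Not VMware's real checks.

    ALLOWED_PREFIX = "appliance-disks/"

    vmx_lines = [
        'scsi0:0.fileName = "appliance-disks/app.vmdk"',
        'scsi0:1.fileName = "/vmfs/volumes/other-lun/secret.vmdk"',  # suspicious
    ]

    def scrub(lines):
        """Return a list of findings for disk paths outside the allowed prefix."""
        findings = []
        for line in lines:
            key, _, value = line.partition("=")
            if key.strip().endswith(".fileName"):
                path = value.strip().strip('"')
                if not path.startswith(ALLOWED_PREFIX):
                    findings.append(f"disk points outside allowed storage: {path}")
        return findings

    print(scrub(vmx_lines))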

John: You are not trusting the service provider. We are not just trusting the base functionality and trusting the service providers to do that kind of containment themselves, we are providing some capabilities?

Bill: We are, and that's a good way to characterize it. But at the same time there are going to be plenty of service providers out there; I will highlight Savvis. This is a big theme of their business, and this is one of the reasons we chose a partner ecosystem approach vs building our own. One of the critiques right now is that if you build your own you have got a one-size-fits-all, and maybe how aggressive they are at implementing and locking down security would meet your needs, but maybe it won't. Our bet is that if we put the enabling tools and the philosophical design hooks in place, these service providers will go off and differentiate themselves accordingly.

John: We only have a few minutes left. Service providers and partners: there is a listing on your website. For the partner ecosystem, is there anything we can do with them now, or do we have to wait for this to roll out, or are there current offerings?

Bill: There is value out there and there is more coming. What you will see in the press announcement is that we basically view the partner ecosystem as evolving in different stages, and it will be up to the service providers to decide at what level they want to engage. The first stage is the VMware Ready service providers out there; these are folks that use VMware today. There is no standard API and things of that nature to engage with their services, but you will find the world's leading service providers up there, and they are using our software. You can imagine it being more point-to-point in the arrangement and in the integration solution, but obviously given the fact that two parties are using VMware you can do some pretty creative things, and that's the first layer of engagement. The second one is what we talk about as being optimized: that's where they are VMware Ready Optimized. These are folks that have now taken on the API, and now the building blocks that enable the commonality between the enterprise customer, the small and medium business and the service provider can be expressed in a more consistent way. That's where we get the API in there, and that's where all of a sudden we will be looking to work with not just those two parties but with ISVs to build services on top. And then the final one is where you have what we call VMware Optimized cloud services, and that goes all the way to the point where you will have the ability to blend their services right into our on-premise software.

John: People may not realize that there are people offering VMware as a hosting service even today.

Bill: Absolutely, and that's what we want to highlight. There is value to be had and services to be consumed today. This is a roadmap that takes us to a much better level of integration, to spawn what we think will be some pretty cool ecosystem plays, where software vendors and cloud ISVs write to these platforms and a lot more tooling gets out there. Right now, if you did your own cloud infrastructure, a lot of the tooling would need to come from yourself as the service provider. As we evolve this there should be a 3rd-party ecosystem that comes in to help with a lot of that tooling and really accelerate the time to value on any integration and service setup.

John: Let's talk about some use cases. We talked about different kinds of failover for DR. Rod had an idea that I actually blogged last night as I was promoting this podcast, which was: if I can fail over to the cloud I can also fail over from the cloud. I can use their bandwidth and their service levels, but, like when Gmail goes down, it can fail over to my internally hosted data center, which I may not have chosen to be my primary site but my backup site. That kind of turns things on their head and I hadn't thought about it. Are there other use cases? I want to talk about VDI for a second. It seems like the VDI desktop is already living in the cloud. Tom's a VDI guy and he had some questions or commentary on how VDI is our start of the cloud right now.

Tom: For some of the clients I am working with we are designing what we consider to be a fledgling cloud, in that it's a desktop service that can offer hosted desktops either at a data center or in their own environment: a single cloud, desktops on demand, ramped up and ramped down. VMware have most of the production line already: DRS, HA, DPM, etc; they all add value to the cloud. We are thinking about how that would move from what we have got now, through VI4 and into VDC-OS.

Bill: I could not agree more, the building blocks are there. That is one of the use cases I would expect to kick in more meaningfully in the second half of FY09. I speak to customers like you are dealing with and they say hey I would like to virtualise desktops and in many cases those are desktops for people who don't work on site and they are actually remote workers.

Tom: We are not finding that. Customers are coming to us and asking us to take the machines off the desk, back into the data center, outside of their environment: effectively, give me my green screen back.

Bill: OK. Maybe this is just a subset audience which has been filtered through to me because I am working on the cloud stuff. I have been introduced to a fair number of folks who were using vCloud in many cases to bring offshore labour to bear on their core business processes, but they were not prepared to actually provision the data out; they did not want to distribute the data and wanted to keep it behind the firewall, but still make use of those offshore resources. And because they were offshore, the opportunity cost of whether it's running in a secure dedicated cloud or internal to the organisation is not so critical. In fact, going out to a service provider who has much better telco infrastructure might even be optimal.

Tom: That's the way they are looking at it from their point of view. I think Rod describes it as reverse cloud, in that our clients are asking us to host their desktops and have them federated into our security environment, and on failure on our side it will fall back to their data centers.

Bill: That falls in line with what we are talking about here with the federated cloud model. A lot of the investments we are working on right now are about how you blend the infrastructure into the current existing IT operations framework so it's non-disruptive and does not look super-distinct, so it does not require radically different procedures and policies around it. That's one of the themes we are working on at the networking layer, and specifically on how we facilitate VPN access if needed.

Tom: These are the same questions we are asking at the moment so we seem to be moving in the same direction. I wonder if there is any way to bring it off line.

Bill: I would be more than glad to.

John: I will hook you guys up. Bill, any final thoughts? Use cases out there? We appreciate you being here. We should have you back to talk about the virtual appliance marketplace and VMware Studio.

Bill: We are trying to be very up front about where we are and where we are going. This is larger than "here is an isolated product", so we felt we needed to engage with people at this time, versus what is normal in the VMware case where I can only come and talk to you about something on the eve of its release, a week before the product ships. There is far more going on here, so we wanted to engage at this point. A great time to get together is when the vCloud API is published. That's when we will have a step-curve improvement in being able to talk about very specific use cases and what's enabled and what's not enabled.

John: Do you think we are headed to a world of utility computing?

Tom: Yes.

Bill: Um... I will say yes, but I will quickly follow up with: the trick with all these things is timing. There will be a step-curve adoption, and as for what the critical mass is when it really takes off, I would not even want to speculate on that.

John: That's what I like about our cloud discussion here at VMware. It's grounded in what we can do now, and then extending that, evolutionarily and revolutionarily, to build some other capabilities. It is really a pretty grounded discussion, which I really appreciate, compared to trying to get my hands around the global cloud discussion, which is usually not very grounded. Thanks everybody.

Cloud confusion and security concerns


Looks like cloud computing is the hot topic around the water cooler over at CIO.com.au. 


Another article about cloud computing. This one is commentary on a poll about cloud computing. Top of the list of concerns was security, which the author then makes fun of. The example given is: if a CIO is happy to give their credit card number to Amazon, or to do business with Google for advertising, then is it fair to not trust the security of the cloud?

Well, I think this view may be making light of the differences. It's one thing to order a book; it's another thing to place your entire customer base's details with an external party. If someone steals your credit card number the damage and liability are really not that great. But what if someone had access to your entire customer base? That could cause an IT manager some real headaches.

I think it's fair that IT managers should be concerned about security with the cloud. If the concern is valid, then it helps the service providers to build and explain data protection policies and practices in such a way that these concerns can be alleviated.

Lynch's enthusiasm is great, and I too see a big future for cloud computing in its various forms; however, this quote felt a little shock-jock to me:
The Web has profoundly changed the way we consume information and now, with Software as a Service, the way we contribute and update information as well. As such, I have to believe the days are numbered for the alarming 42 percent who checked off "no." It was even more alarming that 30 percent said cloud computing wasn't on their technology road map at all.
30% of CIOs have probably not got a good virus strategy in place, or a DR solution. Yes, cloud computing may be a great way to achieve some of these with less pain. But a lot of organisations I meet with have some really large issues with applications and data. Even if we look at the SaaS flavour of cloud computing: if you are a manufacturing company and your big issue is the custom application that runs your business, you are probably not going to find a SaaS offering that you can quickly transition your business rules and data to.

I really think we need to start with some more descriptive terms for cloud computing. Are we thinking SaaS, or compute cloud, or hosting, or middle-layer services such as the elastic storage around EC2 that do nothing unless you integrate them into your applications?

Somehow I feel that with all the hype on cloud at the moment the cloud is going to stay gray for a while. However, during 2009 I think the big gray cloud is going to break into a few separate clouds, each all fluffy and white.

Gartner Validates Nick Carr

Wednesday, October 22, 2008


I wrote in September about cloud and Nick Carr.

This recent article at CIO, "Blog: Cloud Computing: Gartner Validates Nick Carr -- At Least Partially" agrees with me along with Gartner. Always nice to be in good company.

Golden takes Carr to task, though.

On the other hand, I think Carr is only half-validated by Gartner. He oversimplifies the world of IT when he compares it to snapping together online components to create an application.
However, if we go back to the original ideas in his first book, I do think they align well. With the current cloud offerings you need to rearchitect your applications or do things in different ways. This is not a standardised approach, which is what utility computing requires; after all, you can plug into any electrical point and things just work. With virtualisation and the VDC-OS and vApp framework you will not need to change your workloads. This is where I think we will see the Carr vision turn into reality.

I suppose that puts me and Gartner a lot closer than Golden; he is being too focused on the cloud of today and not the storm of tomorrow!

Goodbye physical Fibre Channel


If you are into storage then the article Goodbye physical Fibre Channel over at The Register is an interesting read, especially as I got my hands on the CNAs the other day.

For example, this comment on FC fabric and the competition between Cisco and Brocade:
They are all pushing the story that Ethernet can be lossless and have predictable latency in its new Data Centre Ethernet (DCE) form, and that 10Gbit/s Ethernet has plenty of speed to run the traffic from multi-core, multi-socket servers crammed with virtual machines demanding instant access to desktop boot images, database records, etc.
Give it a read, there is some good stuff in there. 

getVIRTUALnow

Sunday, October 19, 2008

Brian Madden did a recent post on Microsoft's "getVIRTUALnow" roadshow. It revealed some interesting points in regard to VDI.

  • The speaker from Microsoft's internal IT department talked about using Terminal Services to provide desktops to users of companies taken over. Their problem was that without admin privileges those users could not install applications or personalise things such as backgrounds. Now, Brian freaks out at this, pointing out that you can do personalisation in TS, which is true, but the point is still valid. Being able to provide a full desktop experience is important. VDI can deliver this, and you can have one platform with varying degrees of access.
  • The application and desktop virtualisation that Microsoft are doing aligns well with what VMware are doing. ThinApp, Fusion, VDI: it's all there.
It's a good review and worth the read. However, Brian needs to start thinking about things more broadly than TS. VDI can address a wider set of use cases with a set of products; you don't have to do one size fits all.

Citrix and VMware have BYOC

Thursday, October 16, 2008

I have written before about employee-provisioned laptops. Well, today two articles popped up about Bring Your Own Computer (BYOC) programs at VMware and Citrix.

Our "corporate" image is simply a VM that you download off the corporate network and run on your laptop.
It saddens me to report that my employer has been unable to support my desire for an alternative laptop. All sorts of details about FBT and other things that are not really my concern. Oh well, welcome to working for a big corporate. Of course, we can't sweat the little things.

There would probably be no problem if I paid for my own device. So I suppose we are more like VMware, although we can't download the SOE as a VM; nothing a quick P2V would not fix. I like the Citrix plan, where there is a budget for you and the conditions of having a support contract and anti-virus are more than reasonable.

I won't give up though. You never know, one day I may have enough money to achieve such things.

Believe it or not

Sunday, October 12, 2008

Well this one could not escape my comment.

Check out the story over at IT News about a company choosing Hyper-V versus VMware.

The privately-owned industrial and commercial developer engaged Thomas Duryea Consulting to perform an analysis of the suitability of its environment for virtualisation using VMware technology.

The analysis involved monitoring PacLib’s servers for a month, according to IT manager David Furey.

“They came back with a proposal of about $25,000 in installation costs and another $25,000 in software costs,” Furey told iTnews.

“You’ve got to question whether it’s worth paying $50,000 for that. I know the VMware camp go on about features like VMotion, but for $50,000 I could pay someone to move my virtual machines for me.”

Furey decided instead to look at Microsoft’s Hyper-V, then in beta.

“To us, it looked like we weren’t losing any performance or benefits of virtualisation but we were saving a lot of money,” Furey explained.

“It just didn’t make financial sense to spend all that money [on vmware], when if we want to add more Hyper-Vs, it’s $49 per server.”
Wow, they actually put that in writing. Now, I have always considered the guys at Thomas Duryea worthy competition. The people I know personally there are great people, even if we were/are competitors. It just makes me laugh that they have been named in this farce of an idea to go Hyper-V instead of VMware. If they went with VMware 3i it would be free and give them the same functionality, well, probably better.

Still, I don't think this will be the last fluff piece we see on the topic. I am sure there is more to the story and it just made for a good piece of news.

Toys for the next mainstream



Well I have completed the first week of my new job. Looks like there is some really interesting stuff to work on which is great.

Took delivery of some nice networking hardware this week for use at a show. It's a Cisco Nexus 5000 with some sweet Converged Network Adapter (CNA) cards.

What makes this so cool for a VMware geek is that you are looking at where the industry will be in 12 to 18 months' time. Currently the networking fabrics in a virtual environment could do with some improvement. We can work with what we have, but the VMware admin should not really be the network admin. The network admins don't like the VMware hosts because they all just look like trunk ports to switches; all their usual tools for configuration, monitoring, security and troubleshooting just don't work. And the data center or server admins love the fact that they can save power and space in their data center through consolidation, yet they are getting bigger hosts with lots of IO adapters to support different fabrics.

What is going to make all this look different over the coming year? With the Distributed Virtual Switch (DVS) in VMware 4, combined with Nexus hardware in the data center (such as this Nexus 5000) hooked into the Nexus 1000V virtual switch in VMware, all running over some nice 10G unified fabric ports, we are going to see serious realignment at the big end of town.

Exciting times. Let's hope that, with the crash of all the financial markets, people will still have enough money to purchase all of this sweet gear!
