
Transcript of "All about the vCloud" podcast from VMTN

Posted on Sunday, October 26, 2008 | No Comments

This is the transcript of the latest VMTN Communities Roundtable podcast on vCloud. There was so much important and useful information covered for those people thinking about vCloud that I figured it was worth doing a transcript, so more people could have access to the information, as not everyone is going to listen to an hour of audio.

Also having the discussion as text allows greater commentary and analysis as people can quote and refer to elements. 

After the large effort of transcribing, I will hopefully in the next day or so post my own analysis and summary of what I see as the interesting components.

Episode Number 22 with a topic of vCloud.

All about vCloud - Communities Roundtable podcast #22


John Troyer as the host.

Guest is Bill Shelton, who runs the vCloud initiatives and the virtual appliance marketplace.

There was a previous session on VDC-OS.

John reflected on the confusion of the word cloud with SaaS and other things.

John: So what is vCloud? Is it an API, a feature, a product or an ecosystem of service providers?

Bill: Agreed the hype is deafening.

At VMworld there was the announcement about the vCloud initiative. There are 4 components.

First there is a set of new technologies VMware are bringing to market which are above and beyond the current infrastructure stack. Some of these offer the same functionality but made to scale in a better way; some are new components. These fill the gap between the compute clouds out there and the virtualisation stack that VMware delivers. Some examples are enabling technologies below the water line, such as chargeback components, or console proxies so you can get a console out on the public internet more reliably and securely. There is also work above the water line, where VMware are investing in some new services that would run on top of this infrastructure: examples such as data center as a service, so you can use easier web-based interfaces to provision and manage hardware, and even some hooks in the existing software stack that connect directly into a vCloud service.

Second, where is this technology going to be instantiated? VMware's approach is not to build their own data centers. Instead they are going to work with the service provider community. We have already seen some large names among service providers, some telcos, and some Web 2.0 players such as Engine Yard all signing up, at a directional-intent level, to participate in this ecosystem as service providers that would stand up vCloud services.

Third, no platform is better than the content that runs on it. VMware are interested in running a very broad base of software on this cloud platform, instantiated on this ecosystem. vApps is the next generation of application container, a new and improved way to describe applications. It's OVF-based but has some additional hooks and policies that VMware think are important so that applications can be moved, potentially between parties and into or out of a cloud service.

The fourth part is the customer base. Customers participate by using software that has hooks into these vCloud services, as well as by writing their own integration-level solutions on the API that we are putting out as part of the technology stack that the service providers will have out there.

These are the four components. It's a tough story as there are a lot of moving parts, partner ecosystems and technologies. It's hard to compress all of this into a press release. So VMware grabbed one use case to highlight the vision: flexible capacity, where you take an external service provider that has spare capacity, take an alert on an on-premise workload, and provision that workload to an external piece of capacity to balance the load. This is sometimes called cloud bursting or flex capacity. This is what was highlighted on stage and in the press release, just to make it a bit more tangible, and it's one of the use cases that will come out of this vision.

John: Raised a question based on some of the discussion on Rod's blog relating to Nick Carr and Gartner. It's not really pieces and parts but more an extension of the enterprise things we are already doing, extending them and improving their reliability and mobility. Is this how people are going to start, or should we be thinking about it in a different way?

Bill: This is a good point. Sifting between the hype and the reality, there are some substantial blockers to cloud computing, and what VMware are doing with their offering is starting to address some of those blockers. One of them: if we look at some of the clouds serving the Web 2.0 community, with great elastic characteristics and a lightweight entry, a disposable infrastructure model, billed by a credit card, there have been some service-level interruptions that have paused people from diving in deeper than they otherwise would. And even when they are up and running in continuous form, there are some intermittent service interruptions that don't get all the press, but this needs to be addressed before we are going to see more critical workloads move from the enterprise into these compute clouds. We have made investments in that, and that's one of the reasons why we do very well in business-oriented virtualisation computing: because we have a very reliable platform. We would like to bring this into the compute cloud. What enables this are some of the investments in app monitoring and being able to report back on application performance. There are various ways to measure this; at the very least, give someone really good visibility into the performance characteristics and uptime of their application whilst it's in a compute cloud, in a predictable way, so they have the trust that they know what's going on. Another blocker VMware are working on is the app compatibility story. If you look at App Engine from Google, it's a very interesting model, obviously at a price point that's hard to compete with, but they have limited themselves to people building new apps for that architecture. Still a problem, but less so, with EC2. We are looking to build much more of a bridge where you can take existing applications and put them into compute clouds.
So when we look at vCloud and the technologies and the API, which will be coming out in the first half of next year, we should be assuming we are working with the same storage semantics that people are used to working with: file systems and things of this nature.

John: Mentioning the APIs lets take this down to a more concrete level. What are the vCloud APIs and what can we see now and what will we be seeing?

Bill: One of the specific promises made at the announcement is that in the first half of next year we are publishing the vCloud API. Let's talk about what it is not. Since most people will be familiar with the VIM APIs: this is not simply a proxying of the VIM APIs. This is a very different approach. First of all it's a RESTful [http://en.wikipedia.org/wiki/Representational_State_Transfer] implementation. For those used to some of the other compute clouds out there that make programmatic control of the infrastructure available, REST is, we think, the best implementation to enable a really low bar to entry for manipulating objects and dealing with things intuitively. It deals with URIs, and there is endless support across all the different languages for managing URIs and making connections; we think it's a good way to engage with the backend services. So that's the choice of the actual technology. From a design perspective, our intent is to be far more abstracted and far simpler to deal with. Compared to the VIM APIs, if you wanted to do any type of rich operation, let's say provision a new machine, that could have four or five discrete steps to it, some synchronous, some asynchronous; you would have to cobble together the workflow, map out the dependencies, and trap the error conditions in case one of them didn't come through. What we have done is boil these things up to much simpler coarse-grained operations, so you can provision a machine through one call, sit on your response code, and deal with an error code due to something such as a lack of billing information, or whatever else might be the case. We are definitely trying to make it a much simpler way to very immediately pull together services that would sit on top of that infrastructure.
The scope of the API at this point is making sure we cover all your basic infrastructure operations: provisioning of machines, all your basic state transitions, and capturing inventory of what you have. We have added a couple of new containers that we think are helpful for people managing infrastructure, so that people can take a larger pool of infrastructure and chop it up; if you are a large company and you wanted to make a volume purchase of capacity, you could then chunk it up, hand it off to business units for individual projects, and manage it in those individual containers. We will be pushing out the documentation for everybody to start engaging with and providing feedback at the end of the first quarter next year.
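The coarse-grained style Bill contrasts with the VIM APIs can be sketched in a few lines. Everything below is an assumption for illustration: the endpoint path (`/vms`), payload fields, and error handling are invented, since the real vCloud API specification had not yet been published at the time of this podcast.

```python
import json
from urllib import request, error

class VCloudClient:
    """Minimal sketch of a coarse-grained RESTful provisioning call.

    One call replaces the multi-step VIM workflow Bill describes:
    build the request, POST it, and map the response to an outcome.
    All paths and field names are hypothetical.
    """

    def __init__(self, base_url, opener=None):
        self.base_url = base_url.rstrip("/")
        # A pluggable opener lets the sketch be exercised offline.
        self.opener = opener or request.urlopen

    def provision_vm(self, name, cpus=1, memory_mb=512):
        body = json.dumps({"name": name, "cpus": cpus,
                           "memoryMB": memory_mb}).encode()
        req = request.Request(self.base_url + "/vms", data=body,
                              headers={"Content-Type": "application/json"})
        try:
            with self.opener(req) as resp:
                # e.g. 201 Created with a Location header for the new VM
                return {"status": "created",
                        "location": resp.headers.get("Location")}
        except error.HTTPError as e:
            # e.g. a failure because billing information is missing
            return {"status": "failed", "code": e.code}
```

The point of the sketch is the shape of the interaction: one request, one response code, rather than a hand-assembled workflow of synchronous and asynchronous steps with per-step error trapping.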

John: And that’s something that I could be interacting with in my own data center or at a service provider.

Bill: Correct.

John: On chat, Rod asks: is there anything in the APIs on chargeback?

Bill: There is. There is an acknowledgement of chargeback, and there is a model for one of the underlying pieces of plumbing, with an ability to configure the service provider, which could be an internal IT organisation standing up a service for a business unit or an external Internet-ready service provider. There is a new component, part of the VDC-OS, which is for chargeback, and you have a connection via a construct we call an account that then maps to pre-allocated resources. We don't get into the details of describing how someone is going to charge, but we definitely give you the plumbing and relational mappings so that you can turn things over to people to provision their own things, and then let someone capture costs, either in a pre-allocated lease type arrangement or an after-the-fact usage-based model, and pin those costs to specific accounts, and even to subdomains within that account, so you can have a larger account that subdivides into different projects and understand how the resources, and the uses of those resources, map back to the different projects.
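The account construct Bill describes, with per-project subdivisions whose usage rolls up to the parent, can be modelled very simply. The class and field names below are illustrative assumptions, not the shipped VDC-OS chargeback schema.

```python
from collections import defaultdict

class Account:
    """Illustrative model of the chargeback plumbing: an account maps
    to pre-allocated resources and can be subdivided per project;
    recorded usage is pinned to the project and rolls up to the parent.
    Names and fields are assumptions for this sketch only.
    """

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.usage = defaultdict(float)  # metric name -> amount

    def record(self, metric, amount):
        # Pin the cost to this account, then walk up the hierarchy so
        # the larger account sees the aggregate across its projects.
        node = self
        while node is not None:
            node.usage[metric] += amount
            node = node.parent

# A volume purchaser subdivides capacity for a business unit project.
company = Account("acme")
hr_project = Account("acme/hr", parent=company)
hr_project.record("vm-hours", 12.5)
```

Whether billing is lease-based or usage-based is then just a question of what gets recorded and when, which matches Bill's point that VMware supplies the relational mappings rather than the pricing details.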

John: Will we be able to apply this to our current VI3 infrastructure or will this be for a later generation?

Bill: A pragmatic answer: there is nothing about the API that would, to my knowledge, specifically express something that could not be expressed in your current VI. Now, as we all know, there is an API and there is an implemented API, and for what we are planning to hand to folks as an implementation of that API, to accelerate time to value, there are going to be some dependencies on the latest components that are part of the next version of our platform. So there is some option value there; it probably depends what calls you are making to the API. It's definitely our intent to make sure that someone can pick up the API, very quickly implement it, and make use of it. In that use case our latest version would probably be very helpful or may be required, depending on the calls being made. But if there is some super-hardened backward-compatibility reason, with maybe a little more elbow grease that should be possible; but don't quote me on that until we launch the final version of the API, at which point we will do the dependency graphing to make it clear for folks what would possibly not run on the current version.

John: We have talked about provisioning. That immediately brings to mind use cases like DR and SRM and how storage is moving around. We have Storage VMotion internally; how does this vCloud API and provisioning relate to these kinds of use cases: DRS, SRM, moving storage around?

Bill: You know, we definitely see them as complementary, and that's one of the interesting use cases. If we look at the disaster recovery ecosystem out there, both from the service provider side of the camp as well as from the end customer and software developer side, we definitely see an opportunity at this level to move to more shared-infrastructure types of back ends for DR. Right now we are seeing a lot of that in backups. If we look at the adoption of, say, S3 as a backup target for the ecosystem right now, that has really validated that backup is a relatively good workload to terminate at a really nice shared-infrastructure storage cloud. If we apply that to the compute cloud and disaster recovery, we are bullish on that being a place where we can probably drive DR to some even more aggressive price points and maybe enable some lower-end markets that currently aren't able to afford the fixed source and target hardware and dedicated links and all this business. There might be more of an opportunity to do a poor man's DR, where you are terminating VMs into a shared storage pool; it's less of a scenario where you have the big red button that fails everything over, as SRM enables you to do with great precision, and a little bit more DR with backup kinds of workflows, where you have to recover the VM and provision it. So we see them as intertwined, and we are working with the SRM team right now on how the roadmaps of these initiatives complement each other and at what point. We are pretty excited about that; we are just not ready to lay out how and when those things come together and for what audiences.

Matthew: How do applications work on the cloud? We need to find a way to cut the underlying operating system out of there so that it runs on one product instead of two products; that's my base question.

Bill: It's a good point. If we look at the space out there right now, there is an enormous amount of affinity between the application stack and application architecture and the cloud platforms themselves, and in many cases they are kind of vertically integrated. Our first take is that we think we can enable a lot more by not going down that path and sticking to our roots. One approach we are not going down, to be clear, is to create this new cloud computing platform where, once the ISV community adopts it and goes and reworks all their apps, everybody then captures that value. I think we have all seen that can be a pretty levered play, but actually getting that change through can be difficult. So we are definitely going to start with a low barrier to entry: if you have got a VM, you should be ready to start to participate and have a point of entry into these compute clouds. And that's a VM that uses very standard storage semantics; it's not that you have packaged up a runtime in a VM but the application architecture then assumes totally different, very proprietary external components. Instead, relatively standard components should be able to run, and our intention is that those will be directly transferable. But at the same time you are right: if we stop there, that's a great app-compat story, but it does not capture a lot of higher-end things that can be done in a compute cloud around things like optimizing the OS. For that reason you will see a lot more investment in our virtual appliance initiative and in this vApp container, so that you can tune that application more and more for the vCloud. So here is an example; we will look at this and apply a more object-oriented mindset to it.
It's our belief that the more the application is a self-contained object, the more you are going to be able to make use of that cloud infrastructure. An example would be one that we are working on right now that we call late-bound infrastructure services. Let's do a compare and contrast: say you had some real basic backup policies on a workload in an enterprise. A lot of those would involve an agent in that guest, with a lot of the policies sitting off in a backup server, and when you picked up that workload and moved it into a compute cloud, nothing would resolve; the agent would not know who to talk to, the policies would be lost, and all that. So the idea, when we talk about policies being part of the solution, is that in the actual application, in the metadata, which is the XML wrapper of a vApp, we have given a vocabulary to express the data protection policies. When the application gets picked up and moved into the vCloud, that wrapper is interpreted, and then, depending on the policy that is set, when it's instantiated, without any pre-authoring knowledge, the backup service is bound to that application to honor that policy at some level. We're not going to get too far ahead of ourselves here and get lost in the weeds of policies around workloads; we want to deal with the real basic policies: what are the data protection policies, does it have a requirement for high availability, basic building blocks like that, which we can then bind to the resources that already exist in our platform. When the workload actually arrives, there is a mechanism so that the service provider can even calibrate their offering and have different price points on the services that do get bound when that workload is passed in.
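The late-binding idea Bill describes, where policy vocabulary in the vApp's XML wrapper is interpreted on arrival and matched to whatever services the provider offers, can be sketched like this. The element and attribute names in the XML are invented for illustration; the real vApp policy vocabulary was, as Bill notes, not yet documented.

```python
import xml.etree.ElementTree as ET

# Hypothetical vApp metadata wrapper. The <Policies> vocabulary below
# is an assumption for this sketch, not the published vApp schema.
VAPP_XML = """
<vApp name="hr-app">
  <Policies>
    <DataProtection schedule="daily" retentionDays="30"/>
    <HighAvailability required="true"/>
  </Policies>
</vApp>
"""

def bind_services(xml_text, catalog):
    """Late-binding sketch: read each policy element from the wrapper
    and bind whichever service the provider's catalog offers for it,
    with no pre-authoring knowledge of that provider."""
    root = ET.fromstring(xml_text)
    bound = []
    for policy in root.find("Policies"):
        service = catalog.get(policy.tag)
        if service:
            # Hand the policy's attributes to the bound service so it
            # can honor them (e.g. schedule, retention).
            bound.append((service, dict(policy.attrib)))
    return bound

# A provider advertises its own implementations of the basic policies.
provider_catalog = {"DataProtection": "backup-service",
                    "HighAvailability": "ha-service"}
bindings = bind_services(VAPP_XML, provider_catalog)
```

A second provider could supply a different catalog, or price the bound services differently, which is the calibration point Bill makes at the end of the passage.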

John: Is there a place where we can get more information on vApp or OVF? The standard has now been released, correct?

Bill: OVF as a standard is out there, and you will find developer tools. You will find some pointers from the virtual appliance area of the VMware website, and also on the DMTF site, as it's an open standard. Now, some of the gap between a vApp and a regular OVF container is the policies we are talking about, and at this point I would say you are not going to find hardened documentation on that. That's something we should be looking at to come to market in the first half of next year, in conjunction with the APIs, because obviously, as we are describing these things, you can see they are interrelated. One thing that should be noted from the vApp side of the camp: there are four ways you could arrive at a vApp, with the emphasis being choose the one that works for you; all of them are intended to be very lightweight. First, we have a new authoring environment called Studio, which currently produces virtual appliances; in the next wave of our platform coming to market, that will be upgraded to also author vApps. Second, straight within the VI Client, in the next release of our platform, you have the ability to do a real quick drag-and-drop assembly of a vApp. It should be noted that one of the key attributes of a vApp, which is again one of these basic policies, is simply the way it assembles multiple VMs under one container. So instead of having to operate at the VM level, you can say: I have the HR app, the HR app is made up of a three-tier LAMP stack, and you can put all three of those VMs together and then operate on them as one unit, start and stop and manage it at that level versus the independent VM level. So that's drag and drop in the VI Client. Third, in the end we are talking about an XML metadata descriptor here, so pop open vi or emacs, your text editor, and operate on that.
And then finally, we are working on some plugins to the popular developer environments, so that you have the ability to extend your developer environment, whether it be Eclipse or some of the Visual Studio tools from Microsoft. And I would not be surprised if we saw some third-party authoring solutions that currently are focused on the virtual appliance space roll over and expand their offerings to support vApps too.

Newt: Going down a bit lower, in getting ready for the vCloud, what should we do from a network perspective and in setting up systems, like an N-tiered application that we wanted to prepare for the cloud next year? What are the things we need to consider now; what do we need to prepare for?

Bill: Are you thinking "I am developing applications, what could I do to reduce my switching cost for when this is available", or are you thinking more from a service provider perspective of putting infrastructure in place, or both?

Newt: A bit of both.

Bill: From an infrastructure perspective, we do not have hard guidance on reference architectures at this point, you know, "here is your reference architecture to set up a vCloud instance". We are just not in a position yet to do scale testing of a reference implementation and then start to feed that back. But I will say that with a lot of the large hardware vendors, both on the networking side and on the server side, that work has just kicked in and is progressing pretty well. Ideally, you could imagine that the list of vendors who are very excited about this will make sure that, in the first half of next year as we are rolling out the software stack, they're ready to step in with reference guidance on hardware implementations. So I can't point you to anything right now from us that's going to say, hey, say you are a big Cisco fan, here is your reference implementation using Cisco. But I can tell you that we plan, in the first half of next year, in concert with the specification for the APIs becoming available, to have more of that kind of guidance available.

Newt: Sounds like there is a lot more work on VMware's side to put together the details that we may need, but it's definitely getting there, and it sounds like you guys are on the right path.

John: What do service providers, whether that's 3rd-party providers or me as the data center guy, have to do in terms of hardware? What about common chipset issues in the cloud? Most providers at this point are just buying whatever is cheapest, or who knows what they may have: Intel and AMD chips in different families, and they may have non-x86. We paper over a lot of the hardware differences with Enhanced VMotion Compatibility, and we paper over some others, but how far are we getting away from having to have the exact same hardware configuration everywhere?

Bill: That's a good point. There are two dimensions from which to view this problem. Mobility of workloads is the big theme we emphasize in our cloud space, so it depends whether, in the cloud you are setting up, the entire lifetime of the VM workload you instantiate is within that cloud, or the cloud represents only one phase of its lifetime. But let's assume it's the more complex model, where the cloud needs to be somewhat compatible with an external environment. Right now the current implementation is not truly what you would call a wide-area VMotion; that would bring with it a bunch of hardware compatibility requirements. In our model of taking a workload and moving it into the cloud, we are looking at some replication technology and some push-ahead-of-the-migration approaches, such that we are dealing at the raw VM level, and you don't have a situation where you would actually need compatibility between the two chipsets; you should be pretty safe in that regard. And that goes for both the storage and the server hardware. Now, if we look within a compute cloud, you have a different and interesting problem, which is that compute clouds become very large, and by definition you are trying to abstract and roll up massive chunks of infrastructure under common interfaces and get rid of a lot of the plumbing details. And obviously plumbing details include pockets of new servers vs old servers and things of this nature. We have the ability, what we call bin packing of workloads, to segment gear such that the policies get applied to where the workload actually lands. Of course, the model here for placing workloads is not as it might be in the traditional data center, where the admin chooses, to a very specific level, where they are placing that workload.
Instead, this is all taking place through workflows we are implementing, which you can then customise, but we do give you a way to set it up so you can have some heterogeneous gear back there and make sure that common workloads that need to co-exist on the same gear make it to those buckets.
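The "bin packing" placement Bill sketches, where policies steer a workload to a bucket of gear that satisfies them, rather than an admin picking a specific host, can be illustrated with a first-fit placement function. The field names, tags, and capacity model are assumptions for this sketch, not VMware's actual scheduler.

```python
def place_workload(workload, hosts):
    """Policy-driven placement sketch over heterogeneous gear: return
    the first host whose capability tags satisfy the workload's policy
    and which still has capacity, reserving that capacity.
    All field names here are illustrative assumptions.
    """
    for host in hosts:
        if (workload["policy"].issubset(host["tags"])
                and host["free_cpus"] >= workload["cpus"]):
            host["free_cpus"] -= workload["cpus"]
            return host["name"]
    return None  # no bucket satisfies the policy

# Heterogeneous pool: older gear lacks a capability newer gear has.
pool = [
    {"name": "old-rack", "tags": {"x86"}, "free_cpus": 8},
    {"name": "new-rack", "tags": {"x86", "nested-paging"}, "free_cpus": 16},
]
chosen = place_workload({"cpus": 4, "policy": {"nested-paging"}}, pool)
```

Workloads whose policies demand co-existence on common gear end up in the same bucket because they carry the same tag requirements, which is the behaviour described above.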

John: That works for me. People are familiar with the situation; people have heterogeneous pools of machines today as well, so I think it's the same kind of situation. At least we are familiar with the way it works today. Rod has a question about management: internal vs external, cloud vs VC.

Rod: One of the issues that everyone is experiencing in large installations today is getting a unified view of all of their machines across their VirtualCenters. We are seeing that both VMware and third parties are bringing out technologies to show a unified view, and in version 4 we are going to have federated VirtualCenters. So one thing customers are going to be really interested in is: once I have some of my workload in the cloud, how do I manage and view that? Bill, do you envision that in my VirtualCenter I will be able to see my virtual machines, identified as some in the cloud and some out of the cloud? Will I be able, through VirtualCenter and the API you are producing, to restart machines, suspend them, initiate snapshots, and have a unified view?

Bill: Let me give you the roadmap for how we are planning on implementing that. This brings us to a couple of different investments on the technology side, so let's place this. We talked about there being under-the-water plumbing and above-the-water-line services, and obviously one of those above-the-water-line services is the way to capture the view of workloads that are running in a vCloud instance. In the short term, safe to say within the next year, we will see that as a plugin that you can add to VC via the extensibility mechanisms that currently exist in VC. This gives us the ability to bring together, under one pane of glass, one click away, a look at infrastructure at different service providers along with your internal infrastructure, and give a common vocabulary to look at things like performance and uptime and all that business. That's kind of version one. The next version, the next generation of that, and it would be premature to give timing around it, is that we intend to take the next step and put those resources that are off in the cloud into the native inventory, so that it's not just a web page that is an extension, or a new tab if you will; it is fully integrated into the native inventory of VC.

Rod: That sounds good. Management in large VMware installations is already a challenge, and we don't want to add to that, so it's good you are thinking through those things.

Bill: Let me highlight that. This is a super point, and let me take it in another direction that I am sure is on some people's minds, which is VC scalability. Is anything we are doing here scoped to the current scale capabilities of VC? The answer is no. This comes back to what we talk about when we talk about the underwater plumbing that we have. We know that there is much more scale coming from VC, so I won't dismiss that, but it's not the type of scale that's going to get you to the 50,000-physical-box layer. And that's where we have a story in the architecture that can get you up to these super-high-end sizes. It's an architecture that is very common in cloud computing today: we implement a message bus, a pub/sub architecture, and we take VC and various other components and put them into highly reproducible cells that you can scale out in a horizontal way. They just sit and wait on the message queue, and as the message queue starts to back up, more of these cells get created; as the message queue starts to empty, you collapse the cells. You have really taken VC and some of the other components that previously were the end-all node and made them elements in an architecture that can scale out, with a very simple directory that maps a given VM to a VC. So depending on your point of access, you can always very quickly track down which VC, and if you need the richer data that VC holds, you can quickly go talk to that VC directly. There has been a fair amount of thinking on how we fundamentally change some of the limitations that might have existed previously in that architecture.
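The elastic cell behaviour Bill describes, where identical cells sit on a message queue and the pool grows as the queue backs up and collapses as it empties, can be sketched as a small simulation. The thresholds and drain rates below are invented for illustration; nothing here reflects the actual VC internals.

```python
from collections import deque

class CellPool:
    """Sketch of the pub/sub scale-out architecture described above:
    work lands on a queue, reproducible cells drain it, and the pool
    resizes with queue depth. Thresholds are illustrative assumptions.
    """

    def __init__(self, grow_at=10, shrink_at=2):
        self.queue = deque()
        self.cells = 1
        self.grow_at, self.shrink_at = grow_at, shrink_at

    def publish(self, msg):
        self.queue.append(msg)
        if len(self.queue) >= self.grow_at:
            self.cells += 1  # queue backing up: spin up another cell

    def drain(self, per_cell=5):
        # Each cell processes up to per_cell messages this round.
        for _ in range(min(len(self.queue), self.cells * per_cell)):
            self.queue.popleft()
        if len(self.queue) <= self.shrink_at and self.cells > 1:
            self.cells -= 1  # queue emptying: collapse an idle cell
```

A simple directory (e.g. a dict mapping VM id to owning VC cell) would complete the picture, letting any point of access route a request to the cell holding the richer data.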

John: Wow, that's pretty interesting. You heard it here first. I don't want to get too far down into the virtual appliances and the vApp and how you trust them; that's kind of a whole separate conversation, one we have been having for a couple of years, that is probably worth having at some point on this podcast. But Godfree does ask, especially with 3rd-party virtual appliances and vApps, and maybe you can talk briefly about what we are doing with the virtual appliance marketplace: how do we trust the workload bundles that are coming into our clouds?

Bill: It's a great question. Let me give a couple of different areas of investment, and I can dive into any one of them in more detail. The first one, which is meaningful, and customers have definitely told us it is meaningful: if you go to our virtual appliance marketplace, you have a whole bunch of different things up there. Some are very well thought out, ready for production use in very mission-critical ways, and some of them are kind of projects that people have thrown together; we purposely encourage both, and we love the fact that the market can evolve those as it needs. But what we did do is add a tiering mechanism we called certified virtual appliances, now called VMware Ready virtual appliances. You will find on the external website the criteria for what it is to be VMware Ready. This is not just a lightweight rubber-stamp mechanism. We have a vetting process where the ISV submits the virtual appliance to us, and we have a team that actually tears it down and does everything from inspecting it for any residual SSH keys that might have been left around, correct use of disk configuration, correct versioning on VMX files, permissions on critical resources, whether that be VNC passwords and things of this nature, to, I am not going to say everything, but everything we have ever learned to pose an issue to someone, we make sure we pull back. And it's a closed-loop system, so as the ecosystem learns about building virtual appliances, securing them and making them more reliable, we fold that into the program. Customers seem to like that, Monica, and the ISVs find a tonne of value; they don't view it as a burden, they view it as "you are making me a much smarter virtualized vendor".
So that's one layer of involvement that people can look to, depending on whether you are buying or making software, that we think is useful; we will continue to evolve it and will definitely appreciate feedback in that regard. If you get down to some of the more technical, tactical elements of trust, the first key to security is really not to trust anything. So for example, some of the things we do: if you hand over a VMX file for us to load in, obviously a VMX has vulnerabilities; it's not a super-trusted element, and someone can readdress an element to go to some other point of storage. So we are obviously translating that, scouring that off, making sure that whatever resources someone is pointing to in their VMX layer, they have the ability to touch those resources, and we have some isolation and some constraints thereon. We go through about 50 different initiatives where we trap these things, but the principle here is that we don't really trust anything that comes into the cloud. On another level, network isolation: within the vCloud, we talked about these different containers that we are enabling. One of the key benefits of those containers, which we discussed from a chargeback perspective, is that they give you another container to create network isolation, such that there is isolation within these virtual data centers, so that someone is not off listening on a TCP/IP stack they should not be on. There are firewalls that have been integrated in, some of which will be visible to customers so they can configure them in a managed way, but elsewhere we just have firewalls that you don't even know are there; they just create the needed isolation.
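The "trust nothing" scrubbing of incoming VMX files Bill describes can be illustrated with a small filter. The allowed datastore prefix and the specific keys checked here are assumptions chosen for the sketch; the passage says VMware traps on the order of 50 such conditions, and this shows only two of that flavour.

```python
# Illustrative tenant datastore prefix; a real deployment would derive
# this from the account's provisioned storage, not hard-code it.
ALLOWED_PREFIX = "/vmfs/tenant-a/"

def scrub_vmx(lines, allowed_prefix=ALLOWED_PREFIX):
    """Sketch of untrusted-VMX scrubbing: drop entries that reference
    storage outside the tenant's own datastore, and drop remote-display
    settings outright. Key names are illustrative assumptions."""
    safe = []
    for line in lines:
        key, _, value = line.partition(" = ")
        value = value.strip('"')
        # Disk paths may only reference the tenant's own datastore.
        if key.endswith(".fileName") and not value.startswith(allowed_prefix):
            continue
        # Never accept inbound VNC / remote-display configuration.
        if key.startswith("RemoteDisplay."):
            continue
        safe.append(line)
    return safe

incoming = [
    'scsi0:0.fileName = "/vmfs/other-tenant/secret.vmdk"',
    'RemoteDisplay.vnc.password = "hunter2"',
    'displayName = "web01"',
]
cleaned = scrub_vmx(incoming)
```

A production filter would more likely rewrite paths into the tenant's namespace than drop them, but the principle is the same: nothing in the submitted descriptor is honoured until it has been validated against what that account is allowed to touch.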

John: So you are not just trusting the service providers. You are not just trusting the base functionality and relying on the service providers to do that kind of containment themselves; you are providing some capabilities?

Bill: We are, and that’s a good way to characterize it. But at the same time there are plenty of service providers out there, and I will highlight Savvis: this is a big theme of their business, and this is one of the reasons we chose a partner eco system approach versus building our own. One of the critiques of building your own is that you end up with one size fits all. Maybe how aggressively a single provider implements and locks down security would meet your needs, but maybe it won't. Our bet is that if we put the enabling tools and the philosophical design hooks in place, these service providers will go off and differentiate themselves accordingly.

John: We only have a few minutes left. Service providers and partners: there is a listing on your web site. For the partner eco system, is there anything we can do with them now, or do we have to wait for this to roll out? Are there current offerings?

Bill: There is value out there and there is more coming. What you will see in the press announcement is that we view the partner eco system as evolving in different stages, and it will be up to the service providers to decide at what level they want to engage. The first stage is the VMware Ready service providers. These are folks that use VMware today; there is no standard API or anything of that nature to engage with their services, but you will find the world's leading service providers up there, running our software. You can imagine the arrangements and integration solutions being more point to point, but given that two parties are both using VMware you can do some pretty creative things, and that’s the first layer of engagement. The second stage is what we call VMware Ready Optimized. These are folks that have taken on the API and the building blocks, so that the common needs of the enterprise customer, the small and medium business, and the service provider can be expressed in a more consistent way. That’s where we get the API in, and that’s where we will be looking to work not just with those two parties but with ISVs to build services on top. The final stage is what we call VMware Optimized cloud services, which goes all the way to the point where you will have the ability to blend their services right into our on-premise software.

John: People may not realize that there are people offering VMware as a hosting service even today.

Bill: Absolutely, and that’s what we want to highlight. There is value to be had and services to be consumed today. This is a roadmap that takes us to a much better level of integration, to spawn what we think will be some pretty cool eco system plays, with software vendors and cloud ISVs writing to these platforms and getting a lot more tooling out there. Right now, if you built your own cloud infrastructure, a lot of the tooling would need to come from yourself as the service provider. As we evolve this, there should be a third-party eco system that comes in to help with a lot of that tooling and really accelerate the time to value on any integration and service setup.

John: Let’s talk about some use cases. We talked about different kinds of fail over for DR. Rod had an idea that I actually blogged last night as I was promoting this podcast: if I can fail over to the cloud, I can also fail over from the cloud. I can use the provider's bandwidth and service levels, but, like when Gmail goes down, it can fail over to my internally hosted data center, which I may not have chosen as my primary site but as my backup site. That kind of turns things on its head, and I hadn't thought about it. Are there other use cases? I want to talk about VDI for a second. It seems like the VDI desktop is already living in the cloud. Tom's a VDI guy, and he had some questions and commentary on how VDI is our start of the cloud right now.

Tom: For some of the clients I am working with, we are designing what we consider to be a fledgling cloud, in that it's a desktop service that can offer hosted desktops either at a data center or in their own environment: a single cloud, desktops on demand, ramped up and ramped down. VMware have most of the production line already, DRS, HA, DPM, etc, and they all add value to the cloud. We are thinking about how that would move from what we have got now, through VI4, and into VDC-OS.

Bill: I could not agree more; the building blocks are there. That is one of the use cases I would expect to kick in more meaningfully in the second half of FY09. I speak to customers like the ones you are dealing with, and they say, hey, I would like to virtualise desktops, and in many cases those are desktops for people who don't work on site and are actually remote workers.

Tom: We are not finding that. Customers are coming to us and asking us to take the machines off the desk, back into the data center, outside of their environment, effectively "give me my green screen back".

Bill: Ok. Maybe this is just a subset audience which has been filtered through to me because I am working on the cloud stuff. I have been introduced to a fair number of folks who were using vCloud, in many cases, to bring offshore labor to bear on their core business processes. They were not prepared to distribute the data and wanted to keep it behind the firewall, but they still wanted to make use of those offshore resources. And because those resources were offshore, the opportunity cost of running in a secure dedicated cloud versus internal to the organisation is not so critical; in fact, going out to a service provider who has much better Telco infrastructure might even be optimal.

Tom: That’s the way they are looking at it from their point of view. I think Rod calls it reverse cloud, in that our clients are asking us to host their desktops, federated into our security environment, and on a failure at our end they fail back to their own data centers.

Bill: That falls in line with what we are talking about here with the federated cloud model. A lot of the investments we are working on right now are about how you blend the infrastructure into the existing IT operational framework so that it is non-disruptive and does not look super distinct, and so it does not require radically different procedures and policies around it. That’s one of the themes we are working on at the networking layer, specifically in how we facilitate VPN access if needed.

Tom: These are the same questions we are asking at the moment, so we seem to be moving in the same direction. I wonder if there is any way to take this offline.

Bill: I would be more than glad to.

John: I will hook you guys up. Bill, any final thoughts on the use cases out there? We appreciate you being here. We should have you back to talk about the virtual appliance market place and VMware Studio.

Bill: We are trying to be very up front about where we are and where we are going. This is larger than "here is an isolated product", so we felt we needed to engage with people at this time, versus what is normal in the VMware case, where I can only come and talk to you about something on the eve of its release, a week before the product ships. There is far more going on here, so we wanted to engage at this point. A great time to get together again is when the vCloud API is published. That’s when we get a step-curve improvement in being able to talk about very specific use cases and what’s enabled and what’s not enabled.

John: Do you think we are headed to a world of utility computing?

Tom: Yes.

Bill: Um, I will say yes, but I will quickly follow up with this: the trick on all these things is timing. There will be a step-curve adoption, and as for what the critical mass is when it really takes off, I would not even want to speculate on that.

John: That’s what I like about our cloud discussion here at VMware. It’s grounded in what we can do now, and then extends that, evolutionarily and revolutionarily, to build some other capabilities. It is really a pretty grounded discussion, which I appreciate. I have been trying to get my hands around the global cloud discussion, which is usually not very grounded. Thanks everybody.
