Cloud is often characterised by a payment model based on resource consumption: pay-as-you-go, only pay for what you use, CPU by the hour.
Over recent weeks I have been doing some thinking about cost and payment models for Cloud, and two blog posts really caught my attention (as happens when you are mulling things over).
Most of the applications running on virtual machines have simply been converted from physical machines to virtual machines, or installed and run just as before. Essentially they are not much different from their counterparts running on physical machines. We call these applications “Applications In the Cloud” (AIC).
The Cloud environment brings new opportunities and challenges for application development. Modern applications can, and should, be designed or re-factored to fully leverage cloud infrastructure. When that happens, we call these applications “Applications For the Cloud” (AFC), versus the AICs described above.
It's a great summary: Applications In the Cloud compared to Applications For the Cloud. AFCs are built to really take on the dynamic characteristics that Cloud can bring, such as rapid provisioning, statelessness and JeOS.
Amidst all the disconnect at CloudConnect regarding standards and where “cloud” is going was an undercurrent of adoption of what most have come to refer to as a “hybrid cloud computing” model. This model essentially “extends” the data center into “the cloud” and takes advantage of less expensive compute resources on-demand. What’s interesting about the use of this cheaper compute is the granularity of on-demand. The time interval for which resources are utilized is measured more in project timelines than in minutes or even hours. Organizations need additional compute for lab and quality assurance efforts, for certification testing, for production applications for which budget is limited. These are not snap decisions but rather methodically planned steps along the project management lifecycle. It is on-demand in the sense that it’s “when the organization needs it”, and in the sense that it’s certainly faster than the traditional compute resource acquisition process, which can take weeks or even months.
I believe the granularity of demand is a real insight, but only when you combine it with the concept of AIC versus AFC.
As I see it there are two major types of workload that an organisation is going to want to run on an external Cloud. The first is the new breed, built for the new Cloud model. These will scale out, be lightweight and stateless, and fit perfectly with a consumption-based model. You run a few small VMs with low RAM and CPU consumption, and as demand increases the application dynamically scales itself out to meet it, whether that demand is user generated or application generated, such as a month-end processing run. Paying for cycles in this model is great, as long as you can have some confidence that the resource will be available when you want it.
The challenge is that most organisations do not have applications which behave this way. There might be a few, but it's not the majority. Certainly newer applications are being created this way and people such as Steve Jin are promoting it, yet organisations take years to go through the expense and effort of re-architecting their systems. Likewise, ISVs take time to deliver the buy-vs-build applications which are much more attractive to organisations attempting to avoid internal development costs.
Applications For the Cloud are the longer-term game, and it's with these that the granularity of consumption, by the hour, is key.
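To make that hourly granularity concrete, here is a rough back-of-envelope sketch in Python. Every rate, VM count and duration in it is a made-up assumption purely for illustration, not any provider's actual pricing; the point is simply that an AFC which scales itself out only pays for the peak while the peak lasts.

```python
# Back-of-envelope sketch of hourly, consumption-based billing for an AFC.
# Every figure here is an illustrative assumption, not real provider pricing.

HOURLY_RATE = 0.10       # assumed cost per small VM per hour
BASELINE_VMS = 2         # quiet-period footprint
PEAK_VMS = 12            # scaled-out footprint during a month-end run
PEAK_HOURS = 20          # how long the peak actually lasts each month
HOURS_PER_MONTH = 720

def pay_as_you_go_cost():
    """Only pay for the VMs actually running each hour."""
    quiet_hours = HOURS_PER_MONTH - PEAK_HOURS
    return (quiet_hours * BASELINE_VMS + PEAK_HOURS * PEAK_VMS) * HOURLY_RATE

def provisioned_for_peak_cost():
    """The traditional alternative: size for the peak and run it all month."""
    return HOURS_PER_MONTH * PEAK_VMS * HOURLY_RATE

print(f"Pay-as-you-go:        ${pay_as_you_go_cost():.2f}")        # $164.00
print(f"Provisioned for peak: ${provisioned_for_peak_cost():.2f}")  # $864.00
```

With these assumed numbers the consumption model costs a fraction of permanently sizing for the peak, which is exactly the economics an AFC is designed to exploit.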
Yet the second type of workload, the traditional application, is the predominant one today: Applications In the Cloud. These applications do not scale out well and they are typically very poor at being dynamic. Reconfiguring them for multiple instances is often a complex manual process, involving application and networking reconfiguration. Just look at Exchange; you don't just go and scale it out on the fly for a day. Organisations will look to Cloud in the short term to provide some savings with these traditional applications, and here the granularity of demand is much coarser. Changes are scheduled, expected and longer lived.
As Lori MacVittie has suggested, the workloads will be in project-scale terms rather than by the minute. We need resource for the duration of our SAP upgrade, our merger and acquisition integration project, our Christmas period, or a semester of scheduled classes. Further, because these applications do not scale out easily, the resources need to be reserved or guaranteed to deliver consistent performance results.
It's my view that larger, project-based bulk reservations for AIC are going to be predominant in 2010/11. Then, as the weight shifts to AFC over the longer term, much greater granularity in costing models will become dominant.
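To illustrate why a bulk reservation suits the AIC case, here is an equally rough sketch with hypothetical numbers. The rates, the discount and the project length are all assumptions; what matters is that a static footprint, committed for the life of the project, trades per-hour granularity for guaranteed resource and, typically, a better rate.

```python
# Sketch of the project-scale reservation model for an AIC workload.
# The rates, discount and duration are assumptions for illustration only.

ON_DEMAND_RATE = 0.10    # assumed per-VM, per-hour rate
RESERVED_RATE = 0.06     # assumed discounted rate for a committed, guaranteed block
VMS_REQUIRED = 12        # static footprint; the application does not scale out
PROJECT_MONTHS = 4       # e.g. the life of an SAP upgrade project
HOURS_PER_MONTH = 720

project_hours = PROJECT_MONTHS * HOURS_PER_MONTH

on_demand_cost = project_hours * VMS_REQUIRED * ON_DEMAND_RATE   # $3,456.00
reserved_cost = project_hours * VMS_REQUIRED * RESERVED_RATE     # $2,073.60

print(f"By the hour, on demand:   ${on_demand_cost:,.2f}")
print(f"Bulk project reservation: ${reserved_cost:,.2f}")
```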
We must remember that not everyone has the luxury of a newly developed application which can burst dynamically into Amazon EC2.
Would love to hear people's thoughts on this, so post in the comments.
Rodos
P.S. It's interesting to see the areas where VMware are expanding their portfolio and developing; lots around the AIC space with AppSpeed, SpringSource, Hyperic, Zimbra and Redis.