
The Granularity of On-Demand Cloud - Today vs Tomorrow

Posted on Tuesday, March 23, 2010 | 4 Comments

Cloud is often characterised by its payment model for resource consumption: pay as you go, only pay for what you use, CPU by the hour.

Over recent weeks I have been doing some thinking around cost and payment models for Cloud, and two blog posts really caught my attention (it happens when you are mulling things over).


The first was a series by Steve Jin from VMware, who posted on Doublecloud.org detailing 10 best practices for architecting applications for VMware vCloud. Steve reminded me, in a clear way, of the important distinction between traditional applications and applications built for the Cloud.
Most of the applications running on virtual machines are just converted as part of physical machines to virtual machines, or installed and run in a way just as before. Essentially they are not much different from counterparts running on physical machines. We call these applications as “Application In the Cloud” (AIC).

Cloud environment brings in new opportunities and challenges for application development. Modern applications can, and should, be designed or re-factored to fully leverage cloud infrastructure. When that happens, we call these applications “Applications For the Cloud” (AFCs), versus AICs as described above.
It's a great summary: Applications In the Cloud compared to Applications For the Cloud. AFCs are built to really take on the dynamic characteristics that Cloud can bring, such as rapid provisioning, statelessness and JeOS.

The second element was a comment by Lori MacVittie in a post entitled "The Three Reasons Why Hybrid Clouds will Dominate".
Amidst all the disconnect at CloudConnect regarding standards and where “cloud” is going was an undercurrent of adoption of what most have come to refer to as a “hybrid cloud computing” model. This model essentially “extends” the data center into “the cloud” and takes advantage of less expensive compute resources on-demand. What’s interesting is that the use of this cheaper compute is the granularity of on-demand. The time interval for which resources are utilized is measured more in project timelines than in minutes or even hours. Organizations need additional compute for lab and quality assurance efforts, for certification testing, for production applications for which budget is limited. These are not snap decisions but rather methodically planned steps along the project management lifecycle. It is on-demand in the sense that it’s “when the organization needs it”, and in the sense that it’s certainly faster than the traditional compute resource acquisition process, which can take weeks or even months.
The granularity of demand is a real insight, I believe, but only when you combine it with the concept of AIC versus AFC.

As I see it there are two major types of workloads that an organisation is going to want to run on an external Cloud. The first is the new breed, built for the new Cloud model. These will scale out, be lightweight and stateless, and fit perfectly with a consumption-based model. Run a few small VMs with low RAM and CPU consumption, and as demand increases the application dynamically scales itself out to meet it, whether that demand is user generated or application generated, such as a month-end processing run. Paying for cycles in this model is great, as long as you can have some confidence that the resource will be available when you want it.
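To make that scale-out behaviour concrete, here is a minimal sketch in Python of the kind of loop an AFC-style workload relies on. It is purely illustrative; the monitoring and provisioning calls (get_avg_cpu_percent, provision_vm, decommission_vm) are hypothetical stand-ins, not any particular cloud API.

```python
import random
import time

MIN_INSTANCES = 2
MAX_INSTANCES = 10
SCALE_OUT_AT = 70   # average % CPU above which we add a VM
SCALE_IN_AT = 30    # average % CPU below which we remove one

def get_avg_cpu_percent(pool):
    # Stand-in for a real monitoring call; here we just simulate load.
    return random.uniform(10, 90)

def provision_vm():
    # Stand-in for cloning another small, stateless VM from a template.
    return {"id": random.randint(1000, 9999)}

def decommission_vm(vm):
    # Stand-in for tearing the VM down so you stop paying for it.
    pass

def autoscale(pool):
    """One pass of the scale-out / scale-in decision for an AFC-style workload."""
    cpu = get_avg_cpu_percent(pool)
    if cpu > SCALE_OUT_AT and len(pool) < MAX_INSTANCES:
        pool.append(provision_vm())       # demand rising: scale out
    elif cpu < SCALE_IN_AT and len(pool) > MIN_INSTANCES:
        decommission_vm(pool.pop())       # demand falling: scale back in
    return cpu

if __name__ == "__main__":
    pool = [provision_vm() for _ in range(MIN_INSTANCES)]
    for _ in range(5):                    # a few iterations for illustration
        cpu = autoscale(pool)
        print(f"avg CPU {cpu:.0f}%, running {len(pool)} VMs")
        time.sleep(1)
```

The point is simply that the application itself drives the demand for slices of capacity, minute by minute, which is what makes fine-grained, pay-by-the-hour billing attractive.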

The challenge is that most organisations do not have applications which behave this way. There might be a few, but it's not the majority. Certainly newer applications are being created this way and people such as Steve Jin are promoting it, yet organisations take years to go through the expense and time of re-architecting their systems. Likewise, ISVs take time to deliver the buy-vs-build applications which are much more attractive to organisations attempting to avoid internal development costs.

Applications For the Cloud are our longer-term game, and it's in these that the granularity of consumption is key: by the hour.

Yet the second type of workload, the traditional application, is the predominant one today; these are the Applications In the Cloud. These applications do not scale out well and they are typically very poor at being dynamic. Reconfiguring them for multiple instances is often a complex manual process, involving application and networking reconfiguration. Just look at Exchange: you don't just go and scale it out on the fly for a day. Organisations will look to Cloud in the short term to provide them some savings with traditional applications, and here the granularity of demand is much coarser. Changes are scheduled, expected and longer-lived.

As Lori MacVittie has suggested, the workloads will be on project-scale terms rather than by the minute. We need resources for the duration of our SAP upgrade, our merger and acquisition integration project, our Christmas period, for a semester of scheduled classes. Further, because these applications do not scale out easily, the resources need to be reserved or guaranteed to deliver consistent performance results.

It's my view that larger, bulk reservations, more project based and suited to AIC, are going to be predominant in 2010/11. Then, as the weight shifts to AFC over the longer term, much finer granularity in costing models will become dominant.
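To see why the granularity matters to the economics, here is a back-of-the-envelope sketch of the two consumption models. Every rate and duration in it is invented purely for illustration; it is not a quote from any provider.

```python
# Hypothetical rates, for illustration only.
ON_DEMAND_PER_HOUR = 0.50      # $/VM-hour, fine-grained AFC-style billing
RESERVED_PER_MONTH = 250.00    # $/VM-month, project-scale AIC-style reservation

def afc_cost(vm_hours):
    """Pay only for the VM-hours actually consumed."""
    return vm_hours * ON_DEMAND_PER_HOUR

def aic_cost(vms, months):
    """Reserve capacity for the whole project window, used or not."""
    return vms * months * RESERVED_PER_MONTH

# An AFC workload that bursts to 10 VMs for 8 hours a day, 20 days a month:
print(f"AFC, bursty:   ${afc_cost(10 * 8 * 20):,.2f} per month")

# An AIC project (say an SAP upgrade) holding 10 VMs for a 3-month window:
print(f"AIC, 3 months: ${aic_cost(10, 3):,.2f} total")
```

The hourly model only pays off if the workload can actually idle down between bursts, which is exactly what most AIC workloads cannot do; hence the bulk, project-length reservation remains the natural unit of purchase for them.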

We must remember, not everyone has the luxury of a newly developed application which can burst dynamically into Amazon EC2.

Would love to hear people's thoughts on this, so post in the comments.

Rodos

P.S. It's interesting to see the areas where VMware are expanding their portfolio and developing; lots is happening around the AIC space with AppSpeed, SpringSource, Hyperic, Zimbra and Redis.

Comments (4)

  1. nice post, mate. a strange phenomenon I can sense but not see is waste in all of this. If you think about the massive amount of compute in just one UCS 8 blade system, never mind a cloud: 8 x 2 x 4 x 2.5 = 160 GHz of CPU - but the problem is that it's not one big lump but slivers called cores (or even smaller HT threads)... and we know that few AICs can really benefit from this, whereas the stateless AFC should benefit from it: ergo, will AFC get 1/50/80% more value than AIC? is that the next push over the horizon?

  2. Thanks for commenting, Steve. I like to call them a slice. Yes, IMHO AFC are going to provide more value than AIC because you can scale them back and get that short granularity, which can result in lower economic cost. The key is knowing that you can get those resources when you want them (just like if everyone calls a DR at once it's first in; the tail-enders may miss out). Typically you will also pay more for the on-demand resource than a constant one (just look at Amazon).

    In terms of UCS being able to deliver great bang for buck, yes, you can run more slices per blade, but that's not really my point. The nice thing is that the more you aggregate, the more the peaks in demand flatten out, so storms are not so much a problem; you just don't want a perfect storm.

    Okay, now I am just rabbiting on.

  3. Anonymous (4:08 pm)

    My sense, from my newscale customers, is that the more we help existing applications run in the cloud and acquire the behaviors of cloud (scalability, standardization, etc.), the bigger the business for cloud providers.

    New apps for the cloud are a 10-year proposition. Heck, we are still fighting w/ customers who want us to support IE 6!

  4. Thanks, Anonymous. I don't think it's going to be 10 years, but we will always be dragging some legacy!

