Musings on areas of technology that affect the Enterprise. Focus on Cloud, Virtualisation, Storage and Data Center.
Friday, November 27, 2009
Gartner on the place of Private Clouds
Tom thinks that services will ultimately move to service providers but that they are not ready yet. Hence, to do something now, organisations are looking at virtualisation to create Private Clouds (internal Private Clouds, one would assume). It's predicted that a lot of the money spent over the next few years will be put into these areas and not into utilisation of Service Provider clouds.
However, in Gartner's view, Private Cloud computing is not the destination but a stop-gap, near-term step until the services are more mature, which may be six months for some services or ten years for others. Tom also talks about how creating a Private Cloud can be a stepping stone to ease the migration to Service Providers in the future: "I don't want to build a dead end, I want to build a stepping stone".
It's only 90 seconds long, take a look.
Whilst I like the premise behind the message, I can't say I agree with all of it, although I am sure it's hard to sum things up in 90 seconds. I think that Private Clouds will remain and there won't be an evacuation off to the Service Provider space. There will still be a place for the Private Cloud; I can think of many reasons to maintain your smaller Private Cloud. If you have some good ideas of why you would maintain some Private Cloud, post them in the comments.
Rodos
Thursday, November 19, 2009
Interview - Chris Akerberg from Vizioncore
- What does it mean for Vizioncore to be owned by Quest?
It allows the backing of a large company which brings deeper pockets for investment. There is also the wider breadth of the Quest sales force. Lastly there is the sharing or blending of IP where Quest can bring their application and database management expertise.
- What is the reason behind having free and paid tools within the company's portfolio?
We are in a marathon of virtualisation adoption and we want to give back to the community free tools that will assist in that adoption. These tools include a P2V product, V EcoShell and an Optimizer product for storage. The paid tools will evolve with the customer as their need for virtualisation increases.
- How do Vizioncore tools work within an organisation as they move along the maturity model to reach 80% virtual or above?
We have an analyze to automate story which matches the virtualisation adoption life cycle. Analyze your existing environment (vFogLight), take action on the results by converting the workloads (vConverter), then protect the workloads (vRangerPro and vReplicator), now monitor those workloads (vFogLight), then optimize the environment (vOptimizerPro and vFogLight) and lastly extend your ROI by doing automation (vControl). So analyze to convert to protect to monitor to optimize to automate. We can help customers out anywhere along the curve.
- What can you share about product updates next year? What's coming?
Not a wider portfolio, we have already moved from being a backup product to having a comprehensive portfolio. What you will see from Vizioncore is going deeper within those products. So vRangerPro being able to have visibility and control of the applications as well. vFogLight being able to look at applications. Bringing application awareness to the product line. Lastly having all our products being Cloud ready as we are supportive of customers moving into the managed services or Software as a Service industry.
- VMware are increasingly releasing products which overlap with the functionality of third-party providers like yourselves. What's your position on this?
We get this question quite a bit and are creating literature for our channel partners and customers about the differences between what VMware are offering and what Vizioncore do. What we do is communicate with VMware and understand where their roadmap is going, and as they turn we will turn with them. Yet if they are going to put a significant investment into an area of the platform, Vizioncore can choose to not spend money there and maybe go off in a different direction. Likewise, where VMware might not be investing, Vizioncore can capitalize and add more value. Also, we want people to think of investing in Vizioncore as investing in virtualisation management. We are going to do this analyze to automate story not only for VMware but also for Hyper-V and Citrix. As customers decide what to use within their strategy they can still look to Vizioncore for supporting products.
Drobo configuration
Drobo utilizes the revolutionary BeyondRAID storage technology that protects data against a hard disk crash, yet is simple enough for anyone to use. As long as you have more than a single disk in Drobo, all data on Drobo is safe no matter which hard disk fails. There’s no need to worry about anything else.
Ocarina Networks
The analyst industry is telling us that unstructured data growth is going to outpace that of transactional data. "While transactional data is still projected to grow at a compound annual growth rate of 21.8%, it’s far outpaced by a 61.7% CAGR predicted for unstructured data in traditional data centers." You don't have to look far past your own explosion of data consumption to realise this is becoming a large problem for IT departments. Combined with this growth is our desire to keep more aged data online, in order to provide much faster retrieval.
What is one to do? Well, a company called Ocarina Networks says they "make free space on storage you already have" through some very clever content-aware compression and de-duplication. The key element here is that it works on your online storage, so the savings are multiplied as there are flow-on effects to the transmission of your data over networks and to the amount of data that you need to back up. So even though companies like NetApp (which Ocarina say they are 57x better than) and Data Domain do de-dupe, it's only at the underlying storage layer, without these possible secondary benefits.
A quick look at just three of the people involved in Ocarina gives you a good impression that they have the pedigree to achieve great things here. Their CEO, Murli Thirumale, and CTO, Goutham Rao, hail from the same roles in the Citrix Advanced Solutions Group, where they led the SSL-VPN division (acquired via Net6). In those roles they took their technology to #1 in unit market share in eighteen months. The Chief Scientist, Dr Matt Mahoney, is a thought leader in next generation data compression. Also, as a company, they have been very busy creating some interesting patents.
Last week at the Gestalt IT Field Day I got a deep dive into the Ocarina technology. Here is a video I took of Goutham and Murli.
However, the insights from these guys on the science of de-dupe and compression were very informative, so let's look at what they had to say in more detail.
There are two approaches to compressing data, either a dictionary or a statistical approach. A dictionary encoder approach, such as the LZ algorithm, "operate[s] by searching for matches between the text to be compressed and a set of strings contained in a data structure (called the 'dictionary') maintained by the encoder. When the encoder finds such a match, it substitutes a reference to the string's position in the data structure."
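To make the dictionary idea concrete, here is a minimal LZW-style encoder in Python. This is a toy sketch of the general technique, not anything Ocarina ships: the encoder learns new strings as it scans the input and emits small dictionary references in place of the raw bytes.

```python
# Toy LZW-style dictionary encoder: emit references into a dictionary of
# strings seen so far instead of repeating the bytes themselves.
def lzw_compress(data: bytes) -> list:
    dictionary = {bytes([i]): i for i in range(256)}  # seed with all single bytes
    current, codes = b"", []
    for value in data:
        candidate = current + bytes([value])
        if candidate in dictionary:
            current = candidate                        # keep extending the match
        else:
            codes.append(dictionary[current])          # emit a dictionary reference
            dictionary[candidate] = len(dictionary)    # learn the new string
            current = bytes([value])
    if current:
        codes.append(dictionary[current])
    return codes

# Twelve bytes of repetitive input collapse to six codes.
print(lzw_compress(b"abababababab"))  # [97, 98, 256, 258, 257, 260]
```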
The statistical approach is much more interesting. If you can predict what is coming next in a data series, you don't need to record it; you only need to record the things you did not expect (this is what takes up the space). As long as you use the same algorithm to extract the exception data you get back exactly the same data (or file) whilst only saving a very small part of it. You can also have a feedback loop from the errors back into the input to improve the prediction. For example, if you look at a photo of the room you are sitting in now, there are probably lots of borders, edge-framed objects, walls and so on. If you turned all of these edges into axes, and you were to follow an axis of colour moving down the edge of the wall, you can expect that the next element moving down will be more of that same edge; you only need to record something when it's not. Complex, but you can do some clever things with the right algorithms [more on that shortly].
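Here is a toy sketch of the statistical idea in Python, using the crudest possible predictor ("the next value equals the previous one") and recording only the residuals. Real statistical coders use far richer models, but the principle is the same: predictable data yields residuals full of zeros (which then pack down to almost nothing), and running the identical predictor on decode reconstructs the input exactly.

```python
# Predictive coding sketch: store only what the predictor got wrong.
def encode(samples):
    prev, residuals = 0, []
    for s in samples:
        residuals.append(s - prev)   # the "surprise" relative to the prediction
        prev = s
    return residuals

def decode(residuals):
    prev, samples = 0, []
    for r in residuals:
        prev += r                    # same predictor, run in reverse
        samples.append(prev)
    return samples

row = [200, 200, 200, 201, 201, 202]   # a smooth edge in an image row
assert decode(encode(row)) == row      # lossless round trip
print(encode(row))                     # [200, 0, 0, 1, 0, 1] - mostly zeros
```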
Compression is something you can only do on a single file. As mentioned, the key to compression is predicting what the next value is going to be in an incoming stream of data. The more data you have available in the incoming data stream, the better you may be able to predict the next value. Also note that a lot of file types being generated today are already compressed internally, such as JPEG images, either by themselves or embedded inside other documents.
De-dupe is all about finding similar chunks of data by comparing hash values or fingerprints. The smaller the chunks you are comparing the better, because it increases the likelihood of a match between the two. Dividing the data into fixed chunks will get you so far, but unless you have really small chunks you can miss a match that occurs across the boundary of two chunks. NetApp de-dupe does it this way. To get maximum effect you need what is called a sliding chunk window, looking for a matching bit of data anywhere, yet this is computationally expensive as you have to calculate a lot more hash values. There is a risk that two different chunks may produce the same hash or fingerprint, a false positive. Typical hashing algorithms are MD5, which is very weak, or SHA256, which is strong, but Rabin [http://en.wikipedia.org/wiki/Rabin_fingerprint] is most liked [it's fast to implement in software and works well on sliding windows].

How does all this comparing of chunks save you data? When you find a duplicate chunk you don't need to save a second copy, you can just save a small reference to the original piece of data you already have. Some technologies, such as Microsoft Storage Server 2008, do single instance storage (de-dupe) by only comparing whole files, which is a bit of a joke really; it's not going to get you much saving, because these days we create so many copies of the same files which are only slightly different (we add a few words to a document but save it under a new file name), or there are a lot of repetitive elements across files (images and templates). Yet this technique is really easy to do. Lastly, not all data can be de-duped; some just has very little, if any, repetition.
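To see why the sliding window matters, here is a sketch in Python of content-defined chunking, with a simple polynomial rolling hash standing in for a real Rabin fingerprint (the window size, mask and constants are arbitrary picks for the example). Because boundaries are chosen by the bytes themselves, inserting data near the front of a file shifts only nearby boundaries instead of invalidating every fixed-size chunk after it.

```python
import hashlib
import random

WINDOW = 16                 # bytes covered by the rolling hash
MOD = 1 << 32
POW = pow(31, WINDOW, MOD)  # 31^WINDOW, to drop the byte leaving the window
MASK = 0x3F                 # cut when low 6 bits are zero: ~64-byte average chunks

def chunks(data: bytes):
    """Yield variable-size chunks whose boundaries depend on content, not offsets."""
    start, h = 0, 0
    for i, byte in enumerate(data):
        h = (h * 31 + byte) % MOD                     # bring the new byte in
        if i - start >= WINDOW:
            h = (h - data[i - WINDOW] * POW) % MOD    # slide: drop the oldest byte
        if i - start >= WINDOW and (h & MASK) == 0:   # the content says: cut here
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]                            # final partial chunk

def dedupe(data: bytes, store: dict) -> list:
    """Replace a byte stream with fingerprint references into a shared store."""
    refs = []
    for chunk in chunks(data):
        fp = hashlib.sha256(chunk).hexdigest()  # strong fingerprint per chunk
        store.setdefault(fp, chunk)             # each unique chunk is stored once
        refs.append(fp)
    return refs

store = {}
payload = random.Random(0).randbytes(4000)  # deterministic test data
a = dedupe(payload, store)
b = dedupe(b"XX" + payload, store)          # the same data shifted by two bytes
# Boundaries resynchronise after the first cut, so almost every chunk is shared.
print(len(a) + len(b), "references,", len(store), "unique chunks stored")
```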
Now it also matters what you are de-duping: is it data moving over a network, a backup, or your primary storage? Each of these has a different "window" of time that it is looking at. On a network transfer you don't have much of a window, and the data in that short window may not be very repetitive, whereas a backup has a very long window with repeated cycles of data coming in that is probably very repetitive. These different characteristics of the data stream require different algorithms to achieve the greatest efficiencies.
Compression does not preclude de-dupe, but they do pull against one another. For example, as mentioned earlier, a lot of data is already compressed, and compressed data removes just about any chance of finding duplicate chunks of data. If you are a photo storage site you probably want to turn de-dupe off and not waste all the effort. Likewise, in a corporate environment you may have millions of occurrences of your company logo image, but they are all compressed and embedded inside Word and PowerPoint files that are then also compressed. All that repetitive data has been obfuscated! Remember, all that growth in storage is in this unstructured data area.
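You can demonstrate the obfuscation effect in a couple of lines, with zlib standing in for whatever compressor produced the files. Two payloads that are 99.9% identical compress to streams that diverge almost immediately, after which every downstream bit is shifted, so chunk-level de-dupe finds essentially nothing to match:

```python
import zlib

payload = b"corporate logo bytes " * 200
a = zlib.compress(b"A" + payload)   # two near-identical inputs...
b = zlib.compress(b"B" + payload)

# Length of the common prefix shared by the two compressed streams.
common = next((i for i, (x, y) in enumerate(zip(a, b)) if x != y),
              min(len(a), len(b)))
print(len(a), len(b), common)       # ...share only a handful of leading bytes
```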
Yet you want both de-dupe and compression, because there is always data you need to save, so compress it.
So given this primer, what do Ocarina do? Well, Ocarina find the optimal chunk size for everything, compression and de-dupe, by performing object chunking. They take all of the data and break it into objects: a zip file is broken down into its multiple files, a Word document may be broken down into its images and text. The actions then occur at the object level. Hence a JPEG would not be broken down into smaller chunks, as the best window size to compress or de-dupe a JPEG is the whole image.
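Since modern Office files are just ZIP containers, a sketch of object chunking is easy to picture. In this Python sketch (the file name is hypothetical) each internal object, rather than an arbitrary byte range, becomes the unit that gets fingerprinted, so the same logo embedded in a hundred documents hashes identically:

```python
import hashlib
import zipfile

def objects(path: str):
    # A .docx or .pptx is a ZIP of XML parts and embedded images; each
    # member is yielded as one whole object.
    with zipfile.ZipFile(path) as container:
        for name in container.namelist():
            yield name, container.read(name)

# Hypothetical document, for illustration only.
for name, blob in objects("report.docx"):
    print(name, len(blob), hashlib.sha256(blob).hexdigest()[:12])
```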
Going beyond the object-based chunking, Ocarina then use a neural network to determine the best compression algorithm for each particular type of chunk; in fact they have over 120 different algorithms. There are even different algorithms for variations of the same object, such as a small versus a large JPEG. Their algorithms range from plain text to gene sequences. For images they have some very smart algorithms that perform spatial optimization based on what your eye can actually see, using chrominance and luminance. A typical scenario helps to illustrate the power of this: if you have the same photo at different sizes, or if you slightly adjust a photo (such as removing red eye), the data on the disk is all very different and there is probably no repetition across the files. However, because Ocarina can "look" at the image, it is able to determine that they are all in fact the same photo.
How does all of this work? Well, an appliance accesses your storage and processes the data. It breaks files down into their objects, weaves its magic and puts the smaller, shrunk version back. This all occurs in RAM. To be safe, before it replaces the file it compares the original file with an expansion of the shrunk file to ensure they match exactly, so there are no errors. Of course the files on the storage are now different, so you need to use the ECOreader (a file system filter driver) which expands the files in real time as they are read, so you get them back in their original format. Sometimes, though, you may want to read the shrunk file and not expand it, for example if you want to transmit it over a network (replication) or for backup. The software can be integrated into storage to make it all transparent to the user. Performance when reading and expanding is on par for de-dupe; for compression it's dependent on the method, but it's usually the same rate to uncompress as it was to compress. Essentially you are performing an economic trade-off, consuming compute cycles for disk capacity gains.
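The shrink-and-verify step might look something like the sketch below, with zlib standing in for Ocarina's actual optimizer. The key safety property from the description above is that nothing replaces the original until a full round trip reproduces it byte for byte:

```python
import os
import zlib

def shrink_in_place(path: str) -> bool:
    """Replace a file with a shrunk copy only after verifying the round trip."""
    with open(path, "rb") as f:
        original = f.read()
    shrunk = zlib.compress(original, 9)      # stand-in for the real optimizer
    if zlib.decompress(shrunk) != original:  # expand and compare before replacing
        return False                         # never swap in on a mismatch
    if len(shrunk) >= len(original):
        return False                         # no gain: leave the file alone
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(shrunk)
    os.replace(tmp, path)                    # atomic swap of the shrunk version
    return True
```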
Having reviewed all of this, organisations which have to store, transmit and back up large amounts of unstructured data could benefit a lot from the Ocarina technology, especially those whose data the Ocarina algorithms work well on. From speaking to them, they are working hard on new and improved algorithms, but just as importantly on making the whole technology solution work well.
You can find more details about the products on the web site http://www.ocarinanetworks.com/
Rodos
[Note: I attended the Field Days as a guest of Gestalt IT. Travel and accommodation were provided as part of the event. See the Field Day FAQ and my comments for details.]
Tuesday, November 17, 2009
Cisco UCS deploy and ESXi 4 install guides
Monday, November 16, 2009
Happy Days - Fusion and Shrink Disk
Saturday, November 14, 2009
Gestalt IT Field Days 2009
This unique event brings together innovative IT product vendors and independent thought leaders who have immense influence on the ways that products and companies are perceived and understood by the general public. The world of media has changed, with social media and blogging gaining special importance.
Our Field Day is an opportunity for tech companies and independent writers to get to know each other. Ultimately, we hope to provide a forum for engagement, education, hands-on experience, and feedback.
Gestalt IT Field Days 2009 Day 2
Friday, November 13, 2009
Gestalt IT Field Days 2009 Day 1
Here is a video of the day's events where each of the vendors gives a little summary of their message.
Sunday, November 08, 2009
Gestalt IT Field Days - Cash for comment?
This unique event brings together innovative IT product vendors and independent thought leaders who have immense influence on the ways that products and companies are perceived and understood by the general public. The world of media has changed, with social media and blogging gaining special importance.
Sounds reasonable. However, for anyone from Australia it raises a suspicion of "cash for comment". Cash for comment was a scandal that occurred in Australian radio where two top "shock jocks" were giving positive comments regarding various big name companies during their broadcasts without making it clear that they were actually being paid to do so. Therefore, when I received the invite to attend I was a little cautious. After all, one thing I love about my blog is that it gives me a place to ramble on about things I am interested in, whatever they may be. I don't have any sponsors, although I do use Google Adwords, which produces enough revenue for a cup of coffee once or twice a month. You can see that disclosure is something the blogging community is having to deal with, given the updated guidelines released by the Federal Trade Commission in the US regarding bloggers and disclosure.
Our Field Day is an opportunity for tech companies and independent writers to get to know each other. Ultimately, we hope to provide a forum for engagement, education, hands-on experience, and feedback.
It did not take long for me to realise that the Field Days are a great idea, that they can be above board and that concerns around integrity can be handled appropriately; after all, most of the issues are not new. As we continue to move into the new social media era these concerns will need to be worked out, just as they have been in the commercial media space. I think it's good to be part of that evolution.
Also, who would pass up an opportunity for a few days of geeking it up with some really great and smart people, arguing over the ins and outs of products and the industry? Sounds like a lot of fun.
In summary, I think this question from the Gestalt IT FAQ states it well:
Isn't this just a paid vendor love fest?
If you know the folks we are bringing in to attend, you should know better than to throw rocks. These folks believe in tech and won't hesitate to tell the truth, even if it hurts. They aren't paid to attend (though their expenses are covered), and most are taking vacation days off from work. If you're worried about payola, there are much better places to look.
Let's see how it goes. My Cisco Flip may get a bit of a workout!
Rod
Wednesday, November 04, 2009
Welcome to vBlock Type 2
I’m also glad finally to be able to start talking openly – you should have seen the edits that occurred to the VMworld 2009 VMware/Cisco/EMC supersession (SS5240 – which you can watch here) to tiptoe around this (if you do watch it now knowing what we’ve been working on – it’s interesting).
I remember seeing some of these edits and can attest to just how much care was given to this. Here is the slide from that VMworld session that also refers to our VCE customer reference story.
Sunday, November 01, 2009
VMware vForum Australia 2009
What were the highlights of the event?
- The PCoIP demonstration performed by David Wakeman in Steve Herrod's keynote was by far the best. The demo compared an RDP session to a PCoIP session at both 1ms and 180ms latency; the differences between the two were amazing. You can read a news report that contains a video of it.
- An awards night was held with customers and partners receiving recognition across a number of categories. You can read the list of recipients and see some photos at the CRN article.
- It was mentioned that Australia is the most virtualised country in the world (per capita), more so than any other OECD country.
Welcome to Day 1 of #VMware #vForum in Sydney, Australia
VMware #vforum http://mypict.me/1cCfQ
Carl Eschenbach presenting his keynote at #vForum
#vForum keynote is full, people registered and turned up.
Carl 3 steps to virt journey. IT Production, Business Production to Virt IT (virt first). Start with savings and move to bus agility #vForum
Carl on stage at #vForum http://mypict.me/1cDVx
Cloud bingo. Carl just said Cloud, he he #vForum
"Cloud is real and not just hype" Carl E, #vForum keynote
Don't fragment Ur DC with silos of diff hypervisors. Simplify. #vForum keynote
Carl tells how MelbIT upgraded to vSphere over lunch on the day of the launch. #vForum keynote
Is cloud evolutionary or revolutionary? Delivering ITaaS within your DC. Carl keynote #vForum
#vForum take the benefits of the cloud and use them in Ur DC. Build Ur private cloud. Then federate with providers. Carl.
Carl lists three local partners building the federated Cloud; Optus, MelbIT & Telstra. #vForum
Carl talking about CapacityIQ. Wonder if he knows it does not work with v4. #vForum keynote
Carl says the Cloud providers may integrate with VMware GO for cloud workload ingestion. #vForum keynote
Carl E, "2010 will be the tipping point for VDI". #vForum
#vForum Capital cost of VDI is getting lower but focus on the operational cost savings. Carl keynote
"Provision users not devices. Let the personality move in time and space." #vForum
Why VMware will win in VDI? Platform, management and best user experience. Carl E #vForum
Carl now talking about SpringSource! Fantastic. IMHO spring is key for VMware long term future. PaaS #vForum keynote
Congratulations to my competitor Dimension Data for their win as regional partner of the year at #vForum. Well done.
Heading home from big day @ #vForum. Over 1100 people rego'd for my session tomrw! Where did they all get the imprsn it would be that good?
Welcome to day two of VMware #vForum ANZ. #vForumAust
Will be doing tweets from Steve @herrod 's keynote this morning at VMware #vForum
Lots of people entering the keynote at #vForum http://mypict.me/1dxJ2
#fail #vForumAust hash tag promoted at start of keynote. Do I keep boycotting and only use #vForum? I think so.
Herrod @ #vForum: CTO of the year now on stage
Herrod @ #vForum: doing desktop first with View. Mentions Win7
Herrod @ #vForum: Provision a person not a device. The term desktop does not make sense anymore.
Herrod @ #vForum: XP and W7 are tier one workloads that vSphere was designed for.
Herrod @ #vForum: intel 5500 does desktop workloads grt as does flash storage.
Herrod @ #vForum: Demo of desktop user experience coming.
Herrod @ #vForum: Best experience to the type of device Ur on, even if offline. WAN, LAN, local. PCoIP and CVP
Herrod @ #vForum: PCoIP, pure software, bandwidth aware, lossless if required, screen aware. Build to scale through diff use cases.
Herrod @ #vForum: David Wakeman doing the View demo. Go buddy! Good luck.
Herrod @ #vForum: 1ms latency was good. Now doing 180ms. Amazing difference. PCoIP works great at high latency.
Herrod @ #vForum: why did they not do a demo of PCoIP like Wakemans at VMworld? Great
Herrod @ #vForum: Talking about BYOD with Fusion 3 and Workstation 7 or future bare metal hypervisor. Good for security as well as img std.
Herrod @ #vForum: Pocket cloud from Wyse.
Herrod @ #vForum: Now moving to vSphere. 3m engineering hours.
Herrod @ #vForum: VMotion joke about saving marriages got a laugh just like it did at VMworld!
Herrod @ #vForum: Great flexibility through storage and network vMotion. Plus partners working hard on VMotion between DCs
Herrod @ #vForum: "The myth that U can't run DBs and high end workloads has now been dispelled."
Herrod @ #vForum: DRS being extended to include I/O
Herrod @ #vForum: DPM is server defrag!
Herrod @ #vForum: We forget in IT that nothing matters at the end apart from applications.
Herrod @ #vForum: Lab Manager, the signature avoidance tool.
Herrod @ #vForum: Sharing the size of the VMworld lab setup. What no photos or video?
Herrod @ #vForum: Cloud bingo. Now doing cloud information.
Herrod @ #vForum: Discussing challenges with Federation. Supporting the use case for internal cloud. Also federating Ur own multiple DCs.
Herrod @ #vForum: Follow the moon computing. Chase the cheapest elec around the world.
Herrod @ #vForum: vCloud being discussed. I love the API.
Herrod @ #vForum: Spring logo, I wonder if Spring will be discussed?
Herrod @ #vForum: Yes Spring is being discussed in relation to vApps. Trad vs future app architecture. Removing the tentacles.
Herrod @ #vForum: "Spring is an application framework" leading to PaaS discussion to support it.
Herrod @ #vForum: Love hearing @herrod talk on Spring. 1 of the times we see his true intellect and comp sci bckgrnd rathr than mrkting fig
#vForum MelbIT: Hosting is not Cloud, we know, we are the largest hosting provider in the country. (Hear hear!)
#vForum MelbIT: Glenn is having a customer talk about their experience.
#vForum MelbIT: vCloud Express. Customers like the quick or lack of sales cycle.
#vForum MelbIT: The developer on stage is manually scaling their enviro. IMHO in future with Spring &/or vCloud API they can automate this.
#vForum MelbIT: Glenn talking about unpredictable billing. Mainly around internet bandwidth. (but this is not nsrly a prob with private cloud)
#vForum MelbIT: Going to offer cap plans or insurance plans to balance peak and non peak months.
#vForum MelbIT: Seeing lots of security incidents around these public Cloud internet facing VMs.
#vForum MelbIT: Melb Cup site used hybrid Cloud of AWS with MelbIT
#vForum MelbIT: Thinks people will use multiple Cloud providers of different features and specialities. (Agreed)
#vForum MelbIT: The chllnge. Get Ur app provdrs 2 build for horz scale that the cloud delivrs well. (Place for Spring and PaaS driving IaaS)
#vForum : @hartmant from VMware talking about vCloud
#vForum VMware talking about vCloud API use cases, also the GUI and vCenter Plugin. Contrast to Express
#vForum VMware saying U may want to upgrade to vSphere to be ready to Federate with vCloud
#vForum vCloud GUI will be based on Java rather than .net according to VMware
#vForum Everyone enjoying the networking detail slide of vCloud. Everyone leaning forward and squinting.
- Andre's Photos
- Short video of the main stage.