Check out Gartner's view on private clouds in this video from Tom Bittman.
Tom thinks that services will ultimately move to service providers, but that they are not ready yet. Hence, to do something now, organisations are looking at virtualisation to create Private Clouds (Internal Private Clouds, one would assume). It's predicted that a lot of the money spent over the next few years will be put into these areas and not into utilisation of Service Provider clouds.
However, in Gartner's view, Private Cloud computing is not the destination but a stop gap, a near-term step until the services are more mature, which may be six months for some services or ten years for others. Tom also talks about how creating a Private Cloud can be a stepping stone to ease the migration to Service Providers in the future: "I don't want to build a dead end, I want to build a stepping stone".
It's only 90 seconds long, take a look.
Whilst I like the premise behind the message, I can't say I agree with all of it, although I am sure it's hard to sum things up in 90 seconds. I think that Private Clouds will remain and it won't be an evacuation off to the Service Provider space. There will still be a place for Private Cloud, and I can think of many reasons to maintain your smaller one. If you have some good ideas of why you would maintain some Private Cloud, post them in the comments.
The President and Chief Operating Officer of Vizioncore, Chris Akerberg, is currently in Sydney, Australia, so I took the opportunity to ask him some questions about the company and their products.
Chris has been with the company for a number of years, since before the acquisition by Quest. Since becoming President and COO he has been focusing on building out the company in order to sustain a much larger customer base, moving beyond the 20,000 customers they have now to the future 100,000 range.
The questions and a summary of the answers:
What does it mean for Vizioncore to be owned by Quest?
It provides the backing of a large company, which brings deeper pockets for investment. There is also the wider breadth of the Quest sales force. Lastly, there is the sharing or blending of IP, where Quest can bring their application and database management expertise.
What is the reason behind having free and paid tools within the company's portfolio?
We are in a marathon of virtualisation adoption and we want to give back to the community free tools that will assist in that adoption. These tools include a P2V product, vEcoShell and an Optimizer product for storage. The paid tools will evolve with the customer as their need for virtualisation increases.
How do Vizioncore tools work within an organisation as it moves along the maturity model to reach 80% virtual or above?
We have an analyze-to-automate story which matches the virtualisation adoption life cycle. Analyze your existing environment (vFogLight), take action on the results by converting the workloads (vConverter), protect the workloads (vRangerPro and vReplicator), monitor those workloads (vFogLight), optimize the environment (vOptimizerPro and vFogLight), and lastly extend your ROI through automation (vControl). So: analyze, convert, protect, monitor, optimize, automate. We can help customers out anywhere along the curve.
What can you share about product updates next year? What's coming?
Not a wider portfolio; we have already moved from being a backup product to having a comprehensive portfolio. What you will see from Vizioncore is going deeper within those products: vRangerPro gaining visibility and control of the applications as well, and vFogLight being able to look at applications, bringing application awareness to the product line. Lastly, making all our products Cloud ready, as we are supportive of customers moving into the managed services or Software as a Service industry.
VMware are increasingly releasing products which overlap with the functionality of 3rd party providers like yourself. What's your position on this?
We get this question quite a bit and are creating literature for our channel partners and customers about the differences between what VMware are offering and what Vizioncore do. What we do is communicate with VMware and understand where their roadmap is going, and as they turn we will turn with them. If they are going to put a significant investment into an area of the platform, Vizioncore can choose not to spend money there and maybe go off in a different direction. Likewise, where VMware might not be investing, Vizioncore can capitalize and add more value. Also, we want people to think of investing in Vizioncore as investing in virtualisation management. We are going to do this analyze-to-automate story not only for VMware but also for Hyper-V and Citrix. As customers decide what to use within their strategy they can still look to Vizioncore for supporting products.
Of course what you really should do is watch the short video below and listen to Chris in his own words. He is much more articulate than my summary.
Have you ever wondered how long it would take for a 12-year-old child to configure a Drobo storage device? Well, you are about to find out. In this short time-lapse film you can see it for yourself!
I was lucky enough to win this Drobo from Data Robotics at the Gestalt IT Field Day last week. It arrived today via FedEx. If the Drobo is meant to be simple for home users then there is not really any point in me testing it out; I know a thing or two about computers. However, my youngest son Tim does not. That's not quite true, he is a bit of a geek and a whiz at using applications, but he is only just starting to learn about the computer technology itself, like storage. What a great test of the simplicity of the device. Seriously, the hardest thing was removing all of the packaging.
Tim also managed to figure out how to create the partition and format it. The Mac was kind enough to pop up the right utility when the Drobo was plugged in. After the partition was formatted it auto-mounted. His test was then to copy a video file onto the Drobo and play it from there. This is all included in the time.
If you are wondering what a Drobo is, check out their web site. Essentially it is a desktop storage device with built-in data protection.
Drobo utilizes the revolutionary BeyondRAID storage technology that protects data against a hard disk crash, yet is simple enough for anyone to use. As long as you have more than a single disk in Drobo, all data on Drobo is safe no matter which hard disk fails. There’s no need to worry about anything else.
Its technology lets you add disk drives; it will take up to four. If you run out of space you simply pop out the smallest drive (it's okay, your data is protected) and insert a larger one; you will then get more space and the data protection reconfigures itself underneath. For large drives the reconfiguration process can take quite some time.
If you enjoyed this, please post a comment, I am sure Tim would appreciate it.
Rodos
[Note : I attended the Gestalt IT Field Days as a guest of Gestalt IT. Travel and accommodation was provided as part of the event. See the Field Day FAQ and my comments for details.]
The analyst industry is telling us that unstructured data growth is going to outpace that of transaction-based data. "While transactional data is still projected to grow at a compound annual growth rate of 21.8%, it’s far outpaced by a 61.7% CAGR predicted for unstructured data in traditional data centers." You don't have to look far past your own explosion of data consumption to realise this is becoming a large problem for IT departments. Combined with this growth is our desire to keep more aged data online, in order to provide much faster retrieval.
What is one to do? Well, a company called Ocarina Networks says they "make free space on storage you already have" through some very clever content-aware compression and de-duplication. The key element here is that it works on your online storage, so the savings are multiplied as there is a flow-on effect to the transmission of your data over networks and to the amount of data that you need to back up. So even though companies like NetApp (which Ocarina say they are 57x better than) and DataDomain do de-dupe, it's only at the underlying storage, without these possible secondary benefits.
A quick look at just three of the people involved in Ocarina gives you a good impression that they have the pedigree to achieve great things here. Their CEO, Murli Thirumale, and CTO, Goutham Rao, hail from the same roles in the Citrix Advanced Solutions Group, where they led the SSL-VPN division (acquired via Net6). In those roles they took their technology to #1 in unit market share within eighteen months. The Chief Scientist, Dr Matt Mahoney, is a thought leader in next-generation data compression. As a company they have also been very busy creating some interesting patents.
Last week at the Gestalt IT Field Day I got a deep dive into the Ocarina technology. Here is a video I took of Goutham and Murli.
However, the insights from these guys on the science of de-dupe and compression were very informative, so let's look at what they had to say in more detail.
There are two approaches to compressing data: a dictionary approach or a statistical approach. A dictionary encoder, such as the LZ algorithm, "operate[s] by searching for matches between the text to be compressed and a set of strings contained in a data structure (called the 'dictionary') maintained by the encoder. When the encoder finds such a match, it substitutes a reference to the string's position in the data structure."
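To make the dictionary idea concrete, here is a toy LZ78-style encoder in Python. It is my own illustration, not the exact variant any product ships: repeated phrases collapse into references to earlier dictionary entries instead of being stored again.

```python
def lz78_encode(text):
    """Toy LZ78-style dictionary encoder.

    Emits (dictionary_index, next_char) pairs; repeated phrases become
    references to earlier dictionary entries rather than being stored again.
    """
    dictionary = {"": 0}            # phrase -> index
    output = []
    phrase = ""
    for ch in text:
        candidate = phrase + ch
        if candidate in dictionary:
            phrase = candidate      # keep growing the match
        else:
            output.append((dictionary[phrase], ch))
            dictionary[candidate] = len(dictionary)
            phrase = ""
    if phrase:
        output.append((dictionary[phrase], ""))
    return output

print(lz78_encode("abababab"))      # the repeats shrink to short references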
The statistical approach is much more interesting. If you can predict what is coming next in a data series, you don't need to record it; you only need to record the things you did not expect (this is what takes up the space). As long as you use the same algorithm to extract the exception data you get back exactly the same data (or file), whilst only saving a very small part of it. You can also have a feedback loop from the errors back into the input to improve the prediction. For example, if you look at a photo of the room you are sitting in now, there are probably lots of borders, edge-framed objects, walls and so on. If you turned all of these edges into axes and you were to follow an axis of colour moving down the edge of the wall, you can expect that the next element moving down will be more of that same edge; you only need to record something when it's not. Complex, but you can do some clever things with the right algorithms [more on that shortly].
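As a rough sketch of the statistical idea (a deliberately crude "next value equals the previous one" predictor of my own, nothing like the image models just described), the Python below records only the prediction errors; running the same predictor on decode reproduces the data exactly.

```python
def encode(samples):
    """Predict 'next value equals the previous one'; store only the surprises."""
    prediction, residuals = 0, []
    for value in samples:
        residuals.append(value - prediction)   # zero whenever the prediction was right
        prediction = value
    return residuals

def decode(residuals):
    """Make the same predictions and add the recorded errors back in."""
    prediction, samples = 0, []
    for error in residuals:
        value = prediction + error
        samples.append(value)
        prediction = value
    return samples

data = [10, 10, 10, 11, 11, 12]
assert decode(encode(data)) == data            # lossless round trip
print(encode(data))                            # [10, 0, 0, 1, 0, 1] -- mostly zeros
```

The better the predictor, the more of those residuals are zero and the less you actually have to store.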
Compression is something you can only do on a single file. As mentioned, the key to compression is predicting what the next value is going to be in an incoming stream of data. The more data you have available in the incoming data stream, the better you may be able to predict the next value. Also note that a lot of file types being generated today are already compressed internally, such as JPEG images, either by themselves or embedded inside other documents.
De-dupe is all about finding similar chunks of data by comparing hash values or fingerprints. The smaller the chunks you are comparing the better, because it increases the likelihood of a match between two of them. Dividing the data into fixed chunks will only get you so far: unless you have really small chunks you can miss a match that occurs across the boundary of two chunks. NetApp de-dupe does it this way. To get maximum effect you need what is called a sliding chunk window, looking for a matching bit of data anywhere, yet this is computationally expensive as you have to calculate a lot more hash values. There is also a risk that two different chunks may produce the same hash or fingerprint, a false positive. Typical hashing algorithms are MD5, which is very weak, and SHA256, which is strong, but Rabin fingerprinting [http://en.wikipedia.org/wiki/Rabin_fingerprint] is most liked [it's fast to implement in software and works well on sliding windows].

How does all this comparing of chunks of data save you space? When you find a duplicate chunk you don't need to save a second copy; you just save a small reference to the original piece of data you already have. Some technologies, such as Microsoft Storage Server 2008, do single instance storage (de-dupe) by only comparing whole files, which is a bit of a joke really; it is not going to get you much saving, because these days we create so many copies of the same files which are only slightly different (we add a few words to a document but save it under a new file name) or there are a lot of repetitive elements across files (images and templates). Yet this technique is really easy to do. Lastly, not all data can be de-duped; some just has very little, if any, repetition.
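Here is a minimal sketch of the fixed-size chunking approach, using SHA-256 as the fingerprint (a real sliding-window system would use a rolling hash such as Rabin instead): each unique chunk is stored once and duplicates become nothing more than references in a recipe.

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Fixed-size chunking: store each unique chunk once, keep a recipe of references."""
    store = {}    # fingerprint -> chunk bytes (stored only once)
    recipe = []   # ordered fingerprints needed to rebuild the original data
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fingerprint, chunk)   # a duplicate chunk costs nothing extra
        recipe.append(fingerprint)
    return store, recipe

def rebuild(store, recipe) -> bytes:
    return b"".join(store[fingerprint] for fingerprint in recipe)

data = b"A" * 4096 * 3 + b"B" * 4096           # three identical chunks plus one different
store, recipe = dedupe(data)
assert rebuild(store, recipe) == data
print(len(store), "unique chunks for", len(recipe), "references")   # 2 for 4
```

Shift that data by a single byte and the fixed boundaries stop lining up, which is exactly why the sliding-window approach described above is worth the extra hashing cost.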
Now it also matters what you are de-duping: is it data moving over a network, a backup, or your primary storage? Each of these has a different "window" of time that it is looking at. On a network transfer you don't have much of a window and the data in that short window may not be very repetitive, whereas a backup has a very long window, with repeated cycles of data coming in that is probably very repetitive. These different characteristics of the data stream require different algorithms to achieve the greatest efficiencies.
Compression does not preclude de-dupe, but they do pull against one another. For example, as mentioned earlier, a lot of data is already compressed, and compressed data removes just about any chance of finding duplicate chunks. If you are a photo storing site you probably want to turn de-dupe off and not waste all the effort. Likewise, in a corporate environment you may have millions of occurrences of your company logo image, but they are all compressed and embedded inside Word and PowerPoint files that are then also compressed. All that repetitive data has been obfuscated! Remember, all that growth in storage is in this unstructured data area.
Yet you want both de-dupe and compression, because there is always data you need to save, so you may as well compress it.
So given this primer, what do Ocarina do? Well, Ocarina find the optimal chunk size for everything, compression and de-dupe, by performing object chunking. They take all of the data and break it into objects: a zip file is broken down into its multiple files, and a Word document may be broken down into images and text. The actions then occur at the object level. Hence a JPEG would not be broken down into smaller chunks, as the best window size to compress or de-dupe a JPEG is the whole image.
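To picture object chunking, here is a small sketch (mine, not Ocarina's extractor, which understands far more formats) that cracks a zip archive into its member files and fingerprints each member as a whole object rather than as arbitrary byte ranges.

```python
import hashlib
import zipfile

def fingerprint_zip_objects(path: str) -> dict:
    """Break a zip into its member files and fingerprint each one as a whole object."""
    fingerprints = {}
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            payload = archive.read(name)               # the object, not a raw byte range
            fingerprints[name] = hashlib.sha256(payload).hexdigest()
    return fingerprints

# Two archives containing the same embedded image will produce a matching
# fingerprint for that member, even if the archives' raw bytes never line up.
```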
Going beyond the object-based chunking, Ocarina then use a neural network to determine the best compression algorithm for each particular type of chunk; in fact they have over 120 different algorithms. There are even different algorithms for variations of the same object, such as for a small versus a large JPEG. Their algorithms range from plain text to gene sequences. For images they have some very smart algorithms that perform spatial optimization, or what your eye can see, based on chrominance and luminance. A typical scenario helps to illustrate the power of this: if you have the same photo at different sizes, or if you slightly adjust a photo (such as removing red eye), the data on the disk is all very different and there is probably no repetition across the copies. However, because Ocarina can "look" at the image it is able to determine that they are all in fact the same photo.
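Conceptually the selection step is a dispatcher like the crude sketch below, a hand-written stand-in for Ocarina's neural network and 120+ algorithms that uses stock Python codecs purely for illustration: pick a codec per object type and leave already-compressed data alone.

```python
import bz2
import zlib

def compress_object(name: str, payload: bytes) -> bytes:
    """Choose a codec per object type -- a toy stand-in for a learned classifier."""
    lowered = name.lower()
    if lowered.endswith((".jpg", ".jpeg", ".png", ".zip")):
        return payload                 # already compressed internally; leave it alone
    if lowered.endswith((".txt", ".log", ".csv")):
        return bz2.compress(payload)   # text often repays a heavier codec
    return zlib.compress(payload)      # general-purpose fallback
```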
How does all of this work? Well, an appliance accesses your storage and processes the data. It breaks files down into their objects, weaves its magic and puts the smaller, shrunk version back. This all occurs in RAM. To be safe, before it replaces the file it compares the original file with an expansion of the shrunk file to ensure they match exactly, so there are no errors. Of course the files on the storage are now different, so you need to use the ECOreader (a file system filter driver), which expands the files in real time as they are read so you get them back in their original format. Sometimes you may want to read the shrunk file and not expand it, for example if you want to transmit it over a network (replication) or for backup. The software can be integrated into storage to make it all transparent to the user. Performance when reading and expanding is on par for de-dupe; for compression it's dependent on the method, but usually the same rate to uncompress as it was to compress. Essentially you are performing an economic tradeoff, consuming compute cycles for disk capacity gains.
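That verify-before-replace step is easy to picture. The sketch below (zlib standing in for whatever codec was actually chosen) only keeps the shrunk copy if expanding it reproduces the original byte for byte.

```python
import zlib

def shrink_with_verification(original: bytes) -> bytes:
    """Shrink in memory, but keep the result only if it round-trips exactly."""
    shrunk = zlib.compress(original)
    if zlib.decompress(shrunk) != original:    # paranoid byte-for-byte comparison
        raise ValueError("round-trip mismatch; keep the original file")
    return shrunk
```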
Having reviewed all of this, organisations which have to store, transmit and back up large amounts of unstructured data could benefit a lot from the Ocarina technology, especially those whose data the Ocarina algorithms work well on. From speaking to them, they are working hard on new and improved algorithms, but just as importantly on making the technology solution work well.
[Note : I attended the Field Days as a guest of Gestalt IT. Travel and accommodation was provided as part of the event. See the Field Day FAQ and my comments for details.]
The documents include many screen shots and may serve as a good primer for those wanting to investigate UCS.
I do think that the Fabric Interconnect setup section in the first document should have the CLI method included as well. Likewise, as most implementations will require setting up a pair of F-Is, this part should be there too; after all, it's not difficult.
The VMware ESXi install instructions talk about changing the boot order in the blade's BIOS; however, this should really be done from within UCSM. They also cover local installation and not boot from SAN, which should be the typical deployment.
With over 20 years working in the IT industry I have had varied sub-careers. My first decade was as a programmer, developing applications whilst working and living in Asia. There was the obligatory dotcom involvement in a fun start-up. Working in the SI space, I loved being able to integrate many different technologies and solve a wide variety of IT problems.
Falling in love with server virtualization led me to Cloud Computing, which became a great passion because of how much it could help IT do greater things.
Today I spend my time assisting a large team of Solutions Architects across A/NZ at Amazon Web Services. Just like everyone at Amazon I enjoy working hard, try to have some fun and hope to be a small part of making history.