First vendor off the rank at Virtualisation Field Day #2 was Symantec. It was an early start, as we were having breakfast there.
It was an interesting start, as things took a while to get organised and the opening question was "who uses backup?" Given a room full of top virtualisation bloggers, I figure they can all be dangerous on the topic of backup. We also heard that Symantec is #1 in VMware backup and has been working with VMware for 12 years now. GSX and ESX were released to market in 2001, so they must have been there right from the very first day.
First up was NetBackup, presented by George Winter, Technical Product Manager.
Some general notes; assume these relate to NetBackup, though some refer to Symantec products in general.
- They don't support VCB anymore as of the current version.
- On the topic of offloading VMware snapshotting to the array, they don't do anything today, but in the next release (by the end of 2012) this will be provided through something called Replication Director.
- They have their own VSS provider for application quiescence which you can use to replace the VMware one. This is free of charge and included in the distribution.
- We spent a while looking at dedupe and the different ways you can do it with Symantec products, from source-based dedupe in the agent to hardware appliances that can replicate to each other across sites.
- In regards to the lifecycle of retention policies, you can have local copies, replicate to another site using dedupe, and even destine a copy to "the Cloud". There was little detail about what "the Cloud" means apart from a list of supported providers such as Nirvanix, AT&T, Rackspace and Amazon. No details were provided on the protocols that are supported; I am sure that can be sourced from the product information. Data destined to the Cloud is encrypted and the keys are stored on the local media server. For Cloud destinations it supports cataloging, expiring and full control of the data stored there.
- They have an accelerator client that, rather than doing source-based dedupe, uses a changed block technique so they only send a small amount of data without the load of source dedupe. Symantec claim they are the only ones that do this, and it's new in the latest 7.5 release.
- For VMDK backups the files are cataloged at ingestion, so when you need to do a file level restore you can search for where that file might be; you don't need to know which VM or VMDK it was in in the first place. When data is stored, the files and their mapped blocks are recorded, so at restore time they only need to pull back the blocks for that file. You don't have to restore the entire VMDK, which saves a lot of time, space etc. (there is a rough sketch of this idea after the list below).
- Integration with vCenter. Backup events can be sent to the vCenter events for a VM, and custom attributes can be updated with the date of last backup etc. There is no plugin available today; one is coming, but no details were provided.
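To make that file-to-block catalog idea a little more concrete, here is a minimal sketch of how such an index might work. This is purely my own illustration, not Symantec code; the names (BlockExtent, FileCatalog, restore_file, read_blocks) are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BlockExtent:
    """A contiguous run of blocks inside a backed-up VMDK image (hypothetical)."""
    vmdk_id: str   # which VMDK image in the backup store
    offset: int    # starting block number
    length: int    # number of blocks in the run

class FileCatalog:
    """Maps guest file paths to the blocks that hold them.

    Built at ingestion time, so a file-level restore only has to
    read the listed extents instead of the whole VMDK.
    """
    def __init__(self):
        self._index: dict[str, list[BlockExtent]] = {}

    def record(self, path: str, extents: list[BlockExtent]) -> None:
        self._index[path] = extents

    def search(self, name: str) -> list[str]:
        """Find candidate paths without knowing the VM or VMDK up front."""
        return [p for p in self._index if name in p]

    def extents_for(self, path: str) -> list[BlockExtent]:
        return self._index[path]

def restore_file(catalog: FileCatalog, path: str, read_blocks) -> bytes:
    """Pull back only the blocks belonging to one file."""
    data = bytearray()
    for ext in catalog.extents_for(path):
        data += read_blocks(ext.vmdk_id, ext.offset, ext.length)
    return bytes(data)
```

The point of the structure is simply that the search happens against the catalog, and the restore touches only the extents for the file, never the whole image.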
There were some specific topics that sparked my interest.
vCloud Director
I am keeping my eye out for things around vCloud Director over the two days. Mike Laverick got the vCloud question in before I got the chance, asking what their NetBackup support was. They don't have anything today but have been working on it since vCloud Director was first released. The good news is that this work is about to be released this year. It is always hard to get details about products that are not released, but I tried to dig some sort of feature list out. It was revealed that there would be support for per-tenant restore, and it sounded like the tenant would be able to do this themselves. It is going to be very interesting to see what features and functions this will really have. This should get some real attention, as over the next 12 months I believe we are going to see many vendors start releasing support for vCloud Director.
VMware Intelligent Policy (VIP)
One of the challenges of backup in a dynamic virtual environment is the effort required to apply your policies to your workloads. To ease this pain, VIP gives you VMware protection on auto-pilot. It is an alternative method of selecting machines where new and moved VMs are automatically detected and protected. You specify criteria to match particular VMs, based on 30 vCenter-based definitions. These definitions can include things such as vApp details or even custom attributes. It's designed to help in dynamic environments with vMotion, Storage vMotion, DRS and Storage DRS. With this kind of "rule based" matching, one thing I am always concerned about is the hierarchy of rules, as it can be very easy to have multiple rules that match a machine. If multiple rules match, it will apply them all and do multiple backups of the machine. You can't set a hierarchy, so you can't have a default rule and then an override for a more specific rule. I think this would be a great feature, and suspect there might even be a way to do it; it might just have been my interpretation of the answer.
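To illustrate the multiple-match concern, here is a rough sketch of rule-based VM selection where two policies match the same VM. This is my own toy example, not NetBackup's query syntax; the policy names and attributes are made up.

```python
# Hypothetical VMs described by vCenter-style attributes.
vms = [
    {"name": "sql01", "cluster": "prod", "vapp": "finance"},
    {"name": "web03", "cluster": "prod", "vapp": "shop"},
]

# Each policy selects VMs by matching attributes; there is no hierarchy.
policies = {
    "prod-default": lambda vm: vm["cluster"] == "prod",
    "finance-gold": lambda vm: vm["vapp"] == "finance",
}

for vm in vms:
    matches = [name for name, rule in policies.items() if rule(vm)]
    if len(matches) > 1:
        # With no rule hierarchy, every matching policy runs,
        # so this VM would be backed up more than once.
        print(f"{vm['name']} matched {matches}: multiple backups")
    elif matches:
        print(f"{vm['name']} protected by {matches[0]}")
```

Here sql01 matches both the broad "prod-default" rule and the more specific "finance-gold" rule, which is exactly the case where you would want an override instead of two backups.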
Another element of VIP is applying thresholds. One issue in vSphere backup environments is that your backup load can affect the performance of production by putting a heavy load on elements of your infrastructure. NetBackup can "automatically balance backups" across the entire vSphere environment (fibre or network), physical location (host, datastore or cluster) or logical attributes (vCenter folder, resource pool, attribute).
Resource limits to throttle the number of backups which execute concurrently can be set based on elements such as vCenter, snapshots, cluster, ESX server and lots of different datastore elements. So for example you can set a resource limit such as no more than 1 active backup per datastore and no more than 2 active backups per ESX host. A problem is that this is a global setting and that it's fixed. It does not interact with the metrics from vSphere, so it does not adjust, and it applies to everything. I can see that you might want different values for different parts of your environment, and for it to adjust based on load. This is the first release of this functionality, so we should see it built out in future versions.
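Here is a small sketch of what a static resource limit check might look like. It is my own illustration of the concept, with hypothetical limits and job attributes, not how NetBackup actually implements it.

```python
from collections import Counter

# Hypothetical static limits: at most 1 active backup per datastore and
# 2 per ESX host, applied globally regardless of current vSphere load.
LIMITS = {"datastore": 1, "esx_host": 2}

active = []  # jobs currently running

def can_start(job: dict) -> bool:
    """Return True if starting this job stays within every configured limit."""
    for scope, limit in LIMITS.items():
        in_use = Counter(j[scope] for j in active)[job[scope]]
        if in_use >= limit:
            return False
    return True

queued = [
    {"vm": "sql01", "datastore": "ds1", "esx_host": "esx01"},
    {"vm": "web03", "datastore": "ds1", "esx_host": "esx01"},  # deferred: ds1 already busy
]

for job in queued:
    if can_start(job):
        active.append(job)
        print("start", job["vm"])
    else:
        print("defer", job["vm"])
```

Because LIMITS is fixed and global, every datastore and host gets the same ceiling no matter how busy it actually is, which is exactly the limitation noted above.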
Next we had the Backup Exec guys.
Kelly Smith & Gareth Fraser-King
Some general notes
- Specific packaged solutions for virtualised environments, targeted at the SMB.
- Showed the new GUI (pictures below) which will be released next month. Looks very slick with lots of wizards.
- You can visually/graphically see the stages of protection for a workload. For example the backup, followed by the replication etc. When you go back and look at the machine you see the types of jobs associated with the machine, what they do and when they are scheduled. It gives you a workflow centric view.
- Symantec are adding a backup option which can be destined to a Cloud provider (partnered with Doyenz.com), from which you can do a restore in the event of a disaster. I really would have liked to see this demo'd.
Here are some other thoughts from the session.
So why two backup products? We hear, for example, that there is no vSphere plugin for NetBackup but there is for Backup Exec. Yes, we know there are historical factors, but if Symantec were to start again, for what technical reasons would you create two products? It's hard to summarise the answer as the conversation went around a little (maybe watch the video), but essentially their answer was that there are two markets, the big enterprise and the SMB to medium enterprise. Creating products, licensing and feature sets that span that entire spectrum of use cases is too hard, so Symantec felt they really needed products targeted at the two different markets. I understand this argument, but as the audience are IT technical people, it would have been nice to hear about the technical aspects behind this. Maybe something about scaling catalog databases and how it's hard to create a scaled down version, or something. I did not really get why they needed two products (apart from history). However, it was discussed that there are many techniques used by both products, such as a lot of the dedupe functions.
In regards to the execution, to be honest I would have expected something a little more polished from a vendor such as Symantec. We spent a bit of time learning the 101 of VMware backups, but given that the audience are bloggers and virtualisation specialists, this could probably be considered assumed knowledge. Maybe it was included for the remote audience, as the sessions were being recorded and broadcast. The format also looked at some quite simple customer use cases, which I did not feel added much to explaining the value of Symantec products over other vendors. Also, some of the explanations were inaccurate, such as talking about redo logs. Once we got into some of the cool things Symantec do, and what they are doing differently to others, it got a lot more interesting. Also, we can be a prickly bunch, so you need to know how to handle objections really well. I noticed this improved during the morning.
Lastly, a presenter needs to be flexible in their delivery. The NetBackup team insisted on finishing their slides and talked through the last 5 so fast that no one could really take in what was being said. We had very little time from the Backup Exec team, who I think had some really interesting stuff, and way too long on NetBackup. I think the imbalance did not help Symantec overall.
Thanks to Symantec. It was a really interesting morning and we learnt a few things.
Rodos
P.S. Note that I am at this event at the invite of GestaltIT and that flights and expenses are provided. There is also the occasional swag gift from the vendors. However I write what I want, and only if I feel like it. I write nice things and critical things when I feel it is warranted.