
February 2012

VFD2 - Xangati

Friday, February 24, 2012


Third and last on the first day of Virtualisation Field Day #2 was Xangati.

There was great food presented before the start of the session: ice-cream and a variety of bacon in different flavours. The idea was not to mix them (hey, they do weird things with food in America) but to provide choice. The bacon was very well received by the delegates. For some reason bacon is very popular.



Jagan Jagannathan, the founder and CTO, grabbed the whiteboard pen and started to explain things. It is worth repeating (and it gets repeated a lot): this is exactly the type of engagement and insight that delegates at TFDs respect and value. You can tell the difference. The room is quiet, the questions and interruptions are minimal, people listen intently. I have seen this again and again at these days. It staggers me how some vendors do it and succeed while others ignore the advice and struggle. In a similar way, Xangati provided each person with a list of who was present, with their names, titles and twitter handles. When you are writing up notes and posting on twitter, this easy access to info is so very helpful. You would think that the PR people at Xangati had not only read the advice given to them (http://techfieldday.com/sponsors/presenting-engineers/) but had actually attempted to put it into practice!

I was engaged with what was being presented and ended up not taking many notes. 



Some of the interesting things discussed were how in the performance monitoring world you have triage vs postmortem. For triage you need real time, at-this-minute information; anything older than that is not as useful. Older data, even just five minutes later, is what you use for postmortem analysis.

One of the key things that Xangati does is take all of the incoming data and process/analyse it in memory, rather than writing it to a database for analysis. This allows them to give very timely and detailed information in their UI and alerting. The interface has a slider and you can wind back the clock a little and see what was happening just prior to now. You can also record the detailed real time information you are looking at for later analysis. This recording links in with their alerting: when an alert is created it records the associated real time info for that short time period so you can see what was happening (there is a rough sketch of this record-the-recent-past pattern below). Of course the data is also written to a database in a summarised form for later analysis. This uses a reporting interface that is not as nice or as interactive as the real time interface. I would like to see the two much more similar, it feels a little strange to have them so different. However given that they work off different data models and serve different purposes you can see the reasons why.
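
To make that concrete, here is a minimal sketch of the general pattern: keep a rolling window of samples purely in memory, and freeze a copy of it the moment an alert fires. This is my own illustration in Python, not Xangati's implementation; the class name and the sixty-second window are assumptions.

    from collections import deque
    import time

    class MetricBuffer:
        """Keep only the last `window` seconds of samples, in memory."""
        def __init__(self, window=60):
            self.window = window
            self.samples = deque()  # (timestamp, value) pairs

        def add(self, value, now=None):
            now = time.time() if now is None else now
            self.samples.append((now, value))
            # Age out anything that has fallen off the back of the window
            while self.samples and self.samples[0][0] < now - self.window:
                self.samples.popleft()

        def record(self):
            """Freeze the current window, e.g. when an alert fires."""
            return list(self.samples)

Anything that ages out of the window survives only as the summarised form in the database, which is consistent with why the two interfaces feel so different.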

They have self-calculating thresholds, but you can also create your own.
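
I don't know Xangati's actual algorithm, but a common way to self-calculate a threshold is to float it a few standard deviations above the recent mean; a sketch of that idea, with the values and the factor k being my own:

    import statistics

    def dynamic_threshold(history, k=3.0):
        """Alert when a sample sits more than k standard deviations
        above the mean of recent history."""
        return statistics.mean(history) + k * statistics.stdev(history)

    recent = [12.1, 11.8, 13.0, 12.4, 12.9, 11.5]
    print(f"alert if next sample > {dynamic_threshold(recent):.1f}")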

Xangati have been doing a lot in the VDI monitoring space but they were keen to point out that they are not a VDI monitoring company, they do straight virtualisation too. I think they don't like being tarred too much with the pure VDI brush.

They do have some great VDI features though. If you are looking into desktop performance you can launch into a WMI viewer, as well as a PCoIP or Citrix HDX viewer, to see a lot more detail about what's going on inside the desktop and the display protocols. They even have a neat feature where an end user can self-service a recording of their performance for a help desk to analyse. The user can go to a form and request a recording for their environment, and it records the one minute prior to the submission. That's nice.
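
For a feel of what a WMI drill-down surfaces, here is a quick pull of a couple of desktop counters using Tim Golden's Python wmi package (Windows only); which classes and counters Xangati actually reads is an assumption on my part.

    import wmi

    c = wmi.WMI()  # local machine; remote needs computer=, user=, password=
    for cpu in c.Win32_PerfFormattedData_PerfOS_Processor(Name="_Total"):
        print("CPU %:", cpu.PercentProcessorTime)
    for os_info in c.Win32_OperatingSystem():
        print("Free RAM (KB):", os_info.FreePhysicalMemory)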

Here is a look at the demo environment I interacted with.


Where there are reports that have sub elements (such as a protocol list) you can drill down to those. At first I thought the reports were not interactive, but I was wrong about that and was shown the error of my ways.
It was a good session. I certainly got the impression that for real time performance troubleshooting Xangati is a real player worth investigating. I did not get enough of a chance to look at the product, or discuss with them its suitability as a comprehensive monitoring solution. I think there are a few things that an overall monitoring solution requires that I did not see in the product, for example inventory data. Maybe a more in-depth look at the features and functions would help nut this out more. Hard to do in our limited time. They do have a free version, which is popular, and evals are available, so it's easy to check these out for yourself.

Well it's been a long day, lots to see and think about. Looking forward to some brief sleep before another day of it all tomorrow.

Rodos

P.S. Note that I am at this event at the invite of GestaltIT and that flights and expenses are provided. There is also the occasional swag gift from the vendors. However I write what I want, and only if I feel like it. I write nice things and critical things when I feel it is warranted. 

VFD2 - Zerto



The second vendor for day 1 of Virtualisation Tech Field Day #2 was Zerto.

The session was held back at the hotel. The camera crew did not come back and this event was not broadcast over the Internet. To be honest I was a little confused about the what-you-can-say-and-what-you-can't-say discussion. There was going to be mention of some things which are coming out in the next version, as well as some customer names. Both of these might be things that should not be in the public domain until appropriate. To be honest, this is difficult as a blogger; I thought there was some "understanding" with these events that everything was public, no NDAs. Whilst I respect that a vendor wants to keep things private whilst balancing giving us the greatest insights and information, it makes it really difficult for me to navigate what I can and can't write about. So if I make a bad mistake treading that line I am sure someone will let me know in the morning and I will be editing this post very fast.

Our presenters were Gil Levonai, who is the VP Marketing and Products, and Oded Kedem, CTO & Co-founder. So we were really getting things from the experts.


Zerto do disaster recovery solutions for the VMware environment. Their target customers are the Enterprise and now the Service or Cloud providers. Having spent quite a few years working in the DR and recently the Cloud space I was very keen to hear what Zerto had to say.

Here are my summary notes from the session.
  • The founders were also the founders of Kashya Inc, which was sold to EMC. After the acquisition Kashya turned into RecoverPoint, which is one of the mainstream replication technologies for Continuous Data Protection (CDP) based DR today.
  • They are more after the Enterprise market and not the SMB players. I have no idea what their pricing is like; I wonder if it matches that market segmentation?
  • Replication of a workload can be of a number of scenarios. One is between internal sites within the same Enterprise. Alternatively you can go from within an Enterprise to an external Cloud provider. There is a third use case (which is very similar to the first) where a Cloud provider could use Zerto to replicate between their internal sites.
  • The fundamental principle for Zerto is moving the replication from the storage layer up to the hypervisor without losing functionality. Essentially it is a CDP product in the nature of RecoverPoint or FalconStor Continuous Data Protector, but rather than being done at the storage or fabric layer it utilises the VMware SCSI Filter Driver (as detailed by Texiwill) to get up close to the Virtual Machine. This means that Zerto can be totally agnostic to the physical storage that might be in use, which is a great feature. This is important in the Cloud realm where the consumer and the provider might be running very different storage systems.
  • The goal of Zerto is to keep all of the Enterprise class functions such as consistency groups, point in time recovery, and low RPOs and RTOs. The only obvious high end feature that I saw lacking was synchronous replication. This question was asked and Gil responded that they felt this was not really that much of a requirement these days and synchronous might not be required. I think there is still a case for needing synchronous but Zerto just does not seem to want to go after it, which is fair enough.

  • There are two components (shown above). The Zerto Virtual Manager sits with vCenter. This is the only thing that you interact with. It is provided as a Windows package that you need to deploy on a server and it integrates into vCenter as a plugin.
  • Then there is the Zerto Virtual Replication Appliance (Linux based) which is required on each host. This is deployed by the manager. 
  • Some of the features of Zerto are :
    • Replication from anything to anything, it's not reliant on any hardware layers, just VMware
    • It's highly scalable, being software based
    • It has an RPO in seconds, near sync (but not sync)
    • It has bandwidth optimisation and WAN resiliency. Built in WAN compression and throttling.
    • Built-in CDP which is journal based. 
    • It is policy based and understands consistency groups. You can set CDP timelines for retention in an intelligent way.
    • If it gets behind and can't keep up it will drop from a send-every-write mode to a block change algorithm and drop writes in order to catch up. This catchup mode is only used if the replication can't keep up for some reason (lack of bandwidth, higher priority servers to be replicated); there is a rough sketch of the two modes after this list. What I would like to see is for this to be a feature you can turn on. So rather than CDP you could pick a number of points in time that you want, and writes between these are not replicated. This would emulate what occurs with SAN snapshots. Yes, it's not as much protection, but for lower tier workloads you might want to save the bandwidth; you can match what you might be doing with SAN snapshots but do it across vendors. Gil did not think this was a great idea but I think there is real merit to it, though I would, it being my idea.
  • Often people want to replicate the same workload to multiple sites. Sometimes the same machine to two different sites from the primary one (call this A to B and A to C), or from the primary to a secondary site and then a replication from the secondary site to a third site (A to B to C). You can't do either of these modes at the moment, but watch this space.
  • There is a concept of Virtual Protection Groups: VM and VMDK level consistency groups. This is very important for some applications which need to have data synchronised across multiple disks or across systems; it's great to see this supported.
  • Support for VMotion, Storage VMotion, HA, vApp. 
  • There are check points in the CDP stream every few seconds and you can set a time for doing a special VSS check point. This is excellent.
  • Its vApp awareness is very good. If you add a VM to a vApp it will start replicating it. It also knows things like startup order within the vApp and retains that information for recovery at the other site. This is better than VMware Site Recovery Manager (SRM).
  • You can denote a volume as swap or scratch so it's not replicated. It does replicate it once, just so it has the disk to be able to mount up to the OS. Once replicated it does not send any writes made to the disk. This way you get a valid disk that will mount fine at the destination with the initial swap or scratch state. This is a great feature.
  • They will be able to pre-seed the destination disk at the other site to speed up the synchronisation, a big need in the DR space when you are pushing very large amounts of data down restricted bandwidth pipes.
  • There is no need for a shadow VM at the destination site. They are created on recovery or failover. At failover the VMs are created and the disks connected to them.
  • Failback is supported.
  • Test failover is provided and it can have read/write capability. Replication continues to run while the test recovery is taking place (you always need to be protected). The test can't run longer than your CDP journal size. The test recovery is very efficient in storage size as it sources the reads from the replica journal and does not have to create a full copy of the disk, so only your writes to the test copy take up additional space (sketched after this list).
  • For the recovery migration you can do a move instead of a failover, which does a shutdown of the VM first to give consistency.
  • For the failover you can choose the network to connect each NIC to at the other site. You can specify different networks for an actual failover versus a test failover. It can also re-IP the machine if required.
  • Supports the Nexus 1000V, but as port groups. I don't think it can orchestrate network creation on the N1K.
  • Pre and post recovery scripts can be configured to run, so you can script actions to do whatever you want, such as updating DNS entries etc.
  • Now the really, really nice thing is that you can destine to a VMware vCloud implementation. When you target a vCloud you select which of your available organisation VDCs you want to recover to. Then, when you are selecting your networking, it presents the organisation networks as your choices. Very nice. A demo was done of a failover to a VCD environment and it worked very nicely. I was quite impressed. I discussed with Oded how all of the provider side was handled, the multi-tenancy, security etc; just about everything had been covered and was quickly explained. This showed me that this stuff is very real and they have thought about it a lot. I see a lot of potential solutions in this space that might work in an Enterprise context but have no chance in the service provider space, but from what I could see I think Zerto gets it.
  • When you need to do a failover, what happens if the source site no longer exists? Well, you go to the vCenter on the destination site and do it there. This is a problem in the Cloud space as the customers are not going to have access to the vCenter, only the provider. Today the provider is going to have to do the recovery for you if your site is gone. There is an API for the provider to use with their own portal. Ultimately Zerto are saying they will provide a standalone interface to do this function.
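
Here is the rough sketch promised above of the two replication modes: ship every write while the link keeps up, and fall back to a changed-block sweep when it doesn't. All the names and structure are mine; this is an illustration of the general technique, not Zerto code.

    class Replicator:
        def __init__(self, link):
            self.link = link           # WAN link to the recovery site
            self.dirty_blocks = set()  # blocks written while we are behind

        def on_write(self, block_id, data):
            if not self.dirty_blocks and self.link.keeping_up():
                # Normal CDP mode: ship every write as it happens
                self.link.send(block_id, data)
            else:
                # Catch-up mode: just remember which blocks changed
                self.dirty_blocks.add(block_id)

        def catch_up(self, read_block):
            # Sweep the changed blocks once; intermediate writes are
            # dropped, which is exactly the trade-off described above
            for block_id in sorted(self.dirty_blocks):
                self.link.send(block_id, read_block(block_id))
            self.dirty_blocks.clear()

My suggested opt-in mode would simply run that sweep on a schedule, giving SAN-snapshot-like points in time while saving bandwidth.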
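
And the test failover trick, as I understood it: reads fall through to the replica journal while test writes land in a private overlay, so only the writes consume extra space. Again a hedged sketch of the general copy-on-write idea, with names of my own invention.

    class TestFailoverDisk:
        def __init__(self, journal):
            self.journal = journal  # replicated point-in-time image
            self.overlay = {}       # only test writes consume extra space

        def read(self, block_id):
            # Prefer a block the test has written, else the replica copy
            if block_id in self.overlay:
                return self.overlay[block_id]
            return self.journal.read(block_id)

        def write(self, block_id, data):
            self.overlay[block_id] = data  # never touches the replica
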
I really enjoyed the presentation from Gil and Oded. Not too many slides, a great demo, lots of explanation and really showing off what was unique about their offering. I am looking forward to learning more about what they are doing and to seeing their functionality grow. I think they have many things right in this new hybrid Cloud world.

Rodos

P.S. Note that I am at this event at the invite of GestaltIT and that flights and expenses are provided. There is also the occasional swag gift from the vendors. However I write what I want, and only if I feel like it. I write nice things and critical things when I feel it is warranted. 

VFD2 - Symantec



First vendor off the rank at Virtualisation Field Day #2 was Symantec. It was an early start as we were having breakfast there.

It was an interesting start as things took a while to get organised and the opening question was "who uses backup?". Given you have a room full of top virtualisation bloggers, I figure they can all be dangerous on the topic of backup. We also heard that Symantec is #1 in VMware backup and that they have been working with VMware for 12 years now. GSX and ESX were released to market in 2001, so they must have been there right from the very first day.

First up was NetBackup.

George Winter, Technical Product Manager, presented on NetBackup.

Some general notes; assume these relate to NetBackup, but some refer to Symantec products in general.
  • They don't support VCB anymore as of the current version. 
  • On the topic of passing off VMware snapshotting to the array, they don't do anything today but in the next release (by end of 2012) this will be provided through something called Replication Director.
  • They have their own VSS provider for application quiescence which you can use to replace the VMware one. This is free of charge and included in the distribution.
  • We spent a while looking at dedupe and the different ways that you can do it with Symantec products. You have all sorts of ways of doing this, from source based in the agent to hardware appliances that can replicate to each other across sites.
  • In regards to the lifecycle of retention policies you can have local copies, replicate to another site using dedupe and even destine a copy to "the Cloud". There was little detail about what "the Cloud" means apart from a list of providers that are supported, such as Nirvanix, AT&T, Rackspace or Amazon. No details were provided on the protocols that are supported; I am sure that can be sourced in the product information. Data destined to the Cloud is encrypted and the keys are stored on the local media server. In destining to Clouds it supports cataloging, expiring and full control of the data that might be destined there.
  • They have an accelerator client that, rather than doing source based dedupe, does a changed block technique so they only send a small amount of data without the load of source dedupe. Symantec claim they are the only people that do this and it's new in the latest 7.5 release.
  • For VMDK backups the files are cataloged at ingestion, so when you need to do a file level restore you can search for where that file might be; you don't need to know which VM or VMDK it was in in the first place. When data is being stored, the files and their mapped blocks are recorded. So at restore time for a file they only need to pull the blocks for that file back in, you don't have to restore the entire VMDK, which saves a lot of time and space (a sketch of the idea follows this list).
  • Integration with vCenter. Backup events can be sent to the vCenter events for a VM and custom attributes can be updated with the date of last backup etc. There is no plugin available today; one is coming, but no details were provided on this.
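
The file-to-block mapping idea is simple to picture. A minimal sketch, assuming a catalog that maps each file path to the backup blocks recorded at ingestion; the names and structures here are mine, not the NetBackup catalog format.

    def restore_file(catalog, read_block, path, out_path):
        """Restore one file from an image-level backup by pulling back
        only the blocks mapped to it, not the whole VMDK."""
        with open(out_path, "wb") as out:
            for block_id in catalog[path]:
                out.write(read_block(block_id))

    # e.g. catalog = {"/var/log/app.log": [104, 105, 211]} built at ingestion
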
There were some specific topics that sparked my interest.

vCloud Director

I am keeping my eye out for things around vCloud Director over the two days. Mike Laverick got the vCloud question in before I got the chance, asking what their NetBackup support was. They don't have anything today but have been working on it since it was first released. The good news is that this work is about to be released this year. It is always hard to get details about products that are not released, but I tried to dig some sort of feature list out. It was revealed that there would be support for per tenant restore, and it sounded like the tenant would be able to do this themselves. It is going to be very interesting to see what features and functions this will really have. This should get some real attention, as over the next 12 months I believe we are going to see many vendors start releasing support for vCloud Director.

VMware Intelligent policy (VIP)

One of the challenges of backup in a dynamic virtual environment is the effort of applying your policies to your workloads. To ease this pain VIP gives you VMware protection on auto-pilot. It is an alternative method of selecting machines where new and moved VMs are automatically detected and protected. You specify criteria to match particular VMs, based on 30 vCenter based definitions. These definitions can include things such as vApp details or even custom attributes. It's designed to help in dynamic environments with VMotion, Storage VMotion, DRS and Storage DRS. When you have this "rule based" matching, one thing I am always concerned about is the hierarchy of rules, as it can be very easy to have multiple rules that match a machine. If multiple rules match it will apply both and do multiple backups of the machine. You can't set a hierarchy so as to have things like a default and then an override from a more specific rule. I think this would be a great feature, and suspect there might even be a way to do it; it might just have been my interpretation of the answer.
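
To show the hierarchy I am wishing for, here is a hypothetical sketch where the most specific matching rule wins, so a VM gets one policy rather than one backup per matching rule. This is my suggestion, not how VIP behaves as presented.

    # Hypothetical rule hierarchy: highest priority match wins.
    RULES = [
        {"policy": "sql-prod", "priority": 10,
         "match": lambda vm: "sql" in vm["name"]},
        {"policy": "default", "priority": 0,
         "match": lambda vm: True},  # catch-all default
    ]

    def policy_for(vm):
        matches = [r for r in RULES if r["match"](vm)]
        return max(matches, key=lambda r: r["priority"])["policy"]

    print(policy_for({"name": "sql01"}))  # sql-prod only, not both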

Another element of VIP is applying thresholds. One issue in vSphere backup environments is that your backup load can affect the performance of production by placing a heavy load on elements of your infrastructure. NetBackup can "automatically balance backups" across the entire vSphere environment (fibre or network), physical location (host, datastore or cluster) or logical attributes (vCenter folder, resource pool, attribute).


Resource limits to throttle the number of backups which execute concurrently can be set based on elements such as vCenter, snapshots, cluster, ESX server and lots of different datastore elements. So for example you can set a resource limit such as no more than 1 active backup per datastore and no more than 2 active backups per ESX server (a sketch of the idea is below). A problem is that this is a global setting and it's fixed. It does not interact with the metrics from vSphere, so it does not adjust, and it applies to everything. I can see that you might want different values for different parts of your environment, and for it to adjust based on load. This is the first release of this functionality so we should see it built out in future versions.
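
The throttling itself is easy to picture as a semaphore per resource; a minimal sketch, with the limits and names being illustrative rather than NetBackup's implementation:

    import threading

    # e.g. at most 1 active backup per datastore, 2 per ESX host
    limits = {
        ("datastore", "ds01"): threading.Semaphore(1),
        ("esx", "esx01"):      threading.Semaphore(2),
    }

    def run_backup(vm, resources):
        # Acquire in a fixed order so two jobs can't deadlock each other
        sems = [limits[r] for r in sorted(resources)]
        for s in sems:
            s.acquire()  # block until this resource has spare capacity
        try:
            print(f"backing up {vm}")
        finally:
            for s in sems:
                s.release()

    run_backup("vm01", [("datastore", "ds01"), ("esx", "esx01")])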

Next we had the Backup Exec guys.

Kelly Smith & Gareth Fraser-King

Some general notes
  • Specific packaged solutions for virtualised environments, targeted at the SMB.
  • Showed the new GUI (pictures below) which will be released next month. Looks very slick with lots of wizards. 
  • You can visually/graphically see the stages of protection for a workload, for example the backup, followed by the replication etc. When you go back and look at the machine you see the types of jobs associated with it, what they do and when they are scheduled. It gives you a workflow-centric view.
  • Symantec are adding a backup option which can be destined to a Cloud provider (partnered with Doyenz.com), from which you can do a restore in the event of a disaster. I really would have liked to see this demo'd.



Here are some other thoughts from the session.

So why two backup products? We hear for example that there is no vSphere plugin for NetBackup but there is for BackupExec. Yes, we know that there are historical factors, but if Symantec were to start again, for what technical reasons would you create two products? It's hard to summarise the answer as the conversation went around a little (maybe watch the video), but essentially their answer was that there are two markets, the big enterprise and the SMB to medium enterprise. Creating products, licensing and feature sets that go across that entire spectrum of use cases is too hard; Symantec felt they really needed to have products targeted at the two different markets. I understand this argument, but as the audience are IT technical people it would have been nice to hear about the technical aspects behind this. Maybe something about scaling catalog databases and how it's hard to create a scaled down version, or something. I did not really get why they needed two products (apart from history). However it was discussed that there are many techniques that are used by both products, such as a lot of the dedupe functions.

In regards to the execution, to be honest I would have expected something a little more polished from a vendor such as Symantec. We spent a bit of time learning 101 about VMware backups, but given that the audience are bloggers and virtualisation specialists, this could probably be considered assumed knowledge. Maybe this was included for the remote audience, as the sessions were being recorded and broadcast. The format also looked at some quite simple customer use cases, which I did not feel added much to explaining the value of Symantec products over other vendors. Also some of the explanations were inaccurate, such as talking about redo logs. Once we got into some of the cool things that Symantec do, and what they are doing differently to others, it got a lot more interesting. Also, we can be a prickly bunch, so you need to know how to handle objections really well. I noticed this improved during the morning.

Lastly, a presenter needs to be flexible in their delivery. The NetBackup team insisted on finishing their slides and talked through the last five so fast that no one could really take in what was being said. We had very little time with the BackupExec team, who I think had some really interesting stuff, and way too long on NetBackup. I think the imbalance did not help Symantec overall.

Thanks to Symantec. It was a really interesting morning and we learnt a few things.

Rodos

P.S. Note that I am at this event at the invite of GestaltIT and that flights and expenses are provided. There is also the occasional swag gift from the vendors. However I write what I want, and only if I feel like it. I write nice things and critical things when I feel it is warranted.  

Virtualization Field Day #2 / Silicon Valley - pre event

Wednesday, February 22, 2012

Well, I have escaped from the wet and dreary shores of Sydney to spend some time geeking it up with the crew at Virtualization Field Day #2. Having been to one of these events before, I know just how much hard work and fun it can be. It's so great to hang out with people so smart in their field, plus to hear direct from the best people within the presenting vendors.

The activities start Wednesday night with a get together dinner. Thursday and Friday are all of the vendor presentations. I arrived this last Sunday to do a few days of meetings before the event. Of course I had to do a bit of the usual Silicon Valley shop hop around some of the favourite haunts for all things geek.

One place I went today, that I had never thought of before, was Apple HQ. Here I am, some guy wearing a suit. Who wears a suit in the valley? Only me!

Who's the stiff in the suit!
The cool thing is that there is a company store there. It's not like a normal Apple store; it has a lot more Apple merchandise. It also has a t-shirt that can only be purchased at the Apple campus store. Of course I had to get one.

Apple Company Store
Of course I also had to do a trip to Fry's and pick up something. I ended up getting a 4 port 1G switch for the home office; I am sick of 100Mb transfer speeds between me and the Drobo storage device (which hangs off a Mac mini). Also some of those nice little pop up speakers for use in hotel rooms etc. This is on top of the other stuff I pre-shipped to my hotel, none of which has arrived yet. I pre-shipped a bunch of t-shirts from ThinkGeek for the kids and an SSD drive for me.

One place I have never been to here in the US is In-N-Out burger. My American friends rave about it. So I had to check it out.
The back wall of In-N-Out burger, the view from the car park.
I had to go the whole hog and get a burger, fries and a shake. I am told the way to order your burger is "animal" style, which means it comes with (I think) sautéed onions and chilli. The person I was with sort of made a mistake and somehow also ordered their fries done "animal" style. Can you believe it, they actually do that. Here is what it looks like.


After eating mostly healthy food for about a year it was great to chow down on great fast food. This stuff is fresh, you have to wait for it to be cooked. The fries are cut from whole potatoes just before they are cooked. However my stomach rebelled about half an hour later; the temple had been defiled! But it was worth it. Repeat after me: "In-N-Out is occasional food!"

But what is going to be really fun this week is hanging out with old friends plus some new people at the field day. The attendees this year are Edward Haletky, Bill Hill, Mike Laverick, Dwayne Lessner, Scott Lowe, Roger Lund, Robert Novak, David Owen, Brandon Riley, Todd Scalzott, Rick Schlander and Chris Wahl. A real who's who of virtualisation thinkers.

The vendors this event are interesting: we have Symantec, Zerto, Xangati, PureStorage, Truebit.tv and Pivot3. Some big names there, some interesting new ones, and it is great to see that I will get to hear the thoughtful words of Mr Backup himself, aka Mr W. Curtis Preston, again.

The only vendor I will call out specifically as sparking some very high interest from me pre event is Zerto. They have DR capabilities with full integration to VMware vCloud Director. As I deal daily with one of the leading deployments of vCloud Director in the service provider space this really gets my brain juices flowing. There is big interest in this topic and I am really keen to see exactly what these guys have. I want to separate the hype from the reality and really hope that the reality is an exciting story.

You can see the details of the whole event over at the Field Day site http://techfieldday.com/, the links page really gives you all the resources you need.  The sessions will be broadcast online and you can follow the tweet stream via the hashtag #VFD2. 

More updates as the events unfold.

Rodos
