Migrating servers to the Cloud with SPLA

Wednesday, December 22, 2010

Some areas of cloud are ambiguous, or sit there as the elephant in the room. Certainly a big one is licensing when it comes to Microsoft. The SPLA program is one complicated beast and it's new to many people.

Turns out Microsoft made a nice tweak to SPLA at their Partner Developer Conference last month (Nov 2010).

In summary, Microsoft now supports moving your existing virtualised workload into a multi-tenant Cloud which uses SPLA.

Of course someone still needs to PAY for the OS in the SPLA world; either the provider covers it or the cost is on-charged to you. The critical thing is that it's now explicitly allowed.

There are, I believe, a few restrictions around what the machine image may contain, in that it can't contain something like Exchange. You can leave your old license key in the machine; you don't have to replace it with the provider's SPLA OS key.

This is a good clarification and continued opening up by Microsoft. Of course the reason for them to do this (just like many of the other changes they have made to use rights over the years) is because they want to do it themselves. With Azure you can move a VHD (virtual hard disk) that you build yourself on-premises into the Azure Cloud and execute it.

I am trying to get hold of some further details and official references on this. The only reference I can find at this stage is this statement at http://www.microsoft.com/windowsazure/compute/

A Virtual Machine (VM) role that runs an image—a virtual hard disk (VHD)—of a Windows Server 2008 R2 virtual machine. This VHD is created using an on-premises Windows Server machine, then uploaded to Windows Azure. Once it’s stored in the cloud, the VHD can be loaded on demand into a VM role and executed. Customers can configure and maintain the OS and use Windows Services, scheduled tasks etc. in the VM role.

As more information comes to hand I will update this post.

[Update 23/Dec] I now have a reference with some more details.

Looks like it's called "re-imaging rights". You can use your volume licensing key. It does not amount to a transfer of the license (just the machine image) and the SPLA fee still needs to be paid by someone. You can do this with ANY provider, not just Azure.

Also note that this is being trialled with MSDN; hopefully it will be opened up to other providers after the six months.


SNIA Blogfest Interview with NetApp

Sunday, November 07, 2010

At the SNIA blogfest in Australia, John Martin presented for NetApp.

I caught up with John and recorded this quick interview afterwards to get a summary.



Saturday, November 06, 2010

I posted the interview with Craig McKenna from IBM on XIV. Here are more of the details from the SNIA Blogfest event this past week.

First off Craig went through the XIV, which has generated a bit of talk in the industry. Here is his slide on the specs.

The details from my notes were :
  • It comes in its own rack.
  • It is built up from modules, each which hold 12 SATA drives.
  • 6 modules also contain the FC (4G) and iSCSI (1G) interfaces.
  • Internally the backplane uses 1G Ethernet. Each module has four connections.
  • You can start with 6 modules, which gives you 72 disks. Maxed out at 15 modules you will have 180 drives.
  • You need to pick a standard drive size across the entire rack; it only has one tier, full stop. Either 1TB or 2TB drives. You can't mix drives; if you do, the additional space in the larger drives is not used. Once you have all of the drives in at the larger size, you do get the space as it rebuilds/re-levels. I don't think you can mix drive sizes within a module. Over the lifetime of the machine I wonder if customers are going to want the benefits of larger drive sizes. Hopefully they will have a great relationship with their IBM sales rep and can get them to trade in their old drives. It's a result of the architecture, but if an XIV was the only storage on your floor it might not be flexible enough for you.
  • With the smallest drives and smallest number of modules your starting point is 27TB. The largest capacity you can go to is 161TB. These figures are for usable space after the overhead of data protection (mirroring) and sparing (which is not disk based) is factored in.
  • The read architecture makes the SATA drives perform close to FC speeds.
  • The controllers use a grid architecture, all can access and service data at the same time.
  • The cache is 240G (depending on number of modules).
  • It is always doing thin provisioning but you don't have to over provision.
  • You can put your XIV in front of your existing storage (disruptive, to get it into the data path) and then get it to ingest the existing data to conduct data migration.
  • Redirect-on-write is used for snapshots, similar to NetApp, but unlike NetApp the snap data is independent and resides outside of the volume. You can do up to 16,000 snaps.
  • Async Replication is based on snapshots (not my favourite method).
  • In the future you will be able to connect multiple frames (racks) together and these could have different drive sizes. Infiniband will be used for the interconnection.
  • Data is broken into 1MB chunks and these are pseudo-randomly distributed across all resources in the frame as well as being mirrored. This is called RAID-X or mirrored protection.
  • The mirror of a chunk never resides on the same module. Chunks that are on one disk are not mirrored to a matching disk in another module (like a RAID mirror) but rather spread across all the other drives in the system. There is potential for data loss if two disks fail, but clever maths and some other techniques are used to make this risk very low. Across the 3,000 [corrected from 300] installations worldwide there have been no double drive failures. Of course your traditional RAID systems are at risk from a double drive failure within a set too.
  • Of course with XIV the failure domain is wider if two drives were to fail. This is where rebuild speed comes in. If a disk fails, only the 1MB chunks it contained need to be re-mirrored. So if the drive was only half full, that's half the data to process compared to a more traditional RAID rebuild. As the data that has to be re-mirrored is spread across all the drives in the system, as is the destination of the re-mirrored chunks, all the disks are involved in both the reads and the writes. This means that a re-mirror is really fast. A 1TB drive can be rebuilt in 30 minutes this way, as opposed to sometimes up to 24 hours in traditional systems. The bigger your XIV system (more drives), the faster the re-mirror will be.
  • This great rebuild performance is a key advantage to RAID-X as disk drives continue to get larger.
  • No need in XIV to worry about hardware RAID or hot spare drive management. Operation is very simple; the system takes care of it for you.
  • Licensing for all functions is included up front.
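The capacity and rebuild claims above are easy to sanity-check with a bit of arithmetic. Here is a back-of-envelope sketch; the spare reserve fraction is my own assumption (tuned so the minimum configuration lands near the quoted 27TB), not an IBM figure.

```python
# Back-of-envelope model of an XIV-style mirrored grid (RAID-X).
# The spare reserve fraction is my assumption, chosen so the minimum
# configuration lands near the quoted 27TB usable; the real system
# has its own reserve and distribution maths.

def usable_tb(modules, drive_tb, spare_fraction=0.25):
    """Raw capacity, less a spare reserve, halved for mirroring."""
    drives = modules * 12            # each module holds 12 SATA drives
    raw = drives * drive_tb
    return raw * (1 - spare_fraction) / 2

def rebuild_tb_per_drive(drive_tb, used_fraction, drives):
    """Data each surviving drive handles when one drive fails.
    Only used chunks are re-mirrored, and the work is spread over
    every other drive in the frame, so bigger grids rebuild faster."""
    lost = drive_tb * used_fraction
    return lost / (drives - 1)

print(usable_tb(6, 1.0))                    # minimum config: 27.0 TB
print(usable_tb(15, 2.0))                   # all 2TB drives: 135.0 TB (the quoted max is
                                            # 161TB, so the real reserve must be smaller)
print(rebuild_tb_per_drive(1.0, 0.5, 180))  # ~2.8GB of re-mirror work per drive
```

The second function is the whole RAID-X rebuild story in one line: halve the drive fill and you halve the work, double the drive count and you halve the per-drive work again.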
What's my take on XIV:
  • You can't discount it. IBM acquired the technology from a startup headed by Moshe Yanai, who is known as the father of EMC's Symmetrix disk system.
  • Most of the vendors are moving towards the commodity hardware and operational simplicity that XIV offers. The smarts are in the software, not the tin or the brown spinning stuff. We are seeing more of these grid architectures and chunking of data. Traditional vendors are back-filling this into their existing systems; XIV had the luxury of doing it fresh from the get-go.
  • XIV looks like storage that does what it does well, but it only does one thing. The nerd knobs don't exist. I suspect that companies that use XIV are going to be large and that it won't be the only storage sitting on their floor. An entry point of 27TB usable is no small starting point, so there are going to be some big storage needs. Companies with this amount of data are probably going to have a wider variety of storage requirements that XIV may not yet handle.
  • RAID-X sounds lovely but it has two drawbacks. First, it uses the most expensive protection level, mirroring. The price is going to have to be right to compensate for the high overhead. Second, that large failure domain means you are only going to be using this for either scratch data or something you have backed up somewhere else. Yes, a single drive can rebuild really fast. But IF (and it's a long if) a double failure was to happen, because the chunks are so widely spread you lose more than just the data you might on a single traditional RAID group (or none at all, if the second disk was in a different RAID set). With RAID-X you may lose a bit of data from everything across the system. That's going to be a hard one to recover from, and restoring between 27 and 161TB of data is not going to be fast.
Would like to hear your thoughts on the XIV, post in the comments. Below is the video of Craig taking us through all of this.

Craig then went on to go through the new Storwize V7000. They have taken the best of SVC, added RAID functionality from the DS8000 box, basically merging the two product lines to deliver a new mid-range controller.

I won't go into all the details of this. Here is the slide; it's covered at the end of the video above, after XIV, if you want to watch it.

[Edit : Please see the comments for a response from Craig and some good links with further detail.]


Interview with Craig McKenna of IBM on XIV


Craig McKenna gave some details on XIV at the SNIA Australia Blogfest this last week. I caught up with Craig to get a quick run down on some of the things he talked about including the value prop, how data is stored via RAID-X and the technique for rebuild of failed drives.

I have the video of the actual presentation so I will process that and put it up so you can see the whole thing.


Video from SNIA Blogfest - IBM Storage Strategy

Friday, November 05, 2010

At the SNIA Australia Storage Blogfest, Anna Wells presented the IBM strategy for storage. I previously posted the brief interview we did afterwards.

Here is the full video of the presentation, which I thought was very good. I am a bit of a fan of white-boarding a presentation myself, so it was great to see someone else doing it. The structure really flowed.

Here is the final image drawn up.

Here is the version handed out afterwards.

I thought it was good that IBM found it necessary to first present their overall strategy as a framework to then reference their products into. In fact a single product might address multiple areas of the strategy. It shows a good consultative approach from a vendor, not all speeds and feeds.


SingTel on Cloud and the Australian market


Bill Chang, Executive Vice President of Business at SingTel talks in this video on the SingTel Cloud vision along with his views on the Cloud market and NBN in Australia.

Worth a watch if you are tracking Cloud in our region.


Disclaimer : I earn a living working for one of the subsidiaries of SingTel.

SNIA Blogfest Interview with IBM


At the SNIA blogfest in Australia, Anna Wells presented the IBM strategy for storage. Anna is the Lead for IBM storage across A/NZ.

I caught up with Anna and recorded this quick interview afterwards to get a summary.

I will post up later the full video of the presentation.


SNIA Blogfest Australia participants

Tuesday, November 02, 2010

This coming Thursday is the SNIA blogger event, which will cover EMC, HDS, IBM and NetApp.

Bloggers who are attending are:

Ben Di Qual : @bendiq
Graeme Elliott : http://itknowledgeexchange.techtarget.com/art-of-storage/ @GraemeElliott

The people presenting for the vendors are:

EMC : Clive Gold @clivegold & Mark Oakey @maoakey
HDS : Adrian De Luca
IBM : Joe Cho
NetApp : John Martin @life_no_borders

Of course the host is Paul Talbut of SNIA Australia @SNIA_ANZ

Expect to see some tweets, possibly live blogging and some great blog write ups, photos and videos of the event.

I have created a twitter list of all the twitter handles that I could find, so you can easily follow the group activity. http://twitter.com/rodos/snia-blogfest-2010

Update : Justin came up with a sweet hashtag #sniafest, love it.


vForum LiveBlog for Day 2 Keynote

Wednesday, October 27, 2010

If the technology comes together I hope to live blog the keynote from Day 2 at Sydney vForum.

Feel free to join the conversation.


vForum Sydney Solution Exchange Booth Awards

Tuesday, October 26, 2010

In the tradition of social media we bring you the first ever (and possibly last) vForum booth awards. Success is based on top secret criteria, awarding points in several random and sarcastic categories, as well as a little honest opinion.

Rodney Haywood and Alastair Cooke (@DemitasseNZ / www.demitasse.co.nz) have dedicated minutes of their time to bringing you the best and worst of the booths, saving you the arduous journey through the crowds.

In no particular order, our awardees are:

Dedication to Booth Duty Award

Not a booth award, this is a personal award for the person who has shown outstanding effort above and beyond the call of duty.

David Caddick of Quest Software wins this award for still coming to, and standing at, the Quest booth all day after requiring four stitches in his leg this morning. David’s enthusiasm for an early jog led him to a lacerating encounter with the back stairs.

Well done David for not giving up and being there.

The Is There Anybody Home Award

This award recognises the booth that is there but fails to deliver a visible presence of life. Is there a positive message from having a booth with nothing to deliver your message?

Charles Sturt University wins this award for the apparent absence of life or anything informative (apart from a mobile Esky). The presence of cold beer in said Esky would have removed this booth from eligibility for this award; alas, it was a case of a pub with no beer.

Thank you, come again.

The Puritan’s Booth Babe Award.

This award is inspired by Thomas Duryea’s annual effort to decrease the amount of clothing worn by vForum Booth Babes. Our puritan family values cannot condone the use of the scantily clad female form in a male dominated event as a marketing tool, hence our award goes to the Booth Babes presenting the height of puritan values.

The award goes to the VCE booth where the Booth Babes were covered from ankle to neck, along with modest head coverings. This is an excellent representation of Slip, Slop, Slap that would survive even a summer’s day on Bondi Beach.

We are very pleased to see a number of women on booths with excellent technical knowledge, not simply Booth Babes.

Show Me The Hardware Award

This award is to recognise the hardware vendor who has failed to show their product. Attendees are all interested in viewing and discussing your wares, which is hard if there is only a brochure.

The award goes to Dell, one of the largest server and storage hardware vendors, who only had a few laptops on their stand showing PowerPoint. We were not the only critics to notice the lack of tin.

Maybe it’s all in the cloud.

Bravery Award

This award goes to a vendor that has gone the furthest and taken the risks. The unkind could call them cowboys, but we call them courageous.

The award goes to the brave boys at CORAID, a new entrant into the storage market in Australia. With little local sales presence and only imminent VMware HCL support, they nonetheless braved the big money vendors and brought actual hardware with blinking lights. Great to see them giving it a go in this crowded space.

Bigger and better next year.

Interactive Playground Award

This award is for the booth where the geeks got to play. vForum is for engaging and learning, this award celebrates booths that do this well.

The award goes to Cisco, whose booth had a structured schedule of open briefings on their products, with excellent giveaways. There were a number of blades to touch and look inside; the hardware wasn’t just there, it was there to be investigated. There were numerous technical people who sought questions to answer.

Geeks delivering to Geeks.

Buffet Award

This award recognises the booth with the broadest range of products and solutions visible. There needs to be something for everyone and the whole family should go home having had their fill.

The award goes to VMware who, despite it being their own event, showed an enormous range of products at a detailed level; every product was actually there to be touched and used. More than a dozen products were identifiable from a distance and there were experts on them all.

No one trick Pony.

Most Appealing Booth Award

This award recognises the booth that stands out and draws you to it; with so many booths it all becomes a blur. Something different must greet the eye.

The award goes to Trend Micro for their appealing carnival theme; from a distance you could see it wasn’t your ordinary booth. With the space limitations and sundry restrictions placed by event organisers it takes effort to stand out.

Who says conferences are all a circus?

Thanks to all the exhibitors who make the event so valuable. Congratulations to the winners. All those that didn’t make this year’s list should be planning for next year’s awards.

Rodos and Alastair

Australian Storage Blogfest

Friday, October 15, 2010

Are you a blogger in Australia who covers a bit of storage? If so then you will want to know that SNIA Australia is hosting a blogfest with a range of storage vendors on Thursday November the 4th in Sydney.

It's a one-day tour, face to face, with the major storage vendors IBM, HDS, EMC and NetApp.

Here is what SNIA have to say about the event :
  • Bloggers benefit from a direct engagement with the key storage vendors to learn more about their technology and product strategy. With the increasing influence that bloggers have with customers it is important that authors are technically accurate and have a broad perspective of technology and the product/feature sets that differentiate the vendors. This is a unique opportunity to do a direct comparison across four major vendors in one day, at no cost other than an investment of your time.
  • Sponsors benefit from a direct engagement with key influencers about their technology and product direction/strategy. Their participation in this session can ensure their technology is well understood by those who have the influence over customer perception. This is a rare opportunity to interact directly in an open, friendly environment with these key authors.
  • The presentation sessions will be focused on technology themes determined by the bloggers themselves. The Blogfest is a one-day event with four 1.5-hour sessions. Each session includes time for technical presentation, hands-on activities and delegate discussion. We expect direct questions and feedback in return.
This should not be a dump of the latest product releases from each vendor. It will hopefully be a good discussion on storage technologies and the market. Moving from one vendor to another you can really compare and contrast where each stands and their strengths and weaknesses.

Here is my personal recommendation for the topic for the day (which has not been accepted yet, its just an example).
“What new technologies in primary storage do you believe really give customers better bang for their buck in the long term? What has your organisation seen and done in these areas in the last 12 months? Which new technologies do you think might not give customers the return they may think, or are too risky to adopt in primary storage, and explain your reasoning.”
I think there are many ways to answer that and it will be great to hear what IBM, HDS, EMC and NetApp's views are. Could be very telling. If you have an idea of a better theme or topic let me know.

The drawbacks are that you can't work for a storage manufacturer; bloggers must all be recognised independent writers on data storage related subjects working in Australia or New Zealand. You will need to take the day off work, and if you are not in Sydney you will have to cover your own travel costs to the event (worth a flight, I say). Travel throughout the day and meals are covered.

We really need to ramp up the amount of storage blogging happening in Australia and credit to SNIA and the vendors for giving this a go. Hopefully if this is successful we can get them to cover some travel costs next time around or hold it in a different city so that more people can be involved.

If you are interested, even if you are not a blogger but a person of influence in a community, then contact Paul Talbut, General Manager of SNIA ANZ with your details via paul.talbut@evito.net (or me). You never know, you may just get an invite!

I hope to see many people there!


Automating the vCloud API with F5

Thursday, September 23, 2010

Now that VMware vCloud Director is out in the wild we are going to see lots of clever use of the vCloud API.

An obvious and very natural use case is Cloud bursting, which really shows the power of hybrid Cloud. F5 Networks have some great technologies to stitch all of this together.

Here is a video of using F5 technologies to burst in and out of the Cloud using the vCloud API. If your application can support this horizontal scaling, it's a great use case.
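The core of any bursting setup is a simple control loop: watch utilisation, add external capacity past a high-water mark, drain it below a low-water mark. Here is a toy sketch of that decision logic; the thresholds are arbitrary and the "burst"/"drain" actions are placeholders for what, in the setup shown in the video, would be vCloud API calls driven by F5 monitoring data.

```python
# Toy cloud-bursting control loop. Thresholds and sample handling are
# illustrative assumptions, not taken from the F5 demo.

HIGH_WATER, LOW_WATER = 0.8, 0.3

def burst_decision(load_samples, burst_active):
    """Return 'burst', 'drain' or 'hold' from recent utilisation samples."""
    avg = sum(load_samples) / len(load_samples)
    if avg > HIGH_WATER and not burst_active:
        return "burst"   # e.g. instantiate a vApp template in the public cloud
    if avg < LOW_WATER and burst_active:
        return "drain"   # e.g. power off and delete the burst vApp
    return "hold"

print(burst_decision([0.9, 0.85, 0.95], burst_active=False))  # burst
print(burst_decision([0.1, 0.2], burst_active=True))          # drain
```

The hysteresis gap between the two water marks is the important design choice: without it the system would flap, bursting and draining on every small load swing.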



Tuesday, September 07, 2010

You meet some interesting vendors on the show floor at VMworld. One I ran into was CORAID, who have the interesting idea of not using Fibre Channel or iSCSI as the transport for storage connectivity; instead they use ATA over Ethernet (AoE). Certainly something I had not heard of before, but with all the noise around FCoE and FCoTR, why not AoE?

By using commodity hardware and lightweight Ethernet, vendors like CORAID can reduce the cost of deploying storage arrays. They have a range of hardware, although they don't have certification on the VMware HCL yet. I believe certification is coming soon.
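For the curious, AoE really is as simple as it sounds: an ATA command wrapped in a raw Ethernet frame (ethertype 0x88A2), with targets addressed by shelf and slot rather than by IP. Here is a small Python sketch that parses the fixed AoE header as laid out in the published AoE spec; the sample frame is hand-built for illustration.

```python
import struct

AOE_ETHERTYPE = 0x88A2  # registered ethertype for ATA over Ethernet

def parse_aoe(frame: bytes):
    """Parse the fixed AoE header that follows the Ethernet header.
    Layout per the AoE spec: ver/flags, error, shelf (major),
    slot (minor), command, tag."""
    dst, src, ethertype = struct.unpack_from("!6s6sH", frame, 0)
    if ethertype != AOE_ETHERTYPE:
        raise ValueError("not an AoE frame")
    verflags, error, major, minor, command, tag = struct.unpack_from(
        "!BBHBBI", frame, 14)
    return {
        "version": verflags >> 4,
        "flags": verflags & 0x0F,
        "error": error,
        "shelf": major,      # targets are addressed as shelf.slot,
        "slot": minor,       # e.g. 1.0 -- no IPs, no TCP, just layer 2
        "command": command,  # 0 = ATA command, 1 = query config
        "tag": tag,          # matches responses to requests
    }

# A minimal hand-built frame: broadcast dst, AoE version 1, shelf 1 slot 0
frame = b"\xff" * 6 + b"\x00" * 6 + struct.pack(
    "!HBBHBBI", AOE_ETHERTYPE, 0x10, 0, 1, 0, 1, 0xDEADBEEF)
hdr = parse_aoe(frame)
print(hdr["shelf"], hdr["slot"], hdr["command"])  # 1 0 1
```

No routing, no TCP state, no iSCSI login: the whole header fits in ten bytes, which is exactly where the "lightweight" cost saving comes from (and also why AoE traffic is confined to a single layer-2 domain).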

CORAID have some interesting founders who have come from such places as VMware and Kidaro.

The main reason I wanted to mention them was because they are going to be present at VMware vForum in Sydney, Australia, on October 26th to 27th. If you are going along, why don't you check out what CORAID are all about. It's great to see vendors which you come across in the US starting to take an interest in Australia and attending the local conferences.


Maritz on vCloud Datacenter

Wednesday, September 01, 2010

At VMworld yesterday Paul Maritz hosted a supper session after the keynote to explain more about the VMware vCloud Datacenter program.

Here are the session details.

Session Title: Public Cloud Computing Gets Real: Announcing New Enterprise-Class Service that Delivers on the Promise of Cloud Computing
Schedule Information: Tuesday, 12:30 PM (Room: Moscone North Room 134)
Speakers:
Kiran Sanghi, Virtualization Strategist, Teradata Corporation
Bob Evans, Senior Vice President
Paul Maritz, President and Chief Executive Officer, VMware, Inc.
Bill Chang, Executive Vice President of Business
Kerry Bailey, Chief Marketing Officer, Verizon Business
Bates Turpen, Senior Vice President, Technical Operations – Global Technology, InterContinental Hotels Group
James Johnson, Senior Vice President, Global Technology Services
Length: 60 minutes

Abstract: VMware CEO Paul Maritz will be discussing VMware’s newly announced cloud technology and how it’s being used as the foundation for a secure, high-performance vCloud service that is being introduced by leading service providers around the globe. Paul will be joined by key service provider partners that are delivering this new class of service, as well as enterprise customers who are at the forefront in leveraging public cloud services. In this session, you’ll learn how cloud computing is evolving traditional IT environments, making them more agile, secure and flexible.

It was a great session; it was good to hear from the providers and customers on the program. Here is a video which includes most of the session.


Disclaimer : I work for a subsidiary of SingTel who were one of the panel participants. Either way, it was still a good session with great information from speakers.

VMware vCloud Director


Well, today was a big day at VMworld, as well as for my own activities. After much speculation and waiting, VMware have now released a Cloud product; yes, project Redwood has gone GA.

Like me you may remember VMworld 2008 where VMware first put out their vision for Cloud. I have been blogging about it ever since my first post on the topic on Sept 23rd 2008. Like me you probably remember being disappointed that in 2009 the only thing released was vCloud Express which was not really a product but a program.

Today all of that has changed both for VMware and me. VMware actually launched a product, VMware vCloud Director (VCD) plus a new program called vCloud Datacenter.

First the product, VMware vCloud Director.

I have been working with VCD since before the beta. In fact, as far as I know, we were the first people on the planet to install VCD outside of VMware (we just beat one other company who had the alpha code). Since then we have been through the beta program and have been keenly awaiting the released version. It's been a long road, but it means that we are very intimate with the product and have been able to gain lots of operational experience. We have been running some beta trials with some of our own customers on top of VCD to get some field experience for ourselves as well as VMware.

Over the next weeks you will probably see information coming out about VCD, for the moment check out the product pages on the VMware website and I would recommend looking at some of the "Meet the Engineers" videos in the resources page. There are some great sessions on VCD at VMworld alongside the mystery Lab 13 which will show you how to do a quick installation of the product.

Second the program, vCloud Datacenter.

Here is how the VMware press release describes the program.
While public cloud services have created an alternative for delivering compute capacity in a self-service, pay-per-use model, security concerns, uncertain SLAs, lack of compliance and fears of lock-in have limited enterprise adoption. VMware vCloud Datacenter Services provide a way for enterprises to extend their datacenters to external clouds, while preserving security, compliance and quality of service. Delivered by some of the world’s leading service providers, including Bluelock, Colt, SingTel, Terremark and Verizon, VMware vCloud Datacenter Services will use globally consistent infrastructure, management and security models to make it possible for enterprise customers to move computing workloads from internal virtualized infrastructure to an external cloud and back.
You can see that there are five providers that cross major parts of the globe. This includes SingTel, which is the parent company of Optus, who owns my company, Alphawest. It is great to be part of the program. However, I prefer how it was quoted in the press.
Balkansky expects only a "few dozen" partners globally would be certified as 'vCloud Datacenter Service Providers'.

"We needed to have a best of the best, a quality set of partners we could go with jointly to our customers," said VMware CEO Paul Maritz.
So what this means is that VCD is a product from VMware. Anyone can go and purchase it to build an internal or a public Cloud (subject to licensing programs of course). But VMware have worked very closely with a select few providers, especially in this early stage of the product to provide some consistency and assurance around the services that are offered for external Cloud.

As the Principal Architect for a major deployment of VCD in Australia which will be launched sometime in the future (I am of course not allowed to say when), I am very pleased to have the product and the program now publicly available. I have had a few people comment about my lack of blog posts over the last year, and I can tell you it's been quite frustrating. Much of what people blog about is what they interact with on a day-to-day basis, and for so long everything I have been doing has been under heavy NDA, so I just have not been able to talk about it. Today is not the day to be doing deep dives into VCD; I will do that over the coming weeks as I share my likes and dislikes of the product.

Essentially today has been a day to celebrate the end of a long journey for many. It was good to see Eddie Dinel, the product manager for VCD on stage during the keynote. I managed to catch up with the development team as they celebrated tonight the end of their hard work. Here are some photos from the event.

Left to right are some very important people that I have worked with closely on all of this.

Rodos (that's me), Principal Architect - Datacenter and Cloud, Alphawest.
Eddie Dinel, Product Manager for VCD, VMware. Eddie was on stage during the keynote.
Mike DiPetrillo, Principal Systems Engineer, Global Cloud Architect, VMware.
Phil Weiss, vCloud Solutions Architect APAC, VMware.

I also got to meet some of the developers, who were let out of their cages. Below is a photo of a team member who worked on the UI experience and a programmer who worked on the UI plus a number of other areas.

Now it's time for some sleep. VMworld has two more days. I look forward to VCD getting out into the wild; it's a good thing for the Cloud market to have some new options and players coming to market.


P.S. I don't speak for my employer; these are my personal views and comments. If you are press and you want any comments, contact the media department at my company. If you are a blogger, a virtualisation geek or someone thinking of deploying VCD as a private or public cloud, I would be more than happy to discuss with you my experiences of building and deploying one of the first global implementations.

VMworld - Future of Networking

Tuesday, August 31, 2010

First session this morning at VMworld was by Howie Xu, R&D Director, Virtualisation and Cloud Platform at VMware. Howie is the networking futures guy. There was much expectation for this session, with speculation around its content.

Here are some of the items that Howie talked about.
  • The lines between servers and networking are being lost. The two are blending. The network needs to be abstracted from the workload. But the rate of change in virtual environments at the networking layer is now high, and companies can't fund the staff to keep up with these tasks, which are generally quite standardised.
  • The different networking services from layer 2 to 7 are a headache to manage and co-ordinate. As we head for the Cloud this is going to get worse.
  • Moving beyond the Distributed Virtual Switch, we need to move to the "Distributed Virtual Network". We need to be able to do networking with anything, anytime, anywhere, at any scale. We need a standard network management layer (either physical or virtual).
  • Much of the problem can be solved through virtualisation, that is, having a first layer of abstraction, but still keeping functions such as separation of duties.
  • The network must be made transparent with the same services whilst being able to scale out on demand.
  • A new vision for a vChassis which contains a data management and control plane that is a "session centric" virtual platform.
  • Today's networking is based on discovering things, such as addressing via DHCP and learning MAC addresses. Yet in this new world the virtualisation layer can be authoritative; it knows all of the details and does not need to learn them.
  • A vChassis should talk to virtual 3rd-party line cards that provide services, such as IDS. These need to be able to interact with hardware in some cases for offload, for example SSL.
  • There are problems with doing networking today: the IP address is used for both identity and location, and VLANs lack features like a hierarchy. You have to pre-provision VLANs to get around things, but it's a little messy. We need a virtualised layer 2. There was mention of vShield Zones/App; expect to see more of this detailed and discussed this week.
  • A mock-up screen was shown of what this may look like (see picture above).
Being one of the first sessions, before the announcements were made, I think some of the details which might have been discussed were left out. Hence it was a good session showing where VMware are going, but it lacked that little bit of detail which gets your brain really thinking. Great to see that VMware are dealing with the management problems and including facilities for 3rd-party vendors to integrate.

Hopefully it will be a little clearer for everyone by the end of the week.


Dell acquiring 3PAR

Monday, August 16, 2010 Category : , , , 0

Twitter is all awash with news that Dell have acquired 3Par.

There have been a few who have been suggesting that 3Par was next on the acquisition trail. They have some good technology and certainly a more modern architecture (love those chunklets and what you can do with them).

What is also interesting is that Dell only recently went out and purchased another storage player, Ocarina. I wrote about Ocarina last year when we visited them as part of TechFieldDay. What does Ocarina do? Compression for storage. Now 3Par do great thin storage but don't do compression. One wonders which of the Ocarina technologies Dell might be planning to integrate into their new purchase.

Certainly interesting times in the storage industry. The stack wars are heating up maybe?


VMworld 2010 Backpacks for Charity - Australian Version

Friday, August 13, 2010 2

Inspired by the idea from Kevin Houston and the Gestalt IT VMworld giveaway, I am running a program for Australians to donate their VMworld swag to charity.

As Kevin has encouraged
This year, IF you receive a bag or backpack that you just don’t want, please don’t throw it away, but instead take it home, go to the dollar store and fill the backpack with pencils, crayons, paper and erasers and donate it to your local school system. You would be AMAZED to find out the numbers of children who don’t get backpacks and whose families can not afford the costly school supplies that are required each year. You will be making some family happy and you’ll get the name “VMware” marketed throughout the schools, getting the next generation of techno geeks ready to learn all about virtualization.
So that is what we are doing in a more organised way.

We are looking for 20 people attending from Australia who are willing to donate their unused VMworld 2010 bag to the Salvation Army Community Center in Dubbo. They have a great need, especially at the start of the school year, for school equipment such as this. The Salvos are a much respected and loved charity in Australia who do a lot of community work and provide many services to people in need. Dubbo is a regional town in NSW which has a high indigenous population.

The details :
  • We want 20 people to pre-register that they want to participate. If we get more, the Salvos have said there is no such thing as too many.
  • You need to get your bag, in great unused condition, to Sydney. If that is a problem I can probably arrange to get it here if you can drop it off at an Alphawest office, which are in most capital cities.
  • You don't have to worry about buying the stationery, but you do have to pay for it. We will be doing an inventory of materials for each bag and buying in bulk. My wife is a teacher and will assist with the list. The budget will be around $30 to $40 each, and you need to be able to put this into a PayPal account when needed.
  • It would be good to get some "suitable" additional materials to include. Maybe your company can donate something. Something suitable would be a good-sized USB thumb drive, a piece of sporting equipment or a drink bottle. Being branded is no issue, especially if it's a well-known brand (like Optus, who I am trying to stitch up). What's not appropriate: junk, such as old 64MB memory sticks or old marketing junk you just can't get off your hands.
  • We will arrange to get them to Dubbo either directly or through the Salvos. However, if you are interested in an awesome road trip to deliver them in person, let us know.
If you are not going to VMworld or can't participate please do what you can to spread the word.

To express your interest you can go to the form directly or use the embedded version below.

Thanks for your interest and support!


CardScan for Mac Review

Friday, August 06, 2010 3

Over the last year I have been transitioning my work off my PC SOE to my MacBook Pro. Until today one of the few applications I still had to run via my VMware Fusion Virtual Machine was my business card software.

The business card scanner is one of my critical business applications; it's from CardScan. I meet a lot of people and therefore collect a lot of business cards. To save time I simply throw them through the neat colour scanner, which OCRs the text and populates the fields. It then syncs with my mail and hence my Blackberry, so every contact is always at hand.

Earlier this week I noticed that a Mac version was available and I was keen to give it a try. The fantastic thing is that CardScan don't do what you might expect and charge you all over again for the new software. If you own the scanner you can download the Mac version at no cost. Fantastic!

The first hurdle was getting access to the software. For some reason many of the links on the support website don't seem to work from Safari. I had to contact support to get the actual download URL. Once I had the software installed, I was stuck on the serial number entry page. Turns out you need an updated serial number. Support sent me to another web page that would not work in Safari. I tried it from IE and it worked! You are required to enter your existing Windows software serial number and the serial number of the scanner. Then the software passed validation and loaded up, happy days!

The first very terrific thing about the application is that it supports the database from the Windows version. I simply opened up the existing .cdb and all of my data was there. No data migration or transition. Well done CardScan!

I scanned some new cards. The first thing I noticed was that it scanned a card and then processed it straight away, a much slower process. The Windows version does not process a card until you ask it to, so you can batch scan really fast. It felt strange just because of the way I was used to scanning, but then I noticed that there is still a batch scan button. Great.

The online help was good. I was wondering how I would sync with my messaging application, so I jumped into help. Very quickly I was able to figure out that through preferences you can configure auto sync with your Contacts. The sync does not appear to be as sophisticated as it is in the Windows version, but maybe it does not need to be; it just appeared fancy previously due to the difficulties of working with Outlook.

The Mac version is exactly what I want, the same familiar features and facilities. Everything has worked just great once past the download and serial number issues.

If as a Mac user you are after a powerful business card scanner, then check out CardScan. In my experience you won't be disappointed.


Australian Bloggers

Thursday, August 05, 2010 4

Okay here is a list of bloggers in Australia who cover the IT Infrastructure, Virtualisation, Data Centre, Storage and possibly virtualised Cloud space.

You will see that the list is short at the moment, as I am trying to create it. Please post a comment or send me a tweet if you know an active blogger in Australia who covers any of these areas in IT.

In alphabetical order.
You know we need more good tech bloggers in this country!


Note : I have put the employer of each author not to make any statement but simply to blog with integrity. As I am pointing people to these sites, others may not be aware of each person's situation. I am sure that all these people write their own views, not their employer's. If you are listed and you want your employer removed, let me know and I will delete the entry.

P.S. To get added you have to have had more than a few posts in the last few months, be on topic and live in Australia. Posting on Apple products all the time won't make the grade. Apologies if you are an awesome Australian blogger and I don't have you there yet, especially if we are friends; innocent mistake, let me know!

Get yourself to VMworld FREE

Category : , , 1

Want to get yourself to VMworld San Francisco but can't convince your boss or afford to drag yourself there under your own steam?

Well here is your chance to win a free conference ticket, plus the accommodation and airfare. The airfare can be international so all you Australians out there, this one is for you too!

The prize is being organised by Gestalt IT and sponsorship is being provided by Xsigo and Symantec!

All the details are over at the post "Announcing the Gestalt IT “Get Away to VMworld” Contest!".

The winner will be picked by a group of judges (I am one) based on how well you plan to "pay it forward" from your win. Describe how you will share your enthusiasm for VMware, virtualisation or whatever it is you are into as a result of your visit to VMworld.

This is my 5th year in a row at VMworld, so I can attest it's the biggest geek fest and party of the year. So get your entry in and I might just see you there!

Good luck.


Monitoring your UCS faults with syslog

Wednesday, July 28, 2010 Category : , 2

When you deploy your UCS environment, one of the first things you will want to do is integrate it into your monitoring system. One way is through syslog. Here are some notes and tips.

When problems occur in your UCS environment they will appear as Faults inside the Administration area. Click on the screen shot below to see some.

One thing to know is that this page only shows you the current alerts; once they clear, they disappear.

Here is an example alert exported from my system.
Severity | Code | ID | Affected object | Cause | Last Transition | Description
major | F0207 | 225741 | sys/chassis-1/blade-4/adaptor-1/host-fc-2/fault-F0207 | link-down | 2010-07-28T12:18:59 | Adapter host interface 1/4/1/2 link state: down

One of the key bits of information you are looking for is the fault code; in the example above it's F0207. With that code you can look it up in the Cisco UCS Fault Reference.

If you search the reference for that code, here are the details presented.


Fault Code:F0207


Adapter [transport] host interface [chassisId]/[slotId]/[id]/[id] link state: [linkState]


This fault typically occurs as a result of one of the following issues:

The fabric interconnect is in End-Host mode, and all uplink ports failed.

The server port to which the adapter is pinned failed.

A transient error that caused the link to fail.

Recommended Action

If you see this fault, take the following actions:

Step 1 If an uplink port is disabled, enable the port.

Step 2 If the server port to which the adapter is pinned is disabled, enable that port.

Step 3 Reacknowledge the server with the adapter that has the failed link.

Step 4 If the above actions did not resolve the issue, execute the show tech-support command and contact Cisco technical support.

Fault Details

Severity: major  
Cause: link-down  
mibFaultCode: 207  
mibFaultName: fltAdaptorHostIfLinkDown  
moClass: adaptor:HostIf  
Type: network
All codes are listed, and the fault reference may be a valuable resource the first time you come across an error.

From here you will typically want to send these alerts to your management platform for automated monitoring. A great way to do this is via syslog. Cisco have a good guide, "Set up Syslog for Cisco UCS", you can follow for doing the configuration. Here is a shot of the page where you set it up.

Now once this is configured, the alerts will appear in your syslog server.

Here is what our example above looks like as a syslog entry.
Jul 26 01:05:01 : 2010 Jul 26 01:08:54 EST: %LOCAL0-3-SYSTEM_MSG: [F0207][major][link-down][sys/chassis-1/blade-4/adaptor-1/host-fc-1] Adapter  host interface 1/4/1/1 link state: down - svc_sam_dme[3250]
Jul 26 01:05:14 : 2010 Jul 26 01:09:07 EST: %LOCAL0-3-SYSTEM_MSG: [F0207][cleared][link-down][sys/chassis-1/blade-4/adaptor-1/host-fc-1] Adapter host interface 1/4/1/1 link state: down - svc_sam_dme[3250]
You can see the fault ID F0207, which you can use as a reference. Notice also that I have copied in two entries. One is the first event, where the fault occurred with the severity level "major", and then there is another entry which states "cleared". You will want to filter out the cleared ones, or if you have a smart system, get it to match the two so you know which events have been resolved.
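For automated filtering, a small script can pair up the raised and cleared entries. Here is a minimal sketch in Python; the parsing regex is my own, based only on the field layout visible in the entries above, not anything Cisco supplies:

```python
import re

# Fields embedded in a UCS fault syslog message:
# [code][severity][cause][affected object] description
FAULT_RE = re.compile(r"\[(F\d+)\]\[([\w-]+)\]\[([\w-]+)\]\[([^\]]+)\]\s*(.*)")

def active_faults(lines):
    """Return faults that have been raised but not yet cleared.

    A 'cleared' entry cancels an earlier entry with the same
    fault code and affected object.
    """
    faults = {}
    for line in lines:
        m = FAULT_RE.search(line)
        if not m:
            continue  # not a UCS fault message
        code, severity, cause, obj, desc = m.groups()
        key = (code, obj)
        if severity == "cleared":
            faults.pop(key, None)  # fault resolved, drop it
        else:
            faults[key] = {"severity": severity, "cause": cause,
                           "description": desc.strip()}
    return faults

log = [
    "Jul 26 01:05:01 : ... %LOCAL0-3-SYSTEM_MSG: [F0207][major][link-down]"
    "[sys/chassis-1/blade-4/adaptor-1/host-fc-1] Adapter host interface "
    "1/4/1/1 link state: down - svc_sam_dme[3250]",
    "Jul 26 01:05:14 : ... %LOCAL0-3-SYSTEM_MSG: [F0207][cleared][link-down]"
    "[sys/chassis-1/blade-4/adaptor-1/host-fc-1] Adapter host interface "
    "1/4/1/1 link state: down - svc_sam_dme[3250]",
]
print(active_faults(log))  # empty: the cleared entry cancels the raised one
```

Feed it your syslog stream instead of the sample list and whatever is left in the dictionary is your current fault set.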

Hopefully the examples assist some people.


UCS Platform Emulator

Tuesday, July 27, 2010 Category : , 11

Cisco have released an emulator for the Unified Computing System (UCS). If you are working with UCS you can now run UCSM from your desktop without needing hardware, making training, testing and documentation much easier.

To get started go to the download page at http://developer.cisco.com/web/unifiedcomputing/start and complete the registration form [update : the form validation is very painful; just keep trying, ensure you fill out all fields and maybe put in a valid phone number format]. You can then download the virtual machine which runs the emulated environment. The download is 2.16GB.

Open up the VMX file in your favourite VMware software (I use Fusion on my MacBook) and it will boot, giving itself an IP address. It only uses a single vCPU, 1GB of RAM and close to 6GB of disk.

Most of your activity will be via the GUI but you can change what your emulated UCS environment looks like via the console of the machine. Login with the username "config" and password "config" and you are presented with a simple menu.

It's handy being able to set the number of chassis and blades. You don't have a lot of flexibility; for example, all chassis have the same number of blades and you can't have 4 uplinks, only 1 or 2.

Once you have configured your environment, point your web browser to the allocated IP address. Click the "launch" button to load the Java management GUI.

Once you log in you have the standard interface and can interact with many of the elements.

Of course, as it's an emulated platform, some things don't work: there is no data path, no SNMP, no KVM, no Telnet/SSH, no CLI, no RBAC and only limited HA functions. Also, the VMware Tools in the machine are out of date. Sounds like a lot, but it's still quite functional.

If you or others in your company need to work with UCSM I recommend you check the emulator out.


Gestalt IT TechFieldDay Seattle - Nimble Storage

Tuesday, July 20, 2010 Category : , , 0

Take a pile of smart people with backgrounds from Sun, NetApp and Data Domain, throw in a few PhDs (I assume) and see what falls out; that's Nimble Storage, who launched at Gestalt IT TechFieldDay Seattle.

The company was formed in 2008, based in San Jose. The two founders are

  • Varun Mehta (Sun, NetApp, Data Domain)
  • Umesh Maheshwari (Data Domain)
They have some interesting people on their board of directors as well
  • Suresh Vasudevan (Omneon, NetApp, McKinsey)
  • Kirk Bowman (Equallogic, VMware, Inktomi)
Nimble call their technology game-changing, taking what was available in separate products and putting it all into one. Nimble covers iSCSI primary storage, backup storage and disaster recovery in a new architecture that combines flash and high-capacity, low-cost SATA in a new way.

This brings flash into the range of many enterprises who would like to use it for more common workloads like Exchange, SQL and VMware. Their target is organisations with 200 to 2000 employees.

Nimble's competition in the iSCSI market, with market shares (from IDC): EqualLogic has 35%, EMC 15%, and HP and NetApp are around 10% each.

Nimble have done the brave thing and started with a clean sheet of paper to try and create something that no one else can deliver.

The problems they are trying to solve are delivering fast performance without all those expensive disks, efficiently backing it all up, and replicating that data to a second site for continuity purposes.

Techniques include
  • capacity optimised snapshots rather than backups
  • FLASH is used to give great performance
  • replication that is efficient and based on the primary information, so that the time to recover and use that data is very quick; you don't need to wait for a restore
A key thing that Nimble bring is their CASL architecture, which provides the following :
  • Inline Compression. A real-time compression engine as data comes in. On primary datasets they are seeing about a 2:1 saving and on things like databases a 4:1 saving. Blocks are variable in size, and Nimble take advantage of the current state of multi-core processors, having a highly threaded software architecture.

  • Large Adaptive Flash Cache. Flash as a caching layer, starting at 3/4 of a TB for the entry box. They store a copy of all frequently accessed data, but all data is also stored on the cheaper SATA storage as well.

  • High-Capacity Disk storage. Using large SATA drives.

  • Integrated Backup. 60 to 90 days worth of "delta compressed incremental snapshots" can be stored on the system. They have put a lot of work into integration with Microsoft applications, integrating with VSS to ensure consistency. The snapshot efficiency should remove the requirement for a secondary backup system outside of the primary storage. Combine this with replication to a remote site and you have a protected system.

    Nimble showed the results of some testing they performed on an Exchange 2010 19GB database running snaps over 10 days; the other vendor (EqualLogic at a guess) consumed over 100GB of data whereas Nimble only consumed 3GB. A 35x improvement was claimed. This then results in less to replicate. It's suspected that the reason for this difference is the smaller and variable block size that Nimble can use; the competitor has a large block size.

  • Replication. The replication is point-in-time snapshot replication. One nice thing that you can do is maintain different retention periods at each site. For example, you might want to maintain a much higher frequency of snaps locally and a less frequent but longer tail of snaps over at DR, very nice. They have a VMware Site Recovery Manager (SRM) plugin in development but it has not been certified yet. Today you can't cascade replication but it will be coming in a future release. Cascading may be important for people who want to use the Nimble for backup, replicate locally and then offsite.
The benefits that result from CASL are :
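To get an intuition for why inline compression on structured data can plausibly hit the 2:1 to 4:1 range quoted above, here is a quick sketch using Python's standard zlib on a synthetic "database-like" buffer. This is purely illustrative; Nimble's engine is their own and real-world data compresses less than deliberately repetitive sample data:

```python
import zlib

# Synthetic database-style records: lots of repeated field names and
# values, which is what makes 2:1 or better inline compression plausible.
rows = b"".join(b"id=%06d|status=ACTIVE|region=us-west|" % i
                for i in range(2000))

compressed = zlib.compress(rows, level=6)
ratio = len(rows) / len(compressed)
print(f"{len(rows)} -> {len(compressed)} bytes, ratio {ratio:.1f}:1")
```

Run against your own data samples, the ratio gives a rough feel for what an inline compression engine might save before you ever buy the hardware.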
  • Enhanced enterprise application performance
  • Instant local backups and restores with fast offsite DR
  • Eliminates high RPM drives, EFDs, separate disk-based backup solution
  • 60%+ lower costs than existing solutions
When you create volumes they can be tuned for various application types, tweaking such things as page size or whether it should be cached. The Nimble ships with a set of predefined templates for popular applications. The same goes for snapshot policies, which can be templated; a predefined set is provided.

The pricing estimates they have done come in at under $3 per GB for primary storage, with an entry price of around $50K.

Here are the specs of the units.

There is no 10GbE interface option yet, but it will be considered based on customer demand. The same goes for having a Fibre Channel interface. The controllers are active/passive on a system (not LUN) basis.

They currently have 10 to 12 beta accounts.

Umesh Maheshwari then gave some further details on the technology behind Nimble. A great discussion from someone who knows the industry and the technologies, as you would expect.

Nimble is all about having the
  • capacity to store backups (through hi-capacity disks, compression and block sharing) along with
  • random IO performance for primary storage (through Flash cache for random reads and sequentialized random writes)
This sequentialising technique was developed by Mendel Rosenblum in his PhD thesis in 1991 (see paper). If you don't remember, Mendel was one of the founding brains behind VMware, so his ideas have a good track record. It's called a Log-Structured File System.

So why hasn't this been done before? Well, it took technology a while to catch up to the idea. The original concept relies on the assumption that files are cached in main memory and that increasing memory sizes will make the caches more and more effective at satisfying read requests, hence the disk traffic will become dominated by writes. With only small amounts of RAM available, this was a problem. Secondly, the process requires a background job to do garbage collection.

Nimble have created CASL, an implementation of the log-based file system. It utilises a large amount of flash for the cache, and it's integrated closely into the disk-based file system. The index or metadata of the system is cached in the flash, and therefore the garbage collection can now work efficiently. Of course, cache is a bit of a simple word for what it does; it's not an LRU, there is some complex metadata being tracked for performance.

The second element is the sequential layout of the data on the disks. How you store data on disk could be categorised into 3 different techniques.

1. Write in place. e.g. EMC, EqualLogic
  • it's a very simple layout; you don't need lots of indexes
  • reads can go quite well
  • poor at random writes
  • parity RAID makes it worse
2. Write anywhere. e.g. NetApp WAFL (Write Anywhere File Layout)
  • more write optimised
  • between full stripes and random writes
  • it writes a sequence of writes wherever there is free space, so when you start it is sequential, but after a while the free spaces become fragmented and you end up doing random writes
3. Write sequentially. e.g. DataDomain, Nimble CASL
  • most write optimised
  • always does its writes in full stripes
  • good when writing to RAID
  • the blocks can now be variable in size, which is very efficient, and it has a secondary effect that you now have room to store some metadata about the block, such as a checksum
  • this requires the garbage collection process, which runs in idle times to ensure there is always space available for writing full stripes; what makes this work is that the index is in flash, plus the power of the current set of processors
  • the difference between what DataDomain do and CASL is that DD do their sharing based on hashes and CASL does it based on snapshots
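A toy sketch makes the sequential-write idea concrete. This is my own illustration of a log-structured layout, nothing to do with Nimble's actual code: every logical overwrite is a physical append, an index (which Nimble keep in flash) maps each block to its latest copy, and garbage collection compacts away superseded versions:

```python
class LogStructuredStore:
    """Toy log-structured layout: all writes append sequentially; an
    in-memory index maps each logical block to its latest position."""

    def __init__(self):
        self.log = []     # sequence of (block_id, data) records
        self.index = {}   # block_id -> position of the live copy

    def write(self, block_id, data):
        # Overwrites never touch old data in place: just append and
        # repoint the index, so the disk only ever sees sequential,
        # full-stripe style writes.
        self.index[block_id] = len(self.log)
        self.log.append((block_id, data))

    def read(self, block_id):
        return self.log[self.index[block_id]][1]

    def garbage_collect(self):
        # Background job: copy only live records forward, reclaiming
        # the space held by superseded versions.
        live = [(bid, data) for pos, (bid, data) in enumerate(self.log)
                if self.index.get(bid) == pos]
        self.log = live
        self.index = {bid: pos for pos, (bid, _) in enumerate(live)}

store = LogStructuredStore()
store.write("a", b"v1")
store.write("b", b"x")
store.write("a", b"v2")   # logical overwrite, physical append
print(len(store.log))     # 3 records before GC
store.garbage_collect()
print(len(store.log), store.read("a"))  # 2 records, b"v2"
```

It also shows why a cheap, fast index matters: garbage collection has to know which record is live for every block, which is exactly the lookup CASL serves out of flash.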
Of course this makes you wonder: what's the difference between the CASL cache and what many other providers are doing with a tier of flash?
  • Because the cache is backed by disk (the data is in the cache and on the disk) you don't need to protect the data in the cache. This means you can use cheaper flash drives and you don't need to do any parity or mirroring, giving you a saving of 1.3 to 2 times.
  • It's much easier to evict or throw away data in a cache than it is to demote data out of a flash tier into a lower one; you don't have to copy any data.
  • You don't have to be so careful about putting things in cache, as it's not an expensive operation, so all writes and reads can be put in cache for fast access if you need them again. Of course, a cache is a lot more effort to integrate into your file system than tiering, so if you are dealing with legacy code it's much harder than when you are starting from scratch like Nimble have.

I really got the feeling that Nimble are not trying to be everything to everyone. They are focused on a particular market segment, hitting their pain points and attempting to do it better than the incumbents are.

They have a few things to deliver in my opinion to reach the goal, such as
  • cascaded replication to offer true local and remote data protection
  • get the SRM module for VMware certified
  • it looks hard to scale out if you just need some further storage, as you can't add disk shelves; you get what you get. Yet there is nothing in their architecture to preclude some changes here, which is good.
The big question will be: is it different enough from the competitors for them to get into the market? If your only difference is doing something better (no matter how clever it is under the hood), how easy is it for your competitors to be "good enough" or at a much better price point? Some good marketing, sales force and channel are going to be key.

With CASL, Nimble certainly have some very nice technology, but nice technology does not always win in the market. It's certainly going to be great to see how their early adopters go and how they adjust the hardware range and feature set over the next 12 months!

Note that it's not available in Australia or EMEA yet.


Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want and how I want.

GestaltIT TechFieldDay Seattle - F5

Monday, July 19, 2010 Category : 1

A big vendor in the networking and Internet market is F5. We visited them on the Gestalt IT TechFieldDay Seattle.

As you can see the room was full of people.


Kirby Wadsworth (VP of Global Marketing) gave an overview of who F5 are and what they do. F5 see themselves as the strategic point of control in your data center architecture, optimising the relationships between users and the applications and data that they need.

F5 have 44% of the general application delivery controller market, which includes things such as load balancing and some minor layer 2 to 7 functions. In the advanced market, where you go beyond layer 4 load balancing and take advantage of caching, rate shaping and other elements, the share is higher.

F5 have a broad set of products, most of which run on their BIG-IP hardware platform. The BIG-IP runs the TMOS operating system, and these products plug in or layer onto TMOS. The core business is certainly around the Local Traffic Manager, where connections are balanced across servers; the Global Traffic Manager does this across data centers. There are many products in the range :
  • Local Traffic Manager (LTM)
  • Global Traffic Manager (GTM)
  • Link Controller (LC)
  • Application Security Manager (ASM)
  • WebAccelerator (WA)
  • Edge Gateway
  • WAN Optimization Module (WOM)
  • Access Policy Manager (APM)
To me one of the most exciting things is that earlier this year F5 released a virtual edition of their BIG-IP Local Traffic Manager. The LTM is a great device to run as a virtual machine, and thankfully it's not limited in terms of features. Great to see vendors starting to deliver choice to customers in how they would like to run the vendor's software! F5 did not make much of a deal about this, especially considering there were some virtualisation people attending. However, there is probably not much you can say about it.

Long Distance VMotion

Next we had a demonstration of long-distance VMotion. A really interesting part of this was that they used vCenter Orchestrator to control the BIG-IPs and the VMware tasks. It was great to see automation being done through Orchestrator workflows. It also shows the power of what you can do with F5 products when you start to pull multiple products together and automate them.

I have seen this before at VMworld and it's a little difficult to describe in great detail. If you are interested in it, seek out F5 at VMworld or look for the videos of the event, which will come online at GestaltIT later. There are multiple elements at work, including adjusting the load balancing pools, performing layer 2 over layer 3 tunnels and accelerating traffic, which is what makes the storage VMotion work in a much faster and more reliable way. The workflow did some nice things, such as, when starting, first waiting for the connections to the server being moved to drain after it had been removed from the balancing pool.


Next we had Joe Pruitt (Sr. Strategic Architect, @joepruitt) do a great talk on automation and control through the APIs of F5 technologies. They were very early to support SOAP and cover a lot of languages as you can see below.

We looked at what the APIs covered, which is just about everything you could ever imagine doing. A number of examples were walked through, which showed both the simplicity and the power of what you can achieve. They are split between iControl, which covers all of the admin-style processes, and iRules, which are the rules for the traffic.
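As a flavour of the iControl SOAP style, here is a sketch that builds the envelope for a pool-list call using only the Python standard library. This is not F5's own sample code; the urn namespace below is my recollection of the SDK's convention and should be checked against the iControl documentation (in practice you would use a SOAP client library rather than hand-building XML):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# iControl groups its admin interfaces into modules such as
# LocalLB.Pool; this urn is an assumption from memory -- verify it
# against the iControl SDK before relying on it.
ICONTROL_NS = "urn:iControl:LocalLB/Pool"

def pool_get_list_request():
    """Build the SOAP envelope for a LocalLB.Pool get_list call
    (which takes no arguments)."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    ET.SubElement(body, f"{{{ICONTROL_NS}}}get_list")
    return ET.tostring(env, encoding="unicode")

request = pool_get_list_request()
print(request)
```

POSTing an envelope like this (with authentication) to the BIG-IP's iControl endpoint is what the language bindings Joe showed are doing for you under the hood.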

My only issue was that the code examples were not quite real as they contained comments, who comments their code in the real world!

Joe was one of the most enthusiastic presenters across the two days and his passion and joy for the technology really showed, it was great!

Remote Access

We then had a demo of joining some of the F5 products together to provide a bigger and more complex solution: a global deployment of accelerated remote access. Using the global traffic director they could detect where the user was accessing from, align them with the appropriate entry point into the network (such as the local country) and then accelerate the resulting traffic. It was a good example of how, if you tie all these things together, you can do much more.


Next was a look at some storage technology: ARX. Data is growing, and file servers need to become building blocks where you can have policies to place data. ARX does this through open standards, namely NFS and CIFS. The ARX is a device that acts as an enterprise-class proxy file system. The diagram shown illustrates the structure.

You can take any storage you want, with the characteristics you want, and then use policies to move the data around as required. This is achieved by placing the ARX device in front as a proxy. The ARX appliance looks like a standard client to the lower tiers, so it will work with many storage systems. The example included Cloud storage, but in my opinion this was a little bit of Cloudwashing. Sure, the use case was there, but it relied on you using a Cloud provider who presented CIFS/NFS locally to your site; it's not that the ARX could transpose its requests to talk to a Cloud-based service (such as S3) directly. It was not an invalid example, but it does rely on a specific bit of technology that is not part of ARX.

The way ARX works is to lay a namespace across all of your tiers, track which piece of data (each file) is where, route/proxy the requests accordingly and move the data around the tiers as required. The database for routing the requests in real time is a non-trivial problem to solve according to F5; their namespace can contain a billion objects.
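Conceptually the routing works something like this toy sketch. It is purely my own illustration of the proxy/namespace idea; ARX's real routing database is vastly more sophisticated and, as noted, scales to a billion objects:

```python
class NamespaceProxy:
    """Toy file-virtualisation proxy: clients see one namespace;
    the proxy tracks which tier actually holds each file and routes
    requests, so policies can move files without clients noticing."""

    def __init__(self, tiers):
        self.tiers = tiers     # tier name -> {path: data}
        self.location = {}     # path -> tier name (the routing database)

    def write(self, path, data, tier="fast"):
        self.tiers[tier][path] = data
        self.location[path] = tier

    def read(self, path):
        # The client asks the proxy; the proxy routes to the owning tier.
        return self.tiers[self.location[path]][path]

    def migrate(self, path, dest):
        # Policy-driven move between tiers; the path the client
        # sees never changes, only the routing entry does.
        src = self.location[path]
        self.tiers[dest][path] = self.tiers[src].pop(path)
        self.location[path] = dest

proxy = NamespaceProxy({"fast": {}, "capacity": {}})
proxy.write("/reports/q1.doc", b"quarterly numbers")
proxy.migrate("/reports/q1.doc", "capacity")
print(proxy.read("/reports/q1.doc"))  # same path, now served from capacity
```

The sketch also hints at the backup problem discussed below: the routing table is the only thing that knows which tier holds a given path, so anything backing up the tiers directly lacks that knowledge.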

Curtis Preston discussed the issues around backup and restore with the way the data is laid out. The tiers supporting ARX are where you will probably need to back up, and they do not have all the knowledge. Backup is probably going to be okay, but restore is going to be hard and it's not fully baked. If you need to restore something, you are going to have to ask the ARX where to put the restored file, or where it was previously, so you can go and find it in your backup set.

F5 think the difference with ARX is that you can use multiple vendors on the backend and you are not having to use a stub-based solution like some of the alternative technologies.

An interesting last thought on this was the prediction that in a year data traffic management will be better understood, data will be considered another piece of traffic and managed accordingly.


F5 have a well-kitted lab with lots of their equipment, along with specialist devices for network emulation and testing. People enjoyed getting back into a server room after a long day.


F5 did a good job; they had some demos and the right technical people presenting, who knew their stuff. There might have been a few too many F5 staff filling the room, but when TechFieldDay is in the building no one wants to miss out, right!

The core F5 technology is good and mature; this came through in the earlier presentations. You also got to see how the different products could be combined together. The interesting part was the ARX. I am sure it is a difficult problem to solve at the scales they discussed. However, my feeling was it could do with its own interface into some Cloud APIs; maybe they are waiting for further standardisation. The backup and restore is a realistic problem, and people will want to have resolved how they might handle it in their environment. Because ARX integrates with the tiers as a client, the ability to leverage any great features of those tiers is abstracted or lost (though they could be handled directly at that tier). I wonder if there would be any advantage in the ARX being aware of certain elements to optimise its use of a particular tier vendor's implementation; for example, if it's proxying for a DataDomain device it might use a more efficient method or interface (not that I have a good example of what one might be). The ARX, from what I could see, only added the large namespace and tiering to the market. I am sure it's not an inexpensive solution, but I wonder if it needs some more tricks up its sleeve than those two to get key adoption. Certainly something to keep an eye on.

Thanks F5 for an interesting and fruitful few hours.


Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want and how I want.
