The year of virtual storage

As many IT departments have already learnt first-hand, adopting server virtualisation has a dramatic impact on storage infrastructure. For one thing, many of the systems management manoeuvres that server virtualisation allows, such as server migration, demand storage resources that are shared between servers.

According to IT market watcher IDC, the adoption of server virtualisation has triggered a spike in sales of shared storage technologies. In the third quarter of 2010, the market for network-attached storage (NAS) grew by 50% year-on-year, and for iSCSI storage area network (SAN) technology by 42%, a recent IDC report found.

But optimising storage infrastructure to work well with a virtualised server environment is not simply a matter of adopting shared resources. This is because virtualisation allows servers to be provisioned, scaled and moved around in new and complex ways, introducing patterns of data consumption for which traditional storage systems were simply not designed.

According to Ratmir Timashev, CEO of virtual systems management tools vendor Veeam, this situation is further complicated by a division that exists in many enterprise IT organisations between the experts in virtualisation and the experts in storage.

“For the storage guys, physical and virtual systems are the same because from the outside they look like they are the same,” he argues. This attitude, he claims, prevents organisations from taking advantage of the functional benefits of server virtualisation and can even result in virtual systems working less effectively than unvirtualised environments.

The widespread use of server virtualisation has therefore provided extra impetus for storage virtualisation adoption. While storage virtualisation technologies – and there are many varieties – offer usage and efficiency benefits on their own, they are especially useful when used in conjunction with server virtualisation.

Deduplication, for example, which removes repeated instances of the same data, is particularly powerful in a virtual server environment where multiple copies of the same disk image are created. Other examples are thin provisioning, which ‘tricks’ a system into thinking it has the storage capacity allocation that it wants while only giving it what it needs, and storage ‘tiering’, whereby data is moved to appropriate disk media according to how often it is used.

Each of these technologies has been on the horizon for some years, but as the latest Effective IT Survey found, 2010 was the year storage virtualisation came into its own.

With 45.4% of respondents having used it, storage virtualisation was the ninth most adopted IT strategy, the survey found, up from 13th last year. More impressively, it was ranked fifth most effective, up from 11th last year. Half of the respondents that had adopted storage virtualisation said that it had delivered the expected return on investment, with a further 37.8% reporting that it was too soon to tell.

That the technology was in high demand could also be seen in the lengths to which mainstream IT providers went to get their hands on it. Like data deduplication pioneer Data Domain in 2009, ‘virtual era’ storage vendor 3PAR was the subject of a fierce bidding war, in this case between Dell and Hewlett-Packard. HP was the eventual victor, agreeing in September 2010 to pay $2.4 billion for the company, more than twice Dell’s initial offer and a 160% premium on 3PAR’s market capitalisation. Thwarted, Dell went on to acquire Compellent, whose storage systems support thin provisioning and storage tiering. Storage market leader EMC, meanwhile, paid $2.2 billion for Isilon, which describes its scalable storage technology as “scale-out NAS”.

In practice

There was no shortage of end-user organisations willing to share their experiences. The London Borough of Hillingdon, for example, adopted storage technology from Compellent shortly after virtualising its server farm in 2006. Roger Bearpark, head of ICT at the council, told Information Age in November 2010 that storage tiering has helped it to improve database input/output speeds 13 times over, while thin provisioning has allowed it to cut storage costs despite growing data volumes.

Hedge fund Thames River Capital, meanwhile, chose 3PAR to support its storage area network, in part due to its integration with VMware’s virtualisation platform. The company says it has achieved a 40% improvement in the performance of virtual machines as a result. Because it can use individual servers more efficiently, it has been able to reduce the number of physical servers it needs by 60%, it claims.

But these are flagship case studies for the vendors, and other companies have found that storage virtualisation technologies can introduce complexities of their own.

One example is Baron Funds, a New York-based investment company. When the firm experienced a drastic deterioration in the performance of its email and database systems, both of which were based on a virtualised server platform, it eventually traced the issue to a tiered storage system.

Usually, the storage tiering system would allocate the highest-performing storage media to the email and database servers, as they are the most frequently accessed. But in this instance they had been switched to cheaper disks, slowing down performance.

The problem, Baron Funds’ director of network technology told Information Age in July 2010, was that an employee working at the weekend had been repeatedly searching the entire network infrastructure for a particular file. The search engine therefore accessed all the available files over and over again until the email and database systems were no longer the most frequently used. The storage tiering system duly demoted them to cheaper disks.
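The failure mode Baron Funds hit can be reproduced with a minimal sketch of frequency-based tiering. This is a hypothetical model, not the actual product's logic: the engine ranks volumes by recent access count and keeps only the top-ranked ones on fast disks, so a sustained weekend scan of rarely used files can outrank the email and database volumes and push them to slower media.

```python
from collections import Counter

class TieringEngine:
    """Hypothetical frequency-based tiering: top N volumes go to fast disk."""

    def __init__(self, fast_slots):
        self.fast_slots = fast_slots
        self.accesses = Counter()   # volume name -> recent access count

    def record_access(self, volume):
        self.accesses[volume] += 1

    def rebalance(self):
        # Promote the most frequently accessed volumes; demote the rest.
        ranked = [v for v, _ in self.accesses.most_common()]
        fast = set(ranked[:self.fast_slots])
        return {v: ("fast" if v in fast else "slow") for v in ranked}

engine = TieringEngine(fast_slots=2)

# Normal workload: email and database dominate, so they stay on fast disk.
for _ in range(100):
    engine.record_access("email")
    engine.record_access("database")
engine.record_access("archive")
print(engine.rebalance()["email"])   # -> fast

# A weekend-long search repeatedly touches every archive file, outranking
# the normally hot systems, so the engine demotes them to cheap disks.
for _ in range(500):
    engine.record_access("archive")
    engine.record_access("file-share")
print(engine.rebalance()["email"])   # -> slow
```

The engine behaves exactly as designed in both cases; the problem is that access frequency is a proxy for importance, and an anomalous workload can break that proxy.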

It was a simple enough issue to resolve, but the diagnostic tools from both the storage vendor and virtualisation software supplier failed to identify it.

Baron Funds’ example provides a glimpse of the complexity that storage virtualisation can introduce to the IT infrastructure, and the problems that can arise as a result. Given that server virtualisation is already making life more complicated, this may be an unwelcome development for many IT departments.

So, it seems, while the complexity that arises from introducing server virtualisation can in part be relieved by storage virtualisation, in practice that technology can bring its own complexity too.

Pete Swabey


Pete was Editor of Information Age and head of technology research for Vitesse Media plc from 2005 to 2013, before moving on to be Senior Editor and then Editorial Director at The Economist Intelligence...
