The virtual impact on storage

Pick any business or government organisation at random and there’s a good chance that they are saving money by using server virtualisation. However, as is now becoming clear, there is also a chance that they are blowing those savings on extra storage capacity as a result.

In a January 2010 report from US-based investment house William Blair & Company, analysts Jason Ader and Dmitry Netis estimated that, for every $1 spent on server virtualisation, customers spend between $2 and $3 on storage. No wonder, then, that they described server virtualisation as “the key technology development that will shape the future of the storage industry”.

That is good news for storage vendors, but bad news for cost-conscious customers. In some cases, the incremental spend on storage is understandable: in order to unlock the full potential of server virtualisation technology, and in particular the ability to move virtual machines (VMs) between physical hosts, networked storage is required.

For companies just starting out on virtualisation, that means implementing a storage area network (SAN). For many others, it demands an upgrade of the existing SAN environment. But that is not the end of the story. Even those organisations that have invested significantly in their storage infrastructures in recent years are running into problems with server virtualisation.

Trouble ahead

First, they are encountering big issues with storage utilisation, according to Adam Stringer, an expert in storage systems at management consultancy PA Consulting. “There’s a great deal of wasted capacity in many virtualised server estates,” he says.

In part, that is because virtualisation allows VMs – and the disk storage they require – to be deployed rapidly, leading to the situation commonly referred to as ‘virtual sprawl’. “A VM that is no longer needed but remains on a host system in the server estate has storage allocated to it that could otherwise be deployed elsewhere,” he points out.

But the situation is further exacerbated by the fact that most VMs are created from a template, a so-called ‘gold image’, in order to speed up deployment. When creating these templates, most IT teams set the default size for the virtual machine disk image (VMDK) at a relatively high number – on average, between 50 and 100 gigabytes – in order to accommodate the operating system, application and data storage required.

The problem here is that the space required by different VMs in fact varies greatly. By setting the default VMDK high enough for an organisation’s more disk-hungry applications, systems administrators may well be lavishly over-provisioning others.
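
As a rough illustration of how that adds up, the short Python sketch below compares the capacity handed out by a fixed gold-image default with what each workload actually consumes. All of the figures are hypothetical and purely illustrative; none come from the companies or analysts quoted in this article.

```python
# Illustrative only: hypothetical VM disk figures, not data from the article.
# Compares the capacity allocated by a fixed 'gold image' VMDK default with
# what each workload actually consumes, to show how much sits idle.

DEFAULT_VMDK_GB = 80  # a default in the 50-100 GB range described above

# Hypothetical per-VM actual usage, in GB
actual_usage_gb = {
    "web-frontend": 18,
    "build-agent": 35,
    "file-transfer": 72,
    "db-reporting": 65,
    "legacy-app": 12,
}

allocated = DEFAULT_VMDK_GB * len(actual_usage_gb)
used = sum(actual_usage_gb.values())
wasted = allocated - used

print(f"Allocated: {allocated} GB")
print(f"Actually used: {used} GB")
print(f"Idle but reserved: {wasted} GB ({wasted / allocated:.0%} of the allocation)")
```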

According to Keith Inight, technology strategy director for global managed operations at IT services company Atos Origin, this over-provisioning is often seen as “the better of two evils, since most applications will crash ungracefully once they run out of available disk space”.

The second problem is performance. With a single operating system running on a single machine, input/output (I/O) is, in most situations, smooth and relatively sequential. But as more VMs are added to a physical host and start competing to send their I/O streams to the hypervisor for processing, those streams can no longer flow smoothly and performance drops, sometimes catastrophically.
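
The sketch below illustrates that effect, sometimes called the ‘I/O blender’, in a few lines of Python. The per-VM streams and the round-robin scheduler are deliberate simplifications, not a model of any particular hypervisor.

```python
# A minimal sketch of the 'I/O blender' effect. Each VM issues a perfectly
# sequential stream of block addresses within its own region of the datastore;
# the hypervisor interleaves them, and the merged stream the storage array
# sees is no longer sequential. All numbers are illustrative.

from itertools import zip_longest

def sequential_stream(start_block: int, length: int) -> list[int]:
    """A VM reading its own blocks in order."""
    return list(range(start_block, start_block + length))

def interleave(*streams: list[int]) -> list[int]:
    """Round-robin scheduling of competing I/O streams by the hypervisor."""
    return [blk for group in zip_longest(*streams) for blk in group if blk is not None]

def sequential_fraction(stream: list[int]) -> float:
    """Share of requests that immediately follow the previous block address."""
    hits = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
    return hits / (len(stream) - 1)

vm_streams = [sequential_stream(start, 100) for start in (0, 10_000, 20_000, 30_000)]

for s in vm_streams:
    assert sequential_fraction(s) == 1.0  # each VM on its own is fully sequential

blended = interleave(*vm_streams)
print(f"Sequential fraction seen by the array: {sequential_fraction(blended):.2f}")
```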

For that reason, Jig Patel, virtual infrastructure manager at Arup, is very choosy about which applications he is prepared to place on the engineering consultancy’s virtual infrastructure. “It’s been a rocky year and a half, and things are just calming down now regarding our virtual infrastructure,” he says. “The storage performance requirements of virtual machines are quite high, and on top of that, you also need to consider disaster recovery and back-up needs.”

Right now, Arup has around 400 VMs running “quite happily together” on its virtual infrastructure, but Patel has decided against adding a high-performance, SQL Server-based system with high disk I/O to that mix. “Although the ESX servers [virtualised using VMware software] could cope with the CPU and RAM requirements, the back-end storage would also need to be able to cope with the I/O requirements.”

Just ten VMs running SQL could make the other 400 VMs suffer, he says, so he has commissioned an entirely new virtual environment with NetApp storage specifically for the SQL environment.   

The third problem with providing virtual machines with sufficient storage relates to storage management. Because storage allocation is managed through the virtualization layer, typically the domain of the server management team, their colleagues on the storage management team often lose sight of what is happening within the environment, says Inight of Atos Origin.

“Most IT teams still operate in silos, but virtualisation muddies the demarcations between servers, storage and networking, and that can introduce difficulties, especially when something goes wrong,” he says. 

It can also make storage management processes, such as performing backups, extremely difficult to manage. As Andrew Reichman, an analyst at Forrester Research, pointed out in a 2009 report, “Server virtualisation can create virtual sprawl, and backing up all these new images in a timely manner can be difficult. Plus, as IT increases the ratio of VMs to physical hosts, completing backups within defined maintenance windows is difficult.”

Moreover, many IT directors point out that in order to back up a VM that may be moved around a virtualised environment on a regular basis, you first have to work out where in that environment it is currently located.
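
The lookup step they describe can be pictured with the minimal Python sketch below; the inventory dictionary and the VM and host names are hypothetical stand-ins for whatever the virtualisation layer actually exposes.

```python
# A minimal sketch of the lookup step described above: before a backup job
# runs, resolve where a mobile VM currently lives. The dictionary stands in
# for a live inventory; all names and values are hypothetical.

current_location = {
    # vm_name: (physical_host, datastore)
    "finance-db-02": ("esx-host-07", "sanA-lun2"),
    "mail-relay-01": ("esx-host-03", "sanB-lun5"),
}

def backup(vm_name: str) -> None:
    try:
        host, datastore = current_location[vm_name]
    except KeyError:
        raise RuntimeError(f"{vm_name} not found in inventory; cannot back it up")
    print(f"backing up {vm_name} from {datastore} (currently on {host})")

backup("finance-db-02")
```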

These three problems are leaving many IT teams feeling forced to choose between two strategies, says Stringer of PA Consulting.

The first is to be less aggressive in adopting virtualisation, by leaving more applications (especially mission-critical ones) running on their own dedicated, physical servers and curbing the creation of new VMs.

The second option is to continue to over-provision storage, which seems to be the preferred choice of many organisations, he says.

Industry responses – thin provisioning and SRM

A third option, namely thin provisioning, has been touted by suppliers since at least 2003, but uptake has so far been muted.

Initially pioneered by smaller storage vendors, it is now receiving more attention, thanks to Hewlett-Packard and Dell’s protracted bidding war for one of those pioneers, 3PAR. At the same time, storage vendors such as IBM and EMC have been developing their own thin provisioning technologies, and virtualisation leader VMware has built these kinds of capabilities into its software. 

In essence, thin provisioning enables a storage array to present an application with a full-sized volume while committing physical capacity only as it is actually needed. When utilisation of the allocated storage approaches a predetermined threshold, the array automatically expands the underlying volume on a just-enough, just-in-time basis, without involving a storage administrator. In this way, the technology boosts storage utilisation by eliminating the need to install physical disk capacity that goes unused.
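
The behaviour is easier to see in miniature. The Python sketch below models a thin-provisioned volume that advertises its full logical size but adds physical capacity in fixed chunks only when usage crosses a threshold; the chunk size, threshold and growth figures are arbitrary, and this is an illustration of the general idea rather than any vendor’s implementation.

```python
# A simplified model of thin provisioning, not any vendor's implementation.
# The volume advertises its full logical size but adds physical capacity in
# chunks only when writes push usage past a threshold.

class ThinVolume:
    def __init__(self, logical_gb: int, chunk_gb: int = 10, threshold: float = 0.8):
        self.logical_gb = logical_gb   # what the application sees
        self.physical_gb = chunk_gb    # what is actually backed by disk
        self.used_gb = 0
        self.chunk_gb = chunk_gb
        self.threshold = threshold     # expand once usage passes this fraction

    def write(self, gb: int) -> None:
        if self.used_gb + gb > self.logical_gb:
            raise IOError("volume full")  # the application's hard limit
        self.used_gb += gb
        # Just-in-time expansion, with no administrator involved
        while self.used_gb > self.physical_gb * self.threshold:
            self.physical_gb = min(self.physical_gb + self.chunk_gb, self.logical_gb)

vol = ThinVolume(logical_gb=100)
for day in range(5):
    vol.write(7)  # hypothetical daily data growth
    print(f"day {day}: used {vol.used_gb} GB, physically allocated {vol.physical_gb} GB "
          f"of {vol.logical_gb} GB advertised")
```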

The technology has real appeal for some IT directors. Gary Hird, technology strategy manager at retail company John Lewis Partnership, says, “We’ve boosted our storage utilisation rates significantly using thin provisioning, because we no longer end up with chunks of capacity that are allocated but unused. It’s definitely something we’re looking to take further.”

If thin provisioning can help companies tackle utilisation rates, a new breed of storage resource management (SRM) tools may be their best hope for tackling the challenges of performance and storage management, according to Gartner analyst Valdis Filks: “There are many automated storage management tools that can interrogate the virtualised environment and provide storage reporting. But these tools are not commonly implemented because IT departments tend to report storage manually, using in-house developed spreadsheets, processes and programs.”

That needs to change, he claims, because companies with formal storage reporting, monitoring and capacity planning processes in place will be able to reduce storage costs in virtualised environments by 20% to 30%. “As the direct links between an application and a storage device are often removed by virtualisation, [it reduces] the visibility and accountability for storage usage in the IT department,” he says, a situation that calls for more sophisticated reporting tools.
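
The kind of automated report Filks has in mind could be as simple as the Python sketch below, which groups allocated and used capacity by datastore and flags long-idle VMs as reclamation candidates. The inventory records are invented for illustration; a real SRM tool would pull this data from the virtualisation layer and the arrays through their own interfaces.

```python
# A minimal sketch of an automated utilisation report of the kind discussed
# above, in place of manually maintained spreadsheets. All inventory records
# are hypothetical.

from dataclasses import dataclass
from datetime import date

@dataclass
class VmRecord:
    name: str
    datastore: str
    allocated_gb: int
    used_gb: int
    last_powered_on: date

inventory = [
    VmRecord("erp-app-01",  "sanA-lun3", 100, 61, date(2010, 9, 1)),
    VmRecord("test-old-07", "sanA-lun3", 100,  9, date(2010, 1, 15)),  # sprawl candidate
    VmRecord("intranet-02", "sanB-lun1",  50, 32, date(2010, 9, 3)),
]

def report(vms, today=date(2010, 9, 6), idle_days=90):
    by_store = {}
    for vm in vms:
        by_store.setdefault(vm.datastore, []).append(vm)
    for store, records in sorted(by_store.items()):
        allocated = sum(r.allocated_gb for r in records)
        used = sum(r.used_gb for r in records)
        print(f"{store}: {used}/{allocated} GB used ({used / allocated:.0%})")
        for r in records:
            if (today - r.last_powered_on).days > idle_days:
                print(f"  reclaim candidate: {r.name} ({r.allocated_gb} GB allocated)")

report(inventory)
```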

Sample SRM vendor products include EMC Ionix, HP Storage Essentials, IBM TPC, NetApp SANscreen and Symantec Command Central. Additionally, some storage array vendors are increasingly including tools for their own environments. These include EMC Celerra Manager, NetApp File Storage Monitor and Sun Storage 7000 Analytics.

New technologies, tools and approaches all come with their own costs, of course. And at a time when disk storage has never been more economical, many companies may for now decide to stick with the tried-and-tested strategy of over-provisioning and limiting virtualisation to tactical ‘quick fixes’, applying it only to applications with a relatively small and predictable storage footprint.

But if they do choose virtualisation stall over virtualisation sprawl, they should be prepared for their server consolidation efforts to hit a brick wall at some point.
