Storage managers have a balancing act to perform in 2008: they need to continue to satisfy rapacious demand for storage capacity, while meeting intense pressures to make the use of storage resources much more efficient. Indeed, with the average expected uplift in storage budgets sitting at just 5% for 2008, the only way managers are going to be able to fulfil demand for storage is by making better use of their existing environments.
According to IT industry analyst group IDC, requirements for storage capacity are growing at a rate of nearly 60%, a pace being driven by applications such as email, data warehousing, voice over IP and RFID.
But, says IDC, during 2008 the industry will see significant shifts in the way data is stored, managed and protected: “The overarching theme of storage efficiency will intensify throughout 2008.”
That quest for greater efficiency is not just about cutting costs. Storage devices are major consumers of energy – much of it wasted. The growth in storage capacity has also put pressure on space within the data centre at a time when many organisations are struggling to source adequate facilities. Lastly, regulatory compliance issues are forcing organisations to retain more data for longer periods, and to ensure it can be retrieved rapidly and reliably.
The upshot of all these factors is a focus on several innovations that promise greater efficiency: virtualisation, thin provisioning, single-instance storage/de-duplication and low-cost disk systems.
Virtualisation – which provides a logical view of stored data rather than a view of the way it is actually stored – is already a key part of many storage strategies. And its benefits are not in doubt.
This year’s Effective IT Survey highlights widespread enthusiasm for virtualisation. Almost a third of respondents said they had adopted storage virtualisation and another 23% said they planned to do so this year.
Take-up of virtualisation had a clearly positive impact. Of those who adopted the technology, 48% viewed it as effective and 37% declared it very effective.
One of the chief benefits organisations are seeing is in the reduced burden on storage management. “Storage virtualisation equates to further reductions in the storage management gap,” writes Fred Moore, head of storage consultancy Horison Information Strategies, in his book Storage: New Horizons. “With virtualisation techniques, the user does not need to know how storage devices are configured, where they are located or their capacity limits. By separating logical and physical characteristics, physical devices can be added, upgraded or replaced without disrupting the application or server availability.”
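Moore’s point about separating logical and physical characteristics can be sketched in a few lines of Python. This is only an illustration of the idea, not any vendor’s API: applications address a stable logical volume, and a mapping layer decides which physical device actually holds each block, so a device can be swapped out without the application noticing.

```python
# Minimal sketch of the virtualisation idea: a logical volume maps
# logical block numbers to (device, physical block) pairs.
# All class and device names are illustrative.

class PhysicalDevice:
    def __init__(self, name):
        self.name = name
        self.blocks = {}  # physical block number -> data

class LogicalVolume:
    """Presents a stable logical address space over changeable devices."""

    def __init__(self):
        self.mapping = {}  # logical block -> (device, physical block)

    def write(self, lblock, data, device, pblock):
        device.blocks[pblock] = data
        self.mapping[lblock] = (device, pblock)

    def read(self, lblock):
        device, pblock = self.mapping[lblock]
        return device.blocks[pblock]

    def migrate(self, old_device, new_device):
        """Replace a physical device without changing logical addresses."""
        for lblock, (dev, pblock) in list(self.mapping.items()):
            if dev is old_device:
                new_device.blocks[pblock] = dev.blocks.pop(pblock)
                self.mapping[lblock] = (new_device, pblock)

# The application keeps reading logical block 0 before and after
# the underlying array is replaced.
vol = LogicalVolume()
old, new = PhysicalDevice("array-A"), PhysicalDevice("array-B")
vol.write(0, b"payload", old, 7)
vol.migrate(old, new)
assert vol.read(0) == b"payload"
```

The application never sees the migration: only the mapping layer changes, which is exactly why devices can be “added, upgraded or replaced without disrupting the application”.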
Virtualisation is helping organisations to make more efficient use of their storage resources. But it is by no means the only mechanism.
Many businesses have struggled when they need to scale up their storage capacity, says Andrew Manners, UK head of storage at systems giant Hewlett-Packard. They have also struggled to get their utilisation levels for many disk systems above 50%.
Historically, storage has been assigned based on a theoretical assumption about the maximum capacity a given application might need – physical storage is allocated to a logical volume when the volume is created, which in effect means businesses have been ‘over-provisioning upfront’.
One key solution is thin provisioning, in which a volume's full logical capacity is presented to the application upfront while physical space is consumed only as data is actually written. As networked storage company EqualLogic outlines: “Thin provisioning is a forward planning tool for storage allocation in which all the storage an application will need is allocated upfront, eliminating the trauma of expanding available storage in systems that do not support online expansion. Because the administrator initially provisions the application with all the storage it will need, repeated data growth operations are avoided.”
A key result is improved utilisation of physical storage resources, says EqualLogic: “To avoid the inefficiency of over-provisioning, thin provisioning allows the administrator to limit the actual physical storage resource allocation to what is needed now, and enables the automatic addition of storage resources online as the application grows.”
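The mechanism behind those utilisation gains can be shown in a short sketch (hypothetical class names, not EqualLogic’s API): the volume advertises its full logical capacity to the application, but physical blocks are only consumed when they are first written.

```python
class ThinVolume:
    """Reports full logical capacity upfront, but consumes physical
    space only for blocks that have actually been written."""

    def __init__(self, logical_capacity_blocks):
        self.logical_capacity = logical_capacity_blocks
        self.allocated = {}  # logical block -> data (physical allocation)

    def write(self, block, data):
        if block >= self.logical_capacity:
            raise IndexError("beyond advertised capacity")
        self.allocated[block] = data  # physical space consumed here

    def physical_usage(self):
        return len(self.allocated)

vol = ThinVolume(1_000_000)  # the application sees a million blocks
vol.write(0, b"a")
vol.write(42, b"b")
print(vol.physical_usage())  # only 2 blocks physically consumed
```

A real array also has to grow the physical pool online as writes accumulate; the point here is simply that advertised capacity and consumed capacity are decoupled.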
Early customer enthusiasm for thin provisioning is justified, notes Stanley Zaffos, an analyst with IT advisory group Gartner: its impact, he says, will be “profound”. But because it depends on virtualisation of the back-end disk storage, the technology “has yet to be retrofitted into older storage systems”.
The savings of thin provisioning certainly look impressive. According to Zaffos’s calculations, as well as providing a cost-effective way to scale storage resources, thin provisioning should improve device utilisation rates, reducing the number of physical disks required to support a given workload.
However, much of the need for such scalability stems from the fact that organisations are holding multiple copies of the same data: for example, different users often archive the same received emails, attachments, PowerPoints and so on.
That situation has been evident for several years, but companies now have the tools to do something about it and improve their storage efficiency. According to a recent survey of 660 storage decision-makers by specialist website SearchStorage.com, interest in ‘single-instance storage’ is rising fast. A fifth of respondents said they were planning to deploy de-duplication technology by the end of 2008, compared with just 12% in the early part of 2007.
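The core technique behind single-instance storage can be sketched with content hashing (a simplified illustration, not any product’s implementation): each chunk of data is keyed by its hash, so identical chunks – such as the same attachment saved by several users – are stored only once.

```python
import hashlib

class SingleInstanceStore:
    """Stores each unique chunk once, keyed by its content hash."""

    def __init__(self):
        self.chunks = {}  # sha256 digest -> chunk data (stored once)
        self.files = {}   # filename -> ordered list of digests

    def put(self, name, data, chunk_size=4):
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicate chunks share one copy
            digests.append(digest)
        self.files[name] = digests

    def get(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])

store = SingleInstanceStore()
store.put("alice/report.ppt", b"ABCDABCD")
store.put("bob/report.ppt", b"ABCDABCD")  # same attachment, second user
print(len(store.chunks))  # 1 unique chunk stored, not four copies
```

Production de-duplication systems use variable-sized chunking and much larger chunks, but the saving comes from the same idea: files become lists of references into a pool of unique chunks.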
Storage managers are also looking for more cost-effective targets for that data. They want to assign a lower priority to certain types of data and move it to disk systems based on iSCSI technology. Such environments employ low-cost SATA disk drives that, while not as reliable as the specialist drives used in high-end arrays, are adequate for data that is not mission critical. They also use the Internet Protocol for storage networking, a much cheaper alternative to the high-end Fibre Channel standard in terms of both upfront and operational costs.
Companies are putting such disk systems in departments and subsidiary offices, and using them for email storage, backup and non-critical applications.
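A tiering policy of the kind described above might, in the simplest case, look something like the following sketch (the tier names and the 30-day threshold are illustrative assumptions, not any vendor’s defaults):

```python
from datetime import datetime, timedelta

def choose_tier(last_accessed, mission_critical, now=None):
    """Toy placement policy: mission-critical or recently accessed data
    stays on the high-end Fibre Channel tier; everything else moves to
    the cheaper SATA/iSCSI tier. Threshold is an illustrative assumption."""
    now = now or datetime.now()
    if mission_critical or (now - last_accessed) < timedelta(days=30):
        return "fibre_channel"
    return "sata_iscsi"

now = datetime(2008, 6, 1)
print(choose_tier(now - timedelta(days=90), False, now))  # sata_iscsi
print(choose_tier(now - timedelta(days=2), False, now))   # fibre_channel
print(choose_tier(now - timedelta(days=90), True, now))   # fibre_channel
```

Real information lifecycle management tools weigh many more signals (data type, compliance requirements, ownership), but the economics are the same: only data that earns its keep stays on the expensive tier.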
IDC thinks there is even greater opportunity for iSCSI. Indeed, it suggests that server virtualisation platforms such as VMware will emerge as “the killer application for iSCSI”.
There is further help on the way for companies trying to meet the twin goals of capacity and efficiency. IDC predicts that in 2008, online storage services (storage-as-a-service) for activities such as online backup, archiving and replication will be “accepted as a viable option”.
Another newly emerging option will be solid state disks. “These will become more viable for mainstream storage solutions as a result of declining price points,” the analyst group says. And with no spinning platters or other moving parts, there is a clear opportunity for more efficient power usage.
As such initiatives show, storage efficiency needs to be attacked on multiple fronts – there is as much to be gained from educating users about storage efficiency as from employing the latest technologies.
But when the typical large enterprise is preparing to deal with an extra 40 terabytes of data in 2008, inaction is not an option.