Storage inflection point
IT decision-makers have reached a breaking point over storage.

The volume of data that users need stored and managed grows unabated. In 2002, says investment bank Lehman Brothers, users will store 56% more data than in 2001.

Even though the physical cost of storing that data has dropped by more than half over the past four years, from 15 cents per megabyte to about 6 cents, storage has consumed an ever-larger proportion of a shrinking pie as IT spending has come under pressure.

Against that backdrop some hard facts have emerged about the storage resources that are in place – facts that make finance directors’ blood boil. On average, 40% to 50% of the capacity of deployed disk systems lies unused. In most cases, data is available only to servers directly attached to the device on which it sits. Organisations are locked into storage architectures because certain vendors have resisted standards initiatives. Lastly, for every dollar or pound spent on storage hardware, an organisation spends five more administering the data stored on that device.
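Those ratios compound. As a back-of-envelope sketch using the article’s own figures (the 45% unused rate below is an assumed midpoint of the 40% to 50% range quoted above, purely for illustration):

```python
# Illustrative back-of-envelope cost model based on the figures in the article.
hardware_cost_per_mb = 0.06   # ~6 cents per megabyte of disk (2002 price)
admin_multiplier = 5          # $5 of administration per $1 of hardware
unused_fraction = 0.45        # assumed midpoint of the 40-50% unused capacity

# All-in cost of each megabyte purchased: hardware plus administration.
total_cost_per_mb = hardware_cost_per_mb * (1 + admin_multiplier)

# Spread that cost over only the capacity actually used.
cost_per_used_mb = total_cost_per_mb / (1 - unused_fraction)

print(f"All-in cost per MB purchased: ${total_cost_per_mb:.2f}")
print(f"Effective cost per MB actually used: ${cost_per_used_mb:.2f}")
```

On these assumptions, a megabyte that nominally costs 6 cents ends up costing roughly 36 cents all-in, and around 65 cents per megabyte actually used – the kind of arithmetic that puts storage on the finance director’s agenda.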

There have been good reasons for such ‘over-provisioning’, of course. Aside from keeping up with the general growth in user data, organisations have bought storage resources to meet the peak requirements of specific applications as these are rolled out, creating multiple, discrete islands of storage. The upshot in many cases, according to Steve Murphy, CEO of Fujitsu Softek, is that “chief financial officers have taken the cheque book out of the hands of application [and storage] managers.”

The storage crisis has certainly spurred many organisations to take action. And it has triggered some radical changes in strategy among vendors.

But they have come to the same conclusion: to optimise utilisation, to make data broadly available from any point in the enterprise, and to cut the huge costs of storage administration through centralisation and automation, the focus has to move to the network.

While less than half of the world’s 500 largest companies have a centralised, networked storage architecture in place, the prediction is for a rapid move to a networked model. Says EMC CEO Joe Tucci: “We expect that more than two-thirds of the world’s storage configurations will be networked by 2005.”

Network now

Already, many organisations have invested heavily in storage networks – in storage area networks (SANs), which consolidate data onto a single, dedicated fibre channel (FC) network, and in network-attached storage (NAS), where devices are hooked into existing networks based on the Internet Protocol. According to Forrester Research, 38% of the world’s largest 3,500 companies have already completed or are in the process of rolling out a SAN in some area of their operations.

Market snapshot: the storage systems and software market

Market metrics: Investment bank Lehman Brothers estimates the storage market (products and services) was worth $56.4 billion in 2001, 2% down on the previous year. Gartner says organisations worldwide spent a total of $24 billion on disk storage systems (two-thirds on direct attached storage devices, the remainder on network-attached storage and storage area networks), while buying $6 billion of storage management software.

Key suppliers: Computer Associates, Dell, EMC, Fujitsu-Softek, Hewlett-Packard, Hitachi Data Systems, IBM, Network Appliance, Quantum, StorageTek, Sun Microsystems, Veritas

Product segmentation: Lehman Brothers divides the storage management market into five key areas: direct attached storage; network-attached storage; storage area networks; back-up and archiving; and storage management software.

Sector buzzwords: Bluefin, Capacity on demand, DAS, Fibre channel, JBOD, InfiniBand, iSCSI, NAS, RAID, SAN, SNIA, storage pool, storage over IP, storage service provider, virtualisation

This significant shift towards storage networks has largely been at the expense of traditional direct attached storage (DAS), where each device is directly connected to an individual server. According to Lehman Brothers, spending on DAS will fall from $18.8 billion in 2001 to $14.9 billion in 2004, while SAN sales will rise from $6.2 billion to $8.9 billion and NAS from $1.8 billion to $3.5 billion over the same period.

Organisations are already eulogising their return on investment from networked storage. South Yorkshire Police, for example, claims it will save £1.1 million (€1.7m) over five years by implementing a SAN across two data centres using midrange storage systems. The calculation is based mainly on the avoided cost of upgrading its previous DAS devices.

Many suppliers of storage technology are aware they need to follow that move towards storage networking. With consolidation and better utilisation through load balancing, organisations will need to buy fewer storage devices, at least for the next couple of years. That is why companies like EMC, IBM and Hitachi Data Systems (HDS) are shifting their emphasis to the network and the software that controls it.

However, for most organisations implementing a SAN is certainly not a trivial step. In order to manage their storage resources and data as a shared, multi-tiered environment, organisations need to establish a network backbone into which they can plug new and existing disk systems, tape drives and switches. The problem today: building a network from heterogeneous equipment is difficult, if not impossible, as most vendors’ products do not interoperate.

Customers in handcuffs

This lack of interoperability means that organisations have to manage different domains across a storage network separately. “Storage hardware vendors have got their customers in handcuffs,” says Fujitsu Softek’s Murphy. By this, he means that some hardware suppliers are in a position to dictate which storage devices will operate within a SAN and which storage software is needed to manage it.

Two distinct battle lines have emerged around the issue of interoperability. First, individual suppliers have got together to cross-license their devices’ application programming interfaces (APIs). Alongside that, there is an open-standards initiative.

Storage giants such as IBM, Hewlett-Packard and EMC are handpicking the suppliers they want to share APIs with. For example, EMC and HP agreed to share APIs in July 2002; that was followed a few weeks later by a similar partnership between IBM and HP. But talks between EMC and HDS over API sharing have resulted in lawsuits, not the interoperability that would allow organisations to use devices from both companies in the same network.

The Storage Networking Industry Association (SNIA) is trying to overcome these piecemeal efforts by developing an open standard. The idea is to enable organisations to use storage management software regardless of which vendor’s platform they run. Ironically, alongside their API-swapping efforts, most large storage suppliers are also working with SNIA to develop the Bluefin specification, which is designed to bring about this interoperability.

That underscores the perceived shift in the dynamics of storage: power will move from the device to the software that controls the network of devices. Recognising that, suppliers are investing heavily in software. EMC, for example, is sinking 70% of its $800 million research and development budget into software.

The bottom line is that organisations have to start planning for a new approach to enterprise storage. “With storage requirements growing in excess of 50% annually and budget cuts already kicking in, something has to give,” says Galen Schreck, an analyst at Forrester Research.

Ben Rossi