Storage economics

When the University of Stirling issued a tender for a new storage management infrastructure, it was surprised by the overwhelming number of responses. In almost every case, though, the cost of the proposed solutions was prohibitive.

Hands on: University of Stirling

As with many organisations, the University of Stirling’s IT infrastructure grew at a helter-skelter pace during the 1990s. But by the turn of the millennium, managing that infrastructure had become a round-the-clock headache.

Backups consumed 100 tapes every week, and a full tape back-up of all 40 of the university's servers often took longer than a weekend. Archiving during the day was also a problem, as it sapped the performance of the university's network.

Furthermore, the university's ageing Microsoft Windows NT-based file servers were not only running out of capacity, but falling over as often as once a month, making the data restore process a perpetual challenge. "Tracking which tape had been used for which back-up was becoming a nightmare," says Brian Bullen, Unix systems specialist at the university.

The university needed to radically rethink how to deal with this growing storage problem while also upgrading its network to gigabit speeds and implementing a server consolidation programme planned to run over six years.

Although the university's budget was modest, its requirements were not. Any proposed solution had to meet high thresholds of reliability and availability, allow the university to scale its storage capacity easily, and still not involve high capital and running costs. "The problem was that our budget wasn't up to some of the solutions," says Bullen.

After debating the relative merits of network attached storage (NAS) versus storage area network (SAN), Bullen’s team opted for fibre channel-based SAN equipment from Hewlett-Packard (HP) and its partners.

To save money, Bullen decided to eschew up-front consultancy, to go for the simplest possible configuration, and to install the system using internal skills. Some aspects proved more straightforward than anticipated. "Fibre channel is incredibly easy to set up, so we were able to do it ourselves. The SAN switch itself just worked out of the box," says Bullen.

Next, a new Ultrium-standard tape back-up system was plugged into the network to replace the many four-millimetre digital audio tape (DAT) drives previously used for backups. Finally, two HP storage arrays – an HP VA-7100, followed by a VA-7400 – were added, and users' NetBIOS names were mapped to the two arrays so that users would not even know that their storage had been switched.

But while the benefits are clear in terms of the improved reliability and manageability of the new storage environment, calculating the cost savings has not been so easy, partly because the SAN implementation was part of a wider project. One metric does stand out, though, says Bullen. The amount of time spent dealing with back-up and recovery has been cut in half.

“The problem was that our budget was not up to many of the solutions,” says Brian Bullen, Unix systems specialist at the university.

Like that of many organisations, the university's IT infrastructure had reached saturation point. Back-up windows were being smashed because of data overload, and the university's Microsoft Windows NT-based file servers were reaching a capacity that could only be topped up by purchasing more servers, thereby compounding the back-up problem.

Such challenges are common to most major organisations across the world. Quite simply, managing ever-increasing volumes of storage is eating large holes in IT budgets at a time when these are frozen or shrinking. As many are finding, solving the underlying problem requires some radical re-architecting — often by networking the storage capacity — a move that involves the kind of upfront investment that many chief financial officers baulk at.

Multipliers

For a start, today's storage cost profile is multi-dimensional. According to analysts at research group Gartner, organisations can spend several times the initial purchase cost of their storage devices on administration. Gartner calculates that for every $1 spent on mainframe storage, an organisation will spend $3 on management. For Unix-based storage, it suggests the multiple is seven times, and for Windows NT/2000, the management cost rises to fifteen times the purchase price. But what lies behind such high overheads and what can be done to dramatically reduce them?
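
Before unpicking those overheads, it is worth seeing what the ratios mean in cash terms. The sketch below is purely illustrative rather than Gartner's own model: it simply applies the quoted multiples to a hypothetical purchase price.

```python
# Illustrative sketch: applying the management-cost multiples quoted above
# to an initial storage purchase. The multipliers are the Gartner ratios
# cited in the text; the purchase amount is hypothetical.

MANAGEMENT_MULTIPLE = {
    "mainframe": 3,        # $3 of management for every $1 of storage
    "unix": 7,
    "windows_nt_2000": 15,
}

def lifetime_storage_cost(purchase_cost: float, platform: str) -> float:
    """Purchase cost plus the estimated management overhead for the platform."""
    return purchase_cost * (1 + MANAGEMENT_MULTIPLE[platform])

# A $100,000 array attached to Windows NT/2000 servers implies roughly
# $1.5m of management spend on top of the hardware itself.
print(lifetime_storage_cost(100_000.0, "windows_nt_2000"))  # 1600000.0
```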

At the heart of the problem is the traditional direct attached storage (DAS) model, wherein a server interacts with its dedicated storage resources, either within the same cabinet or directly connected.

As storage volumes have proliferated, four main drawbacks with DAS have emerged, says Steve Duplessie, a senior analyst at sector consultancy the Enterprise Storage Group. First is poor utilisation of storage: a server can only access the storage that is directly attached to it, so it cannot share spare capacity with any other server that is becoming overtaxed. Second is the problem of scaling up the available resources, and the downtime that inevitably accompanies adding new storage.

Third, says Duplessie, is reliability. “There is only one way to access the critical information on the storage device in a DAS world. If the server goes down, then there is no other means of accessing the data,” he says.

Finally, there is the question of economies of scale — in both staff and equipment. “Since talented storage personnel are scarce and budgets are tight, networked storage is the primary way to scale people. We estimate that a systems administrator could handle between five and ten times the amount of capacity under management in a networked environment compared to DAS,” says Duplessie.

Many DAS environments have grown large by accident, for example through the development of an application that has proved more popular than first anticipated. "Organisations buy a system with a server, then find that the application and the database have grown bigger than the [internal] disk that came with the server. What do they do then? They buy a new server and they have the same problem six months later," says Mark Ellery, business development manager at Hitachi Data Systems (HDS).

Yet if IT managers are to unlock the funds to invest in the new networked storage infrastructures that could help solve some of these problems, they will have to do more than simply quote high cost-of-management figures to a sceptical finance director.

In the current climate, and without waiting for the point to be proved by a series of severe outages, IT managers need to formulate cast-iron return on investment (ROI) and total cost of ownership (TCO) calculations to underscore the urgency of the situation.

Audit and calculate

Before calling in vendors, says Josh Krischer, an enterprise storage analyst at research group Gartner, it is essential that storage decision-makers know where they stand. They need to conduct an internal audit that defines in detail the existing infrastructure, its associated problems and what is causing them.

The difficulty with such studies is that many of the costs associated with storage can be hard to quantify. It is easy to quote the list price of a disk array, for example, but trickier to cost the man-hours needed to manage it and the data it stores.

Furthermore, Duplessie suggests that some of the storage problems that organisations are grappling with may have been generated in-house — either by ad hoc solutions implemented to solve short-term requirements or by incompetence.

The next step is to define the underlying goal. “Is it a reduction in overall storage costs? Increased availability? Better utilisation? Users have to establish the target,” says Krischer.

That requires a set of metrics — covering hardware, maintenance, administration, energy costs, floor space costs, training, co-location of data, and so on — that illustrate how well the storage infrastructure is holding up.
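
One way to make those metrics concrete is to roll them into a single annual figure and a cost per terabyte that can be tracked over time. The sketch below is a hypothetical illustration: the categories follow the list above, but every number is a placeholder for the figures gathered during the audit.

```python
# Hypothetical annual storage TCO tally using the metric categories listed
# above. All figures are placeholders to be replaced with audit data.

annual_costs = {
    "hardware_depreciation": 80_000,
    "maintenance_contracts": 25_000,
    "administration_staff": 120_000,
    "energy": 10_000,
    "floor_space": 15_000,
    "training": 8_000,
    "data_co_location": 12_000,
}

capacity_tb = 20        # usable capacity under management (assumed)
utilisation = 0.30      # fraction of that capacity actually used (assumed)

total = sum(annual_costs.values())
print(f"Total annual cost: ${total:,}")
print(f"Cost per usable TB: ${total / capacity_tb:,.0f}")
print(f"Cost per TB actually used: ${total / (capacity_tb * utilisation):,.0f}")
```

Tracking the cost per terabyte actually used, rather than per terabyte bought, is what exposes the poor utilisation discussed later in this article.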

SAN costs and savings

  • Maintenance. In many cases, building a storage area network – based on either fibre channel or iSCSI – will go hand-in-hand with a programme of server consolidation, cutting the number of servers that have to be maintained.

  • Licence fees. As a result of concurrent server consolidation, organisations are often able to slash their licensing costs because they are running packaged software over fewer servers.

  • Management. By centralising storage in just a few locations, organisations can cut the amount of time and money spent on management.

  • Implementation. A storage area network necessitates the installation of a separate network to connect servers and storage devices. The more servers in more locations, the more expensive the roll-out will be.

  • Vendor lock-in. Because of the proprietary approach of many vendors, early adopters of storage area networks have found themselves locked in to buying storage arrays from a single supplier – and some vendors have exploited that lock-in to overcharge, say analysts.

"They should be able to capture some sort of disk utilisation or even tape utilisation statistics, such as hours of use in a 24 hour cycle or the amount of gigabytes backed up over a certain period of time," says Steve O'Brian, senior product manager at storage area networking device vendor McData.

To aid that process and ensure nothing has been left out, organisations can use some of the online ROI calculators offered by vendors, although they should take the resulting figures with a pinch of salt, advises Duplessie.

SANs arise

But while assessing the costs of direct attached storage may be relatively straightforward, the ROI and TCO calculations for networked storage architectures are far from simple.

Currently, most mid-sized and large organisations have a hybrid topology of direct attached storage, network attached storage and, in some cases, storage area networks, depending on the applications end-users are running and the legacy of their storage infrastructure.

While network attached storage (NAS) is regarded as an easy and cheap way to augment storage capacity, especially for file serving, the real means to the end rests with storage area networks (SANs). By building dedicated backbone networks that treat all storage resources as a single pool of capacity, organisations can expect higher utilisation, lower-cost management and greater flexibility, scalability and availability – at least, that is the claim of vendors and some analysts.

One of the key benefits is higher utilisation of storage assets, boosting the figure from under 30% in a predominantly DAS environment to more than 70% in a networked one. "The essential aspect is that if a server needs storage, you can allocate from the reserve. You don't have to keep a reserve for each one of the servers," says Krischer.
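
The arithmetic behind that utilisation shift is simple to illustrate. The sketch below is hypothetical; the per-server figures are invented, chosen only to reproduce the sort of sub-30% to 70%-plus swing described above.

```python
# Hypothetical illustration of the shared-reserve argument quoted above.
# With DAS, every server carries its own spare capacity; with a pooled SAN,
# one shared reserve covers growth across all servers. All figures invented.

servers = 10
used_per_server_tb = 1.0           # data actually stored per server
das_headroom_per_server_tb = 2.5   # spare capacity bought alongside each server
san_shared_reserve_tb = 4.0        # single pooled reserve for the whole estate

used_total = servers * used_per_server_tb
das_purchased = servers * (used_per_server_tb + das_headroom_per_server_tb)
san_purchased = used_total + san_shared_reserve_tb

print(f"DAS: {das_purchased:.0f} TB bought, {used_total / das_purchased:.0%} utilised")
print(f"SAN: {san_purchased:.0f} TB bought, {used_total / san_purchased:.0%} utilised")
```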

SANs are typically based on a fibre channel network or, alternatively, on an Internet Protocol-based iSCSI (Internet Small Computer System Interface) network. The main difference between the two is that while fibre channel offers higher performance at a relatively high cost, iSCSI or IP-SANs are cheaper but slower and still evolving technologically. In time, analysts expect IP-SANs to drive down costs and to improve standardisation and interoperability – a sore point with many users.

SAN target applications

  • Database systems

  • Data warehousing

  • Graphics

  • Video editing

  • Post production environments

Source: Procom Technology

At present, there are vested interests among storage vendors that resist standardisation, making the task of attaching storage platforms from multiple vendors to a single network problematic and expensive.

Indeed, storage area networking often brings with it the risk of proprietary vendor lock-in. "One of the biggest problems with SANs is the potential that vendors will use the change to lock in the customer. Some of the vendors will not misuse that. Some of them will see it as an opportunity," says Krischer. With a lack of standards, interoperability is still piecemeal, and vendors often argue that customers can only ensure their SAN works efficiently if they attach devices from a single source.

Krischer cites the example of a major British bank that was forced to pay five times the market price for storage arrays for its new SAN. That represented ten times the lowest price that could have been negotiated by an aggressive buyer, he says.

When it comes to roll-out, both analysts and vendors advise against any kind of Big Bang approach in which the whole organisation's storage capacity is wired together at once. By rolling out a SAN in a piecemeal fashion, IT can prove the benefits as it goes, while helping the organisation to spread the high cost of implementation.

NAS target applications

  • ISPs

  • Web and email servers

  • Data warehousing

  • Computer aided design environments

  • Graphics/imaging

  • Medical libraries

Source: Procom Technology

SAN structures, of course, require more than just wiring. Companies need to purchase SAN switches, specialist software for management and 'virtualisation', and often new SAN-enabled storage arrays. But such centralisation can generate significant savings. For example, Bullen suggests that in the University of Stirling's modest environment, the staff time spent on functions such as tape back-up and troubleshooting has been cut in half.

Nonetheless, reliable ROI and TCO figures from users that have already implemented networked storage strategies are not widely available, and some claims seem too good to be true. For example, McData claims a payback period of just four months for an implementation at American Electric Power, a SAN project that it says generated cost savings of 30% in the first year alone.

The reaction of analysts is mixed. Krischer is sceptical that the rewards are as big as vendors suggest, particularly because of the issue of vendor lock-in. Others are more convinced. "Payback can be as soon as a day in the case of the added uptime and flexibility the SAN provides, or up to a year. But we haven't seen any SAN implementations where the payback has taken more than a year," says Duplessie.
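
Claims like these are easy to sanity-check with a simple payback calculation. The sketch below is hypothetical: the savings categories and figures are placeholders, not numbers from McData, American Electric Power or any of the analysts quoted here.

```python
# Hypothetical payback-period sketch for a SAN project. All inputs are
# placeholders to be replaced with an organisation's own audited figures.

upfront_investment = 250_000   # switches, HBAs, software, arrays, roll-out
monthly_savings = {
    "reduced_admin_time": 12_000,
    "avoided_downtime": 6_000,
    "deferred_capacity_purchases": 4_000,
}

monthly_total = sum(monthly_savings.values())
payback_months = upfront_investment / monthly_total
print(f"Payback period: {payback_months:.1f} months")  # ~11.4 months
```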

Such differences of perception underline why organisations need to conduct their own in-depth studies long before they get bombarded with vendor proposals.
