SANs are now beginning to fulfil one of their biggest promises

The centre of the storage universe has shifted. In the last three years, storage systems vendors have moved their emphasis away from the hardware platforms that have provided their main differentiation, and they have embraced the software systems that control those devices. The driver: budget-cutting customers are demanding better value for money from their storage investments.

“People are now buying storage for board-level reasons – cost reduction and employee efficiency. And software is the key to delivering that business benefit,” says Chris Atkins, storage product marketing manager at Sun Microsystems. That sense is echoed elsewhere. “Storage buyers are much more fiscally aware and business-oriented: they are looking to turn the IT cost centre into a business value centre,” adds Chris Boorman, director of marketing at storage software company Veritas.

The irony is that outside of the mainframe world, storage software hardly existed five years ago. What there was consisted largely of back-up and recovery software, plus some limited disk volume management capabilities – areas that provided the launchpad for Veritas and one of its chief rivals, Legato.

Historically, the storage devices themselves contained some rudimentary management software, typically written in microcode. But it was invariably proprietary, and only allowed administrators to manage the box that it came with.

Mainframe roots

In the mainframe arena, there were two separate streams of development. Server-based software started to emerge in the late 1970s, when utilisation of mainframe disks was typically 45% to 50%. Customers were pushing vendors to deliver better value from their storage investments, while IBM was worried that such inefficient usage was slowing processor sales. The result was a suite of software products under the label SMS (System Managed Storage), a development that introduced the whole concept of HSM (hierarchical storage management) to IT.

Exactly the same thing is happening today in the open systems world, where most organisations achieve at best 40% to 50% utilisation of their disk storage, says IBM storage software sales manager Steve Cliff. But HSM software products are now emerging on non-mainframe systems, helping to channel data to the most cost-efficient platform, and thus cut costs, says Hewlett-Packard’s storage business director Russ Logan.
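
The principle behind HSM can be sketched in a few lines. The example below is purely illustrative and not drawn from any vendor's product: the tier names, age thresholds and directory path are invented. It classifies files by how recently they were accessed and assigns each one to the cheapest tier that still suits that access pattern.

```python
import os
import time

# Hypothetical tiers, from fastest/most expensive to cheapest. The thresholds
# (days since last access) are invented for illustration.
TIER_POLICY = [
    ("fast-disk", 30),       # accessed in the last 30 days
    ("slow-disk", 180),      # accessed in the last 180 days
    ("tape-archive", None),  # everything older goes to archive
]

def choose_tier(path, now=None):
    """Return the tier this file should live on under the policy."""
    now = now or time.time()
    age_days = (now - os.stat(path).st_atime) / 86400
    for tier, max_age in TIER_POLICY:
        if max_age is None or age_days <= max_age:
            return tier

def plan_migrations(root):
    """Walk a directory tree and report which files belong on which tier."""
    plan = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            plan.setdefault(choose_tier(path), []).append(path)
    return plan

if __name__ == "__main__":
    for tier, paths in plan_migrations("/data").items():  # "/data" is a placeholder
        print(f"{tier}: {len(paths)} files")
```

A real HSM product does the same classification continuously and then moves the data itself, leaving a stub behind so applications still see the file in its original place.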

Another trend that IBM started in the late 1970s was the addition of microcoded functionality to disk controllers, adding capabilities such as remote data mirroring. Storage Technology took that concept a step further in the early 1980s with the first virtual disk product, Iceberg, which allowed point-in-time copies, or snapshots, to be taken. In the 1990s, storage systems giant EMC became the most commercially successful exponent of this approach, selling thousands of copies of its SRDF (remote mirroring) and TimeFinder (snapshot) products.

But it was not until the rapid Internet-fuelled growth of the late 1990s that storage networking was born. That growth meant users were acquiring more and more storage devices – invariably attached directly to a single server – making the control of both costs and usage efficiency ever more difficult.

The first approach to addressing that growth in islands of storage was to consolidate devices. That brought direct cost savings simply from economies of scale, and indirect cost savings from more efficient use of both disks and servers.

Consolidated storage also promised easier management, although, at that stage, it was little more than a promise. A number of technical issues made the creation of storage networks very complex – technical issues that slowed the early take-up of storage area networks (SANs).

Three branches

Against that backdrop, three main areas of storage management software have emerged. First, what is sometimes called SAM – storage area management – was made necessary by the fact that storage had evolved from being a subsystem to being a system. So the equivalent of systems management software, such as CA Unicenter and HP OpenView, was needed to monitor the operation of the storage network and the storage devices attached to it – to report on both actual and impending problems, and to take automated action to avoid downtime in line with predetermined policies.
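
The kind of policy-driven monitoring loop a SAM product automates can be sketched very simply. The device names, thresholds and actions below are hypothetical placeholders, not any vendor's API; the point is that status is checked against predefined policies and actions follow automatically.

```python
from dataclasses import dataclass

@dataclass
class DeviceStatus:
    name: str
    online: bool
    temperature_c: float
    port_errors: int

# Illustrative policy thresholds; real SAM products let administrators
# define these per device class and per site.
MAX_TEMPERATURE_C = 45.0
MAX_PORT_ERRORS = 100

def evaluate(status: DeviceStatus) -> list[str]:
    """Return the actions the policy calls for, given one device's status."""
    actions = []
    if not status.online:
        actions.append(f"ALERT: {status.name} is offline - open an incident")
    if status.temperature_c > MAX_TEMPERATURE_C:
        actions.append(f"WARN: {status.name} is running hot - check cooling")
    if status.port_errors > MAX_PORT_ERRORS:
        actions.append(f"ACTION: route traffic away from {status.name}")
    return actions

# In practice these statuses would come from polling switches and arrays.
for status in [DeviceStatus("switch-01", True, 47.2, 3),
               DeviceStatus("array-02", False, 30.0, 0)]:
    for action in evaluate(status):
        print(action)
```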

The second area to emerge was SRM, or storage resource management. This product category makes sure each application has the storage resources it needs, both for direct online use and for back-up. The allocation of resources, or ‘provisioning’ as it is known, represents a major challenge for most organisations. “Provisioning is the biggest pain in a high growth environment,” says IBM’s Steve Cliff.

Such products increasingly include the capability to allocate additional storage to an application when it is about to run out of capacity, again in line with predetermined policies. They also provide for storing data on the most appropriate type of device: for example, the most recent data on fast disk, data that has not been accessed for a while on slower disks, and archive data on tape.
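
As a rough illustration of the kind of provisioning policy such SRM products apply – the application names, thresholds and sizes below are invented for the example, not drawn from any product – a capacity check might look like this:

```python
# Hypothetical provisioning policy: if an application's volume is nearly
# full, grow it by a fixed increment, up to a per-application quota.
GROW_THRESHOLD = 0.85   # expand when 85% full
GROW_INCREMENT_GB = 50

def provisioning_decision(app, used_gb, allocated_gb, quota_gb):
    """Decide whether to grow an application's storage under the policy."""
    utilisation = used_gb / allocated_gb
    if utilisation < GROW_THRESHOLD:
        return f"{app}: {utilisation:.0%} used - no action"
    new_size = min(allocated_gb + GROW_INCREMENT_GB, quota_gb)
    if new_size == allocated_gb:
        return f"{app}: already at quota ({quota_gb}GB) - alert the administrator"
    return f"{app}: grow volume from {allocated_gb}GB to {new_size}GB"

print(provisioning_decision("orders-db", used_gb=440, allocated_gb=500, quota_gb=800))
print(provisioning_decision("archive", used_gb=790, allocated_gb=800, quota_gb=800))
```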

The third type of storage software is traditional back-up and recovery.

SAM and SRM were almost unknown outside the mainframe world until the advent of networked storage. In many ways they are offspring of the growth in demand for storage: SAM and SRM are “new technology solutions to the demands of growth”, says Tivoli storage management marketing manager Jon Cooper. But in both cases, the challenge the software has to meet is made more complex because customers have storage devices from many different vendors.

Early SANs were simple because they had to be. Typically they consisted of a few switches, all from the same supplier, that linked storage devices of the same type and often attached to only one type of server. Moreover, they were frequently used for only one application, such as back-up.

As users started to acquire multiple SANs – in many cases built with different components – management complexity increased. Invariably, organisations would have a mixed storage environment anyway, consisting of SANs, network-attached storage (NAS) systems, and many disk systems that were not networked but attached directly to their servers.

The need to pull this disparate group together was as clear as it was pressing. “There has been a move away from device management to end-to-end management, especially in terms of storage resource allocation,” says Simon Gordon, business development manager at IP-based SAN vendor Nishan Systems.

Virtually there

Underpinning that is a technique for managing multiple types of device, known as virtualisation. Virtualisation products vary greatly, both in the capabilities they offer and in the way they implement that management facility. But the essence of virtualisation is the separation of the logical representation of data from its physical encoding on disk. This allows users to create a single logical pool of storage from a variety of different types of disk drive, each with its own individual formatting.
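
A crude way to picture the idea follows. This is a toy model, not how any real virtualisation product is implemented: a logical volume presents one contiguous address space, and a mapping layer decides which physical device actually holds each block.

```python
class PhysicalDisk:
    """A toy stand-in for one physical disk, whatever its make or format."""
    def __init__(self, name, blocks):
        self.name = name
        self.store = [None] * blocks

class VirtualVolume:
    """Presents one logical pool built from several dissimilar physical disks."""
    def __init__(self, disks):
        # Map each logical block number to a (disk, physical block) pair.
        self.mapping = [(d, i) for d in disks for i in range(len(d.store))]

    def __len__(self):
        return len(self.mapping)

    def write(self, logical_block, data):
        disk, phys = self.mapping[logical_block]
        disk.store[phys] = data

    def read(self, logical_block):
        disk, phys = self.mapping[logical_block]
        return disk.store[phys]

# Two arrays of different sizes appear to applications as one 300-block pool.
# The array names are invented for the example.
pool = VirtualVolume([PhysicalDisk("array-a", 100), PhysicalDisk("array-b", 200)])
pool.write(150, b"payroll record")   # transparently lands on the second array
print(len(pool), pool.read(150))
```

Applications see only the logical pool; which physical device serves a given block is a detail the mapping layer can change without the application noticing.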

This leads to more efficient usage, because it is easier to make use of one big storage device – albeit a virtual one – than many small ones. It also helps with the provisioning of storage to applications.

Virtualisation is, in theory, an enabler for SRM. It makes it easier to shift storage between different applications, and it provides organisations with an end-to-end view – from the user of the application, through all the intermediate infrastructure, to the storage devices. In practice, though, virtualisation products are usually sold as separate products that contain some SRM capabilities of their own.

Virtualisation software was pioneered by start-up companies such as DataCore and FalconStor. Now the big companies, notably Fujitsu Softek, Hewlett-Packard, IBM, Sun and Veritas, are offering products as well.

The division of storage management software into SAM and SRM camps is necessarily simplistic. In practice the dividing line between the two is not clear cut – with some products handling elements of both jobs. Different vendors also focus on different capabilities.

  • Hewlett-Packard emphasises its capacity-on-demand capabilities. Once you have the ability to provide storage to applications as needed, a logical next step is to have spare capacity installed on site but paid for only when it is brought into use. Many companies offer this capability, but Hewlett-Packard has pioneered what it calls a utility storage model, which, says HP storage business director Russ Logan, means users “pay for less storage when demand is down and pay for more when it goes up”.
  • IBM is emphasising continuous uptime – maintaining access to data while changes, such as adding disks and controllers, are being made. Customers “still suffer from planned outages in SAN environments”, says IBM’s Steve Cliff.
  • Dell is majoring on visualisation of the storage environment as a means of simplifying fault-finding. “We are pushing the snapshot approach to the SAN fabric,” says storage software product manager Matt Brisse. “You take a snapshot of the environment, not just of the box. Then if something goes wrong, you take a comparison snapshot and do an overlay.”
  • EMC is promoting a vision of “a single method of managing information – information life cycle management” according to business development manager Greg Spence. “Customers don’t want to have to care where their information is sitting. The idea is to manage it from the cradle to the grave, with the movement of it being transparent.”

Standard bearers

Nobody is yet close to providing all of that capability. Although there has been huge progress over the past couple of years, much of the potential of SAM and SRM products has yet to be realised. “Policy management is one for the future,” says Tony Lock, an analyst at industry watcher Bloor Research. “What will also be big is accounting and the measurement of the use of storage and mechanisms used to protect it.”

One factor inhibiting the development and adoption of both SAM and SRM products in the early days of storage area networking was the absence of standards. To manage multiple different physical devices, the software needs to be able to speak the language of each one so that it can interact with the low-level device management software. That was difficult in early implementations because each device had its own proprietary software, with its own API (application programming interface), so organisations had to write new code for each additional type of device they wanted to manage.
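
The problem, and why a common interface matters, can be sketched as follows. The vendor names and method names here are hypothetical; they simply stand in for each device's proprietary API.

```python
# Each vendor exposed its own proprietary call for the same basic question.
class VendorADevice:
    def query_capacity_blocks(self):        # hypothetical vendor A call
        return 2_000_000

class VendorBDevice:
    def get_free_space_megabytes(self):     # hypothetical vendor B call
        return 512_000

# Without a standard, the management tool needs one adapter per vendor...
def free_capacity_gb(device):
    if isinstance(device, VendorADevice):
        return device.query_capacity_blocks() * 512 / 1e9   # assuming 512-byte blocks
    if isinstance(device, VendorBDevice):
        return device.get_free_space_megabytes() / 1024
    raise NotImplementedError("write yet another adapter for this device type")

# ...and every new device type means new code. A standard such as SMI-S aims
# to let the same management call work against any compliant device.
for dev in (VendorADevice(), VendorBDevice()):
    print(type(dev).__name__, round(free_capacity_gb(dev), 1), "GB free")
```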

That points to why the early SANs were simple. It was difficult enough getting even one set of devices to work together, a situation that involved complex software engineering. It was equally difficult to change the configuration of the SAN subsequently.

There has been considerable progress over the past two years, though. The Storage Networking Industry Association (SNIA) has produced a specification for storage management, originally called Bluefin, which is now in its final stages of ratification as SMI-S, the Storage Management Initiative Specification.

SMI-S is a start, but it is only a start. “What we’ve got is only version 1: it is heavily focused on discovery and monitoring,” says Bloor’s Lock. He sees a need for additional standardisation covering management issues. “We need common descriptions for how snapshots work, as well as a common language for that kind of functionality,” he says. After that he envisages further standardisation work on policy-based automation.

SMI-S, as it stands, has received widespread industry support, which has essentially guaranteed its success even though it has still not received its final ratification. “We are getting pretty close to useful standards,” says Sun’s Chris Atkins. “SMI-S does probably 80% of what is needed.” And Tivoli’s Jon Cooper adds: “What we need to do is focus on that and make it a success.”

So a combination of much hard work in the development of SAM, SRM and virtualisation products, and the efforts of the standards-making bodies, has transformed the situation since the early days of storage area networking.

Storage management software is now available that offers real business benefits. It allows companies to make more efficient use of storage, raising utilisation from typically 40% to 50% today to something closer to the 80% to 90% that the mainframe world has enjoyed for many years. That in itself provides a big cost saving.
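
The arithmetic behind that saving is straightforward. The 60TB of live data below is an invented round number; the utilisation rates are the ones quoted in this article.

```python
live_data_tb = 60   # illustrative figure only
for utilisation in (0.40, 0.50, 0.80, 0.90):
    raw_needed_tb = live_data_tb / utilisation
    print(f"At {utilisation:.0%} utilisation, {raw_needed_tb:.0f}TB of raw disk must be bought")
# Moving from 40-50% to 80-90% utilisation roughly halves the capacity purchased.
```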

It also makes for easier management of the storage pool, which translates into business value because organisations need fewer staff to manage a given amount of storage. Furthermore, the staff that are needed require less specialised skills, so they are easier to find and less expensive to hire. And the automation capabilities mean fewer costly human errors.

Finally, SANs are now beginning to fulfil one of their biggest promises: greater responsiveness to change. This is being achieved through the development of capabilities such as automated policy-based provisioning, coupled with capacity-on-demand storage supply.

The upshot is that IT buyers are now able to take control of – and fully exploit – their storage assets.

In practice case study: Brookhaven National Laboratory

US government research agency Brookhaven National Laboratory is not just trying to answer some of the biggest questions in the universe, it is using one of the largest storage area networks (SANs) to do so.

Scientists at the Long Island, New York lab are attempting to recreate the conditions of the ‘Big Bang’ with which the universe is believed to have come into being. That simulation has involved the generation of masses of data – up to 50TB per week – and the establishment of a correspondingly beefy SAN to cope.

Brookhaven’s network links 1,000 desktops to an online database of 430TB held on two types of disk subsystem, with 26 Sun NFS servers sitting between the desktops and the disks. The network infrastructure contains 18 Brocade switches of three different types, arranged for redundancy so that there is always a path from the desktop to the appropriate data, no matter which element fails.

This network was initially managed using Brocade Fabric Manager software. But, says Brookhaven’s Maurice Askinazi: “We needed more functionality. For example, the overall health of the switch might be OK. However, if you are multi-pathing, Fabric Manager can’t tell if there’s a problem until the path you’re using goes down.”
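
The gap Askinazi describes can be illustrated generically. The sketch below has nothing to do with Brocade's or Fujitsu Softek's actual software; the element names are placeholders. A switch can report itself healthy overall while one of the redundant paths through the fabric has already failed, so a monitoring tool needs to check every path, not just the one currently in use.

```python
# Toy model: each path from a desktop to its data is a list of fabric elements,
# and a path is usable only if every element on it is up.
element_up = {"switch-1": True, "switch-2": True, "isl-1": False, "array-port-3": True}

paths = {
    "active":  ["switch-1", "array-port-3"],
    "standby": ["switch-2", "isl-1", "array-port-3"],
}

for name, hops in paths.items():
    healthy = all(element_up[hop] for hop in hops)
    print(f"{name} path: {'OK' if healthy else 'DEGRADED'}")
# Checking only the active path would report everything as fine, even though
# redundancy has silently been lost on the standby path.
```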

Accordingly, Brookhaven installed Fujitsu Softek’s SANView, which provides one point of control for all the devices connected to a SAN fabric. It allows monitoring of both status and performance, and allows customised reporting. This has allowed Brookhaven to tell which ports are most heavily loaded. That has translated into more efficient use of the switches and, in turn, has allowed Brookhaven to defer the purchase of new switches.

Pete Swabey
