It is an increasingly serious problem for organisations – as business forms company Appleton Papers can testify. In 2001, the world’s largest producer of ‘carbonless’ forms was experiencing an explosion in its data storage requirements as it
extended beyond its mainframe architecture to run a web environment on Windows NT servers. Storage costs (in terms of both systems and the burden of administering them) were rocketing and the company was becoming increasingly concerned about how well it could continue to scale its storage capacity and to provide 24×7 access to that stored data.
To address these issues, Appleton decided it needed to create a shareable resource for its web servers and distributed applications. There was only one real option: a storage area network (SAN) that pooled together 1.1 terabytes of data – the disk resources of the 13 Compaq Proliant servers devoted to web activity, the company’s email and LAN files, and two large finance and manufacturing data warehouses.
Based on the Freedom SAN system from storage array vendor Hitachi Data Systems, Appleton’s SAN has delivered scalability: “With the ability to easily and quickly expand the amount of storage in the SAN pool, we can re-allocate and redistribute storage for a given application,” says Appleton systems programmer Cliff Adkins. “We can easily allocate space as needs change without having to go out and purchase multiple standalone devices,” he adds.
Those are the kinds of frustrations a growing number of companies are seeking to overcome by moving to a SAN. They are convinced that SANs offer the best escape from the escalating storage costs and low utilisation levels associated with direct attached storage (DAS).
The benefits stem from the capability to manage the storage environment as a centralised pool; to ‘mix and match’ hardware from different vendors in the same network; and to reduce poor utilisation by sharing data capacity across multiple devices.
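In code terms, the pooling idea can be illustrated with a toy model (the class, names and capacities below are hypothetical for illustration, not any vendor's API):

```python
# Illustrative sketch of a SAN-style storage pool: capacity from several
# devices is aggregated, then carved up and re-allocated per application.

class StoragePool:
    """Aggregate capacity from multiple devices into one allocatable pool."""

    def __init__(self):
        self.capacity_gb = 0      # total pooled capacity
        self.allocations = {}     # application -> GB allocated

    def add_device(self, size_gb):
        """Adding a disk array grows the shared pool; no per-server silos."""
        self.capacity_gb += size_gb

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, app, size_gb):
        """Carve out capacity for an application, if the pool has room."""
        if size_gb > self.free_gb():
            raise ValueError("pool exhausted")
        self.allocations[app] = self.allocations.get(app, 0) + size_gb

    def reclaim(self, app):
        """Return an application's capacity to the pool for reuse."""
        return self.allocations.pop(app, 0)


pool = StoragePool()
pool.add_device(500)          # one disk array
pool.add_device(600)          # pool now holds 1,100 GB (roughly 1.1 TB)
pool.allocate("email", 200)
pool.allocate("warehouse", 400)
pool.reclaim("email")         # capacity goes straight back to the pool
pool.allocate("web", 500)     # reused elsewhere, no new hardware purchased
print(pool.free_gb())         # 200
```

The point of the sketch is the last three lines: reclaimed capacity is immediately reusable by any other application, which is exactly what DAS, with its per-server disks, cannot offer.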
However, a SAN is not the only storage network technology. The reality is that many organisations, especially mid-sized ones, take a piecemeal approach to deploying a storage network. Organisations moving beyond DAS often opt for network-attached storage (NAS) devices that plug into a standard Internet protocol (IP) network and are used for smaller storage requirements and for file sharing tasks. But at larger organisations, application variety requires a more mixed architecture.
For example, an organisation might use DAS for mainframe systems, a SAN for customer-facing applications, and NAS for sharing computer aided design files within an engineering department. Indeed some areas of storage are not likely to end up directly participating in a network. UK-based insurance company Direct Line, for example, has no plans to move its mainframe DAS onto its SAN architecture, according to Miodrag Pasko, systems manager of distributed infrastructure at the company.
Despite this, the long-term trend is toward networked storage. By 2004, organisations will spend 7.5% less on DAS than in 2001, according to investment bank Lehman Brothers. During the same period, spending on NAS will jump 24% to $3.5 billion, while SAN product sales will rise 13% to $8.9 billion.
The move to network storage will enable devices and data to be managed centrally, easing some of the cost burden organisations currently face with DAS. But administrators used to dealing with homogeneous environments
will have to cope with multi-vendor environments – many of which will not interoperate.
Software will play an ever-greater role in storage – managing SANs of growing complexity and automating more routine storage tasks such as data replication and mirroring.
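One of those routine tasks, mirroring, can be sketched in a few lines (a hypothetical write-through mirror, purely illustrative of what the management software automates):

```python
# Illustrative sketch: the kind of routine task storage management
# software automates - mirroring every write to two independent copies.

class MirroredVolume:
    """Write-through mirror: each write lands on both copies."""

    def __init__(self):
        self.primary = {}
        self.mirror = {}

    def write(self, block, data):
        self.primary[block] = data
        self.mirror[block] = data   # replication happens automatically

    def read(self, block):
        # Fall back to the mirror if the primary copy is lost.
        return self.primary.get(block, self.mirror.get(block))


vol = MirroredVolume()
vol.write(0, b"ledger")
del vol.primary[0]       # simulate loss of the primary copy
print(vol.read(0))       # b'ledger' - the data survives via the mirror
```

In a real SAN the software would do this across devices from different vendors, which is precisely where the interoperability problems discussed below arise.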
The construction of a SAN architecture clearly involves considerable upfront costs, but for large organisations, SANs appear to present the best long-term storage strategy. This applies especially to organisations that intend to increase the number of online applications they provide.
The Royal Borough of Kingston, the London local authority, realised it had to radically change its existing DAS architecture if it was to have a realistic chance of meeting the UK government’s deadline for providing all public services online by 2005. Kingston was also trying to cope with a data growth rate of more than 50% a year.
In April 2001, Kingston decided to deploy a SAN, outsourcing the project to Business Impact Technology Solutions (Bi-Tech), a UK-based storage consultancy. Kingston’s SAN, which is used by 1,200 internal PC users, now provides services, including email, an online community database and electronic payments forms.
An open SAN storage architecture was a vital element in Kingston’s strategy, says Robin Noble, information and communications technology manager at Kingston. “The SAN was designed specifically for an open heterogeneous environment that supported both Kingston’s Windows NT and Novell Netware platforms.
Subsequently, additional servers have now been added by Kingston’s infrastructure team with little impact to the storage data or its users.” For its storage disk systems and management software, Kingston used an integrated product from SAN specialist Xiotech, a subsidiary of hard disk drive maker Seagate Technology.
In a more heterogeneous storage environment, however, a lack of interoperability between different vendors’ storage hardware and software can cause substantial headaches for storage administrators (see box, Interoperability wars). For example, SAN customers are often locked into the use of specific storage management software with a supplier’s storage hardware, says Steve Murphy, CEO of Fujitsu Softek, a supplier of device-independent storage management software.
In fact, many analysts say storage management software will be critical for overcoming interoperability issues within storage networks. Suppliers of storage management software, including Fujitsu Softek, Veritas, EMC and HP, are now heavily marketing their interoperability credentials – but with varying levels of openness.
For example, Fujitsu Softek claims it delivered a major breakthrough in the interoperability of storage management software with the release of its Storage Manager product in July 2002. Murphy claims, “Storage Manager is the industry’s first standalone product that centralises and automates storage resource and data management tasks in a multi-vendor environment from a single console”.
To significantly alleviate the pain of managing a sprawling storage architecture, however, storage management software will also have to automate routine storage
resource and data management tasks. For a storage administrator at a large organisation, the capability to automate procedures such as archiving and backing up data files is a huge benefit, says Murphy.
But storage management software is not the only tool that can transform an organisation’s storage architecture. In particular, virtualisation software for SANs (see box, In practice: Pioneer Investment Management) is often viewed as critical for optimising device storage utilisation.
“Virtualisation has been hyped by many vendors, but at its most simple level it is presenting logical volumes of storage to an application for the purpose of simplifying what the application sees,” says Bob Passmore, a research director at Gartner.
Virtualisation, he adds, enables organisations to allocate virtual storage capacity to applications from a central pool of data on a SAN.

Despite these benefits, many organisations do not need, or cannot afford, a SAN. In particular, NAS remains a popular alternative for organisations with smaller data capacities and for those that want to share files locally.
But suppliers, such as Network Appliance and EMC, are now selling NAS devices into larger environments.
For example, Churchill Insurance, the UK insurance company, clustered together seven Network Appliance high-end NAS devices to run an Oracle database for some of its call centre and software development operations in mid-2001, says Mark Stevens, sales director at Network Appliance. “A few years ago, the idea that you could put a database on a NAS seemed ridiculous,” he adds.
However, NAS and SAN do not present an either/or choice. Organisations that use NAS devices often integrate them into their SAN. And that will evolve. “We’ll continue to see a blurring of the lines between NAS and SAN,” says IDC analyst Claus Egge.
But in the end, organisations should be aware of just how difficult it can be to deploy a fully integrated storage network. “Years of piecemeal storage investment has yielded a mishmash of fibre channel, Enterprise Systems Connection and server-based storage islands. Integrating these components into a single standardised infrastructure is incredibly difficult,” says Forrester’s Galen Schreck.
Organisations are likely to find out just how difficult that task is over the next two years.