Networked intelligence

Runaway data growth is an increasingly serious problem for organisations – as business forms company Appleton Papers can testify. In 2001, the world’s largest producer of ‘carbonless’ forms was experiencing an explosion in its data storage requirements as it extended beyond its mainframe architecture to run a web environment on Windows NT servers. Storage costs (in terms of both systems and the burden of administering them) were rocketing, and the company was becoming increasingly concerned about how well it could continue to scale its storage capacity and to provide 24×7 access to that stored data.

In practice: Portman Building Society

Deploying a storage area network (SAN) can be a complex – and risky – business. The migration of volumes of data over to a fibre channel network that links together SAN-enabled storage disk systems, switches, host bus adapters, tape and disk back-up devices and storage management software requires expertise that is not yet established within most organisations.

Acknowledging this, the Portman Building Society, the UK financial services company, decided to outsource the coordination of its SAN deployment in late 2001 to a specialist consultancy, Sagitta Performance Systems. “The amount of data involved made it too risky for us to do ourselves,” says Frank Lake, infrastructure manager at Portman.

Taking advantage of a move to a new head office in Bournemouth, Portman implemented a SAN with the main goal of ensuring the security and redundancy of its data, especially for its financial applications such as share dealing and Internet banking, says Lake. To this end, Portman runs its SAN from two data centres: a primary site at its head office and a secondary site in nearby Poole. “Using the fibre channel network between the sites has allowed us to maintain two copies of all of our SAN-based data.”

Since July 2002, Portman has progressively brought key business applications into the SAN production environment.

The results, in terms of both performance and cost savings, have been encouraging, says Lake. Portman has recorded a 25% increase in data throughput speeds on its SAN compared to its previous direct attached storage architecture. And already, it claims its SAN has helped reduce its storage management costs. This is in no small part due to Portman’s use of storage management software from Tivoli, IBM’s network and systems management division, says Lake.

“Tivoli Storage Manager enables us to suck in data from Sun’s Solaris and other Unix machines, as well as our twin IBM mainframes and Windows NT servers,” he says. This can be done from a single Tivoli software console acting across the entire SAN, he adds.

That adds an element of simplicity to what is a complex structure.

To address these issues, Appleton decided it needed to create a shareable resource for its web servers and distributed applications. There was only one real option: a storage area network (SAN) that pooled together 1.1 terabytes of data – the disk resources of the 13 Compaq ProLiant servers devoted to web activity, the company’s email and LAN files, and two large finance and manufacturing data warehouses.

Based on the Freedom SAN system from storage array vendor Hitachi Data Systems, Appleton’s SAN has delivered scalability: “With the ability to easily and quickly expand the amount of storage in the SAN pool, we can re-allocate and redistribute storage for a given application,” says Appleton systems programmer Cliff Adkins. “We can easily allocate space as needs change without having to go out and purchase multiple standalone devices,” he adds.

Those are the kinds of frustrations a growing number of companies are seeking to overcome by moving to a SAN. They are convinced that SANs offer the best route away from the escalating storage costs and low utilisation levels associated with direct attached storage (DAS).

The benefits stem from the capability to manage the storage environment as a centralised pool; to ‘mix and match’ hardware from different vendors in the same network; and to improve utilisation by sharing capacity across multiple devices.
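
To see why pooling improves utilisation, it helps to reduce the idea to a toy model. The short Python sketch below is purely illustrative – the StoragePool class, device names and capacities are invented, and real allocation is handled by array firmware and storage management software – but it shows how a shared pool can satisfy a request that no single direct-attached disk could.

```python
# Illustrative model only: real SAN capacity allocation happens in array
# firmware and storage management software. All names and figures invented.

class StoragePool:
    """A central pool aggregating free capacity from multi-vendor devices."""

    def __init__(self):
        self.devices = {}   # device name -> free gigabytes
        self.volumes = {}   # volume name -> list of (device, gigabytes)

    def add_device(self, name, capacity_gb):
        self.devices[name] = capacity_gb

    def allocate(self, volume, size_gb):
        """Carve a logical volume out of whichever devices have free space."""
        extents, needed = [], size_gb
        for device, free in self.devices.items():
            take = min(free, needed)
            if take:
                self.devices[device] -= take
                extents.append((device, take))
                needed -= take
            if not needed:
                break
        if needed:  # pool exhausted: roll back the partial allocation
            for device, take in extents:
                self.devices[device] += take
            raise RuntimeError(f"no room for {size_gb} GB in the pool")
        self.volumes[volume] = extents
        return extents

# Two half-empty devices jointly satisfy a 600 GB request that neither
# could serve alone - the utilisation gain that pooling is meant to deliver.
pool = StoragePool()
pool.add_device("array_a", 500)
pool.add_device("array_b", 300)
print(pool.allocate("mail_store", 600))  # [('array_a', 500), ('array_b', 100)]
```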

However, a SAN is not the only storage network technology. The reality is that many organisations, especially mid-sized ones, take a piecemeal approach to deploying a storage network. Organisations moving beyond DAS often opt for network-attached storage (NAS) devices that plug into a standard Internet protocol (IP) network and are used for smaller storage requirements and for file sharing tasks. But at larger organisations, application variety requires a more mixed architecture.

For example, an organisation might use DAS for mainframe systems, a SAN for customer-facing applications, and NAS for sharing computer-aided design files within an engineering department. Indeed, some areas of storage are unlikely ever to participate directly in a network. UK-based insurance company Direct Line, for example, has no plans to move its mainframe DAS onto its SAN architecture, according to Miodrag Pasko, systems manager of distributed infrastructure at the company.

Despite this, the long-term trend is toward the network. By 2004, organisations will spend 7.5% less on DAS than in 2001, according to investment bank Lehman Brothers. During the same period, spending on NAS will jump 24% to $3.5 billion, while SAN product sales will rise 13% to $8.9 billion.

The move to network storage will enable devices and data to be managed centrally, easing some of the cost burden organisations currently face with DAS. But administrators used to dealing with homogeneous environments will have to cope with multi-vendor environments – many of which will not interoperate.

Interoperability wars

Storage administrators are angry. Justifiably so, say analysts. This is because a lack of interoperability among storage management tools restricts them to using software from specific storage hardware suppliers. “Companies have complained loudly about having to use multiple storage management consoles and the lack of storage management standards,” says Galen Schreck at Forrester.

It is an issue that is being addressed, though not in the most coordinated fashion. On the one hand, the Storage Networking Industry Association (SNIA), a consortium of storage vendors, is developing open standards for storage management. At the same time, suppliers (many of whom also support SNIA) are selectively cross-licensing their proprietary application programming interfaces (APIs) to each other in an effort to make at least some storage combinations viable in a network.

For example, storage suppliers EMC and Hewlett-Packard completed a cross-licensing agreement in July 2002. This agreement enables users of HP’s storage management software to manage EMC’s arrays across a storage area network, and vice versa. In August 2002, HP followed this by completing a similar partnership with IBM. In contrast, the high-end disk systems companies EMC and Hitachi Data Systems seem destined to put their competitive differences before any interoperability.

SNIA’s plans for open storage management interface standards are designed to do away with such API beauty contests. And it appears to have made significant progress. In May 2002, 16 storage vendors, including EMC, HP and IBM, signed off on a new storage management specification, ‘Bluefin’, which covers three areas: storage device discovery; SAN security; and ‘locking’ to prevent contention between software clients.

However, this effort is moving too slowly for some customers, and SNIA has not yet been able to provide a timetable for when the remaining aspects of the Bluefin specification will be completed. In the meantime, vendors will swap APIs and customers will continue to “pull their hair out”, says Forrester’s Schreck.

Software will play an ever-greater role in storage – managing SANs of growing complexity and automating more routine storage tasks such as data replication and mirroring.
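
As a flavour of what that automation amounts to, here is a minimal sketch of a scheduled replication pass in Python. It is a file-level caricature – the paths are invented, and products of the kind discussed here replicate and mirror at the block level – but the loop it runs (compare, copy what has changed, repeat on a schedule) is exactly the routine work being lifted off administrators.

```python
# File-level caricature of automated replication; real SAN replication and
# mirroring work at the block level. The paths below are invented examples.

import shutil
from pathlib import Path

def replicate(primary: Path, secondary: Path) -> list[str]:
    """Copy every file that is new or newer on the primary to the secondary."""
    copied = []
    for src in primary.rglob("*"):
        if not src.is_file():
            continue
        dst = secondary / src.relative_to(primary)
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)   # copy2 preserves timestamps
            copied.append(str(dst))
    return copied

# A scheduler (cron, or a management console) would run this repeatedly:
# replicate(Path("/data/primary"), Path("/data/secondary"))
```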

The construction of a SAN architecture clearly involves considerable upfront costs, but for large organisations, SANs appear to present the best long-term storage strategy. This applies especially to organisations that intend to increase the number of online applications they provide.

The Royal Borough of Kingston, a London local authority, realised it had to radically change its existing DAS architecture if it was to have a realistic chance of meeting the UK government’s deadline for providing all public services online by 2005. Kingston was also trying to cope with a data growth rate of more than 50% a year.

In April 2001, Kingston decided to deploy a SAN, outsourcing the project to Business Impact Technology Solutions (Bi-Tech), a UK-based storage consultancy. Kingston’s SAN, which serves 1,200 internal PC users, now underpins services including email, an online community database and electronic payment forms.

An open SAN storage architecture was a vital element in Kingston’s strategy, says Robin Noble, information and communications technology manager at Kingston. “The SAN was designed specifically for an open, heterogeneous environment that supported both Kingston’s Windows NT and Novell Netware platforms. Subsequently, additional servers have been added by Kingston’s infrastructure team with little impact on the storage data or its users.”

For its storage disk systems and management software, Kingston used an integrated product from SAN specialist Xiotech, a subsidiary of hard disk drive maker Seagate Technology.

In a more heterogeneous storage environment, however, a lack of interoperability between different vendors’ storage hardware and software can cause substantial headaches for storage administrators (see box, Interoperability wars). For example, SAN customers are often locked into the use of specific storage management software with a supplier’s storage hardware, says Steve Murphy, CEO of Fujitsu Softek, a supplier of device-independent storage management software.

In fact, many analysts say storage management software will be critical for overcoming interoperability issues within storage networks. Suppliers of storage management software, including Fujitsu Softek, Veritas, EMC and HP, are now heavily marketing their interoperability credentials – but with varying levels of openness.

For example, Fujitsu Softek claims it delivered a major breakthrough in the interoperability of storage management software with the release of its Storage Manager product in July 2002. Murphy claims, “Storage Manager is the industry’s first standalone product that centralises and automates storage resource and data management tasks in a multi-vendor environment from a single console”.

To significantly alleviate the pain of managing a sprawling storage architecture, however, storage management software will also have to automate routine storage resource and data management tasks. For a storage administrator at a large organisation, the capability to automate procedures such as archiving and backing up data files is a huge benefit, says Murphy.

In practice: Pioneer Investment Management

A corporate takeover is often the catalyst for a major revamp of a company’s IT infrastructure. Take the example of Pioneer Investment Management, which was acquired by investment bank UniCredito Italiano in mid-2000. After the acquisition, Pioneer decided it needed to create a more efficient storage infrastructure at its Dublin HQ that would address the rapid increase in levels of “unmanageable storage data”, says Rupert Fuller, senior project manager at the company.

In March 2002, the company decided the situation would be best addressed with the creation of a storage area network (SAN). Its main storage devices consist of a one-terabyte Hitachi Data Systems ‘Thunder’ mid-range disk array and a six-terabyte ‘JBOD’ disk system from Eurologic at a back-up site. Using SAN switches from McData and host bus adapters from QLogic, it is building a network that will cover 14 application servers at the main site and several others at the recovery centre. Key to making that happen is SANsymphony virtualisation software from DataCore Software, which will help Pioneer manage separate physical disk capacity as a virtual pool across the network.

For Fuller, the SAN environment is delivering on four key requirements. It consolidates all data storage under a single management framework; storage can be allocated to individual application servers as required, without complex and risky server reconfiguration; new and legacy storage can be brought into the overall configuration, with the same management functionality applying to the whole pool; and the infrastructure removes the dependency on costly proprietary components, preserving the element of competition needed to keep future storage costs reasonable.

And these requirements will certainly grow: as Pioneer adds more legacy data, the amount of data managed over the SAN will rise to seven terabytes, he predicts.

But storage management software is not the only tool that can transform an organisation’s storage architecture. In particular, virtualisation software for SANs (see box, In practice: Pioneer Investment Management) is often viewed as critical for optimising device storage utilisation.

“Virtualisation has been hyped by many vendors, but at its most simple level it is presenting logical volumes of storage to an application for the purpose of simplifying what the application sees,” says Bob Passmore, a research director at Gartner.

Virtualisation, he adds, enables organisations to allocate virtual storage capacity to applications from a central pool of data on a SAN.
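
A sketch makes that definition concrete. In the toy Python below – the names are invented, and a product such as DataCore’s SANsymphony does this in a dedicated layer in the data path, not in application code – the application addresses a single logical volume while a mapping table decides which physical device actually holds each block.

```python
# Toy virtualisation layer: the application sees one logical volume; a
# mapping table translates each logical block to a (device, physical block)
# pair. Entirely illustrative - device names and extents are invented.

class VirtualVolume:
    def __init__(self, extents):
        # extents: list of (device, start_block, length) making up the volume
        self.extents = extents

    def locate(self, logical_block):
        """Map a logical block number to the device and block that hold it."""
        offset = logical_block
        for device, start, length in self.extents:
            if offset < length:
                return device, start + offset
            offset -= length
        raise IndexError("logical block beyond end of volume")

# One logical volume stitched from two physical devices: the application
# never needs to know where block 1500 really lives.
vol = VirtualVolume([("array_a", 0, 1000), ("array_b", 500, 2000)])
print(vol.locate(250))    # -> ('array_a', 250)
print(vol.locate(1500))   # -> ('array_b', 1000)
```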

Despite these benefits, many organisations do not need, or cannot afford, a SAN. NAS remains a popular alternative for organisations with smaller data capacities and for those that want to share files locally. But suppliers such as Network Appliance and EMC are now selling NAS devices into larger environments.

For example, Churchill Insurance, the UK insurance company, clustered together seven Network Appliance high-end NAS devices to run an Oracle database for some of its call centre and software development operations in mid-2001, says Mark Stevens, sales director at Network Appliance. “A few years ago, the idea that you could put a database on a NAS seemed ridiculous,” he adds.

However, NAS and SAN are not an either/or choice. Organisations that use NAS devices often integrate them into their SAN, and that integration will deepen. “We’ll continue to see a blurring of the lines between NAS and SAN,” says IDC analyst Claus Egge.

But in the end, organisations should be aware of just how difficult it can be to deploy a fully integrated storage network. “Years of piecemeal storage investment has yielded a mishmash of fibre channel, Enterprise Systems Connection and server-based storage islands. Integrating these components into a single standardised infrastructure is incredibly difficult,” says Forrester’s Galen Schreck.

Organisations are likely to find out just how difficult that task is over the next two years.
