Efficiency drive

It is three weeks before Christmas, right in the middle of the busiest season of the year, when leading online retailer Amazon hears that the web site of a big rival has just crashed. Better still, it might be down for a day or so.

This represents a major opportunity to pick up its rival's customers. All Amazon needs to do is double its number of web servers. The problem: installing new ones might take a few days, and so will repurposing existing ones. The opportunity is lost.

Now imagine that Amazon is using a utility data centre solution. This time, it decides to draw the additional server capacity from one of its human resources departments. The whole reallocation is taken care of in a few minutes using data centre management software.

HR staff are not impressed, but they don't complain: after all, Amazon now has the resources in place to successfully handle 10,000 extra customers. Following the upsurge in business, it gives all of its staff a Christmas bonus.

This hypothetical example is given by a Hewlett-Packard (HP) executive to demonstrate the potential benefits of what is increasingly being called ‘utility computing’. The flexible, dynamic use of computing resources has been an elusive target for both end-users and vendors for several years. Now, it seems, it is starting to become reality.

The suppliers' goal is to enable their customers to create a centralised, virtual pool of computing resources. They, in turn, can then allocate resources (including processing power, storage and applications) to end-users and their applications as and when required.

If it all plays out as planned, utility computing promises to change the economics of computing: end-users will pay only for the computing capacity they consume, typically under a per-processor, per-hour pricing model.
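
To make that pricing model concrete, here is a minimal sketch of how a per-processor, per-hour bill might be computed. The function name, the rate and the usage records are hypothetical illustrations, not any vendor's actual billing system.

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    """One metered interval: how many processors were allocated, for how long."""
    processors: int
    hours: float

def utility_bill(usage: list[UsageRecord], rate_per_cpu_hour: float) -> float:
    """Charge only for capacity actually consumed (per-processor, per-hour)."""
    return sum(r.processors * r.hours for r in usage) * rate_per_cpu_hour

# A retailer that bursts from 4 to 16 processors for a two-day sale,
# then drops back, pays only for the burst hours it actually used.
december_usage = [
    UsageRecord(processors=4, hours=672.0),   # baseline for the month
    UsageRecord(processors=12, hours=48.0),   # extra capacity during the sale
]
print(f"Monthly bill: ${utility_bill(december_usage, rate_per_cpu_hour=0.75):,.2f}")
```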

Much of this is not new. Suppliers have been working towards the ideal of utility computing for many years, offering server partitioning and consolidation technologies, advanced systems management and Internet-enabled applications. But it was only in 2002 that comprehensive utility computing product sets started to come to market.

‘Proof of concept’ utility computing packages of hardware, software and services are now emerging from big suppliers such as HP, IBM, Sun Microsystems, Compaq and Unisys. Other, smaller suppliers are offering specialist point solutions.

Peter Hindle, HP: “[Utility computing delivers] cost control, quality of service and responsiveness.”

And while there are significant differences in their approaches, all agree on the wide range of benefits that utility computing will deliver. These can be placed into three broad categories: "Cost control, quality of service and business responsiveness," says Peter Hindle, senior technical consultant at HP in the UK.

Specifically, utility computing will enable organisations to drive down costs through better utilisation of their existing IT infrastructure. It will drive the consolidation of server resources and, in many cases, circumvent the need for upgrades or the purchase of additional devices. The more flexible, robust infrastructure will also help organisations achieve agreed service levels – but at less expense. And it will help organisations to respond more effectively to peaks and troughs in demand.

Who will benefit? In the early stages, vendors intend to target utility computing at large organisations – those running their own data centres or with many distributed departmental systems. They are also talking to operators offering managed services and application services.

This is not just something for service providers, however. "I don't see utility computing specifically as an outsourced service. This is something [companies] can do internally with resources they already have," says Galen Schreck, an analyst at market analyst group Forrester Research.

Although most mid-size and smaller organisations will not have the budget to justify the upfront cost of deploying a utility computing package themselves, they should also benefit indirectly. They will be able to buy a more flexible package of services from managed and outsourced service providers that have installed utility computing hardware and software technology.

Data centre disarray

For data centre operators, the consolidation of computing resources is a top priority. The reason: the "hodgepodge" state of many companies' data centres, says Schreck.

David Tapper, an analyst with IDC, says that he has "consistently seen customers who really want to consolidate their IT infrastructure". Michael Hjalsted, server marketing director for Unisys Europe, agrees: "I think that many data centre infrastructures are a mess," he says.

Certainly this ‘hodgepodge’ of systems brings multiple operational and financial problems. For example, data centre managers need to hire administrators with the skill sets for different vendors' products. On top of that, servers from different vendors, typically dedicated to individual tasks or departments, cannot readily communicate or share peripherals with other vendors' machines.


Bernard Tomlin, HP: “Organisations are not sharing resources because they lack the capability to do so.”

One of the reasons that companies are not sharing server resources, says Bernard Tomlin, UK technology solutions manager at HP Consulting, is that they lack the technology that would enable them to do so.

The result: large islands of under-utilised server capacity across data centres. "It is common in Unix and NT environments for servers only to be about 20% utilised… 80% of the cost of the server is wasted," says Ian Meakin, product marketing manager for enterprise servers at systems vendor Sun Microsystems.

This is where suppliers such as HP see a big opportunity. HP's Utility Data Center (UDC) is a combination of hardware, software and services. Initially, UDC requires that a data centre rewires, or reconfigures, its computing hardware and software into a UDC server administrative rack. For example, a server dedicated to an individual department could be removed from its present physical location and plugged into the UDC, making its capacity available to other users. Using UDC's software, an administrator can then get a virtual view across all systems connected to the UDC, helping them quickly identify under- or over-utilised devices.
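
As a rough illustration of that virtual view, the sketch below aggregates utilisation figures across a pool of servers and flags the outliers. The thresholds, class names and sample data are invented for illustration; this is not HP's actual UDC software.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    platform: str           # e.g. "Solaris", "Windows", "Linux"
    cpu_utilisation: float  # rolling average, 0.0 to 1.0

def utilisation_report(pool: list[Server],
                       low: float = 0.2, high: float = 0.85) -> None:
    """Print a single view across the pool, flagging under- and over-used boxes."""
    for s in sorted(pool, key=lambda srv: srv.cpu_utilisation):
        if s.cpu_utilisation < low:
            flag = "UNDER-UTILISED (candidate for consolidation)"
        elif s.cpu_utilisation > high:
            flag = "OVER-UTILISED (candidate for extra capacity)"
        else:
            flag = "ok"
        print(f"{s.name:12} {s.platform:8} {s.cpu_utilisation:5.0%}  {flag}")

pool = [
    Server("hr-01", "Windows", 0.12),
    Server("web-01", "Linux", 0.91),
    Server("erp-01", "Solaris", 0.55),
]
utilisation_report(pool)
```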

This will provide an opportunity to consolidate and reduce the number of devices in a data centre. UDC supports multiple platforms, including Solaris, Windows and the open source operating system Linux, meaning many of these systems can be merged and managed centrally. An organisation may find it can run its entire data centre on a much smaller number of servers. "Because the utilisation of your IT infrastructure is higher, you need to buy less equipment. When you consolidate, you could throw away hundreds of mid-range Unix servers," says Meakin of Sun Microsystems.

In addition, by standardising on the hardware and software of just one or two vendors, the burden on support staff is reduced. "We are talking about [utilising] significantly fewer administrators, especially for service providers who are continuously looking to change their infrastructures around," says Tomlin at HP Consulting.

Planning aid

Managers should also find it easier to plan capacity requirements. Many still attempt to gauge their capacity needs using data gathered from previous projects, coupled with a good deal of guesswork.

Differences between planned capacity and the amount actually used can have a huge effect on an organisation's costs and its competitive advantage. This is particularly relevant for those selling products or services online.

For example, an airline may estimate that it requires five web servers to cope with peak demand during a two-week online sales campaign for discounted tickets. If it overestimates demand, it will be left with excess capacity; if it underestimates it, the outcome is potentially more serious.

"If you are in a sales environment and people cannot get to your web site, then that is lost money," says Marcus Cox, business and strategy consultant at HP Consulting.

But with HP's UDC offering, for example, an administrator can incrementally adjust capacity as demand dictates. If a company is anticipating a surge in demand, it could simply edit the UDC software template, swapping out a 4-way processor system for an 8-way one being used elsewhere.
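
The sketch below mimics that kind of template edit: a declarative mapping of services to server classes, plus a swap operation that hands the larger box to the service expecting the surge. The template format and the function are hypothetical stand-ins, not UDC's actual configuration language.

```python
# Hypothetical resource template: service -> assigned server class.
template = {
    "online-sales": "4-way",
    "reporting": "8-way",   # under-used during the sales campaign
}

def swap_capacity(template: dict[str, str], needy: str, donor: str) -> None:
    """Exchange server assignments so the surging service gets the bigger box."""
    template[needy], template[donor] = template[donor], template[needy]

swap_capacity(template, needy="online-sales", donor="reporting")
print(template)  # {'online-sales': '8-way', 'reporting': '4-way'}
```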

UDC also gives managers the capability to quickly clone a production environment and turn it, temporarily, into a test environment. The clone can then be used to verify that installing a specific processor will deliver the required performance improvement.

Improvements in autonomic, or self-managing, technology also look set to bring IT efficiencies to smaller organisations. For example, IBM's xSeries 440 server is the company's first Intel processor-based product that administrators can repartition, so that the workload can be redistributed without rebooting.

Utility computing will be particularly beneficial for organisations that regularly have peaks in their processing requirements. This includes those with public web applications or those using processor intensive applications such as business intelligence (BI) or collaborative computer-aided design.

BI analytic software tools that require a high-end performance database are tailor-made for utility computing, says Hjalsted at Unisys. "I think utility computing will be very much suited to those organisations with small transactions, but many customers." This might include large retailers, as well as financial institutions and insurance companies, he adds.

For example, a large supermarket may use analytics software to identify buying patterns for specific products. This software could be run as a batch process overnight. But during the day, when the batch server is under-utilised, server partitioning technology from vendors such as IBM, HP and Sun (see box, Hands on: Virgin Mobile's utility call) could switch this capacity over to other internal departments. Another major business benefit to the supermarket would be the ability to run analytic software more regularly. This might help managers devise ways to increase sales.
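
A day/night reallocation like the supermarket example could be expressed as a simple schedule over partitioned capacity. The sketch below is an invented illustration of the idea; the schedule format and department names are made up, and this is not any vendor's partitioning interface.

```python
import datetime

# Hypothetical schedule: which department owns the batch server's
# capacity during each window of the day.
SCHEDULE = [
    (datetime.time(0, 0), datetime.time(6, 0), "bi-analytics"),    # overnight batch
    (datetime.time(6, 0), datetime.time(23, 59), "merchandising"), # daytime use
]

def owner_at(now: datetime.time) -> str:
    """Return the department assigned the capacity at a given time of day."""
    for start, end, dept in SCHEDULE:
        if start <= now < end:
            return dept
    return "bi-analytics"  # fall through to the overnight owner

print(owner_at(datetime.time(2, 30)))   # bi-analytics
print(owner_at(datetime.time(14, 0)))   # merchandising
```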

Collaborative gateway

The benefits of utility computing should extend beyond internal-facing applications. They may, in fact, help to open up the collaborative commerce market to small and medium-sized businesses (SMBs).

At present, for example, many SMBs cannot use advanced CAD applications, and especially collaborative ones, because of the degree of processing power required. If they do manage to run CAD calculations during the working day, this often brings their network to a grinding halt.

Utility computing may provide an answer. For example, a small manufacturer that builds components for a large aerospace manufacturer may be able to use CAD software running on the systems of its aerospace partner, or, alternatively, on systems residing in a service provider's data centre. This approach might save the smaller parts manufacturer the upfront cost of purchasing, installing and maintaining high-end servers and managing the CAD software.

The service could be paid for on a pay-per-use basis, so that once the design process is finished, the small manufacturer pays nothing more.

For SMBs, freedom from systems administration can also be a major benefit. By outsourcing infrastructure provision, software developers can concentrate on core tasks. "Once you remove the infrastructure, the only thing left is the applications, which means that any company, even a mid-sized company, can have enough programmers. Their managers will not have to ask them to provision new servers or learn a new operating system," says David Tapper, senior analyst at market research company IDC.

Before this happens, managers will have to overcome a sizeable cultural barrier. Largely autonomous resources, usually thought of as belonging to a department or group, will have to be pooled. Tomlin at HP Consulting says that, usually, different lines of business buy their own equipment, which they then own and control. Convincing these divisions and departments to let go of their systems and share them will not be easy.

Wanduragala at IBM concurs. "I think the biggest barrier [to utility computing] is the whole concept. Organisations are so ingrained with [the idea of] owning the whole [IT] resource. Breaking this mould will take time." That is, at least, until the benefits become self-evident.

Hands on: Virgin Mobile's utility call

There comes a point for all data centres when consolidating resources becomes a primary concern. For Virgin Mobile (VM), the Virgin Group's mobile virtual network operator, that time arrived towards the end of 2000.

VM wanted not only to consolidate its large number of servers, but also to make better use of the existing infrastructure in its customer service data centre in Bristol, UK. In October 2001, therefore, it replaced most of its existing servers with high-end Enterprise 10000 (E10000) Unix servers from hardware vendor Sun Microsystems. This has enabled Virgin Mobile to significantly reduce its overall server count.

The E10000's partitioning technology, which takes a single server and runs multiple operating system images on it, provides Virgin with far greater flexibility in how it provisions server capacity for different customer service applications, says Steve Martin, head of IT services delivery at Virgin Mobile. "We have managed to triple the amount of processing we can carry out – but using fewer boxes," he says.

VM typically runs about six partitions within a single server. Therefore, instead of dedicating multiple Unix devices to individual applications, it can run multiple applications in separate partitions on just a few E10000s.
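
To picture that layout, here is a minimal sketch of mapping applications to partitions across a handful of servers. The application names echo those mentioned below, but the round-robin placement policy and the six-partition limit check are invented for illustration and do not reflect Sun's actual partitioning tools.

```python
from collections import defaultdict
from itertools import cycle

PARTITIONS_PER_SERVER = 6  # VM typically runs about six partitions per box

def place(apps: list[str], servers: list[str]) -> dict[str, list[str]]:
    """Assign each application its own partition, filling servers in turn."""
    layout: dict[str, list[str]] = defaultdict(list)
    for app, server in zip(apps, cycle(servers)):
        if len(layout[server]) >= PARTITIONS_PER_SERVER:
            raise RuntimeError(f"{server} is out of partitions")
        layout[server].append(app)
    return dict(layout)

apps = ["customer-care", "payments", "number-mgmt", "data-warehouse"]
print(place(apps, servers=["e10000-a", "e10000-b"]))
# {'e10000-a': ['customer-care', 'number-mgmt'],
#  'e10000-b': ['payments', 'data-warehouse']}
```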

The E10000s are now used to run all VM's core applications, including its customer care application used by staff at VM's 24-hour customer service centre; its payment and mobile device number management application; and its data warehousing software.

As VM's computing requirements grow, the E10000s will be able to scale too, says Sun. The systems are shipped with more processing capacity than initially required by the customer. Companies can then buy a ‘key’ that will turn on the idle processors if they are needed.
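
The capacity-on-demand idea can be sketched as follows: processors ship disabled, and a purchased key enables them. Everything here, the class, the key operation and the counts, is a hypothetical illustration of the concept, not Sun's actual licensing mechanism.

```python
class CapacityOnDemandServer:
    """Ships with more processors installed than the customer has paid to use."""

    def __init__(self, installed_cpus: int, licensed_cpus: int):
        self.installed = installed_cpus
        self.licensed = licensed_cpus  # only these are active

    def apply_key(self, extra_cpus: int) -> None:
        """A purchased key turns on idle processors, up to what is installed."""
        self.licensed = min(self.installed, self.licensed + extra_cpus)

server = CapacityOnDemandServer(installed_cpus=16, licensed_cpus=8)
server.apply_key(extra_cpus=4)   # demand grew: buy a key for four more
print(f"{server.licensed}/{server.installed} processors active")  # 12/16
```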

This facility may not be as flexible as true utility computing offerings, which charge on a per-processor, per-hour basis, but, along with partitioning, it does give organisations greater control over their resources.