The delivery vehicles

When Information Age first published a report on utility computing in April 2002, the technology foundations of that revolutionary model – delivering computing power as a service – were well under construction. One thing that most suppliers could not produce, however, was a list of customers willing to outline their commitments to some kind of utility architecture.

A lot can change in a year. All of the key systems vendors – IBM, Hewlett-Packard (HP) and Sun Microsystems, plus a few others offering key pieces of the utility jigsaw – have released details of early adopters’ projects. Over the same period, suppliers’ product lines have matured and broadened to the point where the majority of their large customers are now investigating the promise of a much more flexible, cost-effective, efficient IT architecture.

That prompts analysts, such as Frank Gillett at Forrester Research, to advise IT decision-makers to place their bets soon. Organisations should “plan on designating a prime [utility computing] contractor by the end of 2003”, he says. But with the market barely established, reference sites only reaching the proof of concept stage, and the technology issues still not all addressed, IT management needs to get a clear understanding of the different approaches of the main protagonists. After all, such decisions will govern the shape of IT architectures for decades to come.

IBM: The ‘Blue Typhoon’

In terms of the sheer breadth and scale of its investment in this area – in the overlapping and inter-related areas of utility computing, grid computing, autonomic computing, server consolidation and on-demand computing – IBM stands way ahead of the competition.

IBM’s first major step came in early 2001, with the launch of Project eLiza, an initiative aimed at developing ‘self-healing’ environments that can protect the delivery of their services, dynamically allocating workload across server resources to ensure an always-on service. An investment commitment of more than $1 billion to this area of ‘autonomic computing’ was accompanied by much fanfare though few details on a product roadmap.

Most recently, though, IBM CEO Sam Palmisano took the whole concept of computing as an uninterrupted service one big step further. In October 2002, he announced that IBM was investing $10 billion over five years in ‘on-demand computing’, a move that would involve the creation of a new, focused business unit within the company.

While the focus at some of its rivals has been on utility computing systems – the server controllers, the provisioning tools, the virtualisation software, the workload managers – for IBM the vision of on-demand computing is equally centred on delivering IT as an outsourced service. The argument for that is simple: companies don’t generate their own electricity or pipe in their own water, so why should they treat computing power any differently?

As such, alongside products that will enable organisations to reconfigure their own IT as a utility service, a large part of the $10 billion investment is earmarked for equipping vast IBM IT powerhouses in the US, Japan and Europe.

According to Dev Mukherjee, vice president of on-demand services at IBM, utility computing should be viewed as a portfolio of services that draws on multiple products from both IBM and its business partners, rather than a single, specific product line. “Utility means different things to different people,” says Mukherjee. “Some describe it as just providing extra processing on demand. But for us it is about processes, applications and infrastructure – each of which can be sold as a service.”

This portfolio of products and services is something of a ‘wedding cake’ architecture, says Roy Cochrane, European business development executive for e-business on demand. At the very bottom of the infrastructure sits hardware and storage; on top of that, a management layer incorporating security and back-up; then a layer of applications such as customer relationship management or enterprise resource planning – all bonded together using middleware. A set of products known as the IBM Utility Management Infrastructure (developed under the codename ‘Blue Typhoon’) works in conjunction with IBM’s Tivoli systems management software to ensure that resources are allocated to the appropriate party or machine.
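
To make that allocation idea concrete, here is a minimal, hypothetical sketch of the kind of decision such a management layer automates: workloads carry a priority and a capacity demand, and the manager places each one on whichever pooled machine has the most headroom. The class and server names are purely illustrative and are not drawn from IBM’s Utility Management Infrastructure or Tivoli.

```python
# Hypothetical sketch of priority-based workload placement across a pool of
# machines. Illustrative only; not IBM Utility Management Infrastructure code.
from dataclasses import dataclass


@dataclass
class Machine:
    name: str
    capacity: int      # abstract capacity units this machine can host
    allocated: int = 0

    def free(self) -> int:
        return self.capacity - self.allocated


@dataclass
class Workload:
    name: str
    demand: int        # capacity units the workload needs
    priority: int      # lower number = more important


def allocate(pool: list[Machine], workloads: list[Workload]) -> dict[str, str]:
    """Place each workload, most important first, on the machine with the most headroom."""
    placement = {}
    for wl in sorted(workloads, key=lambda w: w.priority):
        candidates = [m for m in pool if m.free() >= wl.demand]
        if not candidates:
            placement[wl.name] = "unplaced"   # a real manager would queue or raise an alert
            continue
        target = max(candidates, key=Machine.free)
        target.allocated += wl.demand
        placement[wl.name] = target.name
    return placement


if __name__ == "__main__":
    pool = [Machine("srv-a", capacity=100), Machine("srv-b", capacity=60)]
    jobs = [Workload("erp", demand=50, priority=1),
            Workload("crm", demand=40, priority=2),
            Workload("batch-reports", demand=80, priority=3)]
    print(allocate(pool, jobs))   # {'erp': 'srv-a', 'crm': 'srv-b', 'batch-reports': 'unplaced'}
```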

To complement the technology, IBM offers industry-specific services and systems integration from IBM Global Services or IBM Business Consulting Services (formerly the consulting arm of PricewaterhouseCoopers, which IBM acquired in 2002). This top-to-bottom approach enables IBM to provide a more heterogeneous architecture than its competitors can muster, argues Cochrane – meaning that customers should be able to use systems from multiple vendors within an IBM-managed utility architecture. “IBM Global Services probably runs more technology from other suppliers than it does of ours,” he says.

At present, IBM claims to have 41 sites that could in some way be described as ‘on-demand’ customers. Most of these sites are hosted entirely by IBM at its data centres; some are early grid computing projects, where workload is spread over computers linked together over the Internet; and a small number are based on customers’ in-house systems but managed by IBM.

IBM sees both autonomic and grid computing as core components of on-demand computing. The company has already introduced some self-healing technology into its server lines as part of an initiative known as ‘X-Architecture’. And in January 2003, it unveiled ten grid computing products aimed at seeding the technology within major vertical industry sectors. Emphasising that grid goes well beyond scientific and academic applications, the company points to online broker Charles Schwab and oil company PDVSA as early users.

Given this level of activity, the momentum behind utility computing is undeniable. Dennis Gaughan, an analyst at AMR Research, believes that IBM will be remembered as starting the utility computing revolution, whether it was first to market with deliverable products or not. “IBM has generated a ton of interest in this and is really driving the market,” he says.

Hewlett-Packard: First among equals?

Hewlett-Packard claims to have been the first technology company to conceive of utility computing when Joel Birnbaum, the director of HP Labs in the late 1980s, proposed the notion of computing power that could be harnessed in a distributed, transparent fashion.

And even if others can claim to have developed the idea much further, in terms of product delivery, analyst group Gartner identifies HP’s Utility Data Center (UDC) as the first product to come to market offering the key functions of utility computing.

That was back in November 2001. Since then HP customers have put its promises to the test. “The key difference between now and November 2001 is that we now have customer sites – it’s now real. This is in sharp contrast to our competitors,” claims Ken Maxwell, director of HP’s UK Solutions Organisation. But aside from a few customers that have outsourced their infrastructure requirements to HP on an on-demand basis, such as US-based leasing company GATX Capital, HP is reluctant to name any organisations that have implemented the UDC model within their own data centres.

Nevertheless, many of the key HP components are in place. At the heart of UDC is the HP Utility Controller software. Servers, storage and related networking systems sit within a ‘virtual pool’, and Utility Controller enables administrators to allocate and reallocate computing resources to where they are needed within the organisation – all via a ‘drag and drop’ interface. Organisations can also provision computing resources between offices outside of the firewall, using a virtual local area network or virtual private network to ensure security.
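
Underneath the drag-and-drop interface, the operation amounts to moving servers between named farms within a shared pool. The short sketch below illustrates that idea only; the class, server and farm names are hypothetical and do not reflect HP’s Utility Controller APIs.

```python
# Hypothetical sketch of a 'virtual pool' of servers. The assign/release calls
# mimic what a drag-and-drop console does; names are illustrative, not HP APIs.
class VirtualPool:
    def __init__(self, servers):
        # Every server starts unassigned, i.e. spare capacity in the pool.
        self.assignment = {server: None for server in servers}

    def assign(self, server, farm):
        """Allocate a spare server to a farm (e.g. one department's web tier)."""
        if self.assignment.get(server) is not None:
            raise ValueError(f"{server} is already assigned to {self.assignment[server]}")
        self.assignment[server] = farm

    def release(self, server):
        """Return a server to the spare pool so it can be reallocated elsewhere."""
        self.assignment[server] = None

    def farm_members(self, farm):
        return [s for s, f in self.assignment.items() if f == farm]


pool = VirtualPool(["blade-01", "blade-02", "blade-03"])
pool.assign("blade-01", "finance-web")
pool.assign("blade-02", "finance-web")
# Quarter-end peak over: hand blade-02 back and redeploy it to the intranet farm.
pool.release("blade-02")
pool.assign("blade-02", "intranet")
print(pool.farm_members("finance-web"))   # ['blade-01']
print(pool.farm_members("intranet"))      # ['blade-02']
```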

To get to market quickly, HP has licensed workload management software from US start-up Terraspring as the basis for the HP Utility Controller, combining that with aspects of HP OpenView, the company’s own systems management suite. “Unlike IBM, HP found an established solution, improved it, bundled it with its own hardware, software and services and delivered value very quickly,” say Gartner analysts. Interestingly, rival Sun Microsystems has since acquired Terraspring and is using much the same technology within its Sun N1 utility architecture.

Until recently, HP UDC only supported HP hardware running HP-UX Unix and HP StorageWorks disk arrays. Since completing its acquisition of Compaq in 2002, however, HP has fleshed out its support of Intel-based servers running Windows and Linux, and during 2002 it announced UDC support for EMC disk storage and Sun servers running the Solaris operating system.

That heterogeneity is something that HP CEO Carly Fiorina wants to champion. “[UDC is] genuinely open,” she attests, “[We are] not a company masquerading as being open to get you to buy more of its proprietary offerings.”

However, Gartner warns that introducing non-HP hardware into the UDC could require major reconfiguration, adding significantly to the total cost. “Enterprises that do not already have a significant investment in HP-UX [HP’s version of Unix] will have more difficulty justifying a move to UDC,” says the analyst group.

HP’s roadmap does not stop at utility computing, however. In January 2003, the company demonstrated how users could take its Utility Controller software and link their own data centre resources to a grid computing network, enabling data centre resources to be used by grid users and vice versa.

While speed to market has certainly enabled it to gain mindshare over its competitors Sun and IBM, HP still needs to convince more customers that the upfront costs and upheaval of moving to UDC are clearly outweighed by the longer-term benefits.

Sun Microsystems: The N1 route

Sun has been the last of the three major hardware vendors to publicly embrace utility computing. By the time Sun launched its utility initiative in September 2002, IBM and Hewlett-Packard (HP) had already been touting their approaches for over a year.

But Sun has no intention of being left behind. Signalling that intent, it acquired two companies in late 2002 that develop core technologies for utility computing: Terraspring, a start-up specialising in server and network virtualisation, and Pirus Networks, whose technology enables users to view storage systems as a virtual pool.

Sun’s roadmap for utility computing comes in three phases. Over the next year, it will focus on providing the basic infrastructure for N1, while helping customers transform their existing servers, networks and storage into an aggregated pool of resources.

By mid-2003, Sun will launch a provisioning engine, which will control how computing resources and applications are spread across a virtual computing environment.

Finally, from 2004 onwards, organisations will be able to build policies that reflect their specific business requirements, and N1 will automatically manage applications and resources according to these policies. Accompanying this will be metering software that will enable organisations to monitor how computing resources are being used, and to bill departments accordingly.
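
The sketch below illustrates, in purely hypothetical terms, what policy-driven management plus metering could look like: each department’s policy sets utilisation thresholds for scaling, while a usage log accumulates server-hours for chargeback. The policy fields, rates and department names are invented for illustration and are not part of N1.

```python
# Hypothetical sketch of policy-driven allocation with usage metering for
# departmental chargeback. Policies, rates and names are invented; not N1 code.
from collections import defaultdict

# Each department declares a utilisation band and a chargeback rate.
POLICIES = {
    "finance":   {"scale_up_above": 0.80, "scale_down_below": 0.30, "rate_per_server_hour": 2.50},
    "marketing": {"scale_up_above": 0.70, "scale_down_below": 0.25, "rate_per_server_hour": 2.50},
}

usage_log = defaultdict(float)   # department -> accumulated server-hours


def apply_policy(department, servers_in_use, utilisation, hours_elapsed):
    """Meter usage, then grow or shrink the department's share of the pool per its policy."""
    policy = POLICIES[department]
    usage_log[department] += servers_in_use * hours_elapsed
    if utilisation > policy["scale_up_above"]:
        servers_in_use += 1          # claim another server from the shared pool
    elif utilisation < policy["scale_down_below"] and servers_in_use > 1:
        servers_in_use -= 1          # hand capacity back for others to use
    return servers_in_use


def bill(department):
    return usage_log[department] * POLICIES[department]["rate_per_server_hour"]


# One simulated hour: finance is busy and scales up, marketing is idle and scales down.
print(apply_policy("finance", servers_in_use=4, utilisation=0.91, hours_elapsed=1))    # 5
print(apply_policy("marketing", servers_in_use=3, utilisation=0.12, hours_elapsed=1))  # 2
print(f"finance owes {bill('finance'):.2f}")   # 4 server-hours x 2.50 = 10.00
```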

Sun stands apart from its competitors in two key regards. First, the Sun Grid Engine, a technology it acquired from Gridware back in 2000, enables several computers on a network to work on parts of a task at the same time by ‘abstracting’ the applications infrastructure – a capability that so far has not been matched by competitors.
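
The principle of several machines working on parts of one task can be shown in miniature. In the hypothetical sketch below, a local process pool stands in for networked computers: the job is carved into chunks, scattered to the workers and the partial results gathered back – the general grid pattern, not the Sun Grid Engine’s actual interface.

```python
# Minimal illustration of splitting one task across several workers in parallel.
# A local process pool stands in for networked machines; this is a generic
# scatter/gather pattern, not the Sun Grid Engine interface.
from concurrent.futures import ProcessPoolExecutor


def partial_sum(chunk):
    """The piece of work each 'node' runs independently."""
    lo, hi = chunk
    return sum(x * x for x in range(lo, hi))


def run_job(n=1_000_000, nodes=4):
    # Carve the job into roughly equal ranges, one per worker.
    step = n // nodes
    chunks = [(i * step, n if i == nodes - 1 else (i + 1) * step) for i in range(nodes)]
    with ProcessPoolExecutor(max_workers=nodes) as workers:
        partials = workers.map(partial_sum, chunks)   # scatter the chunks, gather the results
    return sum(partials)


if __name__ == "__main__":
    # Same answer as sum(x * x for x in range(1_000_000)), but computed in parallel.
    print(run_job())
```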

Second, the acquisition of Terraspring’s technology, which Sun has since renamed the N1 Virtualisation Server, enables N1 to support hardware and networking equipment from multiple vendors within a virtual pool – another area where its competitors struggle.

This latter technology allows Sun to offer N1 across a broad range of hardware – from blade servers running Windows or Linux, through high-end Sun Starfire servers running Solaris, to non-Sun hardware running other versions of Unix. And while many IT departments baulk at the higher prices of Sun’s high-specification servers, the broadening out of N1 to embrace cheaper platforms should encourage swifter adoption.

Supporting roles: the second-tier utility computing suppliers

Analysts say it. Even the vendors admit it. The emerging utility computing architecture is not a single product, nor even a single product line, but will be made up of a series of integrated products and services from multiple suppliers. Alongside the major vendors – IBM, Hewlett-Packard and Sun Microsystems – is an array of companies, from workload management specialists and blade server makers to pioneers in storage and server virtualisation, that are gearing up for what is undoubtedly a lucrative opportunity.

Systems vendors

IBM, HP and Sun may have made the most marketing noise about utility computing, but other server suppliers have not ignored the move. Dell, for example, already has some customers using large clusters of its Intel-based servers in ‘grid’ resource sharing environments. However, Dell executives say they have yet to see much demand for grid outside of compute-intensive sectors.

Another increasingly common feature of ‘on-demand’ computing environments is the blade server – a dense, thin server that can be slotted into and removed from a rack, scaling processing power without bringing down the whole system. Most of the established vendors have started to offer this type of server. But a number of relative newcomers such as Egenera, RealScale and RLX Technologies are also on the cutting edge of blade technology. All employ clustering or advanced systems management software to ensure that resources are allocated where they are needed, and that servers can be replaced with minimal disruption.

Software

By far the most crucial element of the utility computing infrastructure is the software management layer, since this virtualises, monitors and allocates computing resources. The major suppliers – IBM, HP and Sun – have all either acquired or developed some of the technologies needed in this area. But there are a significant number of other companies providing important pieces.

Storage software giant Veritas, for example, acquired two utility software start-ups in late 2002: Jareva, a server provisioning software specialist, and Precise Software Solutions, whose technology monitors and analyses web servers, application servers, databases and storage infrastructure. Services company EDS, meanwhile, acquired Loudcloud in June 2002 to make use of its Opsware resource allocation technology within the EDS services infrastructure.

There are also a handful of companies dedicated to supplying software that enables organisations to allocate and manage resources on-demand. Toronto, Canada-based Platform Computing, for example, has been developing technologies for managing distributed computing networks since it was founded in 1992. Meanwhile, Oxford, UK-based Sychron has developed products for virtualising organisations’ existing infrastructures and continuously managing the allocation of resources to applications across pools of servers.

Additionally, as grid computing networks have proved their effectiveness in academic and scientific research circles, a number of software companies have emerged with technologies for managing the distribution of computing resources across commercial grids. These include United Devices, Entropia, Avaki and DataSynapse.

Storage

Storage area networks (SANs) will be central to any utility computing deployment, because they consolidate disk capacity onto one network, making it easier to manage and improving utilisation. In order to manage storage area networks, many organisations are now looking to invest in storage virtualisation software, which treats disparate storage resources as a virtual pool so capacity can be moved around or shared according to needs.
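
The sketch below illustrates the virtualisation idea at its simplest: several physical arrays are presented as one logical pool, and a logical volume is carved out of whichever arrays have free capacity. The array names and the gigabyte-level provisioning are illustrative assumptions – real products virtualise at the block level rather than in application code.

```python
# Hypothetical sketch of storage virtualisation: disparate physical arrays are
# presented as one logical pool and capacity is carved out on demand.
# Array names are invented; real virtualisation layers work at the block level.
class StoragePool:
    def __init__(self, arrays):
        self.free = dict(arrays)   # physical array name -> free capacity in GB
        self.volumes = {}          # logical volume name -> list of (array, GB) extents

    def total_free(self):
        return sum(self.free.values())

    def provision(self, volume_name, size_gb):
        """Create a logical volume, drawing extents from whichever arrays have space."""
        if size_gb > self.total_free():
            raise ValueError("pool exhausted")
        extents, remaining = [], size_gb
        for array in sorted(self.free, key=self.free.get, reverse=True):
            take = min(remaining, self.free[array])
            if take:
                self.free[array] -= take
                extents.append((array, take))
                remaining -= take
            if remaining == 0:
                break
        self.volumes[volume_name] = extents
        return extents


pool = StoragePool({"array-east": 500, "array-west": 300, "jbod-rack": 200})
print(pool.provision("crm-db", 600))   # spans two arrays: [('array-east', 500), ('array-west', 100)]
print(pool.total_free())               # 400 GB left in the pool
```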

Most of the storage hardware vendors, such as EMC and IBM, are incorporating virtualisation software into their product line-up. Computer Associates and Fujitsu Softek, two of the major storage software vendors, also have virtualisation products. In addition, there are a number of vendors who focus solely on this area, including DataCore, StoreAge, and FalconStor.

Services

Since many of the ‘nuts and bolts’ technologies involved in building a utility computing architecture are still maturing, services provided by consulting and systems integration companies will play a central role in many early deployments. IBM’s acquisition of PwC Consulting in 2002 reflects this, while Sun has partnered with Cap Gemini Ernst & Young, Deloitte Consulting and EDS Enterprise Hosting to support its utility computing roll-out. Other systems integrators, including Accenture, Computer Sciences Corp and BearingPoint (formerly KPMG Consulting), are also building expertise in this area, as are much smaller organisations such as the UK’s Esteem Systems.

