Servers' cutting edge

In January 2001 Gary Stimac, the former head of Compaq Computer's systems division, unveiled a new venture at which he had recently been appointed CEO. "The company vision is crystal clear," he said. "Our goal is to redefine server economics."

Stimac’s company is RLX Technologies, and roughly six months after he announced its plans to the world, the Houston, Texas-based start-up duly launched what were widely agreed to be the world’s first blade server products.

At the time, the claims Stimac made for blade servers were exactly what many server buyers wanted to hear. It was the height of the dot-com boom, and data centre managers everywhere were struggling to install enough machines to satisfy spiralling demand for email servers, web site platforms and other online applications. Most barely had enough trained staff to manage existing platforms, never mind bring new ones online, and some were simply running out of space to house new machines. In the most extreme cases, booming server installations were consuming so much electricity that utility companies were struggling to keep up.


Blade benefits

Space Saving – Server Density

Originally designed to meet the needs of heavy server users, blades optimise the processing power that can be squeezed into expensive data centre real estate by breaking traditional server architectures into modular components – processors, I/O, network interfaces and storage. These components are assembled within standard 19″ racks using proprietary chassis that act as backplanes, connecting the components together.

Power Consumption

First generation blade servers, such as RLX's Transmeta Crusoe-based blades, often used low-power, relatively cool-running chipsets originally designed for laptops. Second generation blades, which typically use dual Intel Xeon processors, draw more power and require more cooling, but they are still relatively power efficient compared with conventional servers.

Physical Management

Blades are relatively inexpensive, and their shared network and power connections allow faulty boards to be ‘ripped and replaced’ in minutes.

Logical Management

The modular nature of blade servers makes them extremely flexible. Using proprietary management software, individual blades can be provisioned as required, using operating system and applications images downloaded from a central controller. This means server utilisation, which can typically be as low as 30% in conventional centres, can be increased to 50% or greater, and processor resources redistributed to different applications at different times with minimal effort, making IT resources far easier to map to business requirements.
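The provisioning model described above can be sketched in code. The following is a minimal, illustrative sketch only – the class and method names (`BladeController`, `provision`, `release`) are hypothetical and do not correspond to any vendor's actual management software:

```python
# Hypothetical sketch of image-based blade provisioning: a central
# controller holds OS/application images and assigns them to free blades
# on demand. Illustrative only, not a real vendor API.

class BladeController:
    def __init__(self, blade_ids, images):
        self.free = set(blade_ids)    # blades with no workload assigned
        self.images = images          # image name -> OS/application image
        self.assigned = {}            # blade id -> image name

    def provision(self, image_name):
        """Assign the named OS/application image to a free blade."""
        if image_name not in self.images:
            raise KeyError(f"no such image: {image_name}")
        if not self.free:
            raise RuntimeError("no free blades available")
        blade = self.free.pop()
        self.assigned[blade] = image_name
        return blade

    def release(self, blade):
        """Return a blade to the free pool for reuse by another workload."""
        del self.assigned[blade]
        self.free.add(blade)

    def utilisation(self):
        """Fraction of the blade pool currently running a workload."""
        total = len(self.free) + len(self.assigned)
        return len(self.assigned) / total


# Example: a 14-blade chassis with two stored images.
controller = BladeController(range(14), {"web": "web-image", "mail": "mail-image"})
web_blade = controller.provision("web")
```

Because blades are interchangeable and images live centrally, redistributing capacity between applications is a matter of `release` followed by `provision` rather than physical reconfiguration – which is the point of the utilisation gains claimed above.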

Total Cost of Ownership

The space, power consumption and manageability characteristics of blades can add up to significant savings where there is demand for a large volume of servers. Last year, IDC estimated that companies that migrated from rack-optimised servers to blades reduced their average three-year cost of ownership by 48%. Those companies that invested in the most sophisticated blade systems also enjoyed average total cost of ownership benefits of some 65%.



Clearly, the time was ripe for a technology that would allow more processors to be crammed into less space, while consuming less power and requiring fewer management resources. Blade servers were designed to be exactly that product, and RLX could reasonably have expected shortly to become one of the hottest properties since Compaq itself rewrote the rules of the hardware business in the late 1980s.

Then the bubble burst, and a world that had once had too few servers suddenly discovered that it had far too many.

Three years on, RLX is still a privately held company and has seen its first-mover advantage diluted by a series of launches from rivals that now include all the server market's major manufacturers. However, even if it now seems unlikely that RLX will remake the hardware industry in its own image, the technology it pioneered may very well still redefine the economics of the server business.

Indeed, although today, according to IDC analyst Dan Fleischer, blade server sales still constitute a tiny part of the overall market, this situation is likely to change quickly. In the past two years, even though the economic climate has depressed server sales in general, “we have seen a lot of organisations evaluating blades, and a lot of pilots,” he says.

Sharp rise

Now, with growth coming back into the server market, customers that have been impressed with their early experiences are beginning to invest in blades in earnest. The segment is emerging as the fastest growing of the server market: IDC says blades will account for 27% of all server shipments by 2007.

Certainly, the operational and economic benefits that blades appear to offer (see box, Blade benefits) fit well with today's climate of server procurement prudence. According to Fleischer, although there is pent-up demand for new servers, with many machines acquired in the run-up to Y2K now significantly past their sell-by dates, CIOs and IT managers are not working with an open chequebook. And for an increasing number of applications, the return on investment for blades stacks up against conventional alternatives.

This is certainly true when server consolidation is at the top of the IT agenda. According to Una Du Noyer, Capgemini’s head of infrastructure, security and mobile technology services, this has become one of the chief drivers in the server market, and a major opportunity for blade server vendors.

“Companies are trying to make it easier and less expensive to administer servers, and to better share and utilise those resources by consolidating them. Once you start to do that, the issue is server density. You need a lot of space to put all of your servers back into the data centre,” she says.

This is exactly the problem blades are designed to solve. Compared with today's 'rack-optimised' systems (where relevant components – CPU, disks, routers and so on – can be loaded in separate racks in a free-standing, upright chassis), blades typically require half as much space and consume considerably less power. Compared with 'pedestal systems' (the conventional closed, boxed server assembly), blade server density can be as much as five times greater. Add in the potential longer-term savings to be had from halving power consumption and greatly increasing the number of servers that can be comfortably managed by a single data centre operator, and blades can be a compelling proposition.
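The density comparison above can be made concrete with some back-of-envelope arithmetic. The figures below are illustrative assumptions (a standard 42U rack, 1U rack-optimised servers, and a hypothetical 7U chassis holding 14 blades); real chassis capacities vary by vendor:

```python
# Worked example of blade density versus rack-optimised servers,
# using assumed figures: a 42U rack, 1U rack-optimised servers, and a
# hypothetical 7U blade chassis holding 14 blades.

RACK_UNITS = 42

# One 1U server per rack unit.
rack_optimised_per_rack = RACK_UNITS // 1          # 42 servers

# Six 7U chassis per rack, 14 blades each.
blades_per_rack = (RACK_UNITS // 7) * 14           # 84 servers

density_ratio = blades_per_rack / rack_optimised_per_rack
print(density_ratio)    # 2.0 -> twice the servers, i.e. half the space
```

Under these assumptions a blade estate delivers the same server count in half the floor space, consistent with the rough figure quoted above; against pedestal systems, which occupy far more than 1U of equivalent space each, the ratio widens further.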

However, “blades are not a panacea. Every company should prepare a careful business case,” says Du Noyer. Those companies that have done the due diligence and invested in first generation blades have typically been looking for a cost-effective platform for tier-one distribution applications such as web or email serving, or for thin client applications.

The blade generations

First generation blades, which are often based on relatively low-powered chipsets, can be ideal for this class of applications. Such applications require relatively little computing power, but need to easily scale up and down to meet fluctuating end-user demand. And while vendors may argue the point, there is really little to choose between the blades themselves in technical terms.

This is not to say that there are no strategic issues to consider when selecting a blade platform. Blades may be based on standard, commodity components, but the chassis that hold them are proprietary and dictate which second and third generation blades will be available to the user. Even more important, such chassis are governed by proprietary management systems that determine the level of control that administrators apply to blade components.

At the moment, IBM, with 47.1% of the market, and Hewlett-Packard (37.2%) dominate the Intel-based blade server market in western Europe, according to IDC, while Fujitsu Siemens has an 8% share. Dell is placed a surprisingly distant fourth with only 5.2%, reflecting what has so far been a lacklustre commitment to a low-volume market from a high-volume manufacturer. (Such figures ignore Sun Microsystems, whose blade strategy is focused on its own Sparc processor.)

In most other respects, the market shares resemble the state-of-play in the overall server market, and suggest that many early blade users have taken a ‘devil you know’ approach to product selection.

This may change now that blades are entering their second generation with the introduction of more powerful Intel Xeon dual-processor and quad-processor boards. These products are proving attractive platforms for second- and even third-tier applications, and lend themselves to high-performance cluster configurations.

Already, according to Joseph Reger, Fujitsu Siemens' chief technology officer, users are starting to build systems that combine 'ultra-dense' single-processor boards with higher performance multi-processor boards in the same racks. This allows an application's first-tier distribution front-end processing, and its second-tier application logic and even third-tier database processing, to be executed against a common set of hardware resources.

It also presents an opportunity to vary the proportion of those resources that are dedicated to particular aspects of overall application processing at any given time – allowing more blades to be allocated to first-tier distribution when end-user demand is high, or reallocating them to more compute-intensive batch processes when end-user demand is low. In this way, instead of having three separate systems running at an average utilisation level of 30% or lower, one virtual resource can be driven at 50% or higher.
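The consolidation arithmetic behind those utilisation figures can be sketched as follows. All the workload numbers here are invented for illustration; the structure of the calculation is what matters:

```python
# Back-of-envelope illustration of the pooling argument above: three
# separately provisioned tiers, each sized for its own peak, versus one
# shared blade pool sized for the combined peak. All figures are assumed.

# (peak, average) load per tier, in "blades' worth" of work.
tiers = {
    "web front end": (10, 3.0),
    "app logic":     (8, 2.5),
    "database":      (6, 1.7),
}

# Separate systems: each tier must own enough blades for its own peak.
separate_capacity = sum(peak for peak, _ in tiers.values())    # 24 blades
average_load = sum(avg for _, avg in tiers.values())           # 7.2

# Shared pool: the tiers peak at different times of day, so the pool can
# be sized for a combined peak well below the sum of individual peaks
# (assumed here to be 14 blades).
pooled_capacity = 14

separate_utilisation = average_load / separate_capacity   # ~0.30
pooled_utilisation = average_load / pooled_capacity       # ~0.51
```

The same average workload, served from one pool instead of three silos, moves utilisation from roughly 30% to just over 50% – precisely the shift described above.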

This will require sophisticated hardware engineering. "Systems availability will be key," says Reger, and this will come down to the amount of redundancy that can be accommodated within blade cages and chassis without compromising on server density. However, hardware design is less crucial to the future utilisation of blades than the management systems that will be required to optimise them by mapping their use against fluctuating business demands.

Optimal running

Certainly, management systems are now the chief focus of blade manufacturers’ research effort. IBM’s autonomic computing work, Hewlett-Packard’s adaptive enterprise development and Sun’s N1 technology, though not specifically blade-focused, are all multi-million dollar programmes geared to automating and virtualising system resources. All are also far from being finished, and there is still room for smaller innovators to play a role in the market.

RLX, for instance, may just have 1.5% of the western European blade market (IDC), but as Fleischer points out, its Control Tower software is specifically designed to manage blade systems, and has gone through several more generations than equivalent products from its bigger rivals. Egenera, another four-year-old blade start-up, also has an established track record with its BladeFrame and Pan Manager software, and has found favour at several major financial customers, including JP Morgan Chase and Credit Suisse First Boston.

These leading-edge blade makers, alongside specialist storage, network and systems management suppliers, all have contributions to make to the equation. Yet the most influential contribution will come from application vendors.

According to Reger, “something that this industry doesn’t talk about enough is the role that applications play [in virtualisation].” Dr John Manley, director of Hewlett-Packard’s Bristol-based utility computing laboratory, agrees. He believes that more application developers need to be involved with standards processes, such as those of the Open Grid Forum, that are currently dominated by systems builders.

So far, Oracle is the only application vendor to explicitly address the virtues of blades and hardware virtualisation in the market. Its Oracle Enterprise Manager 10g software is, according to Bob Shimp, the company's VP of technology marketing, the only product that can currently automatically discover, provision and configure applications across virtual system resources, including blades.

Such claims, predictably, are treated with some scepticism in the systems community. Reger, for instance, believes that it is still necessary to customise the integration of applications with virtual resources on an application-by-application basis. His company, for instance, has worked with SAP to closely integrate its MySAP business suite with Fujitsu’s SysFrame blade servers.

The result of this effort, Fujitsu Siemens' FlexFrame product, is the first implementation of a complete application stack on top of a blade management system. The integration has actually involved minimal changes or extensions to SAP, but it has allowed the SysFrame infrastructure to be more in tune with how MySAP works. In this way, the various services encompassed in a suite like MySAP can be allocated, de-allocated and reallocated to SysFrame resources dynamically, optimising the performance of the overall application suite, at the same time as maximising the utilisation of available hardware.
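The dynamic allocation idea can be sketched in miniature: services from an application suite are mapped onto a shared blade pool in proportion to current demand, and the mapping is simply recomputed as demand shifts. This is an illustrative sketch only, not how FlexFrame itself is implemented:

```python
# Hypothetical sketch of demand-driven blade allocation: split a fixed
# pool of blades across services in proportion to their current demand.
# Illustrative only; not FlexFrame's actual algorithm.

def allocate(pool_size, demand):
    """Split pool_size blades across services in proportion to demand."""
    total = sum(demand.values())
    # Proportional share, rounded down per service...
    alloc = {s: pool_size * d // total for s, d in demand.items()}
    # ...then hand any leftover blades to the busiest services first.
    leftover = pool_size - sum(alloc.values())
    for s in sorted(demand, key=demand.get, reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc


# Daytime: interactive front-end load dominates.
daytime = allocate(20, {"web": 60, "app logic": 30, "batch": 10})

# Overnight: the same 20 blades are reallocated towards batch work.
night = allocate(20, {"web": 10, "app logic": 20, "batch": 70})
```

Recomputing the split as demand changes is what lets one pool of hardware serve web, application-logic and batch workloads in turn, rather than each workload owning idle capacity around the clock.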

As products such as FlexFrame become more commonplace, the role that blades play in supporting them may come to be overlooked. In the virtual data centre of the future, after all, administrators will no longer have responsibility for running discrete hardware systems. Instead, according to visionaries like Reger and Manley, they will become responsible for managing whole composite applications, ensuring that they are receiving their due share of the common pool of systems resources that the data centre contains. This vision is still somewhere in the future, but one of the first steps along the path to realising it will likely be blade-based.


Blades vs servers: 3-year server cost comparison
Source: IBM (*IBM eServer 325)

Ben Rossi

