Grid’s evolution

Grid computing is not an unproven technology. In academic research, in science and engineering, and for the compute-intensive applications of select industries, grid is already widely exploited. But the limitations that have held back mainstream adoption are now being addressed, with the promise of much more efficient use of low-cost processing resources.

That can’t come soon enough, says Nick Werstiuk, director of product management at Platform Computing. Today’s IT departments face pressure to cut operating costs while improving performance; business units demand faster, more powerful applications; organisational leaders want innovation. Satisfying those demands is extremely difficult, he says, given the complex and inefficient silo-based infrastructure that most organisations have to deal with.

Such an infrastructure had once seemed the only option, says Werstiuk, as IT directors were forced to plan for “unpredictable and potentially infinite demand” on computing resources. The result was to build for peak demands, using systems dedicated to a single application or department.

Nick Werstiuk

As product manager for new ventures at grid technology pioneer Platform Computing, Nick Werstiuk leads a group dedicated to ensuring cutting-edge developments are turned into commercial successes. His 17 years’ experience in enterprise infrastructure solutions has previously focused on customer adoption of new products at companies such as Divine, eAssist Global Solutions and Delano Technology.

Speaking at the Future of the Data Centre 2006 conference, Werstiuk outlined how that meant most businesses were massively “over-provisioned” – an uneconomic situation at a time when the business is demanding efficiency.

So is the alternative to share resources using a grid or utility model? Early adopters suggest that grid is the way forward, he says. Grid has worked well in instances where select applications have needed processing scale – in investment banking, telecoms, gaming and insurance.

“But we cannot assume more mainstream business tasks will be simply switched to grid computing overnight. The process of decoupling applications from underlying resources is a significant barrier,” Werstiuk says.

As organisations move to supporting business processes with components drawn from separate applications, building up the necessary palette of services, the way applications are delivered will change, says Werstiuk.

“Businesses are going to need to build a service-oriented infrastructure to support this model. In other words: taking all enterprise applications and running them on a virtual computer.”

The first steps along this road involve changes at three layers: the business process layer, the application layer and the infrastructure layer.

Applications need to be “broken down” into flexible components, which can be reconnected – using web services and standard protocols – in a loosely coupled manner, with services reused where appropriate, explains Werstiuk.
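
To make that concrete, the sketch below shows one way such a “broken down” component might look in practice – a hypothetical credit-check service exposed over plain HTTP and JSON, with a consumer that depends only on the published contract rather than on any particular implementation or machine. The names and logic are illustrative only, not Platform Computing’s software.

```python
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class CreditCheckHandler(BaseHTTPRequestHandler):
    """A hypothetical 'credit check' component exposed as a JSON-over-HTTP service."""

    def do_GET(self):
        # Stand-in business logic: approve customers with an even id.
        customer_id = int(self.path.rstrip("/").split("/")[-1])
        body = json.dumps({"customer": customer_id,
                           "approved": customer_id % 2 == 0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, format, *args):
        pass  # keep the demo output quiet


if __name__ == "__main__":
    # Run the component in the background; in a grid it could live on any machine.
    server = HTTPServer(("localhost", 8080), CreditCheckHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    time.sleep(0.5)

    # The consuming business process knows only the contract (URL + JSON),
    # not the implementation or the hardware behind it.
    with urlopen("http://localhost:8080/credit-check/42") as response:
        print(json.load(response))  # {'customer': 42, 'approved': True}
```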

Once the application layer has been transformed into this agile architecture, virtualisation technologies will uncouple application execution from any specific piece of kit.

Notwithstanding the advantages of such a development, there is likely to be internal resistance to these changes, he warns: “Most organisations are scared by the prospect,” as business units want to retain control over the specific parts of the infrastructure they feel they ‘own’.

The big picture might sound compelling, but the day-to-day demands on most IT organisations mean that such a momentous transformation will not happen overnight. “People don’t call us up and say ‘we want to build a grid to run hundreds of applications’. It’s not that sort of model,” says Werstiuk. The IT department needs to stage the transition, he says, but businesses will derive more value the further they are willing to move towards a grid model.

One of the biggest challenges will be to develop robust policy models that can guarantee appropriate levels of access to the shared services for line-of-business managers. The chief financial officer will not be impressed if, when it comes to running the payroll, there is insufficient capacity because the marketing organisation has grabbed all the resources to run a mass email campaign.
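
The sketch below illustrates the sort of policy model being described – the unit names, guarantees and priorities are entirely hypothetical, not Platform’s scheduler. Each business unit is assured a guaranteed minimum share of a pool of compute slots, and only the spare capacity is contested, with the most critical workloads served first, so a payroll run cannot be starved by a marketing campaign.

```python
from dataclasses import dataclass

TOTAL_SLOTS = 100  # size of the shared pool (assumed figure)

# Guaranteed minimum slots per business unit (assumed policy numbers).
GUARANTEES = {"finance": 30, "marketing": 10, "engineering": 20}


@dataclass
class Request:
    unit: str       # business unit asking for capacity
    priority: int   # lower number = more critical workload
    wanted: int     # slots requested


def allocate(requests):
    """Honour each unit's guaranteed minimum first, then hand the
    remainder to the most critical workloads."""
    grants = {}
    free = TOTAL_SLOTS

    # Pass 1: every unit gets its guaranteed share (up to what it asked for).
    for req in requests:
        guaranteed = min(req.wanted, GUARANTEES.get(req.unit, 0), free)
        grants[req.unit] = guaranteed
        free -= guaranteed

    # Pass 2: spare capacity goes to the most critical requests first.
    for req in sorted(requests, key=lambda r: r.priority):
        extra = min(req.wanted - grants[req.unit], free)
        grants[req.unit] += extra
        free -= extra

    return grants


if __name__ == "__main__":
    demand = [Request("marketing", priority=5, wanted=90),  # mass email campaign
              Request("finance", priority=1, wanted=40)]    # payroll run
    print(allocate(demand))  # {'marketing': 60, 'finance': 40} - payroll gets all it needs
```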

As such, there is likely to be a considerable amount of horse-trading behind the establishment of policy models. That’s inevitable, says Werstiuk, “but this is a management challenge, not a technical one.”

Cautionary notes

Looking to the point where the deployment of grid technology broadens across the enterprise, he sees benefits in terms of both cost reductions and increased flexibility. But IT leaders need to be aware of some of the risks too.

“Firstly, this is not a zero-sum game. There will be some upfront costs,” says Werstiuk.

Some early adopters have shown that the investment can pay back in as little as one year; others have found that the improvements in organisational agility have made a definite long-term difference.

For now, though, deployments are taking place entirely behind the corporate firewall, allowing companies to rely on existing security checks. The ultimate step would be to tap into external as well as internal grids, secured through the use of virtual private networks or encryption.

“We are only at the start,” says Werstiuk. Commercial grids have sprung up to support the science and engineering functions of business; other parts of the organisation will not be far behind.

Ending mainframe dependency

The IBM mainframe remains the unrivalled champion of large data centres, despite the arrival of new technologies such as blade servers. In financial services, telecoms, utilities and many other industries that demand high performance, reliability and durability, mainframes are still the mission-critical workhorses.

The drawback is that they are expensive and rely on a relatively inflexible, proprietary architecture. “There’s a trade-off you make: proprietary mainframe technology comes at a price,” says Christian Reilly, product marketing director at Platform Solutions.

Over the years, there have been several attempts to address some of those downsides. IBM itself enables users to run Linux and Unix alongside the native z/OS operating system. But one of the most ambitious plans has emerged from Platform Solutions.

It has developed patented microcode technology that enables z/OS to run on standard, open-system Intel Itanium 2 processors – alongside Unix, Linux and Windows.

Mainframe users are increasingly looking for ways to bring the value of open systems into their data centre, says Reilly, so they have a choice to run workloads on Unix, Windows or Linux operating systems, and not just IBM’s z-series mainframe operating system.

As technologies – such as those from Platform Solutions – improve, users have the option to move mainframe applications onto an open-system alternative without having to change the applications themselves. Platform’s product supports all the IBM mainframe extras, such as FICON for connectivity over distance, network-attached Virtual 3270 and Fibre Channel devices. That is perhaps not surprising, as it is based on technology acquired from Fujitsu’s now-defunct IBM plug-compatible mainframe unit, Amdahl.

The notion that companies may choose to retire mainframe systems has never become a reality, says Reilly – until now…

 
