Imperial-led project aims to unlock processor diversity in the cloud

One reason why cloud providers can keep their prices down is their use of commodity hardware. 

Not only does this mean that hardware costs are low; because their IT environments are "homogeneous" – i.e. built from standard kit – they can also automate systems management and keep HR costs down.

However, there are some applications that require non-commodity hardware; systems that have very high performance requirements, for example. 

At the moment, the cloud model is not well suited for this kind of application, as it is not in the cloud providers' interest to introduce "heterogeneity" into their data centres.

But an EU-funded project, led by Imperial College London and incorporating German applications maker SAP, hopes to change that. 

According to Professor Alexander Wolf, of Imperial's Department of Computing, one of the principal drivers of the HARNESS project is the growing use of in-memory computing. 

"The main advantage of in-memory databases (IMDBs) is reducing the times it takes for data to move from storage to processing," he explains. 'If all you need to do is simply read the data from the database, it can be very fast. 

"But writing the data can be slow," Wolf adds. "That means that if you have a real database, and you want to update parts of it incrementally, the way in which your data is organised can break down over time."

This problem can be addressed by accelerating certain database operations, Wolf says, and that is done using non-standard processors such as GPGPUs (general-purpose graphics processing units) or FPGAs (field-programmable gate arrays – processors that can be programmed for a specific purpose).

But the cloud providers' need for homogeneity makes it uneconomic for them to offer access to this kind of processing power at scale. "The commodity data centre does not offer resources that can be used to effectively carry out this kind of acceleration."

The purpose of HARNESS is to develop cloud platform technology that providers could use to incorporate these non-standard processors into their data centres, while preserving the economic benefits of the cloud model. 

As it happens, Wolf explains, Amazon Web Services does rent out servers based on GPGPUs, as part of its high-performance computing (HPC) suite of offerings. But it is up to the user to figure out how to build an application that makes use of their capabilities. 

"One goal of HARNESS is to figure out ways to virtualise these new technologies, and to manage them in a cloud environment."

At the end of the three-year project, Wolf hopes to have developed software techniques that would allow cloud providers to offer their customers various deployment options based on the performance requirements of their application. 

"The cloud providers could say, give us some variants of your application, tell use your performance requirements, and we'll figure how to allocate resources in our data centre to those applications."

Longer term, this avenue of research could one day lead to cloud environments that automatically detect the best kind of processor for a given workload. "That would be a wonderful goal, but it's a long-term horizon," says Wolf. 

SAP's interest in the project is clear. In January, the company announced the availability of its HANA in-memory database for its Business Suite ERP application portfolio. 

Many businesses are keen to put their ERP applications into the cloud, but are concerned about application performance (among other issues). Unlocking HANA's potential in the cloud could conceivably remove that particular stumbling block.

Pete Swabey
