When the Amazon Web Services public cloud computing offering first emerged, it sparked excitement in the industry and more than a few copycats.
The ability to scale hosted infrastructure up and down at the touch of a button suggested a future in which IT expenditure would move in lockstep with demand. Some even imagined a day when organisations would own no IT infrastructure of their own.
Now, though, the many caveats to public cloud computing are better understood. Concerns about security, regulatory compliance and liability in the event of disaster, combined with a more subtle understanding of the cost implications, mean that while there is clearly a time and place for the public cloud, it is not suitable for every workload.
Meanwhile, the concept of a private cloud – a highly virtualised internal infrastructure that can be operated like a company-specific utility service – has its attractions, but on its own does not remove the need to provision infrastructure to meet peak demand. It was this characteristic that had so many organisations excited about public cloud computing in the first place.
Inevitably, a compromise has been proposed in the form of ‘hybrid clouds’. Put simply, this involves creating a highly virtualised internal infrastructure that can be augmented with public cloud resources – from one or more external providers – as and when they are needed. These resources would be integrated seamlessly and perhaps even provisioned automatically.
For the IT industry, this compromise is ideal. Customers will still need to invest in the hardware, software, services and consultancy required to build their own private clouds, but will also pay for hosted resources when they need
them most. But what about the end-user organisations? Does this compromise represent the best of both worlds, or will the cost of building and managing such a complex infrastructure cancel out any potential benefits?
IT infrastructure suppliers clearly see hybrid cloud computing as a potential money-spinner. Hewlett-Packard, Microsoft and VMware all have products designed to help organisations to extend their internal infrastructure onto public cloud platforms.
Open source infrastructure vendor Red Hat is pursuing the hybrid cloud opportunity. In September 2009 it launched Deltacloud, a system that allows organisations to tie together various cloud-based systems through a single interface. The aim is “to enable an ecosystem of developers, tools, scripts and applications which can interoperate across the public and private clouds,” the company said at the time.
One customer that uses Red Hat’s cloud infrastructure to benefit from both internal and external resources is digital animation studio DreamWorks. “When they have to get their movies ready, they have huge requirements in terms of compute power for rendering the films,” explains Red Hat’s European general manager, Werner Knoblich.
DreamWorks meets these requirements with a combination of internal and external systems. “They have different internal clouds where they can easily move resources from one virtual cluster to the other,” he explains. “Then out of the same user interface they can go out to Amazon and just provision another 1,000 or so machines to render the film.”
Another commercial open source software vendor operating in this space is Eucalyptus. A start-up spun out of the University of California, Eucalyptus sells a ‘fabric controller’ application that customers use to monitor and provision virtual machines (VMs). The system allows users to provision VMs on internal infrastructure or on Amazon’s Elastic Compute Cloud (EC2) from the same console.
According to Eucalyptus founder and CTO Dr Rich Wolski, however, few organisations are using the system for true ‘cloud bursting’, i.e. moving IT workloads onto public cloud infrastructure when demand outstrips internal capacity.
“In the future, cloud bursting will be one of several things that people will want to do with hybrids – and it will be significant,” he says. But he adds that a truly dynamic hybrid cloud infrastructure would be one that automatically provisions public or private infrastructure depending on demand, and that would be fraught with difficulties.
“You would have put a machine in charge of two things that you care about very, very deeply – privacy and money,” he explains. If through some policy error such a system moved sensitive data outside the firewall, the damage could be “irrevocable”, Wolski says. “Most people in the excitement phase of hybrids don’t consider that.”
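Wolski’s two concerns – money and privacy – can be made concrete with a toy policy sketch. The `Workload` fields, function names and thresholds below are purely illustrative assumptions, not any vendor’s actual API: the point is simply that a bursting rule must check data sensitivity before capacity.

```python
# Toy cloud-bursting policy: burst to the public cloud only when internal
# capacity is exhausted, and never move sensitive workloads off-site.
# All names and fields here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    vms_needed: int
    contains_sensitive_data: bool

def place_workload(workload: Workload, internal_free_vms: int) -> str:
    """Decide where a workload runs: 'internal', 'public' or 'blocked'."""
    if workload.vms_needed <= internal_free_vms:
        return "internal"          # the private cloud has room
    if workload.contains_sensitive_data:
        return "blocked"           # never burst sensitive data off-site
    return "public"                # safe to rent external capacity

render_job = Workload("film-render", vms_needed=1000, contains_sensitive_data=False)
payroll = Workload("payroll", vms_needed=200, contains_sensitive_data=True)

print(place_workload(render_job, internal_free_vms=300))  # public
print(place_workload(payroll, internal_free_vms=50))      # blocked
```

Wolski’s warning is about what happens when the sensitivity flag, or the rule that checks it, is wrong: the automation will burst the data anyway, and the damage is done before anyone looks at a dashboard.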
Despite these reservations, Wolski accepts that businesses will be attracted to the idea of having a hybrid environment. That means it will soon become the IT department’s responsibility to manage workloads across multiple cloud infrastructures. “It will be the IT manager’s job to combine the infrastructure that you maintain and the infrastructure that you rent across the Internet in the combination your business requires,” he says.
An ecosystem of clouds
At the moment, the only public cloud offering that Eucalyptus’s fabric controller supports is Amazon Web Services. The ideal scenario for end-user organisations, however, is to be able to draw on multiple public cloud services with different capabilities, cost profiles and security levels.
There are a number of factors holding this scenario back. One is the glaring absence of interoperability standards between public cloud service providers. Many of the industry players have tried to forge de facto standards of their own. Red Hat, for example, has proposed Deltacloud as the industry standard interface between public and private cloud systems. Web hosting provider Rackspace, meanwhile, recently announced an open source cloud infrastructure stack that could ease interoperability – if it happens to be widely adopted.
“The vendors are beginning to feel the pressure in terms of interoperability,” says Mike Rosen, CEO of US analyst group Cutter Consortium, as it dawns on customers that they risk being locked in to public cloud offerings. But the proposed standards will not be agreed or implemented overnight, Rosen adds. “Typically, the process takes two or three years for those standards to emerge, a year or two for companies to adopt them, and then another two or three years for them to actually work.”
Another barrier to a hybrid environment that includes multiple public cloud services would be the challenge of managing such a highly distributed system. What is needed is a systems management platform that can monitor and control both internal and external cloud systems with equal efficacy, says Ken Owens, vice president of security and virtualisation at outsourced infrastructure provider Savvis.
“In the true hybrid model, a company’s workloads are being moved to a provider’s environment, but the assets are still owned by the customer; and they want to be able to manage and patch those assets, as well as get real-time application performance measurements,” he says. “What they don’t want to have to do is look at their dashboard and a service provider’s dashboard and try to fill in the gaps.”
An infrastructure manager needs “a single pane of glass” through which to monitor the entire hybrid cloud environment, Owens explains, but this is not generally offered by the public cloud providers themselves. That need is instead being met by third parties known as cloud brokers, he says. “These cloud brokers are aware of the internal and external infrastructure, and they know how to talk to the cloud providers. They can give you policy and governance rules on when you deploy workloads and to which clouds,” he explains.
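The kind of “policy and governance rules” Owens describes can be sketched as a simple routing table. The provider names, security tiers and prices below are invented for illustration – no real broker exposes exactly this interface – but the shape of the decision is representative: route each workload to the cheapest cloud that satisfies its policy requirements.

```python
# Hypothetical cloud-broker routing rule: each provider is tagged with a
# security tier and an hourly VM price, and a workload goes to the
# cheapest provider that meets its required tier. All data is invented.

PROVIDERS = [
    {"name": "provider-a", "security_tier": 1, "price_per_vm_hour": 0.085},
    {"name": "provider-b", "security_tier": 2, "price_per_vm_hour": 0.12},
    {"name": "internal",   "security_tier": 3, "price_per_vm_hour": 0.20},
]

def choose_provider(required_tier: int) -> str:
    """Return the cheapest provider whose tier is at least required_tier."""
    eligible = [p for p in PROVIDERS if p["security_tier"] >= required_tier]
    if not eligible:
        raise ValueError("no provider satisfies the policy")
    return min(eligible, key=lambda p: p["price_per_vm_hour"])["name"]

print(choose_provider(1))  # provider-a: cheapest of all three
print(choose_provider(3))  # internal: the only tier-3 option
```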
The services these cloud intermediaries provide include consulting on public cloud selection, integrating data sources and advising on how to spread data across multiple clouds to minimise security risk. Vendors in this nascent space include CloudSwitch, RightScale and CloudKick, while IT services providers including Capgemini and Logica also now offer cloud brokerage services.
However, these brokers are themselves hampered by a lack of cloud interoperability standards, Owens says. “Today, these cloud brokers have to go through a manual certification process [with public cloud providers],” he explains. “It’s a one-off design to connect to each cloud.”
A third challenge to the viability of hybrid cloud environments is network latency. The more geographically diverse a hybrid environment is, the more susceptible it is to network lag. This can be addressed with technology such as WAN acceleration, but this is yet another financial burden that jeopardises the potential cost benefits of the exercise.
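A back-of-envelope calculation shows why latency, not bandwidth, is often the problem a WAN accelerator is bought to solve. The request count and round-trip times below are hypothetical but typical orders of magnitude: an application that makes sequential round trips pays the full RTT each time, however fast the link.

```python
# Illustrative only: how latency dominates for chatty, sequential traffic.
# A protocol making 1,000 round trips in sequence pays the RTT 1,000 times.

requests = 1_000        # sequential round trips between the two sites
lan_rtt_s = 0.0005      # 0.5 ms within a data centre (assumed)
wan_rtt_s = 0.080       # 80 ms across a wide-area link (assumed)

print(f"LAN total wait: {requests * lan_rtt_s:.1f} s")  # 0.5 s
print(f"WAN total wait: {requests * wan_rtt_s:.1f} s")  # 80.0 s
```

The same workload that completes in half a second inside one data centre spends well over a minute waiting on the network once it straddles a WAN, which is precisely the gap acceleration products are sold to close.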
Is it worth it?
Evaluating whether the flexibility that a hybrid cloud allows justifies the associated infrastructure and management costs, information risks and supplier management challenges is a complex task.
Some experts question whether the model is really likely to bring that much flexibility. Hamish Macarthur, founder of enterprise architecture analyst company Macarthur Stroud, argues that while the ability to scale systems according to daily fluctuations in demand sounds attractive, the cost savings available when a customer commits to a fixed-term contract are even more compelling. “If you buy on a daily basis, things tend to cost a lot more,” Macarthur says.
Amazon Web Services’ own price list corroborates this claim. According to company literature, provisioning 45 EC2 virtual instances on a three-year reserved term costs $31,600 per year. Renting the same infrastructure on demand would cost $58,000 a year – almost twice as much.
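The arithmetic behind that comparison is worth spelling out. Only the two annual figures come from the article’s cited price list; the ratio and saving are simple derivations from them.

```python
# Cost comparison from the cited AWS figures: 45 EC2 instances,
# three-year reserved term versus on-demand, both expressed per year.

reserved_annual = 31_600    # USD per year on a three-year term
on_demand_annual = 58_000   # USD per year on demand

ratio = on_demand_annual / reserved_annual
savings = on_demand_annual - reserved_annual

print(f"On-demand costs {ratio:.2f}x the reserved price")  # 1.84x
print(f"Committing saves ${savings:,} per year")           # $26,400
```

At roughly 1.84 times the committed price, “almost twice as much” is accurate, and it is this gap that undercuts the economics of buying capacity only for short-lived peaks.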
This being the case, it seems unlikely that the hybrid cloud model – in its purest form – will be sufficiently beneficial for many companies to adopt, at least until it becomes significantly easier to do so.
Nevertheless, the typical enterprise IT infrastructure is likely to become more virtualised and more distributed across organisational boundaries in future. The tools, standards and best practices developed in pursuit of the hybrid cloud model may therefore become more widely applicable in time.