Any organisation storing data in the cloud should, at the very least, encrypt it. Most of the major cloud providers either offer data encryption as an option, or encrypt data by default.
Unfortunately, encryption is not infallible. In 2009, researchers from MIT and the University of California, San Diego, argued that data stored in the cloud could theoretically be extracted and decrypted by another user of the service.
“We argue that fundamental risks arise from sharing physical infrastructure between mutually distrustful users, even when their actions are isolated through machine virtualisation as within a third-party cloud compute service,” they wrote.
First, the attacker would need to find out which physical server the target’s virtual machine was hosted on. If they know the IP address of a given VM, the researchers showed, they could theoretically figure out what type of VM it is and which data centre it is hosted in.
They could then provision a number of VMs of the same type in the same location, and run various tests on them to determine whether any had indeed ended up on the same physical server as the target.
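The paper's co-residence checks relied on network measurements, such as comparing round-trip times to the target. The sketch below is a hypothetical heuristic, with made-up timings and threshold, showing only the general shape of a latency-based test:

```python
import statistics

def likely_co_resident(probe_latencies_ms, threshold_ms=0.5):
    """Crude co-residence heuristic: if round-trip probes to the target
    VM are consistently near loopback speeds, the probing VM may share
    a physical host with it. Threshold and timings are illustrative."""
    return statistics.median(probe_latencies_ms) < threshold_ms

# Simulated probe timings (milliseconds): a co-resident VM answers far
# faster than one reached across the data-centre network.
same_host_probes = [0.08, 0.11, 0.09, 0.10, 0.12]
cross_host_probes = [0.90, 1.40, 1.10, 0.95, 1.20]

print(likely_co_resident(same_host_probes))    # → True
print(likely_co_resident(cross_host_probes))   # → False
```

In practice the researchers combined several such signals, since any single measurement is noisy.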
Once they have found a VM co-located on the same server as the target machine, the researchers claim, the attackers could use recently developed techniques to analyse the performance of the host server's processor and extract data from the target VM.
Most worryingly, the researchers said that, in theory, attackers could measure the memory access patterns of the CPU – the way in which it calls data from memory – to intercept the cryptographic key used to encrypt the data on the virtual server.
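The researchers' actual attack exploited subtle cache behaviour on shared hardware, but the underlying principle – that access patterns can betray a key – has a classic textbook illustration. In naive square-and-multiply modular exponentiation, the sequence of operations mirrors the secret exponent's bits, so an observer who can distinguish the two patterns reads off the key. A self-contained toy version (names illustrative):

```python
def square_and_multiply_trace(base, exponent, modulus):
    """Naive left-to-right modular exponentiation. The operation
    sequence ('S' = square, 'M' = multiply) depends directly on the
    exponent's bits, so observing which operations run leaks the key."""
    result = 1
    trace = []
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus
        trace.append('S')
        if bit == '1':
            result = (result * base) % modulus
            trace.append('M')
    return result, ''.join(trace)

_, trace = square_and_multiply_trace(7, 0b1011, 97)
# Every 'SM' pair marks a 1 bit; a lone 'S' marks a 0 bit.
recovered = trace.replace('SM', '1').replace('S', '0')
print(recovered)  # → '1011'
```

Real cryptographic libraries defend against exactly this by making the operation sequence independent of the key; the cloud-specific worry was that co-located VMs gave attackers a new vantage point from which to observe such patterns.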
All of this could take place with no more access to the cloud provider’s infrastructure than an average customer.
This may seem like an esoteric attack, but with the cloud market growing four times faster than the IT industry as a whole, cloud vulnerabilities are fast becoming critical.
Happily, new research by MIT promises to counteract the ‘memory access pattern’ vulnerability. Professor Srini Devadas and his team have developed a technique called “oblivious RAM” which obfuscates those memory access patterns so that they cannot be decoded.
The team's new hardware design, known as Ascend, adds noise to the memory access patterns by randomly swapping the addresses of nodes in a database.
When an access request is made, it is sent down a different random path of nodes each time, making it impossible for a hacker to ascertain which address has been accessed – almost like the internet anonymity service Tor, but at the microscale.
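Ascend's published design is considerably more involved than this, but the core idea – that a block's physical location changes after every access, so the observed address trace carries no usable pattern – can be sketched with a toy store (all names and sizes here are illustrative, not MIT's actual scheme):

```python
import random

class ToyObliviousStore:
    """Minimal sketch of the oblivious-RAM idea: every logical block is
    remapped to a fresh random physical slot after each access, so
    repeated reads of the same data touch different addresses."""

    def __init__(self, data):
        self.slots = {}      # physical slot -> value
        self.position = {}   # logical key -> current physical slot
        self._free = list(range(4 * len(data)))  # spare slots add cover
        random.shuffle(self._free)
        for key, value in data.items():
            self._place(key, value)

    def _place(self, key, value):
        slot = self._free.pop()
        self.slots[slot] = value
        self.position[key] = slot

    def read(self, key):
        slot = self.position[key]     # the physical address touched
        value = self.slots.pop(slot)
        self._free.insert(0, slot)    # freed slot rejoins the pool later
        self._place(key, value)       # remap to a new random slot
        return value, slot

store = ToyObliviousStore({'secret': 42})
trace = [store.read('secret')[1] for _ in range(5)]
print(trace)  # consecutive accesses always land on different addresses
```

An eavesdropper watching the slot numbers sees a changing sequence of addresses, while the legitimate reader still gets the correct value every time. The performance question is exactly the one Devadas raises next: all this shuffling costs extra memory traffic.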
Normally, this kind of obfuscation would impose an unacceptable performance lag on the system. “When you go outside of the processor chip and access memory, that’s when things slow down a bit, because the longer you take to access memory the slower your programme is going to run,” Devadas says.
“But what we’ve done is drastically minimize the performance overhead associated with encryption and decryption as you go off a chip,” he explains.
“IT managers are generally prepared for these kinds of trade-offs with security but where previously it may have been a factor of 100 slower to do this, we believe this technique will reduce that to a factor of two, which is far more palatable.”
After describing the hypothetical hardware component in a paper at the International Symposium on Computer Architecture in June, the next step for the Ascend team will be to create a working prototype, and ultimately to convince hardware makers of the benefits of including the component.
“It’s the people in the software world that will be crying out for this in their hardware,” says Devadas. “Correct software is hard to write – people produce thousands of lines of code a day and bugs are unavoidable.
“But with functionality in the underlying hardware that allows software to be secure, developers no longer have to worry about inevitable bugs.”