This means that storage environments are largely closed environments. If a company wants to expand its storage footprint, it has little choice but to buy a new, higher capacity storage platform.
By contrast, if a company wants to add capacity to its server estate, software technologies such as open source operating system Linux and server virtualisation allow it to add cheap, commodity hardware as and when it is needed.
So far, however, these forces have not been applied to storage. “The liberating effect that virtualisation and open source have had in the server world over the past ten years hasn’t really taken place in the storage world yet,” explains Simon Robinson, vice president at 451 Research.
The problem with the ‘scale-up’ model of growing storage infrastructure – i.e. buying discrete, high-capacity network-attached storage (NAS) systems to meet new demand – is that it increases the management complexity, says Robinson.
“Management overhead is the single biggest pain point in the data centre generally, but in storage specifically, because of the complexity, it sticks out like a sore thumb,” he says. “Historically, investing in a storage infrastructure that’s going to scale in a cost-effective way has been very difficult to do.”
Developments in a parallel field of technology – networking – have shown the potential of open standards-based software to disintegrate technology markets. ‘Software-defined networking’, it is hoped by some, will help to open up the networking equipment market by allowing organisations to manage heterogeneous networking environments as though they were homogeneous ‘resource pools’.
Talk has inevitably now turned to the potential for ‘software-defined storage’. As always, it pays not to place too much weight on the neologism itself – software has always played an intrinsic part in storage systems.
But two characteristics of some emerging storage management systems are important. Firstly, like server virtualisation, they abstract the concept of a logical data store away from the hardware itself. And secondly, they do so in an open (although not necessarily open source) fashion.
These two characteristics have the potential to redefine the economics of storage planning, by allowing customers to use whatever hardware they want.
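The abstraction at the heart of this idea can be sketched as a toy model. Nothing below comes from any vendor's product; the class and method names are invented purely to illustrate how a logical pool can hide which physical device, from which vendor, actually holds the capacity.

```python
# Purely illustrative sketch of the software-defined storage idea:
# heterogeneous commodity disks are presented as one logical pool,
# and volumes are carved out without regard to the underlying vendor.

class Disk:
    """A physical disk from any vendor."""
    def __init__(self, vendor: str, capacity_gb: int):
        self.vendor = vendor
        self.capacity_gb = capacity_gb
        self.used_gb = 0


class LogicalPool:
    """Presents a mixed bag of disks as a single store of capacity."""
    def __init__(self, disks):
        self.disks = disks

    @property
    def free_gb(self) -> int:
        return sum(d.capacity_gb - d.used_gb for d in self.disks)

    def provision(self, size_gb: int) -> bool:
        """Carve a volume out of whichever disks have room."""
        if size_gb > self.free_gb:
            return False
        remaining = size_gb
        for d in self.disks:
            take = min(remaining, d.capacity_gb - d.used_gb)
            d.used_gb += take
            remaining -= take
            if remaining == 0:
                return True
        return False


# Commodity drives from different vendors join the same pool,
# and more can be added later without replacing the platform.
pool = LogicalPool([Disk("vendor-a", 100), Disk("vendor-b", 200)])
print(pool.provision(250))  # True: the volume spans both disks
print(pool.free_gb)         # 50
```

The point of the sketch is the second characteristic as much as the first: because the pool is defined in software rather than in a proprietary controller, a `Disk` from any vendor can join it.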
The idea of an open, software-defined storage management system is not a new one. In 2006, IBM developed an open-source storage project, called Aperi, which closely resembled technologies now being dubbed ‘software-defined storage’, according to Martin Brown, who advised IBM on the project and is now VP of documentation at NoSQL database software vendor Couchbase.
“The principle was to get rid of the idea that there’s a physical disk and a letter associated with storage drives,” says Brown. “We’re all used to this concept of a C drive or a D drive on a Windows machine and volumes on Macs, but when it comes to deploying virtual machines, people wanted something different.”
But Aperi was ahead of its time, he says. “A couple of years ago, people would install one storage-area network [SAN] that could support their entire network, and they didn’t even know why they would need something like Aperi,” he says. “But now people are beginning to realise that they can create a SAN-like environment using a good software management platform and commodity storage.”
That realisation has been prompted by the unstoppable growth of data volumes, and the rise of cloud infrastructure services such as Amazon Web Services. “People don’t actually care what the hardware is that’s used,” he says. “They just want fast, large and readily available storage that can be deployed in their own data centre environments.”
Nor is the idea of decoupling the logical storage environment from the underlying hardware the preserve of emerging start-ups. Many of the established storage platform vendors use ‘storage hypervisors’ to do just this, says Mark Peters, an analyst at Enterprise Strategy Group.
Importantly, though, “a lot of that has been trapped within single proprietary storage systems”, Peters adds. That prevents organisations from using virtualisation to build ‘scale-out’ storage environments.
Runners and riders
One company that explicitly describes itself as a ‘software-defined storage’ provider is US-based start-up Nexenta. CEO Evan Powell pulls no punches when describing the impact of closed storage systems on the IT industry.
“I think that it’s wrong, antediluvian and archaic that the storage world is dominated by vendor lock-in,” says Powell. “Not theoretical vendor lock-in, either. I mean the data that’s stored on the disk in the proprietary format where you only have one way to get your data back, which is the vendor’s product.”
Nexenta’s open source storage platform, NexentaStor, can be installed on commodity hardware, and pools the underlying storage resources into a single, logical data store. The resulting system can be presented as a SAN, serving block-level storage, or as a NAS, serving files over the network, and can be built up over time by adding disks.
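NexentaStor is built on ZFS, and generic ZFS commands give a rough illustration of the pooling model described above. These are not Nexenta's own management interface, and the device names are placeholders:

```shell
# Illustrative only: generic ZFS commands showing how commodity disks
# are pooled into one logical store, then exposed as file (NAS-style)
# or block (SAN-style) storage. Device names are placeholders.

# Pool two commodity disks into a single logical store called "tank".
zpool create tank mirror c0t0d0 c0t1d0

# NAS-style use: a filesystem carved from the pool, shared over NFS.
zfs create tank/files
zfs set sharenfs=on tank/files

# SAN-style use: a 100GB block volume (zvol) for export over iSCSI.
zfs create -V 100G tank/vol01

# Growing the pool later is a matter of adding more commodity disks.
zpool add tank mirror c0t2d0 c0t3d0
```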
LeaseWeb, a global hosting provider based in Amsterdam, rolled out NexentaStor in March alongside its existing legacy storage to serve large numbers of virtual machines on a cloud platform.
“We wanted a platform that has high performance in terms of IOPS but is built on commodity hardware,” says Robert van der Meulen, manager of cloud services at LeaseWeb. “We like to be able to select our hardware vendors based on their offering, their pricing and their specifications, instead of being tied to one specific vendor for all our expansion.”
NexentaStor also offers the ability to adapt the software to storage requirements on the fly, van der Meulen says. “Because Nexenta is software-based, I can scale out by tweaking the hardware and software configuration based on the workload, by adding caching or more memory,” he adds. “We also like open source, as the technology evolves a little more quickly.”
Nutanix is a US-based vendor that aims to create the ‘SAN-free’ data centre by converging compute and storage into commodity x86 servers running virtual machines. “We’re using some of the open source techniques that were developed by the likes of Google and so on to create massively parallel clusters,” says regional technical manager Rob Tribe.
Commercial open source giant Red Hat offers an open source, scale-out NAS solution, based on its acquisition of Gluster last year. In the company’s most recent earnings call, CEO Jim Whitehurst said that “interest in this technology is growing rapidly, with over 30 global companies running [proof-of-concept projects]”.
Two companies whose software is not open source but which nevertheless allows customers to use commodity hardware are DataCore Software and Coraid.
DataCore Software claims to possess the first and only “true” storage hypervisor in its SANsymphony-V platform. DataCore says its storage hypervisor’s “comprehensive set of storage control and monitoring functions operate as a transparent virtual layer across consolidated disk pools to improve their availability, speed and utilisation.”
Coraid, which unveiled its EtherCloud platform in August, claims to make storage completely programmable, allowing one-click provisioning of resources.
Even EMC, the colossus of the storage sector, says that it has a software-defined storage play. According to Sean Horne, EMC’s unified product director for UK and Ireland, its Atmos ‘cloud storage’ system allows customers to use commodity x86 servers as the hardware platform.
As it turns out, he says, most customers are interested in buying EMC hardware along with the Atmos system. “A lot of enterprises building cloud architectures want to take this one step at a time. They’re thinking: let’s do the software bit first, and then we’ll look at the hardware later.
“They’re not quite ready to throw it all out of the window because the PowerPoint scene has got more excited,” he adds.
As that comment suggests, EMC is unsurprisingly dismissive of the idea that the emerging SDS suppliers pose a threat to its business.
“The IT industry is very conservative and is used to a new technology being presented as the second coming, only for it not to do what it says on the tin after it’s been stressed and pushed,” says Horne. “People are not going to replace an infrastructure they’ve spent 20 or maybe 40 years building, developing and investing in, to really understand how to deliver business continuity and data integrity to a very high degree.”
Certainly, it would take a miracle for any of the ‘software-defined storage’ providers to make a noticeable dent in EMC’s considerable storage revenues in the near future, or those of IBM and NetApp for that matter.
However, by demonstrating that storage management software need not be hermetically sealed to the hardware, they may well have an impact on customer expectations. And that could force the big storage vendors to open up their systems.