How software-defined infrastructure is transforming the storage industry

The so-called ‘explosion of data’ is probably the most cited technology fact spouted by vendors and analysts in recent years. The Internet of Things has only added fuel to a fire already well stoked by other trends such as mobility, big data and social media.

Buried amongst all the hype, however, is one undeniable truth: the infrastructure operated by many enterprises today is simply not sufficient to store the data that they will be generating in the years to come.

IDC predicts that the amount of data in the world will grow to 40 trillion gigabytes by 2020, which means that it will double every two years until then. This is a scary thought for CIOs but has proved to be quite exciting for the storage industry, which has severely lacked differentiation over the past decade or two.

It has allowed a raft of new vendors to enter the space with their own solutions for managing the data growth, most of which have settled on one emerging technology: software-defined storage (SDS).

The basic concept will sound very familiar to the large majority of organisations that have virtualised their servers and wrapped them in highly automated software; the same shift is now underway in networking.

When all three layers are virtualised, the result is a software-defined data centre (SDDC), which offers new levels of scalability, availability and flexibility, with a lower total cost of ownership and the ability to deliver applications on any hardware.

New storage architectures are permeating the enterprise to address the well-known challenges with legacy three-tier architectures, which are expensive, hard to manage and create multiple silos within the data centre.

‘The simple truth is that legacy architectures were not built for the storage requirements of virtualised environments,’ says Greg Smith, senior director, product and technical marketing at Nutanix. ‘And the patches applied to make them work are starting to wear out.’

This is driving the need to take a fresh look at how storage infrastructures are designed, managed and scaled.

In terms of financial benefits and flexibility, SDS can be a huge improvement on the traditional model.

According to Gerald Sternagl, business unit manager for storage EMEA at Red Hat, it can prove cost-effective by helping companies avoid managing complex, manufacturer-specific infrastructure silos.

‘It also provides much greater flexibility due to the fact that it can be configured quickly, depending on the needs of the customer rather than relying upon time-consuming physical upgrades.’

Celebrating separation

A key element of SDS is the decoupling of storage hardware from the application. In traditional storage models, the machine hosting the application is connected directly to a specific storage pool.

Abstracting that relationship allows a storage administrator to select from a variety of different storage options when provisioning an application.
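To make the idea concrete, the sketch below shows a provisioning layer choosing a backend from a mixed pool based on an application profile, so the application never binds to a specific array. It is purely illustrative; all names and capacities are hypothetical, not taken from any vendor’s product.

```python
# Illustrative sketch only: a provisioning layer that selects a backend
# from a heterogeneous pool based on an application profile, rather than
# binding the application to one array. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    media: str        # e.g. "flash" or "disk"
    free_gb: int

class Provisioner:
    def __init__(self, backends):
        self.backends = backends

    def provision(self, size_gb, profile):
        """Pick the first backend matching the requested media type
        with enough free capacity; the app never sees the hardware."""
        for b in self.backends:
            if b.media == profile and b.free_gb >= size_gb:
                b.free_gb -= size_gb
                return f"volume on {b.name}"
        raise RuntimeError("no backend satisfies the profile")

pool = [Backend("array-1", "disk", 500), Backend("flash-1", "flash", 200)]
print(Provisioner(pool).provision(100, "flash"))  # volume on flash-1
```

Because the selection logic lives in software, a new class of hardware (cheaper flash, say, or a cloud tier) can be added to the pool without touching the applications that consume it.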

The decoupling means that end-users are given a greater choice of storage types and vendors – something that hasn’t existed in the industry for a long time. This not only benefits the buyers, but also spurs greater innovation amongst the vendors.

‘An SDS model allows a storage administrator to more easily mix and match different storage types based on application requirements and profiles,’ says Nigel Moulton, CTO for EMEA at VCE.

For example, as flash-based storage systems become lower cost, an SDS model permits this technology to be added relatively easily to the overall storage pool and thus be available to the applications that need it.

‘The required storage service may be cloud based,’ Moulton adds, ‘and as such, external storage capability should be capable of being provisioned outside the data centre.’

That is not the only separation going on. In traditional storage systems, the control plane and the data plane are tightly coupled, but in SDS the two planes are autonomous.

The control plane deals with the management of the system, providing features and functions such as data management, security, protection, housekeeping and provisioning. The data plane is where the data itself lives and moves.

‘The benefit of the separation of the planes is that the system can be easily designed for distributed operation and is essentially self-managing,’ says Laurence James, UK products, alliances and solutions marketing manager at NetApp.
‘This has the additional benefit that horizontal storage scale can be achieved with low management overhead and at lower cost.’
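A minimal sketch of that separation, under assumed names (none of this reflects any specific vendor’s design): the control plane decides placement and policy, the data plane only stores and serves bytes, and either side can scale independently of the other.

```python
# Hypothetical sketch of the plane separation described above: the control
# plane decides placement and policy; the data plane only moves bytes.

class ControlPlane:
    """Management: placement, provisioning, housekeeping."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.placement = {}

    def place(self, volume):
        # A trivial placement policy, for illustration only
        node = self.nodes[hash(volume) % len(self.nodes)]
        self.placement[volume] = node
        return node

class DataPlane:
    """Stores and serves the data itself on one node."""
    def __init__(self):
        self.blocks = {}

    def write(self, volume, data):
        self.blocks[volume] = data

    def read(self, volume):
        return self.blocks[volume]

nodes = {n: DataPlane() for n in ("node-a", "node-b")}
ctrl = ControlPlane(list(nodes))
target = ctrl.place("vol1")        # control plane decides where
nodes[target].write("vol1", b"x")  # data plane does the I/O
print(target, nodes[target].read("vol1"))
```

Adding a third data-plane node here means extending one list, which is the low-management-overhead horizontal scaling James describes.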

Challenging SDS

Before rushing into an SDS deployment, however, CIOs should be fully aware of the challenges they may face.
Traditional storage delivers high availability through the use of dual controllers, so that if one controller fails then the other can take over.

But with SDS, redundancy must be implemented using either mirroring or erasure coding techniques, both of which involve significant overhead.
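A back-of-envelope comparison shows where that overhead sits. The figures below are illustrative, not vendor numbers: a three-way mirror and an 8+3 erasure code are simply common example configurations.

```python
# Back-of-envelope comparison of the two redundancy schemes mentioned
# above (example configurations, not vendor figures).

def replication_overhead(copies):
    # n full copies: extra raw capacity needed per usable byte
    return copies - 1.0

def erasure_overhead(data_shards, parity_shards):
    # k data + m parity shards: overhead is m/k of the usable data
    return parity_shards / data_shards

print(replication_overhead(3))   # 3-way mirror: 2.0 -> 200% extra capacity
print(erasure_overhead(8, 3))    # 8+3 erasure code: 0.375 -> 37.5% extra
```

Erasure coding is far cheaper in capacity, but it trades that for CPU cycles and rebuild traffic across the network, which is where the ‘significant overhead’ bites in practice.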

Furthermore, while SDS offers new degrees of flexibility, as already outlined, it involves trade-offs in other areas.

>See also: Dissecting the software-defined data centre

Unlike traditional storage where the hardware configurations are preset and fewer in number, SDS requires storage software to be qualified with a wide range of hardware devices and configurations.

‘Customers need to be aware of compatibility details, support matrices and underlying limitations,’ says Radhika Krishnan, VP, product marketing and alliances at Nimble Storage. ‘The complexity of supporting a wider range of hardware often comes at the expense of high availability.

‘Also, demanding workloads such as VDI and databases work best when optimised vertically up and down the stack. Optimising performance in the SDS model can be expensive and onerous.’

Ultimately, storage is not standardised. Vendor platforms use different interfaces and protocols, so SDS solutions must be able to interact with and bring together all of these to create a seamless storage infrastructure.

‘Vendors also need to provide SDS solutions that can work with multiple vendor storage platforms, as well as having the functionality to manage the mobilisation of huge quantities of data around an enterprise storage infrastructure,’ adds James.

Silver Peak’s director of replication product management, Everett Dolgner, however, points out that the access protocols, for the most part, are all standard: iSCSI and FCP for block, and CIFS, NFS or HTTP for file access.

Of course, replication and management interfaces are still different, but this is similar to the situation seen today with regard to dedicated arrays.

‘The smaller SDS vendors have an opportunity to build products that work together, but it is unlikely to happen quickly, if at all,’ Dolgner says. ‘With SDS, we are starting to see more vendors build orchestration layers, which remove the need to manage every SDS instance separately and combine day-to-day management in a single piece of software.’
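The orchestration idea Dolgner describes can be sketched in a few lines, with hypothetical names throughout: one management layer fans a routine task out to several SDS instances instead of an administrator driving each one separately.

```python
# Hypothetical sketch of an orchestration layer: one call applied
# uniformly across every managed SDS instance. All names are invented.

class SDSInstance:
    def __init__(self, name):
        self.name = name

    def snapshot(self, volume):
        return f"{self.name}: snapshot of {volume} taken"

class Orchestrator:
    def __init__(self, instances):
        self.instances = instances

    def snapshot_everywhere(self, volume):
        # Day-to-day management combined in a single piece of software
        return [i.snapshot(volume) for i in self.instances]

orch = Orchestrator([SDSInstance("cluster-a"), SDSInstance("cluster-b")])
for line in orch.snapshot_everywhere("vol1"):
    print(line)
```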

Standardisation moves in cycles, and the best standards allow products to interoperate while still leaving room for innovation.

After a period of time, older innovations become commonplace but not interoperable, and standards need to be updated.

‘In this way, storage management and provisioning standards are due for a refresh,’ says Curt Beckmann, CTO, EMEA at Brocade. ‘And yet, this is difficult to drive through normal industry dynamics.

‘Individual vendors that push for this are often not rewarded. As in SDN, the buyer side needs to help alter the dynamic.’

Closing the deal

Despite all the stated benefits of SDS, adoption hasn’t exactly rocketed yet.

According to Sternagl, there are two main reasons for this. Firstly, migrating mission-critical data is a time-consuming, risky and costly endeavour, so companies tend to put off addressing these problems until they become too costly to ignore.

Secondly, proprietary storage vendors have created technology dependencies that make it very hard to change an existing storage system and swap it for another vendor’s model.

‘This approach has been actively protected by the storage industry veterans for a long time, and as a result innovation has been stunted,’ Sternagl says.

But while adoption is at an early stage, momentum is high, says Craig Parker, head of product marketing at Fujitsu UK & Ireland. ‘Early adopters are IT organisations that face high cost pressures due to high data volumes and already have good expertise in open source. Typical examples are service providers, research institutions like universities, public sector institutions, and media and broadcasting companies.’

The cost of deployment will ultimately depend on the age, type and upgradeability of the storage already housed by the organisation.

‘Organisations with higher levels of virtualisation should find this lower-cost than those with considerable estates of physical servers running legacy applications,’ adds Moulton.

>See also: Clearing the smoke on the software-defined data centre

Whatever the cost, CIOs will need some ROI measurements to hand in order to obtain management approval for such a transformative project.

Since storage is no longer over-provisioned, customers significantly reduce their capex, while operational savings come from eliminating the need for storage managers to provision storage for every VM in the environment.
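As a rough worked example of that capex argument (every figure below is assumed for illustration, none comes from the vendors quoted here): buying for actual use plus headroom, rather than for each application’s worst-case allocation, roughly halves the up-front spend at typical utilisation rates.

```python
# Illustrative capex arithmetic: all figures are assumptions, not
# data from the article or any vendor.

apps_requested_tb = [10, 10, 10, 10]   # what each app asks for up front
actual_use_ratio = 0.4                 # assumed real utilisation
headroom = 1.2                         # assumed 20% growth buffer
cost_per_tb = 300                      # assumed $ per TB

overprovisioned = sum(apps_requested_tb) * cost_per_tb
thin = sum(apps_requested_tb) * actual_use_ratio * headroom * cost_per_tb
print(f"up-front allocation: ${overprovisioned:,.0f}")  # $12,000
print(f"thin-provisioned:    ${thin:,.0f}")             # $5,760
```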

Further ROI comes from the ability to manage an entire infrastructure, with full visibility, centrally – instead of relying on multiple management consoles. ‘With support for in-depth automation, IT can automate low-value tasks and instead focus on business-critical projects,’ says Smith. ‘Silos within a data centre will be broken and employees can collaborate and focus on what is important for the business, as opposed to protecting their territories.’

SDS is without doubt reshaping the industry, but it’s a transformation that will not happen overnight.

What does appear clear, however, is that the SDDC stands to disrupt organisations’ entire IT environment.
