The software-defined data centre (SDDC) has been a fast-emerging buzzword in enterprise infrastructure over the last couple of years.
It carries the promise of true business agility – in a context where IT and business leaders have been plagued by the labour-intensive process of physically deploying resources whenever the business demands operational change that draws on IT.
The theory is marvellously simple: within a few clicks, you can dynamically allocate virtual IT resources from a simple management interface. And thanks to the rise of compute and network virtualisation in recent years, this has been increasingly possible.
But a significant and vital part of the problem – dynamically managing storage resources – has been beyond our grasp. Without this, the SDDC vision is incomplete.
The challenge of software-defined storage (SDS) stems from two main issues. The first is the total absence of standards in storage.
Everyone’s platforms use different interfaces and protocols. This naturally makes software-defining storage more challenging than software-defining compute (dominated by the x86 architecture) or networking (standardised on IP).
The second issue is far more significant: storage has what we refer to as ‘gravity’.
An example of the acceleration of businesses’ requirements for storage illustrates this: it took EMC 26 years to sell its first exabyte of data; five years later, it shipped an exabyte within a year; within a year after that, it was shipping one exabyte per quarter; 18 months later, the pace went up to an exabyte a month; and last year, one customer in a single deal purchased almost an exabyte of storage.
This is what every enterprise is facing. With this enormous mass of data, there’s intense gravity. It’s non-trivial to dynamically create and shift the location of immense quantities of data around virtual pools of storage, and this has been the key barrier in every previous attempt to dynamically virtualise storage resources.
According to EMC, the enabling requirements for software-defined storage fall on two fronts. First, the software-defined element needs to sit in the control plane rather than the data plane. Historical attempts to manage software-defined storage in the data plane led to latency and performance issues; by elevating the management of data to the control plane, this problem is sidestepped.
This creates the second issue: interfacing with the dozens of different types of storage in use in enterprise IT environments. Rather than building SDS for its own products, EMC has set about building in interfaces for the hardware arrays most often seen in its customer environments, including, to date, NetApp and HDS – with more planned to be added.
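The two ideas above – keeping management in the control plane and interfacing with many vendors’ arrays – can be sketched in code. The following Python is purely illustrative: every class and method name is a hypothetical placeholder, not EMC ViPR’s actual API. It shows a control plane that routes provisioning requests through per-vendor adapters, while reads and writes would continue to travel directly to each array (the data plane).

```python
# A minimal sketch of control-plane/data-plane separation over
# heterogeneous storage arrays. All names here are hypothetical.
from abc import ABC, abstractmethod


class ArrayAdapter(ABC):
    """Translates generic control-plane requests into vendor-specific calls."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> dict:
        ...


class NetAppAdapter(ArrayAdapter):
    def create_volume(self, name, size_gb):
        # In a real system this would call the array's own management API.
        return {"vendor": "NetApp", "name": name, "size_gb": size_gb}


class HDSAdapter(ArrayAdapter):
    def create_volume(self, name, size_gb):
        return {"vendor": "HDS", "name": name, "size_gb": size_gb}


class ControlPlane:
    """One management interface over many storage back ends."""

    def __init__(self):
        self._adapters = {}

    def register(self, pool: str, adapter: ArrayAdapter):
        self._adapters[pool] = adapter

    def provision(self, pool: str, name: str, size_gb: int) -> dict:
        # Only management traffic passes through here; application I/O
        # would go straight to the array, avoiding data-plane latency.
        return self._adapters[pool].create_volume(name, size_gb)


cp = ControlPlane()
cp.register("gold", NetAppAdapter())
cp.register("silver", HDSAdapter())
vol = cp.provision("gold", "crm-db", 500)
```

The point of the pattern is that adding support for a new vendor means writing one more adapter, while the management interface – and any automation built on it – stays unchanged.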
Getting into orbit
Ron Redmer, COO and CTO of assure360, was part of EMC’s early adopter programme for its SDS product, ViPR. He said it accelerated his company’s provisioning of storage services, gave it policy-based control to ensure regulatory compliance, and delivered automation benefits that made his team’s resources go further.
Of course, the ideal of dynamically allocating IT resources from a simple management interface sounds a lot like public cloud services.
EMC says public cloud services are an excellent example of an implementation of the SDDC model, but tend to come with a few caveats: no control over where the data is, vague ownership status, complex migration of data off public cloud resources, and no user say in backup or continuity frameworks.
For smaller or unregulated businesses, this may not be an issue – but for the enterprise market, EMC says, it’s unacceptable.
The issues of data protection, data sovereignty and in-flight corruption are all too great for businesses whose crown jewels are held in their digital data. As such, EMC believes public cloud services will play a relatively limited role in enterprise IT infrastructure in the near future. Indeed, analysts forecast they’ll make up less than 4% of enterprise IT workloads in 2014.
Enabling the third platform
The move to the third platform is the current super-trend in enterprise IT, most easily conceptualised by considering the consumer smartphone experience.
Unlike most enterprise users, a consumer smartphone user has access to millions of apps, dynamically priced and managed against usage models, in some cases with governance around the security and feature-set they have (for example, Apple’s App Store exercises some control over which apps are available to its customers, designed to protect their security and experience).
These apps are delivered via the cloud and managed via the cloud, automatically updated and designed for mobility. Most leverage usage data to improve the service, deliver services or manage users in some way.
For enterprise IT, delivering this degree of scale, flexibility, mobility, chargeback services and data use requires a move to a fully software-defined model; without it, the kind of dynamic resource allocation this world demands simply wouldn’t be possible.
And if enterprise IT doesn’t move this way itself, it’ll face significant pressure from ‘shadow IT’ as end-users bring in their own services (often uncontrolled and unregulated, on the public cloud), creating significant risks for the organisation.
EMC believes the SDDC is the key to future-proofing a business’s infrastructure through this transition by creating a pathway to the third platform.
There’s no doubting the potential benefits of the SDDC for the enterprise. In a market seeing a slow and challenging recovery, businesses need the agility and flexibility to move quickly, and have users who demand digital experiences that match those they receive from the world of consumer technology.
Software-defined storage has long been the mountain to climb in delivering true SDDC services, but thanks to the move to the control plane and the opening up of interface standards, it finally appears achievable.
Sourced from Amitabh Srivastava, president of EMC's Advanced Software Division