Dissecting the software-defined data centre

It’s no secret that today’s organisations are experiencing an unprecedented data explosion. As much as 90% of the world’s data has been created in the last two years and the pace of data generation continues to accelerate thanks to trends like cloud, social, mobile, big data, and the Internet of Things.

These developments create additional pressure on data centre managers already struggling to make do with flat or decreasing IT budgets.

Thankfully, help is on the horizon with the emergence of the software-defined data centre (SDDC), which claims to deliver new levels of scalability, availability and flexibility with a dramatically lower TCO.

Already proven in the infrastructures of Amazon, Google and Facebook, the SDDC is built on three key pillars: compute, storage and networking.

The last decade saw the transformation of the compute layer thanks to technologies from the likes of VMware, Microsoft and the open source community. The next stages are storage and networking.

While software-defined networking (SDN) was all the rage a couple of years ago, actual market traction and customer adoption have been slower than expected, as industry players continue to work to align all the technology pieces required to deliver full SDN solutions.

>See also: Clearing the smoke on the software-defined data centre

The story is quite different for storage. With storage typically being the most expensive part of an enterprise infrastructure, we are witnessing a dramatic acceleration in software-defined storage (SDS) adoption.

2014 promises to be very significant for software-defined storage as customers realise its potential for addressing their critical pain points: scalability, availability, flexibility and cost.

As SDS increasingly takes centre stage, it is important that customers see through the marketing of legacy vendors seeking to preserve their hegemony by dressing up high-margin, inflexible proprietary hardware in SDS clothing.

Thanks to fairly creative marketing teams, most, if not all, storage vendors make some claim related to SDS.

It is amusing to note, however, that almost all are selling closed hardware products with the 60% to 70% margin that has been the norm in the enterprise storage market over the past decade. Calling a product SDS does not make it so.

Having a lot of software in a given hardware product (as most storage arrays do) might make a product software-based, but it does not make it software-defined. Similarly, adding an additional management layer or abstraction layer on existing proprietary hardware might increase the amount of software sold to customers, but really does not make the solution software-defined.

What legacy storage vendors are doing is very similar to what Unix vendors of old (remember Sun, HP and IBM) did when they added virtualisation and new management software to their legacy Unix operating systems to compete with VMware.

While these were technically interesting extensions to legacy technology, it was VMware running on standard Intel-based servers that truly unleashed software-defined compute and changed the economics of enterprise compute forever. The same is true for SDS.

Done right, SDS allows customers to build scalable, reliable, full featured, high performance storage infrastructure from a wide selection of (low cost) industry standard hardware.

As such, SDS is about much more than the latest technology innovation. True SDS allows customers to do things they could not do before while fundamentally changing the economics of the enterprise storage business.

True SDS allows customers to deal with their storage assets in the same way they deal with their virtualised compute infrastructure: pick a software stack for all their storage services and seamlessly swap industry standard hardware underneath as cost, scale and performance requirements dictate.
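That decoupling can be pictured as a thin abstraction layer between storage services and hardware. The sketch below is purely illustrative — the interface and backend names are hypothetical, not any vendor's actual API — but it shows how a fixed software contract lets the hardware underneath be swapped as cost, scale and performance requirements dictate:

```python
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """Hypothetical hardware-facing contract. The storage services layer
    above it never changes when the hardware underneath does."""

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, key: str) -> bytes: ...


class AllFlashBackend(StorageBackend):
    """Stands in for an all-flash tier built on industry-standard hardware
    (modelled here as an in-memory dict for the sake of the sketch)."""

    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}

    def write(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def read(self, key: str) -> bytes:
        return self._store[key]


class HybridBackend(AllFlashBackend):
    """Same contract, different (hybrid flash/disk) hardware profile."""


def migrate(src: StorageBackend, dst: StorageBackend, keys: list[str]) -> None:
    """Move data between hardware tiers without touching the services layer."""
    for key in keys:
        dst.write(key, src.read(key))
```

Because both backends honour the same contract, the services layer can rebalance data from one tier to another — the "seamless swap" described above — without any change to the software stack the customer runs.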

Eliminating vendor lock-in without compromising on availability, reliability and functionality is how SDS will change the storage industry.

From a technology perspective, true SDS must be able to support any ecosystem (VMware, Hyper-V, OpenStack and CloudStack) and any access protocol (block, file and object), while running on a wide variety of hardware configurations, be they all flash, all disk, or hybrid.
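One way to picture "any access protocol over one stack" is a set of thin protocol front-ends sharing a single backing pool. The class and method names below are invented for illustration — this is a sketch of the idea, not any product's interface:

```python
class BackingStore:
    """One shared pool of industry-standard storage (illustrative:
    modelled as an in-memory dict)."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]


class ObjectFrontEnd:
    """Object-style access: flat bucket/key namespace over the pool."""

    def __init__(self, store: BackingStore) -> None:
        self._store = store

    def put_object(self, bucket: str, key: str, data: bytes) -> None:
        self._store.put(f"obj:{bucket}/{key}", data)

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._store.get(f"obj:{bucket}/{key}")


class FileFrontEnd:
    """File-style access: hierarchical paths over the same pool."""

    def __init__(self, store: BackingStore) -> None:
        self._store = store

    def write_file(self, path: str, data: bytes) -> None:
        self._store.put(f"file:{path}", data)

    def read_file(self, path: str) -> bytes:
        return self._store.get(f"file:{path}")
```

The point of the sketch is that block, file and object access become interchangeable views onto one software-managed pool, rather than three separate hardware silos.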

>See also: The software-defined data centre is happening today: Eschenbach, VMware

A strong open-source DNA helps build an active community of users and developers around an SDS solution. SDS openness will play an increasingly important role as customers move towards converged software-led stacks that harness technologies such as cloud, hyperscale, big data, NoSQL, flash hybrids, all flash, object stores and intelligent automation.

As mentioned earlier, the SDDC will deliver new levels of scalability, availability and flexibility with significantly lower cost than today’s approaches. With storage playing such a critical role in the SDDC, the accelerating adoption of SDS in 2014 will make it a breakthrough year for software-defined everything (SDx).

When the building blocks of software-defined compute, storage and networking have all been put in place, enterprises will be free from expensive vendor lock-in and free to scale more easily, innovate more rapidly and bring new solutions to market more efficiently.

More than yet another technology fad, SDDC is poised to change core data centre economics and free enterprises to invest more in their own business.


Sourced from Thomas Cornely, Nexenta

Ben Rossi

Ben was Vitesse Media's editorial director, leading content creation and editorial strategy across all Vitesse products, including its market-leading B2B and consumer magazines, websites, research and...
