Why the future of storage is software-defined

Anyone starting a company today is unlikely to buy a server, or fill a shopping cart with boxes of Microsoft SQL Server, SharePoint, and Exchange DVDs.

They would be far more likely to sign up for online accounting, website management, cloud HR, and online customer database tools. They would almost certainly get a laptop with a big hard drive, store files there, and start sharing the content with Gmail, Dropbox, or Slack.

If they needed productivity tools, they would go to Microsoft Office 365, or they might go all Google and just create content there. Small businesses are increasingly using storage inside cloud apps and they are never turning back. But where does that leave larger businesses?

Like their smaller counterparts, enterprises are also turning to the cloud to host more and more of their applications, but still tend to keep the bulk of their transactional data and intellectual property on their own servers, to ensure security and conduct faster analytics.

>See also: The move to software-defined everything: how secure is programmable infrastructure?

They use a small amount of latency-sensitive storage for their most time-sensitive data, but often keep the bulk of their data on capacity-optimised storage. And as the incremental cost of storage declines towards zero, they are unlikely to delete any of their petabytes of data, because keeping it costs next to nothing.

Service providers will host more and more data in cloud-based, fully managed, and hosted environments. They too will provide latency-sensitive and capacity-optimised tiers of storage, both for enterprises and for software-as-a-service (SaaS) providers.

The $100 billion storage market is being disrupted by software-defined storage

For the last 20 years, storage has been defined by closed, proprietary and monolithic hardware-centric architectures, which were built for single applications, local network access, limited redundancy, and highly manual operations.

The continuous surge of data in modern society has radically changed this environment and now requires systems with massive scalability, local and remote accessibility, continuous uptime, and greater automation, enabling fewer resources to manage much more capacity.

With the emergence of petabyte-scale environments, today’s storage appliances simply cost too much to acquire, upgrade and manage. Realising this, more and more large-scale enterprises are now looking for a generational leap in their capabilities.

Until recently, the technological breakthroughs of internet giants like Google, Facebook and Amazon were not replicated by mainstream enterprises and service providers.

These vast internet-based service providers had the resources to develop and deploy intelligent software on standard servers to manage their data.

Software-defined storage (SDS) – which is designed to bring the advancements of hyperscale to mainstream organisations and enable them to manage their data growth in a radically more efficient, scalable, and cost-effective way – has now become mainstream.

Old storage is broken

Why is the model changing? The old storage model was designed for thousands of files, single applications with dozens of clients, local networks and regular downtime, highly manual operations, and compatibility with block- or file-based applications.

The problem for businesses operating on older infrastructures is that scaling to millions of users and petabytes of data, maintaining constant availability, and attaining near-complete automation were never goals of the legacy technology.

It should therefore be no surprise that the old storage model is completely outmoded, and that scaling capacity is one of its biggest challenges. Conversely, the software-based storage model has been designed for billions of files and petabytes of data, rather than for data locked into rigid file systems and volumes.

Whereas the legacy storage model had severely limited performance-scaling capabilities, its software-based counterpart provides performance on a huge scale.

Previously, most businesses didn’t need to run 24/7 and multiple sites didn’t need to collaborate closely. Moreover, digital information was not all business-critical.

Legacy storage ties data management capabilities to a specific unit of hardware in a closed model. Storage administrators must allocate, optimise and manage physical disks and inflexible containers of data (file systems and LUNs), rather than the data itself.

Enterprises are currently more than 40% virtualised and are increasingly adopting infrastructure automation. With this in mind, storage needs to be part of the automation strategy, not isolated within silos of the physical hardware.

Moreover, storage must support scale economics, application awareness, and orchestration frameworks.
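To make "part of the automation strategy" more concrete, here is a minimal sketch of what policy-driven, API-based provisioning could look like; the endpoint, field names and policy labels are hypothetical illustrations, not any specific product's interface.

```python
# Sketch of policy-driven storage provisioning: the kind of request an
# orchestration framework might send, instead of an admin carving out LUNs.
# The endpoint, fields and policy names below are hypothetical assumptions.
import json

STORAGE_API = "https://storage.example.internal/v1/volumes"  # hypothetical endpoint


def volume_request(name: str, size_gb: int, policy: str) -> str:
    """Describe the volume by intent (size and policy); the storage software,
    not the administrator, decides placement, replication and tiering."""
    return json.dumps({"name": name, "size_gb": size_gb, "policy": policy}, indent=2)


if __name__ == "__main__":
    # In practice this body would be POSTed to the (hypothetical) STORAGE_API.
    print("POST", STORAGE_API)
    print(volume_request("analytics-scratch", size_gb=500, policy="capacity-optimised"))
```

The point is not the specific call, but that capacity is requested declaratively and fulfilled by software, so storage can sit inside the same orchestration workflows as compute.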

To reflect the changing nature of the way in which businesses operate, the new storage model is designed from the ground up for data that is stored and used now and, importantly, into the future.

Even moderately sized SaaS providers have millions of end users, which means storage needs to support thousands of clients simultaneously. These environments will grow not only to multiple petabytes of data (even exabytes), but also to billions of individual files, so conventional limits on file count must be eliminated. However, storage on this scale inevitably brings increased complexity.

Dealing with complexity

There is no longer any end-user or business tolerance for downtime. End users are highly likely to abandon an application or website that is unavailable for even a few seconds.

As a result, businesses face increasingly aggressive SLAs around downtime, and infrastructure failures, expansions and upgrades can no longer come with maintenance windows of hours or even days.

Traditional data protection techniques like RAID are no longer viable in the age of multi-terabyte drives, given their extremely long rebuild times and limited failure tolerance.
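To see why, a quick back-of-the-envelope estimate helps; the drive sizes and the 100 MB/s sustained rebuild rate below are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope RAID rebuild time estimate.
# Drive sizes and the sustained rebuild rate are illustrative assumptions.

def rebuild_hours(drive_tb: float, rebuild_mb_per_s: float) -> float:
    """Hours needed to rewrite one failed drive at a sustained rebuild rate."""
    drive_mb = drive_tb * 1_000_000  # decimal TB to MB
    return drive_mb / rebuild_mb_per_s / 3600


if __name__ == "__main__":
    for tb in (4, 10, 16):
        print(f"{tb} TB drive at 100 MB/s: ~{rebuild_hours(tb, 100):.0f} hours to rebuild")
```

At tens of hours per rebuild, the window in which a second failure can cause data loss stretches accordingly, which is why scale-out designs favour distributed protection and repair over single-array RAID.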

>See also: 5 predictions for software-defined networking

Built to take full advantage of economies of scale, regular improvements in hardware, and greater application and framework intelligence, software-defined storage is fully decoupled from hardware, so it can exploit the most relevant hardware form factors, the latest innovation, and the fastest or most dense media, all without interruption to the application.

Decoupling software from hardware allows environments to scale out massively, improving economies of scale linearly and avoiding the typical step-function costs of new hardware-driven systems.

Decoupling enables services that are truly meaningful for the modern enterprise, such as data protection and availability, to span instances of physical hardware and physical locations. And it enables the addition of new storage capabilities, again without service interruption.
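As a rough illustration of protection that spans hardware and locations, the sketch below spreads the fragments of an erasure-coded object across hypothetical racks and sites; the 6+3 scheme, node list and round-robin placement are simplifying assumptions, not any vendor's actual algorithm.

```python
# Illustrative placement of erasure-coded fragments across failure domains.
# The 6+3 scheme, the node list and the placement rule are all hypothetical.
from itertools import cycle

DATA_FRAGMENTS = 6    # fragments needed to reconstruct the object
PARITY_FRAGMENTS = 3  # extra fragments; any three losses are survivable

# Hypothetical nodes spread over two sites and three racks per site.
NODES = [
    {"site": "site-a", "rack": "r1"}, {"site": "site-a", "rack": "r2"},
    {"site": "site-a", "rack": "r3"}, {"site": "site-b", "rack": "r1"},
    {"site": "site-b", "rack": "r2"}, {"site": "site-b", "rack": "r3"},
]


def place_fragments(object_id: str) -> list[dict]:
    """Assign fragments to nodes round-robin, so any single rack holds at most
    two fragments: fewer than the three losses the 6+3 code can tolerate."""
    total = DATA_FRAGMENTS + PARITY_FRAGMENTS
    return [{"object": object_id, "fragment": i, **node}
            for i, node in zip(range(total), cycle(NODES))]


if __name__ == "__main__":
    for placement in place_fragments("object-0001"):
        print(placement)
```

A real system would also rebalance fragments as nodes join or fail, but the principle stands: durability and availability become properties of the software layer rather than of any single box.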

Much like compute has become primarily a virtualised infrastructure, storage is becoming software-defined. Modern storage solutions are built for the software-defined data centre, with massive scalability, broader application support, higher availability, and policy-driven data durability levels, as well as greater efficiency than any traditional storage technologies.
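As a final worked example of what policy-driven durability can trade off, the snippet below compares the raw-capacity overhead of plain three-way replication with a 6+3 erasure-coded layout; both schemes are common illustrative choices rather than figures taken from this article.

```python
# Raw-capacity overhead of two common protection schemes (illustrative only).

def overhead(data_units: int, total_units: int) -> float:
    """Extra raw capacity consumed, as a fraction of the usable data."""
    return (total_units - data_units) / data_units


if __name__ == "__main__":
    print(f"3x replication     : {overhead(1, 3):.0%} overhead, survives loss of 2 copies")
    print(f"6+3 erasure coding : {overhead(6, 9):.0%} overhead, survives loss of 3 fragments")
```

A policy-driven system lets operators choose this kind of trade-off per dataset, rather than having it fixed by the appliance the data happens to live on.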

 

Sourced from Jérôme Lecat, CEO at Scality

