Architecting modern data centres: how to use SDS in hyper-scale environments

Modern data centres increasingly rely on scale-out environments to run demanding enterprise applications, and those environments can benefit greatly from software-defined storage


SDS brings benefits of value, flexibility and operational efficiency

Modern data centres run demanding, performance-hungry enterprise applications such as NoSQL databases, online transaction processing (OLTP), cloud and big data analytics. These workloads increasingly require scale-out environments that use a separate storage layer on top of commodity or brand-name hardware to provide appropriate service levels to users and applications.

Only software-defined storage (SDS) can manage these environments efficiently in terms of economic value, flexibility and operational efficiency. To realise these benefits, organisations need an enterprise-class SDS approach that combines intelligent data services with predictive analytics across any primary or secondary storage hardware, in the cloud or on-premises. This approach helps extract more economic value from existing environments and from future storage investments.

Commodity or brand name storage?

The architecture of a hyper-scale data centre depends on the nature of the applications and business priorities, such as flexible capacity, security and uptime. The architecture must also be able to grow to meet compute, memory and storage requirements on-demand.

> See also: Software-defined storage is driving data centre infrastructure innovation

Most modern applications that need hyper-scale, scale-out environments offer built-in resiliency, protect themselves from hardware failures and can self-heal, which eliminates the need to build high availability (HA) into the storage layer.

This opens the door to using consumer-grade, commodity hardware that can fail without affecting service availability. On the other hand, revenue-generating scale-up applications may justify paying a premium for brand-name storage with HA and data-protection features, because it is unwise to test radical new technologies in that environment.

Properly architected SDS platforms enable the use of heterogeneous commodity hardware to drive the lowest possible cost, orchestrate data services such as replication, and create policy-driven, heat-map-based tiers that place data on the appropriate storage media. An SDS approach eliminates the reliance on expensive, proprietary hardware and the danger of vendor lock-in.
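The heat-map-based tiering described above can be sketched in a few lines. This is a minimal illustration, not any vendor's policy engine: the tier names, heat thresholds and the accesses-per-day metric are all assumptions made for the example.

```python
# Hypothetical tiering policy: place hot data on fast media, cold data on
# cheap media. Tiers are checked from hottest to coldest; the thresholds
# below are illustrative assumptions, not product defaults.
TIERS = [
    ("nvme_flash", 1000),  # place here if accesses/day >= 1000
    ("sata_ssd", 100),     # warm tier
    ("hdd", 0),            # catch-all cold tier
]

def place(accesses_per_day: int) -> str:
    """Return the first tier whose heat threshold the object meets."""
    for tier, threshold in TIERS:
        if accesses_per_day >= threshold:
            return tier
    return TIERS[-1][0]

# A heat map of per-object access counts drives the placement decision.
heat_map = {"orders.db": 5000, "logs-2023.tar": 12, "archive.bak": 0}
placement = {name: place(heat) for name, heat in heat_map.items()}
```

In a real SDS platform the same decision would run continuously, demoting data as it cools and promoting it as it heats up, but the policy shape is the same: a threshold table evaluated against observed access patterns.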

Enterprises move to different storage models

The two most common models for scale-out, hyper-scale storage are direct-attached storage (DAS) and a model based on network protocols such as iSCSI or NVMe. Some very large custom data centre installations at companies with the right protocol-level engineering staff run on homemade, workload-specific protocols developed to optimise storage traffic for custom use cases.

But the DAS model is constrained by the drive slots available in a server, so its scale is limited and can be outgrown quickly. DAS also ties compute and storage together, so the two cannot be scaled independently. As a result, enterprises have started to move to models that use a separate storage layer on top of commodity hardware.
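The slot constraint is easy to see with some back-of-the-envelope arithmetic. All the figures below (24 bays per server, 8 TB drives) are illustrative assumptions, not vendor data; the point is that with DAS, capacity growth forces whole-server purchases even when no extra compute is needed.

```python
# Illustrative DAS ceiling: drive bays per server times drive capacity.
SLOTS_PER_SERVER = 24          # assumed drive bays in one server
DRIVE_TB = 8                   # assumed capacity per drive
DAS_PER_SERVER_TB = SLOTS_PER_SERVER * DRIVE_TB  # 192 TB ceiling per node

def das_servers_needed(capacity_tb: int) -> int:
    """Whole servers required to hold capacity_tb with DAS alone."""
    return -(-capacity_tb // DAS_PER_SERVER_TB)  # ceiling division

# A 2 PB requirement forces ~11 full servers for storage alone,
# regardless of how much compute the workload actually needs.
servers = das_servers_needed(2000)
```

A disaggregated model with a separate storage layer removes this coupling: storage nodes and compute nodes are purchased and scaled on their own curves.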

SDS advantages in managing new storage environments

SDS adds intelligent orchestration and management via an abstraction layer that separates heterogeneous storage hardware from applications, resulting in a more resilient, efficient and cost-effective infrastructure.

Because SDS is hardware-agnostic, enterprises can introduce new storage technologies into their existing infrastructure, eliminating the need to deploy greenfield infrastructure when migrating to newer storage models.

SDS allows the migration from legacy to modern technologies to happen over time, maximising return on investment (ROI) from an established storage infrastructure. It provides flexibility in data migration, seamless tech-refresh cycles and independent scaling of storage and server resources.
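The abstraction layer and gradual migration described above can be sketched as a common interface that heterogeneous backends plug into. The class and method names here are illustrative assumptions, not a real SDS API; the point is that once applications talk to one interface, data can move between backends without the application noticing.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Hypothetical common interface that every backend driver implements."""
    @abstractmethod
    def read(self, key: str) -> bytes: ...
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

class InMemoryBackend(StorageBackend):
    """Stand-in for any concrete device driver (legacy array, new flash node)."""
    def __init__(self):
        self._blobs = {}
    def read(self, key: str) -> bytes:
        return self._blobs[key]
    def write(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

def migrate(src: StorageBackend, dst: StorageBackend, keys):
    """Move data between heterogeneous backends through the common interface."""
    for k in keys:
        dst.write(k, src.read(k))
```

Because `migrate` only sees the interface, a tech refresh becomes a background copy from one backend to another rather than a forklift replacement.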

Even where data protection and high availability (HA) capabilities aren’t necessary, SDS can provide valuable features such as actionable predictive analytics, Wide Area Network (WAN) optimisation, application-aware snapshots, clones, Quality of Service (QoS), de-duplication and data compression.
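Of the data services listed above, de-duplication is the easiest to illustrate. The sketch below uses fixed-size chunks keyed by SHA-256, which is an assumption for clarity: production SDS engines typically use variable-size chunking and far more sophisticated indexes.

```python
import hashlib

CHUNK = 4096  # assumed fixed chunk size for this sketch

def dedup_store(data: bytes, store: dict) -> list:
    """Split data into chunks, keep one copy per unique chunk, return a recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)  # only previously unseen chunks consume space
        recipe.append(key)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its recipe of chunk keys."""
    return b"".join(store[k] for k in recipe)

store = {}
payload = b"A" * CHUNK * 3 + b"B" * CHUNK  # three identical chunks + one unique
recipe = dedup_store(payload, store)
# store now holds 2 unique chunks instead of 4
```

The same content-addressing idea underlies snapshots and clones: a new recipe that shares unchanged chunks with the original costs almost nothing.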

Squeezing the lowest possible TCO out of storage

SDS blends well with hyper-scale infrastructures built to meet growing requirements for storage flexibility, density and performance. Falling prices of flash, the introduction of various flavours of storage-class memory, and an increasing appetite for commoditisation of the data centre infrastructure, have helped fuel possibilities in hyper-scale storage. SDS enables deployment of storage technologies with different capabilities, at various cost points, to drive the lowest possible Total Cost of Ownership (TCO).
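A simple model shows why mixing media at different cost points drives down TCO. Every number below (prices per TB, power draw, electricity cost, the 10% hot-data split) is an illustrative assumption, not a quote or benchmark.

```python
def tco(capacity_tb, price_per_tb, watts_per_tb, years=5, kwh_cost=0.12):
    """Acquisition cost plus electricity over the service life (all inputs assumed)."""
    capex = capacity_tb * price_per_tb
    opex = capacity_tb * watts_per_tb / 1000 * 24 * 365 * years * kwh_cost
    return capex + opex

# Hypothetical comparison for 1 PB: all-flash versus a tiered split that
# keeps 10% of hot data on flash and the remaining 90% on cheaper HDD.
flash_only = tco(1000, price_per_tb=80, watts_per_tb=4)
tiered = tco(100, price_per_tb=80, watts_per_tb=4) \
       + tco(900, price_per_tb=15, watts_per_tb=7)
```

Under these assumptions the tiered layout costs roughly half of the all-flash one; the exact ratio depends entirely on the inputs, which is why SDS platforms that automate tier placement matter so much at scale.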

> See also: Storage 'gravity': the key to unlocking the software-defined data centre

Most data centres, battling enormous data growth and challenging SLAs, have to squeeze the lowest possible TCO out of their investments. To achieve this goal, they need an SDS solution that can manage every kind of storage hardware, on-site or in the cloud, extract value from existing environments and future storage investments, and maximise flexibility and operational efficiency.

Sourced from Farid Yavari, VP of Technology, FalconStor