Is the modern data centre mission impossible?

The way businesses use data is changing, driven largely by the adoption of cloud computing, virtualisation, big data, the Internet of Things and social media.

This exponential growth in data is partly the product of technological advancement: more online and mobile transactions, higher-resolution images and video, and new regulations requiring digital information to be stored for longer periods of time.

For most organisations, the resulting data is their most valuable asset: they want to capture and analyse it, turning it into usable, insightful information that helps grow the business.

This reliance on data is why most organisations are highly dependent on their IT infrastructures. According to a recent IDC study, reliability and performance are the top contributors to an optimised storage infrastructure. Yet with budget constraints, increased complexity and the limitations of current storage architectures, companies are increasingly forced to choose between scalability, availability, performance and cost.


Even with all of these technological and industry advancements, no single storage solution has been able to meet all of these requirements while remaining competitive on cost. That is because achieving storage nirvana means tuning variables such as availability, reliability, performance, manageability and application ecosystem integration almost to perfection.

According to the IDC survey, storage outages are still a common occurrence, especially in larger companies: more than a third of respondents reported that their organisations had experienced at least one storage outage in the previous 18 months.

Such downtime often means long recovery times and heavy losses of data, revenue and trading, with costs that can run into millions of dollars per hour.
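To put rough numbers on it (every figure below is invented, sketched here in Python), the arithmetic behind such estimates is simple: lost revenue for every hour of outage, plus the cost of the recovery itself.

    # Back-of-envelope downtime cost; all figures are purely illustrative.
    revenue_per_hour = 2_000_000   # hypothetical hourly online-trading revenue
    outage_hours = 3               # time taken to restore service
    recovery_cost = 250_000        # overtime, data recovery, contractual penalties

    total_cost = revenue_per_hour * outage_hours + recovery_cost
    print(f"Estimated outage cost: ${total_cost:,}")  # $6,250,000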

It is therefore not surprising that the same IDC research identified IT infrastructure reliability as a key requirement for decision makers managing large data sets.

Relying on performance

Most companies are highly reliant on the performance of their data centres to maintain productivity and delivery of critical revenue-generating services. When designing an IT infrastructure, it is vital to outline the capabilities it will need to effectively serve the business.

Throughput should be consistent and latency low, and data should be highly available, whether demanded by a user on premises or on the move.

The storage infrastructure, for example, must be able to deal with spikes in traffic with little to no impact on performance – this is where many solutions today can still struggle.
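One common way to quantify that struggle is to look at tail latency rather than the average, since a traffic spike shows up first in the slowest requests. The sketch below (with a simulated read standing in for a real storage client) reports median and 99th-percentile latency:

    import random
    import time

    def latency_percentiles(op, n=2000):
        """Time n calls to a storage operation; return (p50, p99) latency in ms."""
        samples = []
        for _ in range(n):
            start = time.perf_counter()
            op()
            samples.append((time.perf_counter() - start) * 1000)
        samples.sort()
        return samples[n // 2], samples[int(n * 0.99)]

    def simulated_read():
        # 1% of requests hit a 'spike' and take ~20 ms; the rest take ~0.2 ms.
        time.sleep(random.choices([0.0002, 0.02], weights=[99, 1])[0])

    p50, p99 = latency_percentiles(simulated_read)
    print(f"p50 = {p50:.2f} ms, p99 = {p99:.2f} ms")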

The necessary performance can be realised by choosing the right technology for the specific needs of the business. This may not be a single system, such as purely disk- or flash-based arrays; it may be a hybrid, perhaps retaining some storage on premises and outsourcing some to the cloud. Flexibility is the name of the game.
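As a purely illustrative sketch of that flexibility, a hybrid placement decision can be reduced to a simple rule of thumb; the tiers and thresholds below are invented for the example, not a recommendation.

    from dataclasses import dataclass

    @dataclass
    class Dataset:
        name: str
        accesses_per_day: int    # how hot the data is
        latency_sensitive: bool  # does an application block on it?

    def place(ds: Dataset) -> str:
        """Toy placement rule for a hybrid estate (illustrative thresholds only)."""
        if ds.latency_sensitive and ds.accesses_per_day > 1_000:
            return "on-premises flash"
        if ds.accesses_per_day > 10:
            return "on-premises disk"
        return "cloud archive"

    for ds in (Dataset("oltp-db", 50_000, True),
               Dataset("monthly-reports", 40, False),
               Dataset("old-backups", 0, False)):
        print(f"{ds.name} -> {place(ds)}")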

Traditionally, the ability of a storage system to meet the needs of a company has been directly linked to its cost: the more boxes it ticked, the higher the price tag.

In recent years, power, cooling and data centre footprint have surfaced as additional costs that most solutions on the market today are yet to address.

It is therefore clear why curbing capex and opex is also critical when it comes to storage purchasing decisions – almost half (47%) of the participants in IDC's survey cited total cost of ownership (TCO) as the most important factor.

Of course, the idea of TCO today is different from what it was a few years ago: system performance, uptime and ease of use can now greatly sway buyers. This is why some newer solutions are quickly gaining ground: they have been built from the ground up to address the demands now placed on storage environments.
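A simple worked example shows why the sticker price alone can mislead; the figures here are hypothetical.

    def tco(capex, annual_opex, years=5):
        """Five-year total cost of ownership: purchase price plus running costs."""
        return capex + years * annual_opex

    # Two hypothetical arrays with equal usable capacity.
    legacy = tco(capex=400_000, annual_opex=90_000)  # cheap to buy, costly to run
    modern = tco(capex=550_000, annual_opex=40_000)  # pricier up front, leaner opex

    print(f"legacy: ${legacy:,}, modern: ${modern:,}")
    # legacy: $850,000, modern: $750,000

On those (invented) numbers, the array that costs more on day one is the cheaper system to own over five years.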

Ecosystem integration and platform architecture

One of the issues most commonly found in data centres is the isolation of data, largely because a company's various IT departments work as separate units instead of communicating efficiently.

This breakdown in communication can lead, for example, to teams requesting additional resources when a redesign of existing ones would be the better solution, or to one team adopting a policy that does not comply with corporate guidelines.

One solution to this problem is to create and maintain closer integration between these silos: a more centralised, yet transparent, approach to managing the data centre.

This should give the IT department a deeper understanding of the needs and expectations of each team (from HR to applications, and from finance to security, for example).

Such a model also allows organisations to achieve efficiencies that are only possible through a unified approach to storage and overall data centre management.

Moreover, end users expect a high level of flexibility from their storage environments: support for secondary applications at low cost, without giving up reliability, low latency, availability, throughput or scalability.

Meeting these challenges head on requires storage systems that integrate simply and easily into the wider storage ecosystem, eliminating the need to compromise between capacity, performance, reliability and cost.

As storage environments continue to grow at an unprecedented pace, they also become more complicated to manage. With the limitations of legacy storage architectures becoming apparent, there is increasing demand for architectures designed natively for today's workloads. Anything less may hold back business performance.

A well-managed IT infrastructure is one that keeps costs and the need for additional training and extra resources to a minimum. It’s one of the biggest contributors to IT opex reduction.

However, a large part of the storage environment is still manually configured and the storage administrator’s time is frequently taken up with mundane tasks that should be automated to allow greater focus on strategic decision-making.
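What that automation looks like varies by vendor, but assuming the array exposes a REST API (the endpoint, payload and field names below are hypothetical), provisioning a volume becomes a repeatable script rather than a click-driven chore:

    import requests

    API = "https://storage.example.internal/api/v1"  # hypothetical endpoint
    TOKEN = "service-account-token"                  # in practice, read from a secrets store

    def provision_volume(name: str, size_gb: int, tier: str = "standard") -> str:
        """Create a volume via the array's (hypothetical) REST API; return its ID."""
        resp = requests.post(
            f"{API}/volumes",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"name": name, "size_gb": size_gb, "tier": tier},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["id"]

    # One call per application instead of minutes in a GUI each time.
    for app in ("crm", "billing", "analytics"):
        print(provision_volume(f"{app}-data", size_gb=500))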


Clearly, reducing operational complexity is becoming vital to achieving business agility, which an organisation needs if it is to innovate and move with the times.

IDC's survey bears this out, highlighting storage management as a particularly critical issue and one that influences many storage purchasing decisions. CIOs should look for storage tools that increase administrator efficiency and accelerate application deployment.

Horses for courses

The old adage “horses for courses” has never been more apt for the data storage industry. The market is awash with technologies, architectures and solutions.

Devising the best combination is a unique process as each business must consider the nature of the data, the users’ needs and expectations, SLAs, physical and legal constraints, and much more.

At the top of the priority list is the ability for the infrastructure to adapt as the organisation grows and changes.

When looking for a storage system to meet the needs of today’s modern data centres, organisations should make it a priority to measure what trade-offs the vendor forces them to make.

The key is carefully balancing all of the business needs without breaking the bank. As the saying often attributed to Charles Darwin goes, it is not the strongest that survive, nor the most intelligent, but those most responsive to change.

 

Sourced from Sabo Diab, Infinidat
