The right storage software for DevOps

A major operational problem has emerged in on-premises enterprise data centres over the past ten years: a fundamental mismatch between the infrastructure and the needs of increasingly virtualised applications.

There is a major contrast between what organisations want their staff to focus on – strategic projects, new application deployment, new customer acquisition and releasing new products – and what they actually spend too much time on: infrastructure ‘plumbing’ and the reconfiguring, re-architecting and redeploying of every layer of the stack.

The widespread adoption of virtualisation has added complexity to IT infrastructure. At the same time, it is spurring businesses to accelerate development efforts and deliver applications and services through a DevOps model.

The arrival of cloud infrastructure and cloud-native workloads has made addressing the mismatch an even more pressing issue. As they seek to overcome the shortcomings of outdated traditional infrastructures, organisations are starting to adopt a cloud strategy that includes the public cloud and, in many cases, their own private cloud built from all-flash storage and intelligent software.

This move to a hybrid cloud platform – a mix of public and private cloud – seeks to blend the agility and scale of public cloud for the workloads that need it with on-premises control of the workloads and data that are too valuable to entrust to other providers.

An effective storage platform should combine the performance, control and management of internal data centres with the agility and scale of public cloud, giving organisations the ability to build and run agile environments for cloud-native and mission-critical applications in their own data centres.

The attraction is that such a platform helps to solve the fundamental mismatch between infrastructure and virtualised applications, and prepares an organisation to adopt DevOps practices, which traditional infrastructure cannot fully support.

DevOps essentially seeks to merge two personas into one, fostering the best possible communication and collaboration between the developers who create platforms and runtimes and the operations teams who lead configuration management. Its requirements are the ability to build applications with the latest production data, distribute updates quickly with more application testing in less time, accelerate the release cycle, speed up integration testing and reduce restoration time.

Choosing the correct storage software for a DevOps environment

The growing emphasis on DevOps is placing an extra burden on infrastructure teams, so choosing the right platform to underpin the IT infrastructure is highly significant. Not only will this make life easier for the IT infrastructure team, but it will ensure the DevOps model works for the organisation.

Traditional storage systems are often unable to cope with the requirements of a simple, flexible and automated enterprise infrastructure. There are a number of issues that need to be taken on board for the hybrid enterprise infrastructure to work with the DevOps model, including the following:

Copy data management

The issue with copy data management in DevOps is that refreshing and updating test/development environments is time-consuming. In addition, rapid test cycles require data synchronisation with potentially hundreds of servers. What is required is a system that provides up-to-date virtual copies to the DevOps team, eliminates the need for physical data duplication, lifts the load on production storage and protects application performance.
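As a minimal sketch of what that looks like in practice – assuming a storage array that exposes snapshots and cloning over a REST API; the endpoint paths, field names and credentials below are hypothetical – a test environment could be refreshed from production without any physical copy:

```python
import requests

BASE = "https://storage.example.com/api/v1"  # hypothetical array endpoint
AUTH = ("svc-devops", "secret")              # placeholder credentials

def refresh_dev_copy(prod_vm_id: str, dev_name: str) -> str:
    """Give the DevOps team a current virtual copy of a production VM
    without physically duplicating any data."""
    # Find the most recent snapshot of the production VM
    snaps = requests.get(f"{BASE}/vms/{prod_vm_id}/snapshots",
                         auth=AUTH, timeout=30).json()
    latest = max(snaps, key=lambda s: s["created"])

    # Clone it: the copy is served from shared blocks, so production
    # storage capacity and performance are left untouched
    resp = requests.post(f"{BASE}/snapshots/{latest['id']}/clones",
                         json={"name": dev_name}, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]
```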

Data protection and disaster recovery (DR)

Modern storage systems, in conjunction with next-generation software, can create data copies using snapshots, cloning and replication – and they can use the same technologies to protect and manage the development side of the environment in the event of failure or data corruption.
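Continuing with the same hypothetical API (the schedule and replication values are illustrative), a sketch of how snapshot scheduling and replication might be applied to a development VM so it can be recovered after corruption:

```python
import requests

BASE = "https://storage.example.com/api/v1"  # hypothetical array endpoint
AUTH = ("svc-devops", "secret")              # placeholder credentials

def protect_environment(vm_id: str, dr_target: str) -> None:
    """Apply a snapshot schedule and replication to a VM so a dev
    environment can be rolled back or rebuilt after data corruption."""
    # Hourly snapshots, retained for one day (illustrative values)
    requests.put(f"{BASE}/vms/{vm_id}/snapshot-schedule",
                 json={"interval_minutes": 60, "retention_hours": 24},
                 auth=AUTH, timeout=30).raise_for_status()
    # Replicate those snapshots to a second array for disaster recovery
    requests.put(f"{BASE}/vms/{vm_id}/replication",
                 json={"target": dr_target, "rpo_minutes": 15},
                 auth=AUTH, timeout=30).raise_for_status()
```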

Quality of service (QoS) and performance guarantees

QoS and guaranteed performance are major factors for an IT team to consider when seeking to adopt DevOps practices. QoS is vital for controlling how a storage system’s performance is allocated across different workloads: it gives organisations the ability to set a maximum and a minimum on the IOPS or bandwidth each one consumes. Using the higher performance of all-flash together with QoS, DevOps work can be consolidated on the same platform as production, making it easier to access production data sets while reducing the storage footprint.
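To illustrate the idea – again against a hypothetical REST API, with illustrative IOPS figures – production might be given a guaranteed performance floor while test copies are capped:

```python
import requests

BASE = "https://storage.example.com/api/v1"  # hypothetical array endpoint
AUTH = ("svc-devops", "secret")              # placeholder credentials

def set_qos(vm_id: str, min_iops: int, max_iops: int) -> None:
    """Pin a floor and a ceiling on a VM’s IOPS so dev/test work can
    share an all-flash platform with production safely."""
    requests.put(f"{BASE}/vms/{vm_id}/qos",
                 json={"min_iops": min_iops, "max_iops": max_iops},
                 auth=AUTH, timeout=30).raise_for_status()

# Production gets a guaranteed floor; the test copy gets a hard cap
set_qos("vm-prod-db01", min_iops=20_000, max_iops=50_000)
set_qos("vm-test-db01", min_iops=0, max_iops=2_000)
```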

Monitoring and troubleshooting

If an organisation wants to provide data at the speed DevOps practices demand, it needs the ability to monitor the infrastructure and correct problems and misconfigurations rapidly.

Continuous monitoring and end-to-end visibility across the IT stack – along with predictive analytics, integration with other monitoring tools and VM-level monitoring – are particularly valuable attributes when seeking to provide the basis for data-driven decisions in a DevOps environment.
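A minimal sketch of what VM-level monitoring can look like, assuming the same hypothetical API exposes per-VM latency (the threshold is illustrative):

```python
import requests

BASE = "https://storage.example.com/api/v1"  # hypothetical array endpoint
AUTH = ("svc-devops", "secret")              # placeholder credentials
THRESHOLD_MS = 5.0                           # illustrative alert threshold

def vms_breaching_latency() -> list[str]:
    """Poll per-VM end-to-end latency and return the VMs over the
    threshold, so misconfigurations surface before developers notice."""
    vms = requests.get(f"{BASE}/vms", params={"include": "latency"},
                       auth=AUTH, timeout=30).json()
    return [vm["name"] for vm in vms
            if vm["latency"]["total_ms"] > THRESHOLD_MS]

for name in vms_breaching_latency():
    print(f"WARN: {name} exceeds {THRESHOLD_MS} ms end-to-end latency")
```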

Automation

In addition, it is crucial that the storage platform provides the capability to provision and manage IT infrastructure programmatically. With increased automation, organisations can create and break down environments as necessary, incorporate snapshots and cloning into daily workflows, and eliminate the potential for error that comes with manual or interactive configuration.
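For instance, an ephemeral test environment can be created and broken down automatically around a CI run – sketched here with the same hypothetical endpoints and a hypothetical test harness:

```python
import contextlib
import requests

BASE = "https://storage.example.com/api/v1"  # hypothetical array endpoint
AUTH = ("svc-devops", "secret")              # placeholder credentials

@contextlib.contextmanager
def ephemeral_env(snapshot_id: str, name: str):
    """Clone a snapshot for a test run, then delete the clone when the
    run finishes, with no manual or interactive steps."""
    clone = requests.post(f"{BASE}/snapshots/{snapshot_id}/clones",
                          json={"name": name}, auth=AUTH, timeout=30).json()
    try:
        yield clone["id"]
    finally:
        # Break the environment down even if the tests fail
        requests.delete(f"{BASE}/clones/{clone['id']}", auth=AUTH, timeout=30)

# Example use inside a CI job:
# with ephemeral_env("snap-nightly", "ci-run-42") as env_id:
#     run_integration_tests(env_id)   # hypothetical test harness
```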

Fast storage software is fundamental

DevOps environments thrive on a storage system that can match the requirements of a simple, flexible and automated enterprise infrastructure. Storage that operates at the VM level removes the need for LUNs and volumes, enabling enterprises to work at per-VM granularity. With clean REST APIs, users can connect all-flash storage to compute, network and the other elements of the cloud, and see and share across the entire infrastructure.
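Working at the VM level means an orchestration layer can address an individual VM by name over REST, with no LUN or volume mapping in between – sketched, as before, against hypothetical endpoints and field names:

```python
import requests

BASE = "https://storage.example.com/api/v1"  # hypothetical array endpoint
AUTH = ("svc-devops", "secret")              # placeholder credentials

def vm_storage_report(vm_name: str) -> dict:
    """Query one VM’s storage directly by name; no LUN or volume
    mapping is needed because the array tracks storage per VM."""
    vms = requests.get(f"{BASE}/vms", params={"name": vm_name},
                       auth=AUTH, timeout=30).json()
    vm = vms[0]
    return {
        "name": vm["name"],
        "provisioned_gib": vm["space"]["provisioned_gib"],
        "used_gib": vm["space"]["used_gib"],
        "latency_ms": vm["latency"]["total_ms"],
    }
```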

Application test and development can be accelerated from days or weeks to minutes by utilising the right storage platform. By retaining custom settings and automating with APIs, rather than rebuilding the entire environment after every data refresh, enterprises can radically speed up the release cycle and integration testing.
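One hedged sketch of such a refresh, using the same hypothetical API: the dev VM’s tuned settings are captured, fresh production data is swapped in, and the settings are reapplied rather than rebuilt:

```python
import requests

BASE = "https://storage.example.com/api/v1"  # hypothetical array endpoint
AUTH = ("svc-devops", "secret")              # placeholder credentials

def refresh_in_place(dev_vm_id: str, snapshot_id: str) -> None:
    """Swap fresh production data into an existing dev VM while keeping
    its tuned settings, instead of rebuilding the environment."""
    # Capture the settings the team has already tuned on the dev copy
    qos = requests.get(f"{BASE}/vms/{dev_vm_id}/qos",
                       auth=AUTH, timeout=30).json()
    # Replace the VM’s data with the new production snapshot
    requests.post(f"{BASE}/vms/{dev_vm_id}/restore",
                  json={"snapshot_id": snapshot_id},
                  auth=AUTH, timeout=30).raise_for_status()
    # Reapply the saved settings so nothing has to be reconfigured
    requests.put(f"{BASE}/vms/{dev_vm_id}/qos", json=qos,
                 auth=AUTH, timeout=30).raise_for_status()
```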

For example, by using this approach one financial services company was able to reduce development update time from five hours to five minutes, with a considerable reduction in latency. With the visibility into files, VMs and vDisks provided by the storage, these can be recovered in fewer than five clicks, dramatically reducing response times.

Keep up with the pace of DevOps

Enterprises are moving ever faster towards a mixture of virtual and physical, on-premises and cloud-based IT. It is therefore vital that an organisation’s storage infrastructure integrates seamlessly across both physical and virtual environments. Getting this right from the outset allows DevOps environments to meet their objectives while eliminating concerns around performance, scalability, manageability, resilience and flexibility.

Choosing the right storage to underpin an organisation’s IT infrastructure is essential if the business is to be simple, fast and flexible. Adopting storage that can operate at the VM level and guarantee application performance will ensure data is ready for the DevOps model. If the requirement is to deliver data at speed, organisations simply cannot afford to keep their storage in the slow lane.

 

Sourced by Mark Young, VP Systems Engineering, Tintri
