How intelligent software delivery can accelerate digital experience success

Demand for digital services is undeniably soaring, along with user expectations. Consumers want a seamless, connected experience, and organisations are expected to invest $1.78 trillion in digital transformation in 2022, up from $1.31 trillion in 2020, as they strive to keep pace with these demands.

This rapid pace of transformation has put ever greater pressure on DevOps teams to move faster without compromising quality. They are now expected to build and launch smaller, incremental updates to applications multiple times a day. Just a few years ago, teams would more likely have delivered one large update per quarter. With this mounting pressure, even major corporations that exemplify the highest standards of digital experience don’t always get it right.

Facebook’s outage in October, which left users unable to access its services for six hours, is an example of how even a small configuration change to digital infrastructure can create chaos. For organisations to innovate without undermining user experience, they need modern and intelligent development and delivery practices. These can reduce the risk of unexpected errors, improve code quality, and relieve the burden on DevOps teams.


Compromising quality for speed

Innovation cycles have become faster. Recent Dynatrace research indicates that organisations expect the frequency of their software releases to increase by 58% by 2023. But many will find it difficult to keep pace, as DevOps teams already struggle with existing workloads. Countless hours have been invested in developing updates for hundreds of variations in devices, applications, and operating systems. As IT complexity grows, the demands on DevOps teams’ time will increase even further.

Still, writing code is only half the battle. Time-consuming manual testing, increasingly fragmented toolchains, and the explosion of data that’s resulted from the shift to the cloud have added friction to the development process.

With so much to do and no additional resources, the pressure on DevOps teams can force them to sacrifice code quality. As a result, coding errors are more likely to slip through the net, jeopardising digital services and user experience.

Even small changes bring risk

Adding to the challenge, it can be difficult to understand the true impact of a new software release until it goes live. Worse still, if a change does create a problem, it is often difficult to roll it back and revert to a previous, stable version of the application.

Much of this challenge is created by the complexity of today’s multi-cloud environments. Digital services are made up of hundreds of millions of lines of code and billions of dependencies, spanning multiple platforms and different types of infrastructure. This interconnectedness makes it difficult for DevOps teams to understand the consequences of the changes they make — however minor they might seem.

It has also created alert overload, as cloud monitoring tools capture a volume, velocity, and variety of data that is beyond human capacity to manage. It’s often impossible for DevOps teams to quickly find the single line of code that has triggered a problem.


A more automated and intelligent approach

To prevent poor-quality code from reaching production and to ensure seamless user experiences, organisations need a more intelligent approach to software development.

This starts with applying continuous automation to repeatable tasks, which frees up DevOps teams to work on higher-value activities. First, organisations should establish automated quality gates that measure new builds against service-level objectives (SLOs) for key performance indicators such as response time or throughput. This means new code changes cannot go live unless they meet the minimum baseline for user experience, which prevents unexpected negative impact.
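The quality-gate idea above can be pictured as a simple pass/fail check in a delivery pipeline. The sketch below is illustrative only: the metric names, SLO thresholds, and `quality_gate` function are invented for this example, not taken from any specific tool.

```python
# Minimal sketch of an automated quality gate: compare a build's
# measured KPIs against SLO thresholds before allowing promotion.
# Metric names and limits here are hypothetical.

SLOS = {
    "response_time_ms": 300,   # p95 response time must stay under 300 ms
    "error_rate_pct": 1.0,     # error rate must stay under 1%
}

def quality_gate(build_metrics: dict) -> bool:
    """Return True only if every KPI meets its SLO baseline."""
    failures = [
        name for name, limit in SLOS.items()
        if build_metrics.get(name, float("inf")) > limit
    ]
    if failures:
        print(f"Gate failed on: {', '.join(failures)} - blocking release")
        return False
    print("All SLOs met - build may be promoted")
    return True

# A build that breaches the error-rate SLO is blocked from going live.
quality_gate({"response_time_ms": 250, "error_rate_pct": 2.5})
```

In practice the metrics would come from a monitoring or observability platform rather than a hand-built dictionary, but the principle is the same: code that misses the user-experience baseline never reaches production.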

In the event that something goes awry, organisations can improve their time to resolution by harnessing unified end-to-end observability capabilities. This level of observability provides DevOps teams with code-level insights into all software builds, apps, and services across any cloud platform, whether they’re in development or already deployed.

Combining this observability with AIOps – the use of AI in operations – can take those insights one step further, by automatically prioritising issues according to their business impact. This enables DevOps teams to quickly identify the most pressing alerts and resolve them, before users experience a problem.
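The prioritisation step can be sketched as ranking open alerts by an estimated business-impact score. This is a hedged illustration, not how any particular AIOps product works: the fields, weights, and `business_impact` scoring function are all invented for the example.

```python
# Illustrative sketch of impact-based alert prioritisation: rank open
# alerts by an estimated business-impact score (affected users weighted
# by how revenue-critical the impacted service is), so the most
# pressing issue surfaces first. All fields and weights are invented.

def business_impact(alert: dict) -> float:
    """Hypothetical score: users affected x revenue weight of the service."""
    return alert["affected_users"] * alert["revenue_weight"]

def prioritise(alerts: list[dict]) -> list[dict]:
    """Return alerts ordered most-impactful first."""
    return sorted(alerts, key=business_impact, reverse=True)

alerts = [
    {"id": "A1", "service": "internal-wiki", "affected_users": 50, "revenue_weight": 0.1},
    {"id": "A2", "service": "checkout", "affected_users": 800, "revenue_weight": 1.0},
    {"id": "A3", "service": "search", "affected_users": 2000, "revenue_weight": 0.3},
]
# The checkout outage ranks first despite fewer affected users than search.
print([a["id"] for a in prioritise(alerts)])
```

A real AIOps platform would derive these scores automatically from topology and dependency data, but the effect is the same: teams see the checkout outage before the wiki glitch.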

Relieving the pressure and delivering success

Improving development practices through AIOps, automation, and observability can significantly relieve the pressure on DevOps teams and help them to keep pace with digital transformation. As organisations continue to release software faster, it is increasingly important to integrate continuous and automatic insight into their entire digital services environment, to accelerate transformation and deliver more seamless software experiences.

Written by Greg Adams, regional vice-president UK&I at Dynatrace
