Putting the trust back in software testing in 2022

2021 has seen more high-profile software outages, continuing a growing trend over recent years that has seen businesses suffer damaged reputations and financial loss. Businesses are increasingly digital-first, with more software being built and customised than ever before. But when the world runs on software, just a single error can be catastrophic.

In June, Fastly ‘broke the internet’ when a valid software configuration change by one of its customers triggered a previously undiscovered bug introduced during a May software deployment. In October, Meta saw Facebook, WhatsApp and Instagram go down for seven hours. The outage cost Meta an estimated $100 million in lost online advertising sales, while its shares fell 5%, wiping about $40 billion from its market value.

These are just the outages that make the headlines. Many more occur on a regular basis, and a common thread runs through their causes: human error. Most importantly, it’s not just brand damage and bottom lines that are affected; people’s lives can be at risk. When the UK’s national health service experienced a computer failure, GPs found themselves unable to access critical blood and X-ray results, and medical appointments could not take place, creating a backlog for care.


These incidents are not inevitable. Instead, they underscore the critical need for businesses to test their software properly and thoroughly to identify problems before deployment. These downtimes expose fundamental flaws in how the majority of businesses approach testing.

Millions of organisations rely on manual processes to check the quality of their software applications, despite a fully manual approach presenting a litany of problems. Firstly, with more than 70% of outages caused by human error, testing software manually still leaves companies highly prone to issues. Secondly, it is exceptionally resource-intensive and requires specialist skills. Given the world is in the midst of an acute digital talent crisis, many businesses lack the personnel to dedicate to manual testing.

Compounding this challenge is the intrinsic link between software development and business success. With companies coming under more pressure than ever to release faster and more regularly, the sheer volume of software needing testing has skyrocketed, placing a further burden on resources already stretched to breaking point. Companies should be testing their software applications 24/7, but the resource-heavy nature of manual testing makes this impossible. Performing repetitive tasks is also demotivating, and that loss of focus is often what leads to critical errors in the first place. As a result, firms are forced to choose between cutting corners on testing or accepting unacceptable delays in time to market and losing their competitive edge. QA teams are faced with an impossible task, and it is simply not a problem that can be solved by adding more people to the team.

Instead of relying on manual processes, automation can be harnessed to supercharge testing efforts. By leveraging automation, companies can drive efficiency, enabling them to test greater volumes of software while simultaneously removing the risk of human error. This can reduce application errors by as much as 90%. Automated testing can cut the time spent on test data preparation by some 80%, while feedback cycles are accelerated. This fosters a culture of continuous integration, with automation built into CI/CD pipelines, ultimately making it effortless to test and release software.
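As a purely illustrative sketch of what this looks like in practice (the article names no specific tools, and the function under test is hypothetical), an automated check like the following can run unattended on every commit, replacing a manual verification step:

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class DiscountTests(unittest.TestCase):
    """Regression tests a CI pipeline can run automatically, no human needed."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_discount_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

A CI/CD pipeline would simply invoke the test runner (for example, `python -m unittest`) on each push, so every change is verified before deployment rather than checked by hand afterwards.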


These are significant benefits, enabling companies to accelerate time to market tenfold and giving them a clear advantage in our digital economy. However, not all automation platforms are created equal. Some platforms rely on code or low code, meaning that while they create some efficiencies versus manual processes, they still require technical skills and an understanding of coding to operate. With 64% of companies experiencing a shortage of software engineers, finding people with the capabilities to operate such platforms is exceptionally challenging. But when it comes to test automation, many organisations already have the test and QA talent in-house that, given the right tool, could use their experience to create a scalable test automation strategy. It’s the ill-suited, code-dependent tools that are the problem, not the people.

This landscape underscores the importance of no-code solutions. While “low code” and “no code” are often used interchangeably, they are not one and the same. Low code still requires developer skills, creating scalability issues and affecting resourcing. In contrast, truly no-code solutions can democratise automation, as testers can build test logic based on real business processes. This means testing experts already working within the company can easily automate workflows in a scalable, sustainable way.
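To make the distinction concrete, here is an illustrative sketch only (not any specific vendor’s format) of the idea behind no-code testing: the test itself is a declarative list of business steps, as a tester might assemble in a visual tool, while a generic runner executes them; all step names and the toy cart model are assumptions for the example:

```python
# A test expressed as declarative business steps rather than code.
checkout_test = [
    {"step": "add_to_cart", "item": "widget", "price": 19.99},
    {"step": "apply_voucher", "percent": 10},
    {"step": "expect_total", "value": 17.99},
]


def run_test(steps):
    """Interpret declarative steps against a toy shopping-cart model."""
    total = 0.0
    for s in steps:
        if s["step"] == "add_to_cart":
            total += s["price"]
        elif s["step"] == "apply_voucher":
            total *= 1 - s["percent"] / 100
        elif s["step"] == "expect_total":
            actual = round(total, 2)
            assert actual == s["value"], f"expected {s['value']}, got {actual}"
        else:
            raise ValueError(f"unknown step: {s['step']}")
    return True
```

The point of the sketch is the separation of concerns: the tester only describes *what* to check (the step list mirrors the real business process), while the tool owns *how* it is executed, which is what makes the approach accessible to non-developers.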

Rather than forcing businesses to search for talent externally, no-code allows companies to harness their existing capabilities and build a flow in minutes. Technical resources are then free to focus on high-value tasks. Not only does this help to accelerate innovation and digital transformation strategies, unlocking 97% productivity gains – it also ensures teams can undertake more fulfilling work.

Crucially, by adopting a no-code solution, businesses can achieve a wider coverage of testing, without having to compromise any existing software or systems. As a result, they can ensure their applications are of the highest quality, minimising the risk of damaging outages while accelerating business growth. Essentially, this approach creates trust in the testing process, and puts the business back in control.

Some 25% of total IT spend is allocated to quality assurance, yet 85% of all testing still happens manually. For as long as manual testing remains prominent in business, we will continue to see high-profile outages like those that have topped news bulletins in 2021. Businesses must now recognise this as a critical need: by focusing on solving test automation, they are also focusing on securing business continuity.

Written by Christian Brink Frederiksen, co-founder and CEO of Leapwork
