New apps and new versions of software are released frequently, some even continuously. In the on-demand world we live in, consumers want new capabilities and functionality yesterday, all with a seamless user experience.
Keeping up with this demand can be challenging. To ensure that releases happen quickly, most software testing covers far less than 1% of the possible user journeys through the software. For a website or a simple app, that's acceptable. But when it comes to critical pieces of technology, from surgical machines to autonomous vehicles, lives are at risk, and suddenly 1% coverage is far from ideal.
In principle, testing software should be simple: a testing team determines what will interact with the software and, from that, understands what it should test. In the physical world, however, you can't constrain those interactions.
When software or technology interacts with the physical world, there's a high level of variability and a lot can go wrong. If something goes wrong with safety-critical software, the consequences can be disastrous. Product testing is usually about making sure the right inputs produce the right outputs; when human life is involved, determining what the right output even is isn't simple.
Take, for example, surgery. Testing a piece of technology or software used in a surgical procedure is relatively manageable, as the procedure is restricted to a single patient; the inputs and outputs are simpler to quantify. With autonomous vehicles, however, there isn't just one person in the equation: there are multiple scenarios to consider, many of which also raise ethical questions.
If someone is travelling in an autonomous vehicle and it approaches a cliff, the software has three choices: drive over the cliff (killing the passenger), swerve to the left (hitting some pedestrians) or swerve to the right (hitting other pedestrians). Humans can be forgiven for a spur-of-the-moment decision that affects their own life and the lives of others. But with an autonomous vehicle, someone has to programme the car to carry out the decision in advance.
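To make the point concrete, here is a minimal, purely illustrative sketch (the scenario, names and "minimise casualties" rule are hypothetical assumptions, not any real autonomous-driving system): whatever ethical stance is chosen, it ultimately has to be written down as ordinary branching logic that the vehicle then executes.

```python
# Hypothetical sketch: the cliff scenario reduced to a hard-coded rule.
# The 'minimise casualties' policy below is itself an ethical choice
# that a person had to make and encode; the software merely executes it.

from dataclasses import dataclass

@dataclass
class Scenario:
    passengers: int          # occupants of the vehicle
    pedestrians_left: int    # pedestrians hit by swerving left
    pedestrians_right: int   # pedestrians hit by swerving right

def choose_action(s: Scenario) -> str:
    """Return the option that harms the fewest people."""
    options = {
        "drive_over_cliff": s.passengers,
        "swerve_left": s.pedestrians_left,
        "swerve_right": s.pedestrians_right,
    }
    return min(options, key=options.get)

# One passenger versus groups of three and two pedestrians:
# under this rule, the software sacrifices its own passenger.
print(choose_action(Scenario(passengers=1,
                             pedestrians_left=3,
                             pedestrians_right=2)))
```

The unsettling part is not the code, which is trivial, but that any such rule, however it is phrased, is a human moral judgement frozen into software before the situation ever arises.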
There is a philosophical element to consider when testing software for life-and-death situations. Manufacturers need to make that choice and programmers need to encode it, but the process is very far from simple. And if you, as a passenger, knew that the car you were getting into would choose the cliff, or would choose the pedestrians, would you get in?
Suddenly, human choices are codified. Are we ready for that yet?
Written by Antony Edwards, COO, Eggplant