Explainable AI or XAI: the key to overcoming the accountability challenge

AI has become a key part of our day-to-day lives and business operations. A report from Microsoft and EY analysing the outlook for AI in 2019 and beyond found that "65% of organisations in Europe expect AI to have a high or a very high impact on the core business."

In the banking and financial industries alone, the potential that AI has to improve the customer experience is vast. AI already informs important decisions on credit risk, wealth management and even financial crime risk assessments. Other applications include robo-advisory, intelligent pricing, product recommendation, investment services and debt collection.

However, the adoption of AI across business sectors has not come without its challenges. In a recent forecast, Forrester predicted rising demand for transparent and explainable AI models, stating that "45% of AI decision makers say trusting the AI system is either challenging or very challenging."

This isn’t very surprising when we consider that most companies today still work with what are known as “black box” AI systems. These opaque models rely on data and learn from each interaction, and can therefore easily and rapidly accelerate poor decision-making if fed corrupt or biased data.

These “black box” systems also leave the end customer in the dark, doing nothing to instil trust in the technology. This lack of trust is compounded by widespread scepticism among consumers, who are reluctant to share their personal data, especially if they cannot be sure how it will be used.

Explainable AI (XAI)

Fortunately, XAI models can overcome these concerns, while providing reassurance that decisions will be made in an appropriate and unbiased way.

“White box” XAI systems are highly transparent models that explain, in human language, how an AI decision has been made. Crucially, they do not rely solely on data, but can be elevated and augmented by human intelligence. These systems are built around causality, creating space for human judgement to verify that the machine learning is behaving ethically and to course-correct if it is not.
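
To make the idea concrete, here is a minimal sketch in Python of a “white box” credit decision: because the rules are explicit, the system can report, in plain language, exactly which checks an applicant passed or failed. The thresholds, field names and rules are invented for illustration and do not reflect any real scoring model.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_income: float
    monthly_debt: float
    missed_payments: int  # missed repayments in the last 12 months

# Each rule pairs a plain-language description with a test.
# All thresholds here are hypothetical, for illustration only.
RULES = [
    ("debt-to-income ratio is below 40%",
     lambda a: a.monthly_debt / a.monthly_income < 0.40),
    ("at most one missed repayment in the last year",
     lambda a: a.missed_payments <= 1),
    ("monthly income is at least 1,500",
     lambda a: a.monthly_income >= 1_500),
]

def decide(applicant: Applicant) -> tuple[bool, list[str]]:
    """Apply every rule; return the decision plus a line-by-line explanation."""
    approved = True
    explanation = []
    for description, test in RULES:
        passed = test(applicant)
        approved = approved and passed
        explanation.append(f"{'PASS' if passed else 'FAIL'}: {description}")
    return approved, explanation

approved, reasons = decide(Applicant(monthly_income=2_000,
                                     monthly_debt=900,
                                     missed_payments=0))
print("Approved" if approved else "Declined")
print("\n".join(reasons))
```

Because every outcome traces back to named rules, a human reviewer can audit the logic, spot a biased criterion and correct it directly, which an opaque learned model does not allow.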

This is extremely valuable when we consider that most companies only discover that their AI model is biased once it is too late.

In many sectors of the economy, XAI is creating positive outcomes for both the company and the customer. For example, in banking and finance, XAI systems have allowed institutions to carve out new revenue streams.

By providing insights into a particular AI outcome, banks can reroute customers who have been denied a service towards a more suitable option for which they do qualify. This allows banks to provide highly personalised services to customers and explore new product lines based on evidenced demand.

The customer, on the other hand, receives an explanation of why a particular service has been denied, and an alternative is offered in its place. With this insight, the customer may also be able to make lifestyle changes in order to attain their financial goals and improve their financial wellbeing.
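
As a toy illustration of this rerouting, the sketch below reuses the transparent-rules idea: the same explanation that justifies a denial is used to find the best product the customer does qualify for. The product names, thresholds and qualification logic are all hypothetical.

```python
# Hypothetical product catalogue, ordered from most to least demanding.
# Names, thresholds and qualification logic are invented for illustration.
PRODUCTS = {
    "Premium loan":           {"min_income": 3_000, "max_dti": 0.35},
    "Standard loan":          {"min_income": 1_500, "max_dti": 0.45},
    "Secured credit builder": {"min_income": 800,   "max_dti": 0.60},
}

def explain_and_reroute(requested: str, income: float, debt: float) -> str:
    """If the requested product is denied, say why and suggest an alternative."""
    dti = debt / income
    rules = PRODUCTS[requested]
    if income >= rules["min_income"] and dti <= rules["max_dti"]:
        return f"Approved for {requested}."
    reasons = []
    if income < rules["min_income"]:
        reasons.append(f"income is below the {rules['min_income']:,} minimum")
    if dti > rules["max_dti"]:
        reasons.append(f"debt-to-income of {dti:.0%} exceeds the {rules['max_dti']:.0%} limit")
    # Recommend the most demanding product the customer does qualify for.
    for name, r in PRODUCTS.items():
        if income >= r["min_income"] and dti <= r["max_dti"]:
            return (f"Denied for {requested}: {'; '.join(reasons)}. "
                    f"Based on the same data, you qualify for: {name}.")
    return f"Denied for {requested}: {'; '.join(reasons)}. No current product fits."

print(explain_and_reroute("Standard loan", income=1_800, debt=850))
```

The failed rule doubles as the “lifestyle change” signal mentioned above: it tells the customer exactly which number to improve, and by how much, in order to qualify next time.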

A vital role to play today

In today’s volatile economic climate, amid the evolving coronavirus situation, businesses may be tempted to de-prioritise investment in new technologies like XAI, believing that their outputs are not mission-critical and cannot aptly support the current needs of the business.

There is a fundamental flaw in this line of thinking.

XAI is not a technology for your business’s future; it is a technology for your business today, and one that can play a vital role in mitigating these turbulent times. XAI not only supports increased efficiency and automation but, by virtue of being entirely transparent, provides a model that businesses can trust to support their operations.

As more employees are stretched to cope with illness and childcare, freeing up their time to focus on the work that cannot be undertaken by XAI will be crucial to business continuity. By adopting XAI technologies today, businesses aren’t only investing in their future; they are investing in their bottom line, their human workforce and their business’s resilience.

We have only really seen the tip of the iceberg in terms of what XAI can do, but as businesses examine the breadth of its capabilities, it will surely become an integral part of product development and everyday business operations.

With regulators in a number of key industries now also joining the discussion around explainable and transparent AI models, 2020 could perhaps be the year that XAI enters the mainstream.

Written by Hani Hagras, chief science officer at Temenos
