Five lessons to fix the failures of AI

Engineers have yet to invent the perfect artificial intelligence system for forecasting public trust. If such a tool existed, though, it would issue some stark warnings about AI itself. The technology stands accused of failing to protect users from harmful content (as in the case of Facebook), and of discriminating against ethnic minority patients in hospitals and against women applying for credit cards. Its critics say it risks hardwiring bias into crucial choices about the services we receive.

In recent months, those concerns have entered the mainstream. White House science advisers have proposed an AI Bill of Rights, spelling out what consumers should be told when automated decision-making touches their lives.

This much is clear: the digital industry cannot ignore these concerns. They will only become more pressing and prominent. But the debate is not ‘AI: right or wrong?’. The technology won’t go away; it’s too useful and too widely used. The challenge for any firm that deploys machine learning is simple: get it right before your customers lose faith.

The good news is that we can fix these problems. Yes, it’s a complicated business, but the essential lessons about what needs to be done can be spelled out simply enough.

These systems rely on two things: a machine learning model that analyses data to teach itself how to make decisions, and the raw data itself. Businesses must get both right to avoid getting important decisions horribly wrong.

Lesson one – Make sure the data you use to train your AI isn’t more likely to give a negative result for one group of people than for another. Imagine you run a loans firm and use this technology to work out who is likely to default. If your historical data happens to show more women defaulting than men, your AI could unfairly discriminate against women from then on.
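
A minimal sketch of that check, assuming Python and pandas; the column names and figures are invented for illustration:

```python
# Hedged sketch: compare outcome rates across groups in the training
# data before any model is trained. The columns ('gender', 'defaulted')
# and the rows are invented.
import pandas as pd

loans = pd.DataFrame({
    "gender":    ["F", "F", "F", "F", "M", "M", "M", "M"],
    "defaulted": [1,    1,   0,   0,   0,   1,   0,   0],
})

# Default rate per group. A large gap is a warning sign that a model
# trained on this data may penalise one group; it is not, on its own,
# proof of bias, but it should trigger investigation before training.
rates = loans.groupby("gender")["defaulted"].mean()
print(rates)
print("gap:", abs(rates["F"] - rates["M"]))
```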

Lesson two – If you have less data for women than men, or one ethnic group than another, make sure that’s reflected in your maths. Otherwise, you’ll reach unfair decisions because parts of society are under-represented.
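
One common way to reflect this in the maths is to reweight the training records so that a smaller group counts as much as a larger one. A sketch, assuming scikit-learn; the ‘balanced’ weighting formula shown is one standard choice among several, and all data is invented:

```python
# Hedged sketch: give each record a weight inversely proportional to
# its group's size, then pass the weights to the model's fit.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[25, 30], [40, 52], [35, 41],       # columns: age, income (£k)
              [52, 61], [29, 33], [47, 58]], dtype=float)
y = np.array([1, 0, 0, 0, 1, 0])                  # 1 = defaulted
group = np.array(["F", "F", "M", "M", "M", "M"])  # women under-represented

# weight = n_samples / (n_groups * group_count): the smaller the group,
# the larger the weight, so each group contributes equally to the fit.
counts = {g: int((group == g).sum()) for g in np.unique(group)}
weights = np.array([len(group) / (len(counts) * counts[g]) for g in group])

model = LogisticRegression().fit(X, y, sample_weight=weights)
```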

Lesson three – Once the system is running, test it. Set performance targets and watch closely to make sure your AI doesn’t begin discriminating against individual groups in society. If you are doing the job right, this should be a never-ending process.
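
In practice that means recomputing a fairness metric on each batch of live decisions and raising an alert when it drifts past an agreed target. A sketch, with simulated decisions standing in for real traffic; the approval-rate gap and the 0.05 target are illustrative choices, not prescriptions:

```python
# Hedged sketch: monitor the gap in approval rates between groups,
# batch by batch, against a target agreed up front.
import numpy as np

TARGET_GAP = 0.05  # maximum tolerated approval-rate gap (example value)

def approval_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in approval rates between two groups."""
    rates = [approved[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

rng = np.random.default_rng(42)
for batch_id in range(3):  # in production this loop never ends
    group = rng.choice(["F", "M"], size=500)
    # Simulated decisions; in practice these come from the live system.
    approved = rng.binomial(1, np.where(group == "M", 0.72, 0.68))
    gap = approval_gap(approved, group)
    flag = "INVESTIGATE" if gap > TARGET_GAP else "ok"
    print(f"batch {batch_id}: gap={gap:.3f} [{flag}]")
```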

Lesson four – AI systems aren’t crystal balls; they don’t know what will happen in the future. They just work out how likely something is to happen. Imagine that AI-powered loans firm again. There will be few more important decisions than working out when to refuse someone credit. Is it when your AI model says there is a 50% chance of an applicant defaulting on a loan, or a 75% chance, or 90%? The experts call this the ‘probability threshold’, and choosing where it falls is vital.
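
A sketch of how that choice can be made explicit: sweep candidate thresholds over held-out predictions and count the two kinds of mistake each one produces. All figures below are invented:

```python
# Hedged sketch: the model outputs a probability of default; the
# business chooses the cut-off. Each threshold trades bad loans
# approved against good customers turned away.
import numpy as np

y_true    = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])  # 1 = defaulted
p_default = np.array([0.10, 0.30, 0.80, 0.20, 0.55,
                      0.78, 0.15, 0.92, 0.35, 0.60])  # model's output

for threshold in (0.50, 0.75, 0.90):
    refused = p_default >= threshold
    missed_defaults = int(np.sum(y_true[~refused] == 1))  # bad loans approved
    good_refused   = int(np.sum(y_true[refused] == 0))    # good customers lost
    print(f"threshold {threshold:.2f}: "
          f"{missed_defaults} defaults approved, "
          f"{good_refused} good applicants refused")
```

Where the threshold falls then becomes a business decision about which of those mistakes is costlier.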

The final lesson is perhaps the hardest: you have to be able to explain the decisions that your AI model makes. Artificial intelligence systems cannot be black boxes whose workings are beyond reason or challenge. Companies should be as accountable for choices made by AI as for any others.
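
There are many approaches to explainability; one simple, model-agnostic starting point is permutation importance, sketched below with scikit-learn on invented data. Richer tools exist, and the feature names here are purely illustrative:

```python
# Hedged sketch: measure how much a model's accuracy drops when each
# input is shuffled. Inputs whose shuffling hurts most are the ones
# driving the model's decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))        # columns: age, income, debt ratio
y = (X[:, 2] > 0.3).astype(int)      # default driven mainly by debt ratio

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["age", "income", "debt_ratio"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```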

Errors are inevitable – no system is perfect – but bias is not. Most of us cannot see inside the algorithms that power this technology, but we can make sure they generate fair outcomes. AI has issues, but collectively we know how to fix them. Get it right and we can ensure that fairness is woven into our decision-making technologies. Get it wrong, and both companies and consumers will pay the price.

Written by Sray Agarwal and Shashin Mishra, data scientists at Publicis Sapient and authors of ‘Responsible AI: Implementing Ethical and Unbiased Algorithms’
