Did you hear the one about my wife — well, she… is a really nice person, actually.
We know that people suffer from bias. Alas, a growing pile of evidence suggests AI can suffer from it too.
Now it seems that Amazon has found this out the hard way — after investing in an AI recruitment tool.
The idea was for the AI engine to scan job applications and give hopeful applicants a score between one and five. Reuters quoted one engineer saying: “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”
Unfortunately, it started weeding out CVs that included a certain five-letter word. The ‘W’ word: women. There, I said it.
This was back in 2015, and let’s face it: as far as AI is concerned, 2015 is ancient history.
It’s not news to learn that AI can be something of a bigot.
In 2016, it emerged that US risk assessment algorithms – used by courtrooms throughout the country to inform decisions about the fates and freedoms of those on trial – are racially biased, frequently treating Caucasian defendants more leniently than African Americans despite no difference in the type of crime committed. How could this happen within a system that’s supposed to be neutral?
AI researcher Professor Joanna Bryson said at the time: “If the underlying data reflects stereotypes, or if you train AI from human culture, you will find bias.”
This brings us to the issue of diversity. Scott E. Page is an expert on diversity and complex systems, perhaps best known for his work on ‘collective wisdom’. He famously argued that “progress depends as much on our collective differences as it does on our individual IQ scores.”
And: “If we can understand how to leverage diversity to achieve better performance and greater robustness, we might anticipate and prevent collapses.”
AI, however, learns from data, and so it can end up reflecting the biases already present in society.
“The fact that Amazon’s system taught itself that male candidates were preferable, penalising resumes that included the word ‘women’s’, is hardly surprising when you consider 89% of the engineering workforce is male,” observed Charlotte Morrison, General Manager of global branding and design agency Landor.
She added: “Brands need to be careful that when creating and using technology it does not backfire by highlighting society’s own imperfections and prejudices. The long-term solution is of course getting more diverse candidates into STEM education and careers – until then, brands need to be alert to the dangers of brand and reputational damage from biased, sexist, and even racist technology.”
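To see how a model can “teach itself” this kind of prejudice, here is a minimal sketch in plain Python. The toy resumes and hiring labels below are entirely invented (they are not Amazon’s data), and the scoring rule is a deliberately crude word-frequency comparison, not a real recruitment model. The point is only that if the historical labels are skewed, a word like ‘women’s’ picks up a negative score with no explicit rule telling the system to discriminate:

```python
from collections import Counter

# Invented toy data: historical hiring decisions from a skewed,
# male-dominated record. Label 1 = hired, 0 = rejected.
resumes = [
    ("captain of chess club", 1),
    ("led robotics team", 1),
    ("women's chess club captain", 0),
    ("women's coding society lead", 0),
    ("built compiler project", 1),
    ("women's hackathon winner", 0),
]

hired, rejected = Counter(), Counter()
n_hired = n_rejected = 0
for text, label in resumes:
    words = set(text.split())
    if label:
        hired.update(words)
        n_hired += 1
    else:
        rejected.update(words)
        n_rejected += 1

def score(word):
    # Difference in how often a word appears in hired vs rejected CVs:
    # positive means the data "prefers" the word, negative penalises it.
    return hired[word] / n_hired - rejected[word] / n_rejected

print(score("women's"))  # → -1.0 (heavily penalised by the biased history)
print(score("chess"))    # → 0.0 (neutral: appears equally on both sides)
```

Nothing in the code mentions gender; the penalty emerges purely from the statistics of the (invented) training labels, which is exactly the failure mode Bryson describes.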