Ethical AI – the answer is clear

Alistair McLean, Head of Data Science and Machine Learning for one-to-one marketing company Clicksco, examines ethical AI and how being transparent is vital to engaging with the public in a responsible manner.

Artificial intelligence (AI) has advanced at a remarkable pace in the last decade, but according to a recent public inquiry into the development and use of AI, the law governing its effects “is not currently clear”.

Utter the words ‘artificial intelligence’ and thoughts quickly turn to lifelike robots and Hollywood blockbusters – quite often with a Black Mirror-esque slant, accompanied by technology inevitably going awry!

In reality, AI is all around us – algorithms determine countless decisions based on the patterns they have learnt from the data fed through them. Something as ordinary as a spam filter does more than screen out messages containing trigger words. To keep ahead of the spammers, it must continuously learn from several different signals, including the words in the message and its metadata.
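As a hedged illustration of that learning process, the sketch below trains a toy classifier on both word counts and a single metadata signal. The corpus, the “known sender” feature and the use of scikit-learn are illustrative assumptions, not how any particular vendor’s filter is built.

```python
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative corpus: message text plus one metadata signal
# (here, whether the sender is already known to the recipient).
messages = [
    "win a free prize now",
    "claim your free reward today",
    "meeting moved to 3pm",
    "notes from today's meeting attached",
]
sender_known = [[0], [0], [1], [1]]  # metadata feature: 1 = known sender
labels = [1, 1, 0, 0]                # 1 = spam, 0 = legitimate

# Word signals: bag-of-words counts from the message body.
vectoriser = CountVectorizer()
word_features = vectoriser.fit_transform(messages)

# Stack the word features and the metadata column into one matrix,
# so the classifier learns weights over both kinds of signal.
features = hstack([word_features, csr_matrix(sender_known)])

# Retraining on a stream of fresh examples is how a real filter
# keeps ahead of spammers; here a single fit stands in for that loop.
model = LogisticRegression().fit(features, labels)
print(model.predict(features))  # -> [1 1 0 0] on the training set
```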


And while AI may be slipping unnoticed into everyday life and technology, recent revelations have shown us that governance and transparency in AI are essential, and will be welcomed with open arms by credible professionals and data scientists across the globe.

In January this year, Wired magazine investigated a reportedly fixed glitch in the image recognition algorithms in Google Photos. Three years earlier, a software engineer had discovered that his black friends were being classified as “gorillas”. While Google apologised for the error and promised to rectify the problem immediately, Wired’s own research found that Google had merely blocked its image recognition algorithms from identifying gorillas altogether: its testing showed that while some primates appeared in search results, gorillas and chimpanzees did not. Further testing around racial categories, specifically “black men” and “black women”, returned pictures of people in black clothes, sorted by gender rather than race.

Examples like this clearly demonstrate that AI is only as good as the data it learns from, and that confidence that the data is representative of different backgrounds, ethnicities, age groups, genders and socio-economic demographics is crucial. But how can we ever be sure that data mined decades ago, or even today, isn’t biased? And what steps should be taken to avoid the potentially harmful pitfalls of AI?
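One concrete, if basic, step is to audit a model’s performance across the groups it affects. The sketch below does this with a hypothetical set of predictions; the data, the “group” attribute and the pandas-based approach are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical evaluation results for a classifier, with a sensitive
# attribute ("group") attached to each example.
results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "label":     [1, 0, 1, 1, 0, 1],
    "predicted": [1, 0, 1, 0, 0, 0],
})

# Accuracy per group: a large gap suggests the training data
# under-represents, or systematically mislabels, one group.
per_group = (
    results.assign(correct=results["label"] == results["predicted"])
           .groupby("group")["correct"]
           .mean()
)
print(per_group)  # A: 1.00, B: 0.33 -> the disparity warrants investigation
```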


The House of Lords Select Committee on Artificial Intelligence suggested that an “AI code” covering five basic principles be created. It would explore and answer questions such as: ‘How does AI affect people in their everyday lives, and how is this likely to change?’; ‘What are the possible risks and implications of artificial intelligence, and how can these be avoided?’; and ‘What are the ethical issues presented by the development and use of artificial intelligence?’.

These initial discussions and the body of research undertaken by the Select Committee are commendable. As a data scientist involved in the creation of an audience management platform that uses AI and machine learning to better understand consumer intent and inform digital marketing, I welcome the introduction of a code that takes an ethical stance. With GDPR on the horizon, the validity of data has never been more important, and this has led my peers and me to focus on the concept of ‘Ethical by Design’ across all our tech.

The underlying issue here, however, is trust, and how anyone today can trust increasingly complex tech platforms. Wired showed how, under a little scrutiny, global giant Google failed to right the wrongs of an embarrassing and offensive mistake made three years earlier. So where does that leave smaller innovators and high-growth tech start-ups, who don’t have the money or legal resources behind them that Google does?


While these companies won’t have such resources to lean on, being transparent about their AI is well within their capabilities. Making the principles at work visible to consumers, as the sketch below illustrates, will help AI be deemed more trustworthy.
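As a rough sketch of what that visibility could look like, the example below pairs a prediction with a plain ranking of the signals that drove it. The intent-scoring model, the feature names and the data are invented for illustration; this is not a description of any real platform.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical intent-scoring model over named browsing signals.
feature_names = ["pages_viewed", "minutes_on_site", "returning_visitor"]
X = np.array([[1, 1, 0], [8, 12, 1], [2, 2, 0], [9, 15, 1]])
y = np.array([0, 1, 0, 1])  # 1 = high purchase intent

model = LogisticRegression().fit(X, y)

def explain(sample):
    """Rank the signals by how strongly each one drove this score."""
    contributions = model.coef_[0] * sample
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], float(round(contributions[i], 2))) for i in order]

# A consumer-facing surface could show this ranking alongside the score,
# making the "principles at work" inspectable rather than hidden.
print(explain(X[1]))
```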

While the work started by the Select Committee is on the right track and well informed, its next steps in creating a code will be vital. Governance needs to include a set of understandable guiding principles and a credible process to ensure big tech adheres to them, all packaged in a way that doesn’t stifle innovation and creativity. Simple…

Sourced by Alistair McLean, MEng, DPhil, Head of Data Science and Machine Learning, Clicksco Group.
