AI ethics and how adversarial algorithms might be the answer 

AI can be racist. The problem lies with the data: data can throw up results that discriminate against certain people. This is creating a need for ethical AI, but it is incredibly difficult to design algorithms whose results are not in some way biased by the data they are trained on. The solution may lie in creating adversarial algorithms.

There is another phrase to describe it: create a devil's advocate.

Dr Marc Warner, founder of Faculty and a member of the UK AI Council, explained to Information Age how it works.

“It turns out,” he said, “that there are multiple definitions of fairness.

“Society may come to a data scientist, for example,” and say that in this particular instance, you are not allowed to take a certain characteristic into account when performing a given function. So, an algorithm for calculating insurance premiums must be unaware of the race or religion of each applicant. Likewise, an algorithm for sorting CVs must not be aware of gender, race, religion… etcetera.

But in practice, this is not so easy. Maybe you delete all references to race in the data you have, or, better still, following the principle of data minimisation, you don’t gather data on race in the first place. But as Warner pointed out, in a city like London, “postcodes are very strongly correlated with different ethnicities — the postcode essentially becomes a proxy for race.”
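To make the proxy problem concrete, here is a minimal Python sketch, with entirely hypothetical column names, of how a data scientist might measure whether a single remaining field such as a postcode can stand in for a deleted protected attribute:

```python
# Minimal proxy-detection sketch. Assumes a pandas DataFrame `df` with a
# 'postcode' column and a 'race' column that would normally be dropped
# before training; both column names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def proxy_strength(df: pd.DataFrame, proxy_col: str, protected_col: str) -> float:
    """How well does the proxy column alone predict the protected attribute?

    Accuracy well above the majority-class baseline suggests the proxy
    leaks the protected attribute, even after that attribute is deleted.
    """
    X = pd.get_dummies(df[[proxy_col]])  # one-hot encode the proxy
    y = df[protected_col]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

# e.g. proxy_strength(df, "postcode", "race") close to 1.0 would mean the
# postcode is effectively a stand-in for race in this dataset.
```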

In fact, there are many studies that back this claim. For example, Vijay Pandurangan, a former Google expert on data, analysed anonymised data on taxi trips and fares in New York, and from this concluded: “This anonymisation is so poor that anyone could, with less than two hours’ work, figure out which driver drove every single trip in this entire data set. It would even be easy to calculate drivers’ gross income or infer where they live.” Another study found that the identity of 95% of people can be ascertained just from information about their location on an hourly basis.

The answer is to build ethics into AI so that bias is combated even when data is anonymised.

The solution, according to Warner, is to create algorithms that have fairness, robustness and explainability built into them. That’s where adversarial algorithms enter the story.

It’s the idea of ethics by design. GDPR requires privacy by design, by which privacy considerations are built into a product at the outset; ethics by design takes the same principle and applies it to ethics. The result should be ethical AI.

Warner argues that the answer is to have two algorithms. “In the case of insurance, you have one algorithm making a decision, and a second, adversarial algorithm, which takes the output and the protected characteristics and tries to see whether, from those things, it can predict a protected characteristic.”

If the algorithm can work out someone’s race, for example, even though race isn’t in the data, you know there’s a problem. 
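A minimal sketch of that check might look like the following. This is illustrative Python, not Faculty's implementation; the adversary here is a simple logistic regression, the protected attribute is assumed to be binary, and all names are invented:

```python
# Warner's two-algorithm idea, sketched: a second, adversarial model tries
# to predict a protected characteristic from the first model's outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def adversarial_fairness_check(decision_scores: np.ndarray,
                               protected: np.ndarray) -> float:
    """Train an adversary to recover a (binary) protected attribute from
    the decision model's scores alone. Returns ROC AUC on held-out data:
    roughly 0.5 means the scores carry no information about the attribute;
    anything much higher flags leakage, and hence a fairness problem.
    """
    X = np.asarray(decision_scores).reshape(-1, 1)
    X_train, X_test, y_train, y_test = train_test_split(
        X, protected, test_size=0.3, random_state=0
    )
    adversary = LogisticRegression().fit(X_train, y_train)
    return roc_auc_score(y_test, adversary.predict_proba(X_test)[:, 1])

# Hypothetical usage: scores from an insurance-premium model, plus race
# labels held back for auditing. An AUC well above 0.5 would show the
# premiums implicitly encode race, even though race was never an input.
```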

He draws an analogy with the “the adversarial justice system, where there’s a set of people who are trying to ‘prove x’ and a set of people trying to ‘prove not x’. And the outcome of those two fighting really, really hard is some relatively neutral version of the truth.”

AI safety

Warner reckons that there is a great deal of confusion over what AI safety means.

To try to reduce this confusion, Faculty has created what it calls a risk landscape.

One side of the landscape is reinforcement learning. “Let’s say you have a household robot, and it is required to get you a coffee, and it just smashes through the door because you didn’t say, ‘open the door, then give me a coffee’. And then you tell it ‘don’t smash the doors,’ and then it treads on the cat. If you think about all of those things, it’s a really hard problem to solve.” To meet this problem, Warner talks about a ‘parenting algorithm’, which mimics the way parents teach children: ‘don’t do that, darling’.
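By way of illustration only, a toy Python sketch of that correction-by-correction shaping of behaviour might look like this. It is not Faculty's parenting algorithm, and every name in it is invented:

```python
# Toy 'don't do that, darling' reward shaping: the agent's reward is the
# base task reward minus a penalty for each human correction it violates.
from typing import Callable, Dict, List

State = Dict[str, bool]  # e.g. {"has_coffee": True, "door_broken": False}

def make_reward(corrections: List[Callable[[State], bool]],
                penalty: float = 10.0) -> Callable[[State], float]:
    """Base reward for fetching the coffee, minus a penalty for every
    correction ('don't smash doors', 'don't tread on the cat') violated."""
    def reward(state: State) -> float:
        r = 1.0 if state.get("has_coffee") else 0.0
        r -= penalty * sum(rule(state) for rule in corrections)
        return r
    return reward

# Each incident teaches a new rule, mimicking parental feedback:
corrections = [
    lambda s: s.get("door_broken", False),  # added after the smashed door
    lambda s: s.get("cat_trodden", False),  # added after the cat
]
reward = make_reward(corrections)
print(reward({"has_coffee": True, "door_broken": True}))  # 1.0 - 10.0 = -9.0
```

The point of the sketch is the open-ended list: each new mishap adds another rule, and, as Warner suggests, there is no obvious end to them.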

Then, in the bottom left of the landscape, you might have things like deep fakes.

Deep fakes pose an interesting challenge. Warner suggests that people have become used to the idea that text documents might be fake. If you get a letter, apparently from the police, saying you owe a million pounds, you will look at it and realise it must be a fraud. But a video of the head of the Metropolitan Police saying the same thing might create doubt in people’s minds.

“It takes time for society to adjust,” and the latest technologies that can create things like deep fakes are developing very fast.

Warner cites Professor Stuart Russell, author of Artificial Intelligence: A Modern Approach, who said that if you build a bridge, you don’t build a ‘wonky’ bridge fast and then worry about making it safer. Safety is enshrined in the process of building a bridge.

“We think of it as building in safety from the ground up.”

So that’s Dr Marc Warner’s take on ethics for AI. You can call it ethics by design, or building in AI safety from the ground up; you can apply an algorithm acting like a lawyer for the prosecution, or indeed the defence, seeking holes in its opponent’s case.

But ethics for AI is set to become ever more important — and it needs to be factored in from the outset. 
