Ethics of AI and augmenting humans
“I used to think my grandfather was half machine,” says Miguel González-Fierro. So who was this man that the young Miguel called grandfather? Was he a cyborg, a character from a Marvel comic in the flesh? No, he had simply suffered a knee injury and, by the miracle of medical science, was provided with an artificial knee. Not so much Iron Man as Knee Man.
Yet oddly, Miguel González-Fierro, senior data scientist at Microsoft, board member of the Dyson School of Design Engineering at Imperial College London, and holder of a PhD in robotics from King's College London, talks about Iron Man.
His job is to help Microsoft’s customers with machine learning. Ethical AI is his passion.
“It was around 2012,” he says, when the combination of advances in computing power and an explosion in the amount of available data created a revolution in deep learning. “People were surprised by how fast we could go.” Deep learning was transformative.
“Five to ten years ago, the advanced research labs in machine learning were at universities, now they are at companies like Google, Facebook, Microsoft and IBM.”
It is a point that technology cynics overlook. Either technology is sufficiently powerful to do a task, or it isn’t. When it isn’t, it seems slow, cumbersome and irrelevant. When it is, the impact upon the world can be rapid indeed. Smartphones are a case in point: not so long ago, they were little more than toys, fun to show people, about as useful as a chocolate teapot. Then in 2007 Apple announced the iPhone, and the rest, as they say, is history.
Returning to González-Fierro, he sees the smartphone as an example of augmentation: a kind of merger between human and machine.
The day when we hit some kind of singularity is quite far off, though, he suggests.
“People think AI is more developed than it actually is. We try to solve small problems one step at a time, say creating a network that can identify dogs and cats, or translate between English and Spanish. General intelligence is quite far off.”
Most researchers are less concerned about a Terminator-type scenario, in which conscious machines seize control; it is more likely that machines will become part of us, augmenting our strength or perhaps our intelligence. Not so much Terminator as Iron Man, suggests González-Fierro.
But this merger is already underway: there was his grandfather, the predecessor to Iron Man, Knee Man; and then there was the smartphone.
The risks are obvious and that takes us to the ethics of AI.
“Some people spread fake news and lies for a political agenda. This is dangerous, as you can change the political landscape of a country. You need a common policy between companies and countries to address this. Now we have the ability to fake videos.”
There is a well-known example: a fake video of what appears to be President Obama speaking. “This technology isn’t new, it happened two or three years ago.” Suppose a fake video had a president apparently saying: “We are going to attack Russia.” It could start a war, and people are not addressing such risks as strongly as they should.
Then there is the issue of bias — biased AI. He cites the example of the Haar Cascade, a machine learning object-detection algorithm. It takes a black-and-white image and detects differences between dark and light regions; eyebrows, mouth and nose, for example, show up as dark. “But if you are tanned or dark skinned,” it was unable to operate effectively, making it seem racist. “But Haar, the creator of the algorithm, was not thinking about this, he just focused on the data he had. The machine learning was trained on data from a local population, who were light skinned.”
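The dark-versus-light comparison González-Fierro describes can be sketched in a few lines. Below is a minimal, self-contained illustration (not OpenCV's actual implementation) of a two-rectangle Haar-like feature: an integral image makes any rectangle sum a four-lookup operation, and the feature score is the light region's sum minus the dark region's. The image, function names and the "eyebrow-like" example pattern are illustrative assumptions.

```python
# Sketch of a Haar-like feature on a tiny grayscale image (0-255),
# illustrating the dark/light-region comparison described above.

def integral_image(img):
    """Cumulative 2-D sums, so any rectangle sum costs four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle feature: light top half minus dark bottom half."""
    half = h // 2
    return rect_sum(ii, x, y, w, half) - rect_sum(ii, x, y + half, w, half)

# An eyebrow-like pattern: a bright band above a dark band.
img = [
    [200, 200, 200, 200],
    [200, 200, 200, 200],
    [20, 20, 20, 20],
    [20, 20, 20, 20],
]
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 4, 4))  # prints 1440: a strong response
```

The bias problem follows directly: a cascade of thresholds on features like this is tuned on training faces, so if those faces are uniformly light-skinned, the learned contrast thresholds fail on darker skin.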
So a solution to that problem would be to train machine learning models on more diverse data sets.
As for spotting fake videos, one solution is to “use AI to identify the ownership of content on the internet, such as videos, and identify if the video is not real. I see in the future more fake news will proliferate,” but AI will help separate truth from fake news.
“Ethics of AI is integral to this.”
Science is neutral, González-Fierro argues, giving the example of nuclear technology: “it can be used to create energy or a bomb. You can’t stop working on science, but you can be mindful.”
Ethical AI, then, can perhaps come to the rescue, like Iron Man. But, “like Iron Man, you could save or kill people.”
On 23 May 2019, Miguel González-Fierro will be delivering a presentation at Information Age’s Data Leadership Summit.