Why AI cannot be blamed for its bias

Artificial Intelligence (AI) can sometimes get lumbered with a bad reputation. Despite its ability to simplify many processes and actions in our day-to-day lives, we often focus on the more negative elements of AI, such as whether it holds too much power, its perceived bias, or comparisons to Terminator’s Skynet.

As AI becomes more present in our everyday lives, we are beginning to see more and more examples of AI gone bad. For example, Lee Luda, a South Korean chatbot, recently came under fire, and was pulled from Facebook, after displaying racial bias and issuing hate speech towards minorities. Bad news carries a higher emotional load, and we pay more attention to it.

However, the technology itself is rarely to blame here. One of the biggest challenges of AI is, in fact, people. AI technology doesn’t have a conscience or intentions; in cases like Lee Luda’s, it was the lack of consideration for ethical factors and consequences when dealing with easily impressionable AI that was to blame.

This is the real problem we are facing. AI has the power to deliver great things and improve our daily lives in ways we are yet to imagine, but we must proceed with caution and care – it’s not the AI that has the bias, but the way we build and use it.

What is AI and how does it learn its bias?

In a nutshell, artificial intelligence is any task performed by a machine that previously required human intelligence. This ranges from taking notes during meetings to flying planes and driving cars. But AI cannot think like a human — at least, not yet! It can only follow the rules and processes it is taught, or those it has learned from data. It is less about perceptual intelligence and more about doing a job by following a computational model.
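
To make the ‘taught or learned’ distinction concrete, here is a minimal sketch in Python of the two ways a machine can acquire a rule. The loan-approval scenario, the figures and the helper names are all hypothetical illustrations, not any real system.

```python
# A minimal sketch contrasting the two ways a machine gets its "rules":
# either a human writes them explicitly, or the machine derives them
# from example data. All names and figures here are hypothetical.

# 1) A rule the machine is taught directly:
def approve_loan_taught(income: float) -> bool:
    return income > 30_000  # an explicit, human-written threshold

# 2) A rule the machine learns from data: the threshold becomes whatever
#    best separates the historical examples, for better or worse.
historical = [(20_000, False), (25_000, False), (40_000, True), (55_000, True)]

def learn_threshold(examples):
    approved = [income for income, ok in examples if ok]
    rejected = [income for income, ok in examples if not ok]
    return (max(rejected) + min(approved)) / 2  # midpoint between the classes

threshold = learn_threshold(historical)
print(approve_loan_taught(35_000))  # True: follows the taught rule
print(35_000 > threshold)          # True: follows the learned rule (32,500)
```

Either way, the machine simply applies the rule it was given or derived; it has no opinion about whether that rule is a fair one.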

Ironically, bias in computational models originates in much the same way as it does in humans. Our bias is shaped by the people we grew up with, the situations we were exposed to, our educators, and everything that defines who we are as people — similarly, an AI’s bias comes directly from the process used to create its computational model. AI is like a high-tech sponge that absorbs everything it can from the data in order to perform its tasks, typically by reproducing the patterns present in that data. If an AI-powered technology acts in a discriminatory way, it is because it is (blindly) following the patterns it found. Due to its malleable nature, it is just as susceptible to unconscious bias as humans are.
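
This ‘sponge’ effect is easy to demonstrate. The sketch below uses an entirely synthetic, hypothetical hiring dataset (and assumes numpy and scikit-learn are available): an ordinary classifier is trained on historical decisions that were skewed against one group, and it dutifully reproduces the skew, with no malice involved.

```python
# A minimal sketch: a model trained on biased historical data
# reproduces that bias. The scenario and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)      # an arbitrary group attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)    # the genuinely relevant feature

# Biased historical decisions: group 1 was held to a higher bar,
# regardless of skill.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# The model sees the group attribute and simply learns the pattern,
# bias included.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hire rate, group {g}: {rate:.2f}")
# The gap between the two rates mirrors the bias in the labels: the
# model has no intent; it found a pattern and is blindly following it.
```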

Mitigating risks and finding solutions

When it comes to combatting bias in an AI-powered technology, the work needs to start well before the product’s conception. One of the biggest issues with AI is that there isn’t enough education on the ethical implications of the technology.

The University of Cambridge announced last year that it would be the first university to offer a postgraduate course in AI ethics. Yet, despite this stride forward, AI ethics education is not yet an essential part of computing courses. This is the first thing that needs to change before we can see any other wide-scale issues being addressed. The fundamentals of ethical AI must be taught in educational environments; otherwise, a lot of what people learn about working with AI will need to be ‘unlearnt’. We must go into the development of AI with an open mind, a lack of bias and a complete understanding of the technology — and its possible implications.

Beyond training in the fundamentals of AI ethics, teams need to work on their own internal diversity and inclusion training, especially those working directly with AI technology. It is essential for these AI-dedicated teams themselves to be diverse, harnessing the power of different backgrounds and experiences to create better-performing, and less discriminatory, products and systems. Much as with humans generally, being surrounded by people we feel comfortable with can make us less open to diversity, and being stuck in our echo chambers makes change even harder. Time must be dedicated to making diversity training a core part of everyday learning, and all team members should undertake such training to avoid potential mishaps. Otherwise, we risk not even realising that we are feeding the AI biased data, or that the application of its computational models generates discrimination. AI will be deployed at scale, and we must ensure it does not repeat the discrimination patterns of the past; one simple pre-deployment check is sketched below.
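
One practical way to catch such patterns before deployment is to measure outcomes across groups. Below is a minimal sketch in Python of a demographic-parity check; the function name, the toy data and the 0.2 threshold are hypothetical choices for illustration, not an established standard.

```python
# A minimal sketch of a pre-deployment fairness check: compare a
# model's positive-prediction rates across groups. All data and the
# threshold below are hypothetical.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Absolute gap between the per-group positive-prediction rates."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Toy example: eight predictions, two groups of four.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
grps  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, grps)
print(f"demographic parity gap: {gap:.2f}")   # 0.50 here
if gap > 0.2:  # the threshold is a policy choice, not a technical constant
    print("flag: model outcomes differ substantially across groups")
```

A check like this does not prove a model is fair, but it makes a skewed pattern visible before the system is put in front of real people.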

Not even the sky is the limit for Artificial Intelligence. Technologically, we are there, ready to take on any potential issues that may arise; in terms of ethics, we still have a way to go. Incorporating ethics training into AI education is a great place to start, and with time, as organisations themselves naturally become more diverse, we will see less bias (unconsciously) injected into these technologies. Ethical AI is an absolute necessity if we wish to see AI reach its full potential; we just need to make sure that we are feeding it the right knowledge, from the right people, to do the right things – technologically and ethically. Nobody sells cars without brakes.

Written by José Alberto Rodriguez Ruiz, chief data protection officer at Cornerstone OnDemand
