Fighting AI bias and where it comes from 

Artificial intelligence increasingly influences many of our most important decisions. If you apply for a loan, auto insurance or a job, you will probably be evaluated at some point by an algorithm trained on historical datasets. Algorithms are now even used in policing and by the criminal justice system to help determine prison sentences. Have we accepted that machines are qualified to make the most profound decisions regarding people’s lives? When human beings wield this kind of authority, we rightly insist that they adhere to the strictest ethical standards, free of biases that can negatively impact individuals based on their race, origin, social status, gender or sexual orientation. But why would we assume that algorithms are flawless and logical, and that bias need not be a concern? Fighting AI bias is essential.

AI is, after all, a technology created and programmed by human beings, using as its guide datasets that are the products of decades of human experience, with historic injustices included. We need to understand how bias enters into algorithms, and draw on examples of how organizations have successfully worked to stop bias from being perpetuated.

Organisations must see AI not as just another technology, but as an enormous change initiative that transforms every aspect of how they do business and interact with customers, often with unintended and unforeseen consequences. Integrating AI into business operations without considering all the ways it will affect customers can leave an organisation unable to see how the technology perpetuates bias.


The ‘bad seed’ of biased datasets

Bias in AI comes from large datasets in which a massive amount of information is synthesised. Datasets can be examples of unconscious and unintentional – but nonetheless real and impactful – bias. An instructive case in point is the way algorithms decide on behalf of banks and other financial institutions whether to extend credit or loans to individuals. To arrive at a decision, an algorithm will examine decades’ worth of data, weighing factors such as the neighborhood in which the applicant lives and how often its residents have been granted credit.

But what happens if you are a member of a minority group, living in a historically disadvantaged neighborhood where credit was once denied to residents for explicitly racist reasons? Conscious bias from 50 years ago could unconsciously influence an algorithm’s decision in the present.

The same scenario could be repeated when algorithms draw on historical data to calculate insurance premiums, screen job applicants or decide on sentencing or parole for convicted criminals. And because AI is designed to recognize and replicate patterns through machine learning, it will reinforce this bias over time.
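To make the mechanism concrete, here is a minimal sketch in Python, using synthetic data and hypothetical feature names, of how a model trained on historically biased lending decisions can learn to penalise applicants from a once-redlined neighbourhood even when their creditworthiness is identical.

```python
# Minimal sketch: synthetic data, hypothetical features. Shows how a model
# trained on historically biased approvals learns to penalise a neighbourhood
# flag even when the applicant's income score is the same.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

income = rng.normal(0, 1, n)       # proxy for creditworthiness
redlined = rng.integers(0, 2, n)   # 1 = historically redlined neighbourhood

# Historical labels: approval depended on income, but applicants from the
# redlined neighbourhood were systematically denied more often.
p_approve = 1 / (1 + np.exp(-(income - 1.5 * redlined)))
approved = rng.random(n) < p_approve

model = LogisticRegression().fit(np.column_stack([income, redlined]), approved)

# Two applicants with identical income scores, different neighbourhoods.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
# The second applicant receives a markedly lower approval probability,
# purely because the historical decisions the model learned from were biased.
```

Nothing in the code itself is malicious; the unfairness comes entirely from the historical labels, which is exactly how past discrimination quietly becomes a feature of present-day decisions.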


Ensuring human insight – and oversight

The solution to fighting AI bias is twofold. First, there must be a greater emphasis on human insight. By relying on insights gleaned from studying those who are or will be interacting with the AI, we ensure that we are not bringing our own biases and assumptions into the solution.

For example, Cognizant leveraged human insight to improve employee engagement and retention efforts at a major global company. The company recognized an opportunity to improve its retention strategies, thereby reducing training costs. At many companies, a leader may say, ‘I know what will solve this, let’s do X.’ Instead, Cognizant recognized the only people who could truly improve retention were the employees themselves. Therefore, we engaged in a rigorous anthropological study of the employees – their behavior, their preferences, how they interacted with HR tools, etc., all with their consent and cooperation. We also asked direct questions about their job satisfaction (or lack thereof).

We found the company did not fully understand what was causing employee dissatisfaction. We gained a number of impactful insights and were able to improve employee retention by embedding these new perspectives in the algorithms the company used to engage with its workers. This test case illustrates how getting more granular and human-centered can yield direct payoffs.

The second part of the solution is to ensure human oversight. As machine learning becomes more sophisticated, algorithms will increasingly learn on their own and teach other algorithms to follow the same patterns. That is why there must always be a human presence in the way these algorithms are developed, monitored and governed.

An interesting example is an AI recruiting engine set up to screen job applicants. Because it had been trained on datasets in which the employees hired were overwhelmingly men, the algorithm developed a clear bias against women, dropping candidates for no other reason than that their applications listed gender-specific job titles. The company realised the bias was occurring thanks to the active bias monitoring it ran as part of its broader AI governance structure, and it shut the recruiting engine down.

For organisations that lack such governance standards, bias can persist systematically and indefinitely. This case illustrates the need for vigilance and oversight of these programs, and an awareness that AI can develop a rather bigoted mind of its own if people are not watching carefully.
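What such monitoring can look like in practice is straightforward. Below is a minimal sketch in Python, using hypothetical data and the widely cited four-fifths rule of thumb as the alert threshold, of a routine check that compares selection rates across groups and flags the model for review when the ratio falls too low.

```python
# Minimal sketch of a periodic bias check: hypothetical decision log,
# four-fifths rule of thumb as the alert threshold.
import pandas as pd

decisions = pd.DataFrame({
    "gender":      ["F", "F", "F", "M", "M", "M", "M", "F", "M", "F"],
    "shortlisted": [ 0,   0,   1,   1,   1,   0,   1,   0,   1,   0 ],
})

# Selection rate per group, and the ratio of the lowest to the highest rate.
rates = decisions.groupby("gender")["shortlisted"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")

if ratio < 0.8:  # four-fifths guideline
    print("Flag for review: selection rates differ sharply across groups.")
```

In a real governance programme a check like this would run continuously against live decisions and across every protected attribute the organisation cares about, with the results feeding a human review process rather than a single print statement.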


Addressing big tech’s diversity problem

The human insight and oversight solutions for biased algorithms involve more human input into their design and governance. But that does not address the very problem that leads to AI bias in the first place: human bias.
AI is informed by biased datasets precisely because humans have historically engaged in discriminatory practices, whether excluding women and people of color from particular jobs, expending fewer public resources on underprivileged communities or imposing tougher sentences on members of minority groups. The people building AI systems today may be less prejudiced than the average person was a few decades ago, but the industry's lack of diversity is still worrying.

Research conducted in 2018 indicates that only 24% of tech employees in Silicon Valley – where so many of the world’s dominant platforms and algorithms originate – are women, and only 5% are African American, Hispanic or Latino. There is a very real concern that tech workers lack the personal experience of bias or exclusion that would make them conscious of the ways in which AI can replicate social injustice. Tech employees may end up contributing to algorithmic bias by feeding AI systems datasets that have never been examined for the biases embedded in them.


Teaching algorithms like we teach our children

In the past half century or so, many societies have increasingly stressed the importance of educating children to reject prejudice based on race, gender or sexual orientation. So too must we teach AI to reject bias, through a rigorous devotion to filtering it out at every level and by watching closely the decisions that algorithms train themselves to make. As with human beings, algorithms can learn from even the unconscious actions of others and imitate the injustices we thought we had put behind us.

Poornima Ramaswamy is Vice President of Cognizant’s AI and Analytics Practice.

