Algorithmic and human bias in hiring: Can either be mitigated?

For decades the recruitment industry has been debating how to eliminate the impact of bias on hiring, yet prejudiced decision-making continues to be a prevalent challenge. Unconscious bias, in particular, has been the subject of research by both social and data scientists, revealing time and again that humans unknowingly allow social stereotypes to influence their perceptions and decisions about people.

Bias can affect the hiring process from the moment a job description is created. When looking to fill a position, employers often write a job description that matches the characteristics of the person who previously held the role. Consciously or unconsciously, those interviewing candidates may then be inclined to rate applicants who most resemble the previous employee more highly, even when the resemblance is irrelevant to job success.

It doesn’t stop there. Human bias takes many forms, particularly around first impressions, whether during in-person interviews or over the phone, where candidates can be judged on their accents. Many studies have borne out the old adage that ‘first impressions count’: one UK study found that candidates considered overweight were less likely to get a job than those who weren’t, and the bias appeared even more pronounced when candidates also fell into another underrepresented category, with overweight women judged less suitable than overweight men.

We need to accept that even well-intentioned recruiters and hiring managers have biases that influence the hiring process. External factors of many kinds have been shown to unconsciously sway a hiring decision and disadvantage an otherwise ideal candidate.

To reduce the possibility of bias, recruiters are increasingly turning to machine learning and algorithms to help create a fairer process. However, it isn’t only humans who can be biased against sections of society: such prejudices can also form within algorithms that are not developed and tested with a rigorous, scientific process of bias mitigation. So should we be worried about using these tools?

Are algorithms inheriting human bias?

In the past year alone, we have seen multiple stories in the media suggesting that algorithms are inheriting and scaling human bias in recruitment. For instance, Amazon recently announced that it had abandoned development of its AI recruiting tool, which vetted applicants by observing patterns in resumes submitted to the company over a 10-year period. The tool ran into trouble when it taught itself that male applicants were preferable to female ones: the majority of resumes over that decade had come from men, and those resumes were the input data used to train the algorithm. It began to penalise applications that included the word “women” and to favour masculine wording when ranking candidates.

It is important to remember that AI does not have the “gut instinct” that humans often use to make unconsciously biased decisions. It cannot learn to avoid particular social groups by itself. Algorithms only start to form these biases if bias is introduced when the tool is first configured, through either biased data points or biased trainers.
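
To make that concrete, here is a toy sketch with entirely synthetic data and invented feature names, showing how a model with no ‘instinct’ of its own still absorbs prejudice when the historical decisions it is trained on were biased:

```python
# Toy illustration (synthetic data): a model trained on skewed
# historical hiring decisions learns the skew from the data alone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

skill = rng.normal(0.0, 1.0, n)            # genuinely job-relevant signal
group = rng.integers(0, 2, n)              # demographic group (0 or 1)
proxy = group + rng.normal(0.0, 0.3, n)    # feature that leaks the group

# Historical decisions were biased: group 1 was hired less often,
# even at the same skill level.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy feature is strongly negative: the
# model has absorbed the historical prejudice purely from the data.
print("coefficients (skill, proxy):", model.coef_[0])
```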

Building the right data sources

So, the million-dollar question is: How can companies effectively integrate AI algorithms into the recruitment process without adding human unconscious bias into the mix?

Successful algorithms are first and foremost built upon the data points they are fed. To build a competent recruitment algorithm, organisations must collect a large number of candidate data points that together provide a comprehensive view of a successful applicant. This can be done through candidate assessments, either traditional question-based ones or video and game-based ones. In the first instance, it’s a good idea to invite existing employees to take the assessment and run their results through the algorithm to test its output, before asking a wider pool of candidates to take part.
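
As a hedged illustration of that sanity check, the sketch below assumes a hypothetical scoring function `score_candidate` standing in for the trained model, along with made-up assessment scores for three current employees:

```python
# Hypothetical sanity check: score existing employees with the model
# and compare its output against their known performance ratings.
import numpy as np

# Made-up question-based assessment results (0-5 scale), one row per
# current employee.
assessments = np.array([
    [4.1, 3.8, 4.5, 3.9],
    [2.0, 2.5, 3.1, 2.2],
    [4.8, 4.6, 4.9, 4.4],
])
performance = np.array([4.0, 2.4, 4.7])   # known performance ratings

def score_candidate(features: np.ndarray) -> float:
    """Placeholder for the trained model's scoring function."""
    return float(features.mean())          # stand-in logic only

predicted = np.array([score_candidate(row) for row in assessments])
print("predicted vs actual correlation:",
      round(float(np.corrcoef(predicted, performance)[0, 1]), 2))
```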

However, to test the algorithm’s ability to identify the strongest candidates, it is critical for developers and their customer companies to establish objective measures of competency and “fitness” for a job role (i.e. performance benchmarks). The machine learning algorithms must then be trained to predict the likelihood of success in the job from the interview data, and the resulting model tested against the assessment answers to confirm that its analyses are valid.
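
In code, that train-then-validate step might look like the following sketch, with synthetic assessment features and a synthetic success benchmark standing in for a company’s real performance data:

```python
# Hypothetical pipeline: train a model to predict the success
# benchmark from assessment features, then validate on held-out data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                  # assessment-derived features
y = (X[:, 0] + X[:, 1] + rng.normal(0.0, 1.0, 500)) > 0  # success benchmark

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
probabilities = model.predict_proba(X_test)[:, 1]   # likelihood of success
print("held-out AUC:", round(roc_auc_score(y_test, probabilities), 2))
```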

Each model should then be tested for bias in its output before implementation begins, to ensure there is no adverse impact against protected groups. If any is found, companies must examine the data points for factors that could be contributing to the bias and remove them from the model before retraining and retesting the algorithm.
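
One widely used test for adverse impact is the ‘four-fifths rule’ from US employment guidelines: if the selection rate for any group falls below 80% of the highest group’s rate, the model is flagged for investigation. A minimal sketch, assuming hypothetical group labels and model decisions:

```python
# Adverse impact check via the four-fifths rule (hypothetical data).
import numpy as np

rng = np.random.default_rng(2)
groups = np.array(["A"] * 120 + ["B"] * 80)   # protected attribute
selected = np.concatenate([                    # model's hire/reject output
    rng.random(120) < 0.40,                    # group A passes ~40% of the time
    rng.random(80) < 0.28,                     # group B passes ~28% of the time
])

rates = {g: selected[groups == g].mean() for g in np.unique(groups)}
ratio = min(rates.values()) / max(rates.values())
print("selection rates:", rates, "| impact ratio:", round(ratio, 2))

if ratio < 0.8:  # below the four-fifths threshold
    print("Adverse impact flagged: inspect data points, retrain, retest.")
```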

Raising your algorithms right

It is important to remember that the machine learning algorithms employed in new hiring practices are created by humans, for humans, to help them make more informed decisions at an early stage in the interview process.

We must also accept that AI alone is not a ‘silver bullet’ and will not solve every diversity issue. You can’t take shortcuts if you are serious about preventing bias from creeping into your algorithmically supported talent identification and hiring process. However, when tested, retested and implemented effectively, algorithms are ideally suited to bypass the limitations of human unconscious bias and open up a company’s talent pool, not only to increase diversity in the workforce but also to attract greater talent. Importantly, these tools can be retrained and improved upon; it is far harder to change a human’s perceptions.
