ICO to scrutinise AI recruitment bias

The Information Commissioner’s Office (ICO) is set to investigate whether AI-powered recruitment software used by employers discriminates against underrepresented groups

Information Commissioner John Edwards has stated that the watchdog will look into the software and evaluation techniques of AI-powered hiring systems, The Times reported.

The investigation — carried out as part of a reported three-year plan — comes amidst concerns over possible discrimination against ethnic minorities, as well as neurodivergent groups, on the basis of the speech or writing patterns these systems evaluate.

Citing the importance of involving a diverse group of developers in building recruitment software in order to minimise possible bias, Edwards has promised to consider “the impact the use of AI in recruitment could be having on neuro-diverse people or ethnic minorities, who weren’t part of the testing for this software”.

Additionally, an ICO spokesperson commented: “We will be investigating concerns over the use of algorithms to sift recruitment applications, which could be negatively impacting employment opportunities of those from diverse backgrounds.

“We will also set out our expectations through refreshed guidance for AI developers on ensuring that algorithms treat people and their information fairly.”

The Commissioner went on to announce plans to further examine predatory marketing calls; algorithms within the Universal Credit system; and support for children’s privacy.

Work to be done

While AI is widely believed to help prevent management biases and discrimination towards job candidates, many cases have emerged over the past five years in which human biases have instead been amplified.

The US Department of Justice and Equal Employment Opportunity Commission, for example, warned earlier this year that algorithmic software such as automatic video interview tools can place candidates with disabilities at an unfair disadvantage.

Meanwhile, Amazon was forced to scrap a flawed recruiting algorithm in 2018 after it was found to be rejecting candidates purely on the basis of their having attended women-only institutions.

To combat recruitment bias, AI vendors have established explainability frameworks that inform businesses about how their algorithms reach decisions, a transparency measure that could go a long way towards reducing bias.
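One simple check that auditors apply to hiring outcomes, alongside such explainability tooling, is the “four-fifths rule” for adverse impact. The sketch below is illustrative only (the group names and figures are hypothetical, not drawn from any case in this article): it flags a screening tool whose selection rate for one group falls below 80% of the rate for the most-selected group.

```python
# Illustrative sketch of a "four-fifths rule" adverse impact check.
# All group names and numbers below are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (hired, applicants)."""
    return {g: hired / applicants for g, (hired, applicants) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.
    A value below 0.8 is a common flag for possible adverse impact."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening results from an automated sifting tool
results = {"group_a": (30, 100), "group_b": (18, 100)}
ratio = adverse_impact_ratio(results)
print(f"{ratio:.2f}", "flagged" if ratio < 0.8 else "ok")
```

Checks of this kind do not prove or disprove discrimination on their own, but they give employers a concrete starting point for interrogating what a sifting algorithm is doing.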

Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.