AI ethics staff layoffs across big tech bring safety concerns

Layoffs affecting responsible AI personnel across an array of big tech corporations have raised concerns around the technology's long-term safety

Companies including Amazon, Google, Meta, Microsoft and Twitter are including employees responsible for overseeing AI ethics in staff redundancies as they look to cut costs further, the Financial Times has reported.

The number of staff affected is said to be in the dozens, a small fraction of the tens of thousands of tech employees laid off by big tech firms in response to economic uncertainty and stakeholder dissatisfaction.

Microsoft disbanded its ethics and society team in January. The company stated that the department consisted of fewer than 10 members of staff and that hundreds were still actively working in responsible AI, with chief responsible AI officer Natasha Crampton citing growth in the company's responsible AI operations.

Meta disbanded its responsible innovation team back in September, affecting engineering and ethicist roles that oversaw civil rights and ethics evaluations on Facebook and Instagram. A Meta spokesperson declined to comment.

Meanwhile, the employees laid off at Twitter, amounting to over half of its workforce, included the ethical AI team responsible for addressing algorithmic bias, including racial bias. The social media platform has yet to comment.

A source said to have inside knowledge of Alphabet's operations said the 12,000 Google redundancies included ethical AI oversight staff. The search engine giant said it was unable to specify the proportion, but maintained that responsible AI remains a “top priority at the company”.

Ethics and safety concerns

With tools such as Microsoft-backed OpenAI's ChatGPT, Google's Bard and Anthropic's Claude bringing a new wave of generative AI innovation, the ethics and safety of the technology have been called into question in the wake of the layoffs.

The technology has been found to be susceptible to risks including misinformation and bias.

“It is shocking how many members of responsible AI are being let go at a time when arguably, you need more of those teams than ever,” Andrew Strait, associate director at the Ada Lovelace Institute and former ethics and policy researcher at Alphabet-owned DeepMind, told the FT.

Josh Simons, a former Facebook AI ethics researcher and author of Algorithms for the People, commented: “Responsible AI teams are among the only internal bastions that big tech have to make sure that people and communities impacted by AI systems are in the minds of the engineers who build them,

“The speed with which they are being abolished leaves algorithms at the mercy of advertising imperatives, undermining the wellbeing of kids, vulnerable people and our democracy.”

Questions have also been raised over whether internal AI ethics teams' interventions into algorithms should transparently involve public and regulatory stakeholders.

Related:

What is generative AI and its use cases?
Generative AI is a technological marvel destined to change the way we work, but what does it do and what are its use cases for CTOs?

ChatGPT vs GDPR – what AI chatbots mean for data privacy
While OpenAI’s ChatGPT is taking the large language model space by storm, there is much to consider when it comes to data privacy.

UK government announces AI white paper to guide innovation
The government’s announcement of a national white paper for AI innovation states an aim to “supercharge growth” across the UK.


Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.