AI ethics: Time to move beyond a list of principles

AI ethics should be a universally accepted practice.

AI is only as good as the data behind it, so that data must be fair and representative of all people and cultures. The technology must also be developed in accordance with international law, and we must tread carefully when integrating AI into weaponry. All of this falls under the idea of AI ethics: is it moral, is it safe, is it right?

Efforts are being made by individual companies such as Digital Catapult (which last year unveiled plans to increase the adoption of ethics in artificial intelligence), as well as by individuals, academic committees and governments. But more could and should be done, and that requires industry- and government-wide collaboration.

AI should be without prejudice — and that’s down to the developers and coders

Indeed, ‘an ethical approach to the development and deployment of algorithms, data and AI (ADA) requires clarity and consensus on ethical concepts and resolution of tensions between values,’ according to a new report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

Organisations and governments need help, and this report provides a broad roadmap for work on the ethical and societal implications of ADA-based technologies.

The roadmap identifies the research questions that need to be prioritised in order to inform and improve the standards, regulations and systems of oversight of ADA-based technologies. Without these, the report’s authors conclude, the recent proliferation of codes and principles for the ethical use of ADA-based technologies will have limited effect.


Dr Stephen Cave, executive director of the Leverhulme Centre for the Future of Intelligence at Cambridge, said: “In recent years, there has been a lot of attention on how to manage these powerful new technologies. Much of it has centred on agreeing ethics ‘principles’ like fairness and transparency.

“Of course, it’s great that corporations, governments and others are talking about this, but principles alone are not enough. Instead of representing the outcome of meaningful ethical debate, to a significant degree they are just postponing it — because they are vague and come into conflict in practice. They also risk distracting from developing measures with real bite, like regulation.

“This report points the way to the hard thinking that we as a society must do in order to really harness these technologies for good and avoid the kind of scandals we saw so much of last year.”

AI ethics: the principles

To address the gaps in AI ethics, the roadmap sets out detailed questions and principles for research based around three main tasks.

1. Uncovering and resolving the ambiguity inherent in commonly used terms, such as privacy, bias, and explainability

This will require identifying how these terms are used in different disciplines, sectors, publics and cultures, and building consensus in ways that are culturally and ethically sensitive. Where consensus cannot be reached, there is a need to develop terminology to prevent different groups from talking past one another.

2. Identifying and resolving tensions between the ways technology may both threaten and support different values

There are four central tensions:

  • Using algorithms to make decisions and predictions more accurate versus ensuring fair and equal treatment (a toy sketch of this trade-off follows the list).
  • Reaping the benefits of increased personalisation in the digital sphere versus enhancing solidarity and citizenship.
  • Using data to improve the quality and efficiency of services versus respecting the privacy and informational autonomy of individuals.
  • Using automation to make people’s lives more convenient versus promoting self-actualisation and dignity.
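To make the first of these tensions concrete, consider a minimal sketch in Python. Everything here is invented for illustration (the toy scores, group labels, outcomes and the selection_rate helper); the report itself prescribes no particular metric or code. The sketch scores applicants against a single decision threshold and compares predictive accuracy with a demographic-parity gap, one common fairness measure, between two groups.

```python
# A minimal, hypothetical sketch of the accuracy-versus-fairness tension.
# All data and the selection_rate helper are invented for illustration;
# the Nuffield/Leverhulme report itself prescribes no metric or code.

def selection_rate(decisions, groups, value):
    """Fraction of people in the given group who receive a positive decision."""
    member_decisions = [d for d, g in zip(decisions, groups) if g == value]
    return sum(member_decisions) / len(member_decisions)

# Toy applicants: model score in [0, 1], group label, and true outcome
# (1 = would have repaid a loan, 0 = would have defaulted).
scores  = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.35, 0.3]
groups  = ["a", "a", "a", "b", "a", "b", "b",  "b"]
outcome = [1,   1,   1,   1,   0,   1,   0,    0]

threshold = 0.55
decisions = [1 if s >= threshold else 0 for s in scores]

# Accuracy: how often the threshold decision matches the true outcome.
accuracy = sum(d == o for d, o in zip(decisions, outcome)) / len(outcome)

# Demographic parity gap: difference in approval rates between the two groups.
parity_gap = abs(selection_rate(decisions, groups, "a")
                 - selection_rate(decisions, groups, "b"))

print(f"accuracy = {accuracy:.2f}, parity gap = {parity_gap:.2f}")
# threshold 0.55 -> accuracy 0.88, parity gap 0.50
```

Lowering the threshold to 0.3 approves every applicant and closes the parity gap to 0.00, but accuracy falls from 0.88 to 0.63: the first tension in the list above, in miniature.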

3. Building a more rigorous evidence base for discussion of ethical and societal issues

This should include research on the impacts of ADA-based technologies on different groups, particularly those that might be disadvantaged or underrepresented. It should also include public engagement, to understand the perspectives of different groups of people.


Tim Gardam, chief executive of the Nuffield Foundation, said: “The report reveals just how far there is to go to address the question of how society should equitably distribute the transformative power and benefits of data and AI while mitigating harm. The questions identified will be valuable in stimulating new ideas for the Nuffield Foundation’s digital society research funding, and for informing the work of the Ada Lovelace Institute, a new independent research and deliberative body that we have established to ensure data and AI work for people and society.”



Nick Ismail

Nick Ismail is a former editor for Information Age (from 2018 to 2022) before moving on to become Global Head of Brand Journalism at HCLTech. He has a particular interest in smart technologies, AI and...