AI ethics: Time to move beyond a list of principles

AI ethics must move beyond lists of ‘principles’, says a new report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence.

AI ethics should be a universally accepted practice.

AI is only as good as the data behind it, so that data must be fair and representative of all people and cultures. The technology must also be developed in accordance with international law, and we must tread carefully when integrating AI into weaponry. All of this falls under the idea of AI ethics: is it moral, is it safe, is it right?

Efforts are being made by individual companies such as Digital Catapult (which last year unveiled plans to increase the adoption of ethics in artificial intelligence), as well as by individuals, academic committees and governments. But more could and should be done, and that requires industry- and government-wide collaboration.

AI should be without prejudice — and that’s down to the developers and coders

Indeed, ‘an ethical approach to the development and deployment of algorithms, data and AI (ADA) requires clarity and consensus on ethical concepts and resolution of tensions between values,’ according to a new report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

Organisations and governments need help, and this report provides a broad roadmap for work on the ethical and societal implications of ADA-based technologies.

The roadmap identifies the research questions that need to be prioritised in order to inform and improve the standards, regulations and systems of oversight of ADA-based technologies. Without these, the report’s authors conclude, the recent proliferation of codes and principles for the ethical use of ADA-based technologies will have limited effect.


Dr Stephen Cave, executive director of the Leverhulme Centre for the Future of Intelligence at Cambridge said: “In recent years, there has been a lot of attention on how to manage these powerful new technologies. Much of it has centred on agreeing ethics ‘principles’ like fairness and transparency.

“Of course, it’s great that corporations, governments and others are talking about this, but principles alone are not enough. Instead of representing the outcome of meaningful ethical debate, to a significant degree they are just postponing it — because they are vague and come into conflict in practice. They also risk distracting from developing measures with real bite, like regulation.

“This report points the way to the hard thinking that we as a society must do in order to really harness these technologies for good and avoid the kind of scandals we saw so much of last year.”

AI ethics: the principles

To address the gaps in AI ethics, the roadmap sets out detailed questions and principles for research based around three main tasks.

 Uncovering and resolving the ambiguity inherent in commonly used terms, such as privacy, bias, and explainability

This will require identifying how these terms are used in different disciplines, sectors, publics and cultures, and building consensus in ways that are culturally and ethically sensitive. Where consensus cannot be reached, there is a need to develop terminology to prevent different groups from talking past one another.

 Identifying and resolving tensions between the ways technology may both threaten and support different values

There are four central tensions:

  • Using algorithms to make decisions and predictions more accurate versus ensuring fair and equal treatment.
  • Reaping the benefits of increased personalisation in the digital sphere versus enhancing solidarity and citizenship.
  • Using data to improve the quality and efficiency of services versus respecting the privacy and informational autonomy of individuals.
  • Using automation to make people’s lives more convenient versus promoting self-actualisation and dignity.

 Building a more rigorous evidence base for discussion of ethical and societal issues

This should include research on the impacts of ADA-based technologies on different groups, particularly those that might be disadvantaged or underrepresented. It should also include public engagement, to understand the perspectives of different groups of people.


Tim Gardam, chief executive of the Nuffield Foundation said: “The report reveals just how far there is to go to address the question of how society should equitably distribute the transformative power and benefits of data and AI while mitigating harm. The questions identified will be valuable in stimulating new ideas for the Nuffield Foundation’s digital society research funding, and for informing the work of the Ada Lovelace Institute, a new independent research and deliberative body that we have established to ensure data and AI work for people and society.”
