Theresa May wants UK to lead the way on AI

Later today, Theresa May is expected to tell world leaders at the World Economic Forum in Davos that the UK wants to lead the world in deciding AI’s future, and in how it can be deployed in a safe and ethical manner.

She will say that a new advisory body, announced in the Autumn Budget, will help co-ordinate ethical efforts with other countries. And she will confirm that the UK will join Davos’ own council on artificial intelligence.

The best country to lead the way?

This week, Google picked France as the location for its new research centre, which will be dedicated to exploring how AI can be applied in the health and environment sectors.

>See also: Growing the artificial intelligence industry in the UK

At the same time, Facebook has announced that it will double the size of its AI lab in Paris, while software firm SAP has committed €2 billion ($2.5 billion; £1.7 billion) of investment to the country, which will include work on machine learning.

Do these recent decisions, by some of the world’s top technology companies, undermine May’s desire for the UK to be the world leader in AI?

The prime minister is expected to base the UK’s claim to leadership on the health of its start-up economy, with a new AI-related company being created in the country every week for the last three years.

On top of this, she will say the UK is first in the world in its preparedness to “bring artificial intelligence into government”.

The questions of ethics surrounding AI will be addressed, May will announce, by the Centre for Data Ethics and Innovation, which aims to position the UK as a world-leading force for the future of AI.

>See also: True AI doesn’t exist yet…it’s augmented intelligence

Jonathan Ebsworth, head of Disruptive Technologies at Infosys Consulting, said that the “idea of an independent committee is to be welcomed. In spite of tremendous recent advances, AI is in its infancy; we are still blind to many of the ethical dilemmas that this technology will impose on our lives. Existing laws and codes of ethics were not designed with an AI-enabled world in mind, and it’s clear that we need guidelines and a code of practice to ensure that humans are not harmed by new technology. This is not a new idea: Isaac Asimov developed his Three Laws of Robotics more than three quarters of a century ago, after all.”

“This new committee must resist intervention in specific projects unless there is a very clear public interest, and to this end we urge industry to be closely involved with the new panel. It’s vital that we get the ethics of AI right, and doing so can help to cement the UK’s role not only as a leader in technology, but also in its ethical application.”

Test of leadership

In her address, May will declare that AI poses one of the “greatest tests of leadership for our time”. It is true that many people believe artificial intelligence will put jobs at risk, but the prime minister will say that “it is a test that I am confident we can meet”.

>See also: Is artificial intelligence the United Kingdom’s productivity solution?

“For right across the long sweep of history from the invention of electricity to the advent of factory production, time and again initially disquieting innovations have delivered previously unthinkable advances and we have found the way to make those changes work for all our people.”

Is the risk real?

Google’s former chief Eric Schmidt told the BBC he did not believe mass job losses would occur.

“There will be some jobs eliminated but the vast majority will be augmented. You’re going to have more doctors not fewer. More lawyers not fewer. More teachers not fewer.”

Facebook’s AI chief shares Schmidt’s view, and believes there is little chance of a Terminator-style future in which robots destroy humanity. Society, he said, will develop the “checks and balances” to prevent such a scenario.

Professor Stephen Hawking, however, has famously warned that AI could “spell the end of the human race”. And Tesla’s Elon Musk has suggested that there is a “good chance” of a universal basic income, as jobs become increasingly automated.

>See also: How can the UK tackle the AI skills gap?

Expert view

“We welcome the scale of ambition in the Prime Minister’s Davos speech,” said Julian David, techUK’s CEO.

“The next generation of digital technologies, including artificial intelligence, present huge opportunities for jobs and investment in the UK and will make a real positive difference to people’s lives. The UK is already a global leader in AI and the Government’s clear backing for the sector through its industrial strategy will help to consolidate that leadership.”

“However, the rapid development of powerful new technologies also raises new ethical questions. We are therefore pleased that the Government is pushing forward on our recommendation to set up a Centre for Data Ethics and Innovation. This will help to develop a new framework to guide ethical innovation and good governance, which are crucial for building public trust. The UK has an opportunity to be a world leader not just on technology development but also on ethics and governance.”

Terror content

In her speech later, the prime minister is also expected to tell investors to put pressure on tech giants in the fight against extremist content on social media.

She will say that social networks need to stop providing a platform for terror, extremism and child abuse.

The content, she will explain, needs to be “removed automatically”.

Earlier this month, a group of shareholders demanded that Facebook and Twitter disclose more information about sexual harassment, fake news, hate speech and other forms of abuse that take place on the companies’ platforms.

>See also: Facebook vows to help fight extremism 

“Investors can make a big difference here by ensuring trust and safety issues are being properly considered. And I urge them to do so.”

“These companies have some of the best brains in the world. They must focus their brightest and best on meeting these fundamental social responsibilities.”

A real concern

“Issues such as extremist content and child online safety are of real concern to the sector and companies are far from complacent about the work that needs to be done,” explained David.

“Tech firms may not always agree with Government on the means but there is no disagreement on the objective to make online platforms hostile environments for illegal and inappropriate content. Much has already been achieved by working in partnership with Government and tech firms are committed to keep working to ensure the safety and security of their users.”


Nick Ismail

Nick Ismail was an editor at Information Age (from 2018 to 2022) before moving on to become Global Head of Brand Journalism at HCLTech. He has a particular interest in smart technologies, AI and...