European politicians vote to regulate rise of robots

Despite the fact that fully functioning, free-thinking robots are a long way off, European politicians have felt it necessary to regulate the rise of robots and artificial intelligence in society and business.

The vote also seeks to establish liability for the actions of robots, including self-driving cars.

This is probably a wise move as the effects of AI and robotics begin to be felt across different sectors of business.

Jobs will change and jobs will be lost. So it is absolutely essential that an ethical framework and regulation are put in place. It would be irresponsible not to.

The vote took place in France on Thursday and was based on a report from the Legal Affairs Committee.

The vote passed 396-to-123, with 85 abstentions.

>See also: How robots in the workplace will change organisational culture

“MEPs (Members of the European Parliament) voted overwhelmingly in favour of the report,” said a spokesperson for the European Parliament.

“The report is not legislative but provides recommendations for the Commission. Now it goes to the Commission to act upon.”

Robot regulation

As part of the new regulation and ethical framework, the report called for a European Agency to monitor robotics and AI, as well as to provide compensation for people involved in accidents with autonomous cars.

However, MEPs rejected the idea of a basic income for workers who lose their jobs and a tax on robots, Politico reports.

Mady Delvaux, the author of the report and a Socialists and Democrats member of the Legal Affairs Committee, commented after the vote: “Although I am pleased that the plenary adopted my report on robotics, I am also disappointed that the right-wing coalition of ALDE, EPP and ECR refused to take account of possible negative consequences on the job market.”

This fear seems justified, with many concerned that robots, AI and automation will replace millions of jobs across the world.

Rejecting a universal basic income seems contradictory.

“The next generation of robots will be more and more capable of learning by themselves,” Delvaux said in an interview published on the European Parliament website.

“The most high-profile ones are self-driving cars, but they also include drones, industrial robots, care robots, entertainment robots, toys, robots in farming,” said Delvaux.

“We have to monitor what is happening and then we have to be prepared for every scenario.”

Other experts, however, such as Jaap Zuiderveld, VP of EMEA at NVIDIA, suggested in an interview with Information Age that this was not the case.

>See also: Everybody is using artificial intelligence ‘without knowing it’

He said that AI was in its “infancy” and that a robot’s ability to copy a human brain was not even close. Sentience is not on the table.

Delvaux remained firm, however, and said: “We always have to remind people that robots are not human and will never be. Although they might appear to show empathy, they cannot feel it. We do not want robots like they have in Japan, which look like people. We proposed a charter setting out that robots should not make people emotionally dependent on them. You can be dependent on them for physical tasks, but you should never think that a robot loves you or feels your sadness.”

Delvaux also believes that a separate legal status should be created for robots.

“When self-learning robots arise, different solutions will become necessary and we are asking the Commission to study options.”

“What we need now is to create a legal framework for the robots that are currently on the market or will become available over the next 10 to 15 years.”

Andrew Joint, commercial partner at Kemp Little LLP, has provided the following legal comment on robots’ place in the legal system.

>See also: Robots could replace 250,000 public sector workers by 2030

“Now that robots might be given the ability to ‘own’ or be responsible for something, our usage of them and ultimately our interaction with them may need to fundamentally change. If you knew that your robot might own the shopping you asked it to buy for you, or that you might have a duty of care as to how you treat your robot, would that force you to think differently about how you use it?”

“This move to consider holding robots liable for their actions, particularly for those who can show true autonomous actions and cognitive behaviour, is a bold and positive step. I believe we will see such advances in technology that revisiting how the law works will be essential – and the EU’s drive to look to agree some fundamentals of the legislative approach now is the right thing to be doing.”

“It is an exciting time as the areas of law which will need re-visiting are some of the most critical and fundamental aspects that govern our society and any sort of material change will likely be more revolutionary than evolutionary.”


Nick Ismail
