“Meaningful” AI regulation needed from UK government, says report

A University of Cambridge report states that “solid legal and ethical AI regulation” will be key to winning trust from UK industry and the public.

With the UK set to host the first global AI Safety Summit at Bletchley Park next month, Cambridge researchers have said that long-term economic benefits won’t be gained without proper regulation, trust or incentives for start-ups, The Times reported.

According to the study, a focus on real-world use cases is needed from UK companies dealing in AI, with examples including diagnostic developments and mitigating a shortage of software engineers.

As well as regulation reform to cover AI innovations nationally, the Cambridge report also recommends the roll-out of tax breaks and credits for artificial intelligence-facing firms, and an improved seed enterprise investment scheme to bolster capital supply.



“A new bill that fosters confidence in AI by legislating for data protection, intellectual property and product safety is vital groundwork for using this technology to increase UK productivity,” said Professor Gina Neff, executive director of the Minderoo Centre for Technology and Democracy, which led the research.

Report co-author Sam Gilbert added: “Generative AI has been shown to speed up coding by some 55 per cent, which could help with the UK’s chronic developer shortage.

“In fact, this type of AI can even help non-programmers to build sophisticated software.”

Since the public launch of OpenAI’s ChatGPT last year, generative AI has pushed regulation of the technology up the agenda globally, with the UK aiming to take a lead in the worldwide innovation race.

Prime Minister Rishi Sunak had previously lobbied for a global regulatory and research body to be housed in Britain.


Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.