AI Safety Summit: what to expect

Government officials and tech companies are set to come together for the world's first AI Safety Summit on the 1st and 2nd of November

UPDATE: Prime Minister Rishi Sunak is set to lead the publication of safety reports on the development of AI, to inform discussions at the upcoming AI Safety Summit at Bletchley Park, the FT reported.

The papers will explore the capabilities and risks of AI technologies, including the frontier models underpinning platforms such as ChatGPT, Bard and Claude.

With the reports anticipating “models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models”, one scenario being examined is the wider automation of whole industries, which government sources say will usher in much-needed debate around “the future of education and work”.

While such models may prove technically safe, the reports state that “they are nevertheless causing adverse impacts like increased unemployment and poverty”.

Summit discussions

Spread over two days, the AI Safety Summit is set to explore how artificial intelligence can be kept secure and safe for users through regulation as the technology evolves, as well as how businesses in the space can stay compliant.

The first day, which most tech firms and government officials are set to attend, will be hosted by UK technology secretary Michelle Donelan, while the second day, led by Prime Minister Rishi Sunak, will focus on the political implications of AI, The Times reported.

The event guest list includes leaders from the most prominent AI start-ups — including ChatGPT developer OpenAI, Google’s DeepMind and Claude creator Anthropic — as well as representatives from key AI investors like Amazon, Meta and Microsoft.

Additionally, government officials set to attend include Ursula von der Leyen, head of the European Commission, as well as US Vice-President Kamala Harris and French President Emmanuel Macron.


Related:

UK government calls for AI infrastructure access ahead of global summit
As big tech AI innovation continues, government officials push for under-the-hood access to key start-ups’ technology ahead of the world’s first AI safety summit.


How it came about

Regulation of AI innovation, already the subject of substantial discussion across governments, has surged up the legislative agenda alongside the increasing use of generative AI tools such as ChatGPT, which was publicly released in November last year.

This has led to the establishment of a global summit dedicated to ensuring the long-term safety of the technology, with risks including misinformation and bias set to be addressed.

A statement by the Department for Science, Innovation and Technology said the conference at Bletchley Park “builds on a wide range of engagements leading up to the summit to ensure a diverse range of opinions and insights can directly feed into the discussions”.

Desired outcomes for tech firms

According to Nicklas Lundblad, director of public policy at DeepMind, two key outcomes should be sought: “an international understanding of the opportunity and risk; and mechanisms to co-ordinate.”

Lundblad added: “It’s hard for issues such as climate change and poverty — but if we can at least get to a first understanding between the participating countries that these are the mechanisms, these are the principles, that would be a huge win.”

Natalie Cramp, CEO of Profusion, commented: “The AI Safety Summit is a very welcome initiative and it has the potential to be a very productive event, however, it really should just be the start of ongoing serious debate in the UK about how we want AI to develop.

“It’s critical that we move forward with putting adequate rules in place now to reduce the risk of AI getting out of control. We saw the damage that has been done through lax regulation of social media – it’s very hard to put the genie back in the bottle.

“If the UK Government is serious about using AI to drive forward an economic revolution, businesses, innovators and investors need certainty about what the rules of the game will be. Otherwise, the most exciting AI tech start-ups will simply go to the EU or US where there is likely to be much more legal clarity.”


Related:

The future of private AI: open source vs closed source
As regulation of artificial intelligence evolves, the future of AI could be private in nature – here’s how adoption of open and closed source capabilities would compare.


Key AI academics call for industry responsibility

Ahead of next week’s AI Safety Summit, 23 experts in artificial intelligence have co-signed policy proposals calling for AI vendors to be held liable for harms caused by their systems, The Guardian reported.

Academics involved include two of the three 2018 Turing Award winners and “godfathers of AI”, Geoffrey Hinton and Yoshua Bengio.

Hinton resigned from his position at Google Brain earlier this year in order to discuss the possible risks of AI more freely, while Bengio previously stated, in an interview with Information Age, that AI development needs state control.

Policies recommended in the open document, addressed to governments globally, include:

  • Making tech companies liable for avoidable harms.
  • Compulsory safety measures to mitigate any dangerous capabilities found by AI firms.
  • Allocation of one-third of government R&D funding, and the same proportion of company resources, towards system safety and ethics measures.
  • A licensing system for cutting-edge AI models.
  • Independent audits of AI labs.

Stuart Russell, professor of computer science at the University of California, Berkeley, said: “It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.”

Related:

Protecting against cyber attacks backed by generative AI
With threat actors turning to generative AI capabilities to evolve social engineering and other cyber attacks, here’s how businesses can stay protected.

Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.