UK government calls for AI infrastructure access ahead of global summit

As big tech AI innovation continues, government officials are pushing for under-the-hood access to key AI start-ups' technology ahead of the world's first AI safety summit

Aiming to better understand the cutting-edge capabilities of artificial intelligence in preparation for November's Global AI Safety Summit, government officials are in negotiations with key vendors over possible examinations of their large language models, the FT reported.

If successful, it would mark the first time that AI vendors have granted a government access to the inner workings of their technology.

While DeepMind, OpenAI and Anthropic agreed to let government officials under their hoods for research and safety purposes back in June, the extent and technical details of that access were never finalised.

Vendors are reportedly hesitant to allow governmental analysis of their systems due to the possibility of leaking proprietary information, which could compromise continued innovation or leave them vulnerable to cyber threats.

This comes despite calls for stronger regulation of artificial intelligence technologies, including generative AI.


The sharing of model weights — the parameters within the network that transform input data — was discussed, but an Anthropic spokesperson said this would have “significant security implications”, with the company choosing instead to look at “delivering the model via API and seeing if that can work for both sides”.
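For readers unfamiliar with the term, a minimal sketch of what "model weights" are may help. The example below is purely illustrative (a single hypothetical linear layer, not any vendor's actual model): the weights are the learned numbers that transform input data into outputs, which is why sharing them directly is far more revealing than exposing the model behind an API.

```python
import numpy as np

# A minimal illustration of "model weights": the learned parameters
# that transform input data inside one layer of a neural network.
rng = np.random.default_rng(0)

# Hypothetical weights for a single linear layer (3 inputs -> 2 outputs).
weights = rng.standard_normal((3, 2))
bias = np.zeros(2)

def forward(x):
    """Transform an input vector using the layer's weights."""
    return x @ weights + bias

x = np.array([1.0, 0.5, -0.2])
y = forward(x)
print(y.shape)  # a 2-element output vector
```

Handing over `weights` exposes everything the model has learned, whereas API access returns only outputs like `y` — which is the distinction at the heart of the negotiations described above.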

A source close to governmental discussions told the FT: “These companies are not being obstructive, but it is generally a tricky issue and they have reasonable concerns. There is no button these companies can just press to make it happen. These are open and unresolved research questions.”

Since LLMs are generally powered by a mix of proprietary and open source data, there are security risks to consider, which can lead to disinformation and other harms to users.

The Global AI Safety Summit is set to take place from 1st-3rd November at Bletchley Park.

Meta to launch celebrity avatar chatbots

In another demonstration of intent to make AI capabilities more immersive and personable for consumers, Meta chief Mark Zuckerberg announced plans to let users speak to AI chatbots over Facebook, Instagram and WhatsApp, The Times reported.

At Meta’s annual developer conference, Zuckerberg presented avatars of celebrities including Snoop Dogg, Paris Hilton and Tom Brady, with the chatbots set to have their own user profiles.

“This isn’t just going to be about answering queries,” he said. “This is about entertainment and about helping you do things to connect with the people around you – we thought this should feel fun, and feel familiar.”


Initially, a gradual roll-out is planned across Meta’s social media ecosystem, with tools allowing developers to create their own chatbots cited as future offerings.

While chatbots powered by LLMs can help deliver insights, such as recommendations on how to increase post engagement and other performance metrics, there is a danger of harmful messages and misinformation spreading if they are not properly monitored.




Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.