Key AI stakeholders establish regulation forum

ChatGPT developer OpenAI has ditched its AI text detection program and formed a regulation body alongside Google, Microsoft and Anthropic

The new Frontier Model Forum reportedly aims to ensure the “safe and responsible” development of AI models more complex than those currently on the market, as the regulation of artificial intelligence rises up regulatory and business agendas.

The main objectives, according to executives involved in the project who spoke to The Guardian, include promoting AI safety research and addressing global societal issues such as climate crisis mitigation and cancer treatment.

In addition, the group is looking to discuss prospective AI safety protocols with government officials and academics.

Forum members state that membership is open to all developers of “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks”.


Anna Makanju, vice-president of global affairs at OpenAI, said: “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance.

“It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible.

“This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”

Brad Smith, vice-chair & president of Microsoft, added: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

AI text detection tool dropped by OpenAI

Meanwhile, amid these regulation efforts, OpenAI has ceased development of its AI-generated text classifier due to a “low rate of accuracy”, The Times reported.

Launched in January to mitigate “automated misinformation campaigns, academic dishonesty and positioning an AI chatbot as a human”, the classifier was found to correctly identify just 26 per cent of AI-written text, according to an OpenAI spokesperson.

The tool was aimed at education and wider online spaces, and its failure to accurately detect AI-generated text means that the challenge of combatting AI model hallucinations and biased information, an issue yet to be truly acknowledged in the Frontier Model Forum’s initial statement, remains a mountain to climb.

Related:

What top tech leaders are saying about artificial intelligence
With artificial intelligence (AI) rising up the corporate, regulatory and societal agendas, we gauge the views of the key tech leaders in the space.

Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.

Related Topics

OpenAI
Regulation