Why big tech shouldn’t dictate AI regulation

With big tech having their say on how artificial intelligence should be monitored, Jaeger Glucina discusses why we need to widen the AI regulation discussion

The list of open questions about artificial intelligence remains significantly longer than the list of those which have been decisively answered. For instance, to what extent can we explain AI’s decision-making? How will AI impact overall employment levels and national financial health? And how many of AI’s future capabilities will come about through sheer computational muscle?

These are all vital questions with wide-ranging consequences for businesses, governments, and society alike. Some may not be conclusively answered within our lifetime, while responses to others may soon emerge organically as a consequence of widespread AI usage. Either way, they will have a formative effect on what everyday life looks like in the decades to come.


And yet, there is a very real sense that these questions do not matter – or, at least, only matter on a theoretical level – to the most prominent players in the AI boom. Big tech companies have been quick to lend their voices to the AI regulation debate (and even quicker to sound the alarm as to the technology’s potential dangers), but whilst they line up eagerly to advise Congress on its legislative approach, they simultaneously lobby for regulatory leniency in the EU. It seems that AI regulation is welcome, but on the terms of big tech alone.

Big tech intervention

The recent announcement of a Frontier Model Forum should be both welcomed and carefully scrutinised. Initially comprising Anthropic, Google, Microsoft, and OpenAI, the Forum is presented as an industry body which will ensure the ‘safe and responsible development of frontier AI models’. Although the Forum’s initial press release does not define the term, ‘frontier AI models’ can be understood as general-purpose AI models which, in the words of the Ada Lovelace Institute, ‘have newer or better capabilities’ than other models.

The Forum’s objectives include undertaking AI safety research; disseminating best practices to developers; and collaborating with parties such as academics, policymakers, and civil society bodies to influence the design and implementation of AI ‘guardrails’. Membership, meanwhile, will be restricted to organisations which, in the Forum’s eyes, both develop frontier models and are committed to improving their safety.

Admittedly, answers to questions around the safe and effective development of AI will not arrive without investment, so it is encouraging to see prominent AI vendors commit to this kind of collaborative approach. Likewise, effective AI regulation will rely on input from those with real domain expertise: the industry’s doors must remain open to governments and policymakers.

Widening scope and discussion

At the same time, the Frontier Model Forum carries a risk of drowning out the voices of the broader AI community when it comes to shaping the future relationship between AI and society. We cannot allow the AI conversation to be monopolised when an approach like the Forum is just one route among many towards making AI more beneficial and productive. And other important innovations will surely emerge from businesses which don’t exist yet.

We should also be wary of focusing resources, as the Forum appears to advocate, on general-purpose AI models alone. Specialised, discrete models designed to solve more specific challenges can be applied to high-value, highly sensitive workloads sooner and more safely. In a knowledge-intensive and specialised sector such as law, the need for AI with close domain knowledge is obvious. We’ve already seen the disastrous potential of generalist AI in the legal sector after a New York lawyer submitted case law invented by ChatGPT to court. This infamous example of AI misuse also demonstrates why an industry-by-industry approach to regulation is so important.

Further, a one-size-fits-all approach to regulation that only follows the lead of big tech will invariably have an anti-competitive impact, and we should not be shy in suggesting that this is the aim of initiatives like the Frontier Model Forum. Regulation shaped in this way will be tailored to the needs of generalist models that rely on massive capital investment, limiting the diversity and impact of research and investment and failing to leave proper space for emerging alternatives.


Staying compliant

Unfortunately, this imbalance is not merely theoretical or speculative. Smaller and mid-sized AI companies must bear the burden of attempting to comply in good faith with laws, including the GDPR and the newly passed EU AI Act. The industry’s giants can, for example, afford to build and release products using whatever data they please, and simply pay the fines if they later turn out to be on the wrong side of intellectual property law. For them, it is a sound business decision to rapidly pursue market share today and assess the broader economic impact tomorrow.

If big tech players ride roughshod over AI regulation and adjust their practices retrospectively, this only adds friction for smaller businesses: precisely the opposite of a framework which encourages entrepreneurial innovation.

There are many open questions in the world of AI, and that means there is no easy sourcebook for policymakers to turn to for insight and explanation. As they increasingly seek to regulate this transformative technology, they will need ongoing, considered input from industry experts. But more importantly, they will need a diversity of insight to inform rules which cater to a diversity of solutions. And they will need to collaborate with all those for whom the big questions in AI are pressing matters. An innovative, productive future for the technology relies upon it.

Jaeger Glucina is managing director at AI legal processing company Luminance.
