Yoshua Bengio – ‘Powerful tech will yield concentration of wealth’

Professor Yoshua Bengio, one of the godfathers of AI, on which sectors will be revolutionized by AI, the need for tighter regulation, and whether AI poses an existential threat

Computer scientist Yoshua Bengio began working on artificial intelligence in the Eighties and Nineties, and he’s been called one of the three godfathers of AI, alongside fellow scientists Geoffrey Hinton and Yann LeCun.

Yoshua Bengio focused specifically on neural networks – software loosely modelled on the way the brain functions and learns. Fellow researchers were sceptical. But Professor Bengio and his colleagues persevered, and their research led to advances in voice recognition and robotics. ChatGPT also rests on the foundations Bengio helped to lay. That perseverance paid off in 2018, when he shared the Turing Award, often called the ‘Nobel Prize of Computing’, with Geoffrey Hinton and Yann LeCun.

‘Some people believe that it’s too late, that the economic pressure has gotten us on a slippery slope from which we cannot back out’

However, Yoshua Bengio has become increasingly disquieted by his career, telling the BBC in May that he feels “lost” over his life’s work. “It is challenging, emotionally speaking,” he said, as researchers grow increasingly unsure how to “stop these systems or to prevent damage”.

In March, Bengio, a professor at the University of Montreal, joined more than a thousand technology experts in calling for a six-month pause in the training of AI systems more powerful than GPT-4 — the large language model behind San Francisco-based OpenAI’s ChatGPT. Co-signatories included engineers from Amazon, Google, Meta and Microsoft as well as Apple co-founder Steve Wozniak.

Information Age spoke to Yoshua Bengio about how IT leaders can safely develop artificial intelligence in-house, whether AI regulation will ever work, and whether artificial intelligence will be the saviour or destroyer of mankind.

On the one hand, we have AI evangelists telling us that artificial intelligence will free up our lives, help create revolutionary drugs and help lead the battle against climate change; on the other, Cassandras who say we have invented the means of our own destruction, one that will cause mass unemployment and possibly the extinction of humanity. Where do you sit on this sliding scale?

I sit on a different axis, which is that of healthy Bayesian agnosticism and the precautionary principle, i.e., we do not know, there is a lot of uncertainty, but there are indeed multiple scenarios that have been brought forward that are hard to reject off-hand while staying rational.

As a consequence, and given the gravity of the potential catastrophes, I believe that we need to invest at least as much in understanding and mitigating all the risks as we are currently investing in making AI systems more competent (driven by the appeal of profit, for the most part).

The panic over Generative AI – apart from the legitimate fears over mass unemployment as AI replaces white-collar jobs – seems to tap into an almost gleeful, apocalyptic wish fulfilment that somehow we have created our evolutionary successor. Your colleague Geoffrey Hinton has mused on whether ‘humanity is just a passing phase’ in the evolution of intelligence. Do you see artificial intelligence as an enabler or as the ultimate successor to humanity?

As far as I can see, and I do not have a crystal ball, both scenarios can be argued reasonably. My guess is that we need to figure out a path that gives us the benefits of AI while minimizing the risks.

In every other industrial sector, we take a lot more precautions than in AI, so I believe it is completely unreasonable to continue with the current free-for-all. If that means slowing down some of the technological progress in exchange for greater safety, I’ll take that path.

There are people who believe that it’s too late, that the economic pressure has gotten us on a slippery slope from which we cannot back out and that AIs will inexorably become smarter and smarter (because this makes them more useful and profitable to us), eventually surpassing humans. They believe that, from then on, it will be difficult to prevent the arrival of a kind of new species more powerful than ours, over which we may have lost control, leaving great uncertainty about our future.

I believe that while there is life, there is hope, that we still have individual and collective agency in this process, and that the morally right thing to do is to continue working towards reducing catastrophic risks (starting by better understanding them), just as climate activists continue fighting to steer us towards better outcomes even though they could easily feel discouraged or that it is ‘too late’.

China has decided to license the development of artificial intelligence, the EU is working towards tight oversight, while the US seems to have a more laissez-faire attitude. Is any attempt to regulate the deployment of AI doomed to fail, as nefarious users are going to do what they want anyway? Or should we go further and totally ban AI research except in tightly controlled circumstances, just as we would for biological weapons?

We can certainly improve on the regulation front. We have been able to heavily regulate planes, pharmaceuticals, food, chemicals and so on to better protect the public. I don’t see why we could not regulate AI as well, given that it is becoming a very powerful tool, and thus a potentially very dangerous one, because tools are dual-use by nature, and with greater power comes greater responsibility. We clearly need to invest more in designing appropriate governance, and those studies may well conclude that some forms of unsafe AI systems should be banned or permitted only under heavily controlled conditions, just as we do for very dangerous weapons like nuclear bombs or bioweapons.

On a practical level, which sectors do you think AI is going to have the most positive impact on? Drug discovery? Climate change? Software development?

It is hard to say, but all three you named are areas where the potential is huge. Beyond drug discovery, the understanding of disease, and thus the development of better medical therapies, is something I see prominently on my AI radar. Even single cells are extremely complex machines, and we are now acquiring the kind of large-scale, high-throughput data that only AI will be able to piece together systematically in order to clarify the causal mechanisms of cells, and later of whole bodies.

One thing there does appear to be consensus on is that, despite or because of lots of people being made unemployed, there will be gains in business productivity – something we’re woeful at here in Britain. So how should any government pay to support the long-term unemployed? By taxing the AI productivity gains of large corporations?

I don’t think it makes economic sense to tax only the AI productivity gains. However, in our economic system, it is plausible that powerful new technologies will yield concentration of wealth, and this is not good from the viewpoint of either market efficiency or democracy (which by definition means the opposite of power concentration, i.e., power to the people). So yes, we need to strengthen antitrust laws (and actually apply them) and the fiscal power to redistribute wealth, to steer society back towards a healthier democracy and greater well-being for all. So, yes to additional taxation, but in ways that do not explicitly target the AI industry and that maintain our ability to innovate efficiently.

What’s the one thing you would say to a corporate IT leader given the task of training and developing an AI model in-house? What should be their north star when it comes to deciding on a safe AI policy?

Actually, training an LLM from scratch remains very expensive, so it makes more economic sense, especially for smaller companies, to work with companies specializing in pre-training them and sharing the resulting system (in some form that is safe by design) with multiple companies.

However, fine-tuning an existing pre-trained model is cheap: it can be done by a small team with affordable computing resources, without necessarily needing gigantic datasets, typically for a specialized task and domain.
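By way of illustration, a minimal fine-tuning run on an off-the-shelf open-source model might look something like the sketch below, using the Hugging Face transformers and datasets libraries; the base model (distilgpt2), the domain_corpus.txt training file and the hyperparameters are illustrative assumptions, not specific recommendations from Bengio.

```python
# Minimal sketch: fine-tune a small pre-trained language model on an
# in-domain text corpus. Model, data file and hyperparameters are
# illustrative assumptions only.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "distilgpt2"                      # small base model (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 models have no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Any plain-text file of in-domain documents, one example per line (assumption).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # Causal language modelling (mlm=False), i.e. next-token prediction
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")
```

A run like this fits on a single modern GPU, which is the kind of affordable, small-team setup Bengio describes, in contrast to the cost of pre-training from scratch.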

Those companies (like OpenAI) have a crucial responsibility in terms of safety, but the companies deploying AI systems also have one. By default, and I think it should stay like this, they will also be liable for the decisions they make if those decisions cause harm. This means either hiring people with the right socio-technical (not just engineering) expertise or, again, working with companies (a few are emerging) that provide such safety services.

Corporations should also welcome the arrival of regulation (rather than systematically oppose it, as usual) because it will make the rules and responsibilities clearer and level the playing field, removing the advantage which careless companies currently have.

Yoshua Bengio is full professor at Université de Montréal, founder and scientific director of Mila – Quebec AI Institute, Canada CIFAR AI Chair and 2018 A.M. Turing Award co-winner

More Tech Leader Q&As

Elizabeth Renieris – ‘Our robot overlords aren’t quite here just yet’ – Elizabeth Renieris is a renowned artificial intelligence ethics expert, who believes that Big Tech is being disingenuous when it calls for a global AI super-regulator. Existing laws cover AI, she says, we just need to leverage them

Ashish Gupta – ‘You can’t be an averagely talented programmer’ – Ashish Kumar Gupta, head of EMEA for global IT services company HCLTech, believes AI will make truly skilled programmers even more valuable. The key to surviving the AI jobs purge will be to combine your tech skills with another business vertical, he says

Tim Adler

Tim Adler is group editor of Small Business, Growth Business and Information Age. He is a former commissioning editor at the Daily Telegraph, who has written for the Financial Times, The Times and the...