Late adopters of AI could be left behind

Opportunity and crisis: A report from McKinsey has warned that late adopters of AI might be left playing catch-up indefinitely. Meanwhile, a professor shortly set to take over as president of the British Science Association has said that AI is the biggest issue of our time, and that he is concerned we are not discussing it enough.

According to the McKinsey report, by 2030 around 70% of companies will have adopted at least one category of AI, but fewer than half will have adopted all five of the categories it describes by that date. The report also predicted that AI could be worth $13 trillion to the global economy between now and 2030.

Breaking AI down into five categories, namely computer vision, natural language, virtual assistants, robotic process automation and advanced machine learning, the report said: “AI has the potential to deliver additional global economic activity of around $13 trillion by 2030, or about 16% higher cumulative GDP compared with today.”

It also issued a warning to late adopters, reminiscent of the lesson of the innovator’s dilemma, the theory advanced by Harvard professor Clayton Christensen to explain how companies can lose their market dominance by failing to adopt certain new technologies until it is too late. McKinsey said: “Late adopters might find it difficult to generate impact from AI, because front-runners have already captured AI opportunities and late adopters lag in developing capabilities and attracting talent.”

It continued: “It is possible that AI technologies could lead to a performance gap between front-runners (companies that fully absorb AI tools across their enterprises over the next five to seven years) and non-adopters (companies that do not adopt AI technologies at all or have not fully absorbed them in their enterprises by 2030).”

Meanwhile, Jim Al-Khalili, professor of physics and public engagement at the University of Surrey and incoming president of the British Science Association, has said that “our government has a responsibility to protect society from the potential threats and risks” of AI.

Then again, contrary to certain media reports that took his words out of context, he doesn’t think AI poses a bigger risk than climate change or terrorism.

Professor Al-Khalili was quoted in the Telegraph as saying: “Until maybe a couple of years ago, had I been asked what is the most pressing and important conversation we should be having about our future, I might have said climate change or one of the other big challenges facing humanity, such as terrorism, antimicrobial resistance, the threat of pandemics or world poverty.

“But today I am certain the most important conversation we should be having is about the future of AI. It will dominate what happens with all of these other issues for better or for worse.”

From these words, the Telegraph ran the headline: “Artificial Intelligence is greater concern than climate change or terrorism.”

That’s not what he meant, said the professor: the Telegraph “twisted the word concern to mean more concerned about AI than climate change or terrorism”, he tweeted.

All the same, he did confirm in a tweet that he thinks AI is the “most important issue of our time” and that he is concerned we are not debating it enough.

Michael Baxter

Michael Baxter is a tech, economic and investment journalist. He has written four books, including iDisrupted and Living in the age of the jerk. He is the editor of Techopian.com and the host of the ESG...