The artificial intelligence (AI) space has been growing and evolving across business sectors, influencing organisational and societal processes at scale. As employee and customer needs continue to evolve, the race is on to commercialise deployments sustainably over the long term. Generative AI in particular has seen a surge in investment across big tech since OpenAI publicly released its ChatGPT chatbot in November 2022, leading corporations like Microsoft to increase their investment in the start-up, while the likes of Google look to compete in the market.
With AI innovation showing no signs of slowing down, we spoke with the founders of prominent AI start-ups to gauge their insights on the biggest artificial intelligence trends emerging across business verticals this year.
The commercialisation shift
“We’re now in the midst of the commercialisation of artificial intelligence, where it moves from being predominantly a technology in development to one in deployment,” said Nigel Toon, co-founder and CEO of microprocessor developer Graphcore.
This shift is bound to have many implications, but two that Toon cites as especially worthy of note are the infrastructure that enables hardware and software deployment, and the efficiency of the compute systems they are run on. After all, models alone do not make an enterprise-ready AI solution.
Toon continues: “You need cloud compute; developer tools; a built-out software stack; and companies providing AI-as-a-Service products.
“People who think of Graphcore as a ‘chip company’ are often surprised that we have more people working in software than hardware and that we dedicate so much energy to developing these enterprise ecosystem partnerships, but that’s what it takes to be successful.”
When it comes to that all-important efficiency, compute costs matter far more once you move from large hyperscalers and research labs developing foundation models to commercial organisations looking to build AI into their business, serving thousands or millions of users.
“If you can find 20 per cent to maybe 50 per cent savings on the cost of your hardware and the running costs based on performance, that has a massive material difference in terms of what AI can do for you,” said Toon. “So that move towards commercially deployed AI and the significance of performance/$ is hugely significant.”
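Toon's performance-per-dollar point can be made concrete with a little arithmetic. The sketch below compares the cost of serving a fixed volume of inferences on two accelerators; the throughput and pricing figures are invented for illustration, not real benchmarks.

```python
# Hypothetical illustration of the performance/$ argument above. The throughput
# and hourly prices below are made-up numbers, not real hardware benchmarks.

def cost_per_million_inferences(throughput_per_sec: float, cost_per_hour: float) -> float:
    """Hardware cost to serve one million inferences at a given throughput."""
    seconds_needed = 1_000_000 / throughput_per_sec
    return (seconds_needed / 3600) * cost_per_hour

# Two hypothetical accelerators serving the same model:
baseline = cost_per_million_inferences(throughput_per_sec=500, cost_per_hour=4.00)
alternative = cost_per_million_inferences(throughput_per_sec=700, cost_per_hour=3.50)

saving = 1 - alternative / baseline
print(f"baseline ${baseline:.2f}/M, alternative ${alternative:.2f}/M, saving {saving:.1%}")
```

At the scale of millions of users, a saving in this range compounds directly into the viability of the deployment, which is the substance of Toon's point.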
The advent of generative AI has been hailed by many tech leaders as the next great technological frontier for society. This makes it paramount to strike a balance between innovations that optimise in-house and customer-facing processes, and mitigation of possible societal harms such as misinformation and data breaches. This lucrative area of AI is predicted to grow into a $1.3tn market by 2032, from a mere $40bn in 2022.
Its already prominent role in society as well as business means that “AI is no longer viewed as just a ‘hype’ technology, but one that is very much at the centre of our reality”, according to Imam Hoque, co-founder of decision intelligence start-up Quantexa.
Hoque continues: “It is, put simply, one of the biggest technological breakthroughs of our time, and it’s accelerating at an incredibly rapid pace.
“We’ve seen that generative AI has the ability to support human capability, automating otherwise routine tasks, ranging from providing support to customer help teams in the IT department, to assisting in extracting data from otherwise operationally challenging processes.”
Alex Housley, founder and CEO of machine learning deployment vendor Seldon, added: “LLMs are one of the biggest game changers overall in the market. One reason for that is the shifting in level of skill sets that are required to create extremely powerful models, and to suddenly factor use cases away from highly skilled data scientists, to less technical stakeholders. It’s a more diverse group of model developers.”
These models are nothing new, ranging from smaller, early-stage models built for a single purpose, through mid-size models from the likes of AI21 and Cohere, to larger-scale models such as GPT-4. But risks remain that call for caution.
Internal vs external use cases
While there is room for further rapid growth in this market, hallucinations and other pitfalls still to be addressed within the largest LLMs mean that, in general, relying on them completely for external use remains impractical.
“Only a select few companies — OpenAI and Microsoft, for example — are running larger-scale models,” Housley explained. “Organisations will generally tend to move some internal workloads to OpenAI via APIs. But for processes like turning unstructured data into a knowledge base, to help your team, they wouldn’t necessarily put this in front of customers, until we can really be clear on risks like hallucinations, and set appropriate boundaries in terms of its outputs.
“Internal use cases have less chance of having negative repercussions in the market. But this will shift over the next few years to more external cases.”
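The internal use case Housley mentions, turning unstructured documents into a searchable knowledge base for a team, can be sketched in outline. A real deployment would use an LLM with vector embeddings; the keyword index below is only a toy stand-in that shows the shape of the pipeline, and the documents and helper names are invented for the example.

```python
# Minimal sketch of indexing unstructured internal documents so a team can
# query them. Production systems would use embeddings and an LLM; this plain
# keyword index just illustrates the pipeline's shape.
from collections import defaultdict

def build_index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercased word to the set of document names containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for name, text in docs.items():
        for word in text.lower().split():
            index[word.strip(".,;:")].add(name)
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return the documents containing every word of the query."""
    hits = [index.get(word.lower(), set()) for word in query.split()]
    return set.intersection(*hits) if hits else set()

docs = {
    "onboarding.txt": "New starters should request VPN access from the IT department.",
    "expenses.txt": "Submit expense claims through the finance portal by month end.",
}
index = build_index(docs)
print(search(index, "VPN access"))
```

Because a query here only ever returns source documents, a wrong answer is a missed lookup rather than a fabricated fact, which is one reason internal retrieval use cases carry less risk than customer-facing generation.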
Another area of artificial intelligence being explored by start-up founders is embodied AI. Specialised for controlling machinery such as a robot arm or an autonomous vehicle, the technology applies machine learning to bridge the gap between virtual models and real-world robotics, solving the challenges of systems communicating with a physical environment.
Alex Kendall, co-founder and CEO of self-driving system vendor Wayve, believes that embodied artificial intelligence will grow to be transformational to society. He says: “At Wayve, we’re building the foundation model for embodied AI, which provides the driving intelligence for our self-driving vehicles.
“This allows them to integrate seamlessly into fleet operations and drive in cities they’ve never experienced before. We’re seeing the same breakthroughs in AI that apply to chatbots and copilots apply to embodied AI too, such as the generative AI example above. This is enabling increasing levels of intelligence and generalisation in our embodied systems.”
Going forward, Kendall sees the technology unlocking new ways for people to interact with embodied systems, and advancing breakthroughs in autonomy.
Growing usage of synthetic data
With LLM developers expected to run short of human-generated training data in the near future, many start-ups are turning their attention to training with synthetic, computer-generated data. One company that has been particularly focused on this area is Hazy, which uses generative techniques at the core of its product, from generative adversarial networks (GANs) through to Bayesian networks.
“Many of our customers are in financial services, and they tend to have transaction data sets, with trends and patterns around demographics and behaviour patterns to track through time,” said Hazy’s co-founder and CEO, Harry Keen.
“We’ve tried to emphasise the advantage of having your own private versions of data sets that retain statistical values — the ability to access, use and analyse assets with greater speed and efficiency, as well as lower risk.
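The idea Keen describes, a private synthetic version of a data set that retains its statistical values, can be illustrated with a deliberately simple sketch. Hazy's actual product uses generative models such as GANs and Bayesian networks; fitting a single normal distribution per column, as below, is only a toy stand-in, and the "transaction amounts" data is generated for the example.

```python
# Toy sketch of synthetic data generation: produce a new column that tracks the
# real column's summary statistics without reusing any original record. Real
# products (e.g. Hazy's) use GANs or Bayesian networks, not a per-column fit.
import random
import statistics

random.seed(42)

# Stand-in for a real transaction-amounts column (generated here for the demo).
real_amounts = [random.gauss(mu=50.0, sigma=12.0) for _ in range(5000)]

# Fit the column's summary statistics, then sample a synthetic column from them.
mu = statistics.mean(real_amounts)
sigma = statistics.stdev(real_amounts)
synthetic_amounts = [random.gauss(mu, sigma) for _ in range(5000)]

print(f"real mean {statistics.mean(real_amounts):.1f}, "
      f"synthetic mean {statistics.mean(synthetic_amounts):.1f}")
```

Analysis run against the synthetic column yields similar aggregate results to the real one, which is what makes such data sets usable with lower privacy risk.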
“The big difference we’ve seen in the last year is a greater focus on the technology by regulators like the FCA and the ICO. There’s a level of maturity that needs to be achieved, from early stage, through the proof-of-concept stage, and then to the scale-up phase.”
Graph neural networks
Elsewhere within AI trends, Graph Neural Networks (GNNs) are also making their mark. A class of artificial neural network tailored to processing graph-structured data, this kind of deep learning allows for the modelling of irregular structures. Notable use cases being explored include object detection, machine translation and speech recognition.
Toon explained: “Think of the connections between people and things in a social network. That capability is also extremely useful for other applications like novel drug discovery.
“Computationally, GNNs are quite challenging for legacy compute systems like GPUs, but happily for companies like Graphcore, they perform extremely well. The ‘Graph’ part of our name isn’t a coincidence. So we can expect not just to see more AI in these very serious, high-stakes fields — but also new types of leading-edge AI techniques, such as GNNs.”
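The social-network example Toon gives comes down to message passing, the core operation of a GNN: each node updates its features by aggregating those of its neighbours. The sketch below shows one such step on a tiny graph with scalar features; real GNN layers use feature vectors, learned weight matrices and nonlinearities, and the graph here is invented for illustration.

```python
# One toy message-passing step, the core of a graph neural network: each node
# aggregates its neighbours' features. Real layers add learned weights and
# nonlinearities; this only shows the graph-structured data flow.

# A small social-network-style graph as an adjacency list.
neighbours = {
    "alice": ["bob", "carol"],
    "bob": ["alice"],
    "carol": ["alice"],
}

# One scalar feature per node for simplicity (real GNNs use vectors).
features = {"alice": 1.0, "bob": 2.0, "carol": 3.0}

def message_passing_step(neighbours: dict, features: dict) -> dict:
    """New feature of each node = mean of its own and its neighbours' features."""
    updated = {}
    for node, nbrs in neighbours.items():
        total = features[node] + sum(features[n] for n in nbrs)
        updated[node] = total / (1 + len(nbrs))
    return updated

features = message_passing_step(neighbours, features)
print(features)
```

The irregular, data-dependent memory access in the aggregation step is precisely what makes GNNs awkward for GPUs and a good fit for architectures designed around sparse graph workloads, which is the point Toon makes above.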
Staying compliant with regulation
As regulation of artificial intelligence continues to take shape — notable examples including the voting-in of the EU AI Act, and legislation developments in China — organisations of all sizes will need to keep tabs on continued drafting, to stay compliant.
Housley says: “A lot of the companies that we’ve been working with have been wanting to get ahead of the game, and ensure that systems are set up so that they don’t have this sort of reactive phase of trying to respond to regulation that has just come into force.
“As with the GDPR, there have been many companies trying to put data protection systems in place at the last minute, and this can be a much harder thing to do with live AI systems.
“There’s a feeling of needing to not miss the opportunity, and use our common sense as much as we can, but also to guard against the bureaucracy that can build up around these types of approval processes, because the cost of that may be more significant to the business overall.”
Hoque advises: “AI tools must be trained with accurate and up-to-date data, so the insights drawn are effective regardless of industry, to ensure that businesses avoid supporting ‘hallucinating’ machines.
“This data must be representative of the real world while lacking bias. If AI is learning from data founded on an out-of-date version of the world’s economic landscape, traditionally disadvantaged people will continue to be held back. The main responsibility of the government is to ensure the quality and quantity of data that machine models are being exposed to can render accurate analysis.
“From an ethical perspective, it’s also key that the data is protected and being used for good. The decisions made by AI technology will have an impact on our futures and will set the standard on how governments work with the public.”