From generative to agentic AI – now the real transformation begins

Node4's Mark Skelton takes us through the move from generative to agentic AI and how to approach it in your organisation

In November 2022, the world was introduced to generative AI en masse with the global release of ChatGPT. Since then, AI has been the buzzword on everyone’s lips, and the capability of generative AI has come on in leaps and bounds, graduating from basic written content to complex images, audio and even video. As it evolved, the question everybody began to ask was, “What comes next?” We now have the answer: agentic AI.

Unlike traditional automation tools or generative models that simply respond to prompts, agentic AI systems are goal-driven. They are proactive, can reason through a task and act to achieve a defined outcome, often across long and detailed processes and through multiple decision points. Indeed, this ability to act autonomously provides companies with the opportunity to completely revolutionise and transform their business models.
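
To make that pattern concrete, the short Python sketch below shows the basic plan-act loop behind an agent: a goal goes in, the agent decides its next step, acts, and stops when the goal is met or a step budget runs out. The goal, the steps and the decision rule are illustrative placeholders, not any particular vendor’s implementation.

```python
# A minimal, illustrative agent loop: goal in, repeated plan/act steps,
# stop when the goal is met or the step budget runs out. The "tools" and
# the decision rule here are placeholders, not a real product.

from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    max_steps: int = 10
    history: list = field(default_factory=list)

    def plan(self, state: dict) -> str:
        # In a real system this would call a reasoning model; here we simply
        # pick the next unfinished step in a hypothetical order-handling flow.
        for step in ("validate_order", "check_stock", "schedule_delivery"):
            if not state.get(step):
                return step
        return "done"

    def act(self, action: str, state: dict) -> dict:
        # A real agent would invoke external tools or APIs; we just mark the step done.
        state[action] = True
        self.history.append(action)
        return state

    def run(self) -> list:
        state: dict = {}
        for _ in range(self.max_steps):
            action = self.plan(state)
            if action == "done":   # goal reached: stop autonomously
                break
            state = self.act(action, state)
        return self.history


if __name__ == "__main__":
    agent = Agent(goal="fulfil sales order #1234")
    print(agent.run())  # ['validate_order', 'check_stock', 'schedule_delivery']
```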

How have we arrived here?

The early benefits of generative AI have been well documented, with 78 per cent of companies now using it across multiple business functions. Adoption spans a wide range of areas: 56 per cent of companies use it for customer service, 51 per cent for cybersecurity and fraud prevention, 47 per cent for digital assistants, and 40 per cent for inventory and supply chain operations. The list goes on.

But the shift towards agentic AI signals something more ambitious. Rather than simply making people more efficient, it enables the complete automation of business processes in ways that Robotic Process Automation (RPA) and other earlier systems failed to deliver. Where rigid formulas and conditional rules limited previous approaches, agentic AI is flexible by design. It can assess imperfect or incomplete data, determine the best course of action and carry out tasks that previously required human judgement. The result is not just faster workflows, but autonomous execution across entire processes.

In more advanced deployments, these agents don’t operate in isolation. Organisations are exploring orchestrated multi-agent systems, in which several agents coordinate tasks, pass information to one another and adapt dynamically to changing environments or inputs.
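
A rough sketch of what that orchestration can look like is below: an orchestrator passes a shared context from agent to agent, each one reading what its predecessors produced and adding its own output. The agent names and the order-handling flow are made up for the example.

```python
# Toy orchestration sketch: an orchestrator routes a shared context through
# specialised agents; each reads earlier outputs and contributes its own.
# Agent names and the hand-off order are illustrative only.

from typing import Callable


def intake_agent(context: dict) -> dict:
    context["order"] = {"sku": "ABC-123", "qty": 5}
    return context


def inventory_agent(context: dict) -> dict:
    # Reads the previous agent's output and adds its own decision.
    context["in_stock"] = context["order"]["qty"] <= 10
    return context


def fulfilment_agent(context: dict) -> dict:
    context["action"] = "ship" if context["in_stock"] else "backorder"
    return context


def orchestrate(agents: list[Callable[[dict], dict]]) -> dict:
    context: dict = {}
    for agent in agents:
        context = agent(context)  # information passes from agent to agent
    return context


if __name__ == "__main__":
    result = orchestrate([intake_agent, inventory_agent, fulfilment_agent])
    print(result["action"])  # "ship"
```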

This opens up a wide range of use cases. On the front end, for example, organisations are beginning to explore how agents can manage sales orders from receipt through to fulfilment. Elsewhere, in back-office environments, finance teams are using agentic AI to automate time reporting, payroll and other repetitive tasks, allowing staff to focus on higher-value activities.

It’s also just as important to understand what agentic AI isn’t. These systems are not just smarter chatbots or APIs with memory; they are designed to operate independently, pursue goals and adapt their behaviour based on context and feedback. These are early days in the agentic AI journey, but momentum is building fast.

Indeed, if industry predictions are to be believed, spending on agentic AI could reach $155 billion (£115 billion) by 2030, representing a significant shift in enterprise priorities: away from investing in standalone tools and towards building autonomous systems that can operate, adapt and collaborate with minimal human oversight.

Approaching risk and responsibility

Adding significantly more autonomy to the mix also raises the stakes for how organisations approach risk and responsibility. While previous waves of AI-based automation focused on outputs that were clearly defined and more easily audited, agentic AI challenges these assumptions. By design, agents make decisions in fluid, often ambiguous contexts. This raises fundamental questions about how those decisions are monitored, governed and owned.

Equally important is the design of human-in-the-loop systems, in which humans can inspect, override or adjust agent decisions. This not only builds trust but also creates a feedback loop that improves performance and supports compliance.
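
One simple way to picture such a system is a review gate in front of the agent’s actions, as in the illustrative sketch below. The confidence and value thresholds are invented for the example; a real deployment would route escalations to a proper review queue rather than a function call.

```python
# Illustrative human-in-the-loop gate: low-confidence or high-impact agent
# decisions are escalated for review before execution, and every decision is
# logged so it can later be inspected and audited. Thresholds are made up.

import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-oversight")


def review_required(decision: dict, confidence_floor: float = 0.8,
                    value_ceiling: float = 10_000) -> bool:
    return decision["confidence"] < confidence_floor or decision["value"] > value_ceiling


def execute_with_oversight(decision: dict, human_approver) -> str:
    log.info("agent decision: %s", json.dumps(decision))
    if review_required(decision):
        if not human_approver(decision):  # the human can override the agent
            log.info("decision rejected by reviewer")
            return "rejected"
    log.info("decision executed")
    return "executed"


def approve_everything(decision: dict) -> bool:
    # Stand-in for a real review step (a queue, a dashboard, an approval API).
    return True


if __name__ == "__main__":
    print(execute_with_oversight(
        {"action": "issue_refund", "value": 25_000, "confidence": 0.92},
        approve_everything,
    ))
```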

To a greater or lesser extent, depending on the implementation, responsibility is shifting from human-led processes to autonomous systems. As a result, companies need to rethink oversight rather than deploying agentic AI with the same guardrails used for generative AI over the past few years.

For example, it is no longer enough just to review results; organisations must understand how agents reach decisions, what data they rely on and how outcomes are validated. Without clear frameworks for accountability, the benefits of autonomy risk being undermined by a loss of control or visibility, with almost inevitable performance and compliance consequences.

At the same time, traditional AI performance metrics, such as latency or model accuracy, are no longer sufficient. Measuring the effectiveness of agentic AI requires new approaches that track task completion rates, contextual decision quality, and consistency over time.
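
As a loose illustration, the sketch below computes two such agent-level measures, task completion rate and week-on-week consistency, from a hypothetical log of agent runs. The log format and field names are assumptions made for the example.

```python
# Rough sketch of agent-level metrics rather than model-level ones:
# task completion rate and week-on-week consistency from a log of runs.
# The log format and field names are hypothetical.

from statistics import pstdev

runs = [
    {"week": 1, "completed": True},
    {"week": 1, "completed": True},
    {"week": 1, "completed": False},
    {"week": 2, "completed": True},
    {"week": 2, "completed": True},
]


def completion_rate(records: list[dict]) -> float:
    return sum(r["completed"] for r in records) / len(records)


def consistency(records: list[dict]) -> float:
    # Lower spread across weekly completion rates means more consistent behaviour.
    weeks = sorted({r["week"] for r in records})
    weekly = [completion_rate([r for r in records if r["week"] == w]) for w in weeks]
    return pstdev(weekly)


print(f"completion rate: {completion_rate(runs):.0%}")
print(f"weekly variation: {consistency(runs):.2f}")
```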

This makes readiness a broader issue than technology alone. Success with agentic AI depends not only on infrastructure, but on whether an organisation’s culture, processes and leadership are equipped to manage delegation at scale. Ultimately, those that treat agents as collaborators will be best placed to unlock their potential while maintaining the control and accountability these technologies demand.

Key takeaways

  • Agentic AI systems are goal-driven. They are proactive, can reason through a task and act to achieve a defined outcome.
  • In more advanced deployments, these agents don’t operate in isolation. Organisations are exploring orchestrated multi-agent systems, where several agents coordinate tasks, pass information to each other, and adapt dynamically to changing environments or inputs.
  • Agents make decisions in fluid, often ambiguous contexts. This raises fundamental questions about how those decisions are monitored, governed and owned.
  • Organisations must understand how agents reach decisions, what data they rely on and how outcomes are validated.
  • Measuring the effectiveness of agentic AI requires new approaches that track task completion rates, contextual decision quality, and consistency over time.
  • Success with agentic AI depends not only on infrastructure, but on whether an organisation’s culture, processes and leadership are equipped to manage delegation at scale.

 Mark Skelton is chief technology officer at Node4.
