AI continues to reshape how we engage with the world and how organisations operate, and at unprecedented speed. That growth is increasing the pressure on organisations to implement responsible (and reasonable) governance. But where is that oversight coming from, and how can organisations align themselves with a balanced, best-practice approach?
Many organisations operate in an environment where strong pressures are coalescing around the increased use of AI, not least confusion and rapid change. The International Organization for Standardization (ISO) recognised that these challenges were coming, prompting the development of ISO 42001, a governance and management system standard for the AI lifecycle.
More specifically, ISO 42001 sets out a structured, risk-based framework for an AI Management System (AIMS), much as ISO 27001 does for information security. Crucially, it is designed to ensure that AI development, deployment and maintenance adhere to principles of safety, fairness and accountability. As AI becomes more embedded in business processes, the standard helps organisations address key challenges such as transparency, decision-making and continuous learning.
What is ISO 42001?
But, we may ask, why bother with AI governance at all, and what does ISO 42001 offer that other frameworks or voluntary principles do not?
At their core, AI technologies bring additional risks and considerations compared to traditional IT systems – notably the ability to learn, adapt and make autonomous decisions. These capabilities raise a wide range of fundamental ethical and societal questions around how such systems are developed, deployed and controlled.
For example, poorly trained models can entrench harmful biases and discrimination, while a lack of accountability makes it difficult to determine who is responsible when things go wrong. Inadequate safeguards can also lead to privacy violations and open the door to security threats, from deepfakes used for social engineering and disinformation to AI-enabled cyberattacks.
At the same time, any perception that AI is untrustworthy, opaque or unsafe could erode public trust, damaging confidence in the technology and those deploying it. Add in legal uncertainty and the potential for unintended consequences in high-stakes sectors such as government, healthcare or finance, and it’s not hard to see why careful, considered, reasonably applied governance must underpin the use of AI going forward.
Risk vs trust
As a result, there is enormous scope for developing AI systems that could be considered risky. These risks manifest in a variety of ways, most visibly in systems whose complexity, autonomy or potential impact introduces a higher level of concern across operational, ethical and societal dimensions. While some AI applications handle low-stakes tasks like document automation, others are rapidly evolving into decision-makers embedded deep within business processes and public systems. These more advanced models can produce emergent behaviours or outcomes, bringing risks that might not have been visible during development.
These risk levels shift over time as models are retrained, integrated into new environments or connected to sensitive functions. As a result, AI that seems low-risk today can become high-risk tomorrow, especially when its outputs influence democratic processes and decisions, legal rights or public safety. This fluid risk landscape highlights why static controls aren’t enough; governance must instead be continuous, adaptive and informed by both technical controls and societal impact.
With these issues front of mind, responsible organisations are focused on building trust in the use of AI – which requires far more than meeting baseline compliance requirements. While regulations provide a starting point, organisations that go beyond them by prioritising transparency, ethical development and user empowerment are better positioned to foster confidence in these systems. Being transparent about how AI is used, what data it relies on and how decisions are made is key. Moreover, giving users control over when and how AI capabilities are enabled, along with assurances that their data won’t be retained or reused for training, plays a critical role in establishing that trust.
Equally important is a commitment to fairness, privacy and human oversight. Trustworthy AI should be trained on diverse, representative datasets and continuously tested to minimise bias and prevent harm. Communicating this clearly, whether through architecture diagrams, certifications or usage policies, helps demystify AI for end users and reinforces accountability. Ultimately, trust is earned when organisations are open about their methods and ensure that human judgement remains central to decision-making.
ISO 42001 and the technology supply chain
In this context, ISO 42001 is particularly relevant for organisations operating within layered supply chains, especially those building on cloud platforms. In these environments, where infrastructure, platform and software providers each play a role in delivering AI-powered services to end users, organisations must maintain a clear chain of responsibility and conduct vendor due diligence. By defining roles across the shared responsibility model, ISO 42001 helps ensure that governance, compliance and risk management are consistent and transparent from the ground up. Doing so not only builds internal confidence but also enables partners and providers to demonstrate trustworthiness to customers across the value chain.
As a result, trust management becomes a vital part of the picture: an ongoing process of demonstrating transparency and control over the way organisations handle data, deploy technology and meet regulatory expectations. Rather than treating compliance as a static goal, trust management takes a dynamic, continuous approach to showing how AI is governed across an organisation. By operationalising transparency, it becomes much easier to communicate security practices, explain decision-making processes and provide evidence of responsible development and deployment.
For organisations under pressure to move quickly while maintaining credibility, trust management frameworks offer a way to embed confidence into the AI lifecycle and, in the process, reduce friction in buyer and partner relationships while aligning internal teams around a consistent, accountable approach. ISO 42001 reinforces this by providing a formal structure for embedding trust management principles into AI governance. From risk controls and data stewardship to accountability and transparency, it creates the foundation organisations need to operationalise trust at scale, both internally and across complex technology ecosystems.
Matt Hillary is SVP of Security & CISO at Drata.