Are you really ready for AI? Exposing shadow tools in your organisation

Without clear AI governance, employees often turn to unsanctioned, publicly accessible external tools, a practice known as 'shadow AI'

When it comes to security risks, the era of AI presents a new battlefield for data breaches and leaks. However, while AI is undoubtedly playing a major part in cyberattacks and defence alike, it’s the hidden, unregulated tools within your organisation that may pose an equally significant data loss risk. Unsanctioned use of external AI tools by employees, dubbed ‘shadow AI’, has, like shadow IT before it, become one of the top five emerging risks facing organisations globally, according to Gartner’s Quarterly Emerging Risk Report.

This isn’t a distant scenario: your teams are already using unsecured tools to boost productivity, regardless of existing IT policies.

The result is critical data exposure. Nefarious cyber actors don’t need to steal sensitive data when your employees are giving it away to publicly accessible tools. Businesses must urgently implement AI policies and focus on training their workforce, not just to capitalise effectively on new technologies, but also to mitigate the risks currently being introduced to their networks.

The hidden threat of shadow AI

When an organisation doesn’t have an approved framework of AI tools in place, its employees will commonly turn to public applications for everyday tasks. By now, everyone is aware of generative AI tools, whether they are actively using them or not, but without a proper ruleset in place, everyday employee actions can quickly become security nightmares.

This can be everything from employees pasting sensitive client information or proprietary code into public generative AI tools to developers downloading promising open-source models from unverified repositories. Third-party vendors are quietly integrating AI features into software your teams already use, without formal notification. Meanwhile, individuals and entire teams alike are building custom AI solutions to solve immediate problems, bypassing company cybersecurity reviews entirely.

The numbers bear this out. Gartner’s 2025 Cybersecurity Innovations in AI Risk Management and Use survey found that 79 per cent of cybersecurity leaders suspect employees are misusing approved GenAI tools, and 69 per cent reported that prohibited tools are still being used. Perhaps most alarmingly, 52 per cent believe custom AI is being built without any risk checks, a recipe for intellectual property leakage and severe compliance breaches.

Most organisations lack awareness

The root cause of turning to shadow AI isn’t malicious intent. Unlike cyber actors, who exploit weaknesses in business infrastructure for a hefty payout, employees aren’t leaking data outside your organisation intentionally. AI is simply an accessible, powerful tool that many find exciting. In the absence of clear policies, training and oversight, and under mounting pressure to deliver more, faster, people will naturally seek the most effective support to get the job done.

Teams are constantly being pushed to increase output and efficiency. But a company’s trust in its employees to perform doesn’t always come with clear AI governance or visibility of access for IT teams. Even with more prohibitive policies in place, employees will still find workarounds to hit their targets. Shadow AI isn’t just a technology problem; it’s a problem of process and culture as well.

Building a proactive AI-first strategy

A balanced, strategic approach to these challenges requires more than direction from your IT team; it must come directly from the C-suite. Codifying your AI governance policies should be a priority; you cannot manage what you haven’t defined. Establish clear, practical rules for which tools are acceptable in your organisation and which aren’t, set AI-specific data handling rules, and embed AI reviews into third-party procurement.

Equally, you cannot protect against what you can’t see. Tools such as data loss prevention (DLP) and cloud access security brokers (CASBs), which can detect unauthorised AI use, must be an essential part of your security monitoring toolkit. Ensure these alerts feed directly into your SIEM, and define clear processes for escalation and remediation.
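To make this concrete, the signal such tools act on can be as simple as matching outbound traffic against a list of known generative AI domains. The sketch below is illustrative only: it assumes a CSV proxy-log export with user and dest_host columns and a hand-maintained domain list, not the workings of any specific DLP or CASB product.

# Minimal illustrative sketch: scan a CSV proxy-log export for
# outbound requests to public generative AI domains. The column
# names (user, dest_host) and the domain list are assumptions
# made for this example.
import csv

GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_shadow_ai(log_path):
    """Yield (user, host) for each request that hits a listed domain."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                yield row["user"], host

if __name__ == "__main__":
    for user, host in flag_shadow_ai("proxy_log.csv"):
        # In practice, forward this alert to the SIEM for triage
        # under your defined escalation process.
        print(f"ALERT shadow-ai user={user} dest={host}")

Commercial DLP and CASB platforms apply the same principle continuously and at far greater depth, inspecting content as well as destinations.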

AI literacy must come in tandem with this, integrated directly into company culture. This means educating teams on the real-world risks and on how to innovate responsibly, not just efficiently. The most effective way to combat shadow AI use in your organisation is to provide a better, safer and more secure alternative. Fostering a collaborative culture that openly shares AI best practices is also essential; don’t just say ‘no’ to public tools, but provide an avenue of ‘yes, and here’s how you do it securely.’

The first step is assessing readiness

A professional readiness assessment must be your first step: it identifies the gaps in your organisation and charts a path to building the right, resilient foundation. This means taking stock of your current technology and AI environment, surfacing any hidden risks, and reviewing existing policies and monitoring capabilities. Prioritising AI use cases that can deliver tangible value without compromising control is key.

Building an AI roadmap that balances innovation with governance and security is critical before opening the floodgates and bringing shadow AI into the light. When it comes to new and emerging technologies, your business shouldn’t just be thinking about what these tools can do, but about how best to control them within your organisation.

Jon Bance is chief operating officer at Leading Resolutions.
