Decisions at every level now hinge on timely, accurate information, making knowledge the ultimate weapon in business. For that reason, digital tools are no longer optional, but daily necessities for individuals and organisations alike. Victory goes to those who can collect, analyse, and act on relevant intelligence faster than the competition.
Yet technology has also opened the hidden battlefield of information warfare. Over the past three decades, the explosive growth of the web and smartphones has democratised not only markets but also misinformation. Today, anyone can effortlessly reach customers, and detractors can just as easily do the same.
The modern business landscape increasingly resembles a Clausewitzian ‘total phenomenon of conflict’, where information warfare plays a major role. We are witnessing ‘the rise of an information warfare in cyberspace’, where disinformation has become a weapon. In such a hyper-connected world, even a minor rumour can rapidly escalate into a strategic threat.
AI is a powerful ally and a double-edged sword
Into this volatile mix has stepped AI, propelled by a lightning-fast democratisation of large language models (LLMs). With AI, decision-makers can now digest mountains of unstructured data in real time. While this can be tremendously empowering, it also introduces new risks. Chief among them is AI’s tendency to offer polished answers without revealing its reasoning. Users are rarely shown the underlying sources or the level of uncertainty involved, creating a potentially dangerous illusion of accuracy.
LLMs’ notorious hallucinations are a case in point: AI will confidently present false or unsupported claims if they seem statistically plausible. When the training data is biased or incomplete, those flaws are projected as if they were facts. As noted, such errors ‘undermine the reliability of AI-generated content, affecting trust [and] decision-making’. Moreover, each AI response is the product of billions of data points synthesised without clear attribution or footnotes.
So, while AI extends the reach of decision-makers, it also conceals the inherent ambiguity of the data it draws from. This opacity makes users particularly vulnerable to misinformation. Ironically, the more we rely on AI to think for us, the more we must question its outputs. In this context, trust becomes not just important: it is the defining issue.
Building a human-centric knowledge cycle with AI
To turn AI into an asset rather than a liability, organisations must rethink their approach to knowledge management.
At its core, knowledge management is a learning cycle centred on people, with technology acting as a force multiplier, not a substitute for judgment. The objective is to establish a virtuous loop in which data is collected, validated, and transformed into actionable insight. The tighter and more disciplined this cycle, the higher the quality of the resulting knowledge.
In practice, this means treating AI as just another tool in the toolkit. Leaders must develop procedures and mindsets that compensate for AI’s blind spots, leveraging it to accelerate learning while mitigating its errors. A practical way to structure this is in a three-stage loop – access, verification, and learning – all driven by human oversight. In this model, AI augments decision-making, but responsibility and critical thinking firmly remain in human hands.
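To make the loop concrete before walking through each step, here is a minimal sketch in Python. Every function name, source label, and rule below is an assumption invented for illustration; the only point is the shape of the cycle, with AI free to assist inside each stage while a named human signs off before anything becomes an insight.

```python
# Illustrative only: a toy access -> verification -> learning loop.

def access() -> list[dict]:
    """Stage 1: collect candidate items, each tagged with its origin."""
    return [
        {"claim": "Demand for product A is rising", "source": "industry-database"},
        {"claim": "Rival B is insolvent", "source": "anonymous-blog"},
    ]

def verify(item: dict) -> bool:
    """Stage 2: accept only items whose source the organisation has vetted."""
    return item["source"] in {"industry-database", "peer-reviewed-journal"}

def learn(item: dict, analyst: str) -> dict:
    """Stage 3: a named human reviews the item and records the resulting insight."""
    return {**item, "reviewed_by": analyst, "actionable": True}

insights = [learn(i, analyst="j.doe") for i in access() if verify(i)]
print(insights)  # Only the vetted claim survives, with a human reviewer attached.
```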
Step one: curated access to information
Rather than allowing AI to process unverified content indiscriminately, organisations should rely on a curated set of trusted data sources. These may include industry databases, peer-reviewed journals, market intelligence platforms, or vetted internal documents. By establishing a defined universe of reliable inputs, organisations reduce the risk of overlooking critical information while excluding untrustworthy content.
This step echoes Sun Tzu’s emphasis on intelligence gathering: ‘Enhance situational awareness’ by systematically collecting data on all relevant factors. It means mapping out who holds key knowledge, where vital information resides, and how it flows, and updating that map continuously as new content and contributors emerge. AI should support this effort, not replace it.
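One lightweight way to keep that defined universe explicit is a living registry of trusted sources and the people who own them. The field names, example entries, and helper functions below are hypothetical, sketched only to show what such a continuously updated map might look like:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class KnowledgeSource:
    name: str            # e.g. an industry database or a vetted internal repository
    owner: str           # the person accountable for keeping it current
    location: str        # where the information actually resides
    last_reviewed: date  # when the entry was last validated

# The curated universe of inputs that AI tooling is allowed to draw from.
REGISTRY: dict[str, KnowledgeSource] = {}

def register_source(src: KnowledgeSource) -> None:
    """Add or refresh an entry as new content and contributors emerge."""
    REGISTRY[src.name] = src

def is_trusted(source_name: str) -> bool:
    """Gate: only registered sources may feed the AI pipeline."""
    return source_name in REGISTRY

register_source(KnowledgeSource(
    name="peer-reviewed-journal-feed",
    owner="research lead",
    location="library subscription",
    last_reviewed=date(2024, 6, 1),
))
print(is_trusted("peer-reviewed-journal-feed"))  # True
print(is_trusted("anonymous-blog"))              # False: never enters the cycle
```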
Step two: verification – trust classification and bias control
Next, question everything. Once information enters the system, its validity and provenance must be rigorously assessed. The classic intelligence practice assesses both the credibility of the source and the accuracy of the content with a simple five-tier rating system. Similar frameworks can be embedded into AI tools.
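To show how such a framework might be embedded in tooling, here is an illustrative sketch. The two five-tier scales, their labels, and the acceptance threshold are assumptions made for this example, not a reference to any official schema:

```python
from dataclasses import dataclass
from enum import IntEnum

class SourceReliability(IntEnum):
    """Illustrative five-tier scale: how trustworthy is the source?"""
    RELIABLE = 5
    USUALLY_RELIABLE = 4
    FAIRLY_RELIABLE = 3
    NOT_USUALLY_RELIABLE = 2
    UNRELIABLE = 1

class ContentAccuracy(IntEnum):
    """Illustrative five-tier scale: how well is the content corroborated?"""
    CONFIRMED = 5
    PROBABLY_TRUE = 4
    POSSIBLY_TRUE = 3
    DOUBTFUL = 2
    IMPROBABLE = 1

@dataclass
class RatedClaim:
    text: str
    source_rating: SourceReliability
    content_rating: ContentAccuracy

    def usable_for_decisions(self) -> bool:
        # Example policy (hypothetical threshold): both ratings must clear a bar
        # before the claim is allowed to feed strategic decision-making.
        return self.source_rating >= 4 and self.content_rating >= 3

claim = RatedClaim(
    text="Competitor X plans to exit the European market.",
    source_rating=SourceReliability.FAIRLY_RELIABLE,
    content_rating=ContentAccuracy.POSSIBLY_TRUE,
)
print(claim.usable_for_decisions())  # False: flag for further verification
```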
Assume that no answer is 100 per cent certain. This mindset reframes AI not as an oracle, but as a capable assistant that recognises its own limits. By systematically classifying and cross-checking inputs, organisations can significantly reduce the risk of falsehoods seeping into their decision-making processes.
To guard against misplaced confidence, people must also be trained to approach AI-generated responses with healthy scepticism. Without this cultural shift, the very automation we value could quietly erode trust.
Step three: continuous learning and human insight
Finally, use verified information to generate fundamental knowledge. AI can now be seamlessly integrated into daily decision-making: drafting reports, visualising trends, or simulating scenarios. But the most valuable element at this stage remains the human touch. AI can synthesise what is known, but only human minds can explore what is unknown.
The most significant strategic threats often lie in the ‘unknown unknowns’, i.e., the blind spots we don’t even realise we have. To uncover them, organisations must foster divergent thinking, curiosity, and a willingness to challenge assumptions. Encourage people to ask, ‘What if?’ and ‘Why not?’ as often as ‘How?’. Cultural practices such as cross-functional brainstorming, red-teaming ideas, and rewarding experimentation broaden situational awareness and sharpen creative edge. This mindset not only surfaces new insights but also reinforces the two previous steps by prompting a search for new sources and a more critical approach to accepted data.
Defining strategy with human-centred AI
In an age of information warfare, perception is the battleground. To stay ahead, decision-makers must be trained not just in AI tools but in understanding their strengths, limitations, and potential biases, including their own. The ability to critically assess AI-generated content is essential, not optional.
More than static planning, modern organisations need situational awareness and strategic agility, embedding AI within a human-centric knowledge strategy. We can shift the balance in the information war by curating trusted sources, rigorously verifying content, and sustaining a culture of learning. This new knowledge ecosystem embraces uncertainty, leverages AI wisely, and keeps cognitive bias in check, wielding knowledge as a disciplined and secure strategic asset.
Ultimately, by aligning technology with human insight and continuous education, AI becomes a force multiplier, not a risk. Those who master this disciplined approach won’t just manage knowledge more effectively; they will define the strategic frontier of the Information Age.
Key takeaways
- Large language models (LLMs) are empowering, but come with risks, such as not showing sources for the information they provide.
- Structure your knowledge cycle with a three-part loop: access, verification, and learning.
- The ability to critically assess AI-generated content is essential, not optional.
Paulo Cardoso do Amaral is the author of Business Warfare.
Read more
AI vs AI – are cybercriminals or organisations winning? – Cybercriminals are using LLMs to enhance their attacks, making it harder for security professionals to even know they’re under attack. However, security teams and researchers are using GenAI to make themselves smarter and faster at finding security flaws at scale, argues Michiel Prins, co-founder at HackerOne
Why synthetic data is pivotal to successful AI development – Geoff Barlow explains how synthetic data is helping businesses to overcome the barriers to AI development
Why ISO 42001 sets the standard for responsible AI governance – With the use of AI increasing in all areas, the development of effective governance is paramount. ISO 42001 is the latest standard helping businesses build trust moving forward