Over two-thirds of IT leaders concerned about deepfake attacks

68 per cent of IT decision makers surveyed by Integrity360 express concerns about threat actors using deepfakes to target their businesses

Artificial intelligence is increasingly being used by cyber criminals looking to evade business security measures, with 59 per cent of participants in research from cybersecurity specialists Integrity360 agreeing that AI is increasing the frequency of attacks.

Additionally, 61 per cent of tech leaders surveyed expressed apprehension over the rise of AI, indicating that threats such as deepfake attacks are a real concern within the industry.

So-called ‘offensive AI’ is being used in particular to generate malware, as well as to create more authentic-looking phishing emails that mimic the target company’s language, tone, and design.


“The use of AI for cyber attacks is already a threat to businesses, but recognising the future potential and the impact this can have is just the start,” said Brian Martin, head of product development, innovation and strategy at Integrity360.

“We’ve already seen the potential for deepfake technology with the video of Volodymyr Zelenskyy telling Ukrainians to put down their weapons and spreading disinformation. This is just one example of the nefarious means in which it can be used, and businesses need to be prepared for how to defend against this and discern what is and isn’t real, to avoid falling victim to an attack.”

C-Suite communication on security

Meanwhile, nearly half of respondents (46 per cent) disagreed with the suggestion that they lacked understanding of AI’s impact on cybersecurity.

However, among the tech leadership roles in the survey sample, CIOs were found to have the least understanding of AI’s impact on security, with only 42 per cent disagreeing that they lacked such understanding.

This finding points to a possible gap in how many businesses communicate and educate around cybersecurity strategy — especially among the C-suite.

Benefits of AI

On the other side of the coin, however, 73 per cent of respondents said that AI is becoming an increasingly important tool for security operations and incident response, with the potential to reduce strain on security staff.

What’s more, 71 per cent agreed that AI is improving the speed and accuracy of incident response, likely due to the technology’s ability to analyse vast amounts of data and identify threats in real time.

A further 67 per cent believe that using AI improves the efficiency of cybersecurity operations, a notable example being the automation of administrative tasks.


“As AI technologies continue to evolve, their integration into cyber security will follow. Organisations must remain proactive in embracing AI while also addressing the challenges it presents, ensuring that their cyber security defences keep pace,” added Martin.

The research, conducted by Integrity360 in partnership with Censuswide, surveyed 205 IT security decision makers in August 2023.

Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.