Protecting against cyber attacks backed by generative AI

Threat actors are turning to generative AI capabilities to evolve social engineering and other cyber attacks — here's how businesses can stay protected

As generative artificial intelligence (AI) tools such as ChatGPT become more integrated into business operations, adoption is accelerating, with one-third of organisations now employing them regularly in at least one business function. This widespread adoption is accompanied by a rise in cybersecurity threats and risks: as these tools gain traction, the likelihood of data breaches and data misuse increases, making it vital for organisations to take a proactive approach to safeguarding sensitive data.

In fact, a recent Darktrace study revealed a 135 per cent increase in novel social engineering attacks from January to February 2023. This spike corresponds with the widespread adoption of ChatGPT and the reality that cybercriminals are using the technology’s capabilities to deliver more convincing attacks.

As the AI race powers on, organisations need to ensure they are protecting their businesses from generative AI-powered cyber attacks, and from the new challenges that tools like GPT-4 will bring.


The rise of generative AI-powered cyber attacks

Cyber attacks, including phishing, malware, and brute force attacks, are on the rise with no signs of slowing down. The Anti-Phishing Working Group (APWG) recorded a total of 4.7 million phishing attacks in 2022 alone, a figure that has grown by more than 150 per cent per year since the start of 2019. Attacks are continuing to grow in volume, frequency, and sophistication. And with OpenAI unveiling its latest iteration, GPT-4, IT professionals face a difficult decision over whether to adopt such technology given the security risks it brings.

Beyond its innovative capabilities, a notable area of concern is generative AI’s potential to bolster the effectiveness and scale of cyber attacks: automated content generation can produce highly convincing text, driving a rise in spear phishing and social engineering attacks. This has prompted experts to closely examine the implications for corporate cybersecurity strategies and to bolster IT defence mechanisms.
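
To see why convincing machine-generated text is such a problem for defenders, consider the kind of simple keyword-and-header heuristic that many basic mail filters still rely on. The sketch below is purely illustrative (the indicator words, weights and function name are assumptions, not any real product’s logic); well-written AI-generated messages are dangerous precisely because they can slip past rules like these.

```python
import re

# Purely illustrative phishing heuristic; the indicator words and weights
# are assumptions for this sketch, not production detection logic.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_score(sender: str, reply_to: str, body: str) -> int:
    """Return a crude risk score for an email; higher means more suspicious."""
    score = 0
    words = set(re.findall(r"[a-z']+", body.lower()))
    score += 2 * len(words & URGENCY_WORDS)             # urgency language
    if reply_to.split("@")[-1] != sender.split("@")[-1]:
        score += 3                                       # mismatched reply-to domain
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body):
        score += 4                                       # link to a raw IP address
    return score

# A clumsy, template-style lure trips the rules easily...
print(phishing_score("it@example.com", "help@example-support.net",
                     "Urgent: verify your password immediately"))  # scores 11
```

A fluent, personalised message sent from a plausible domain would score close to zero here, and that is exactly the gap generative AI lets attackers exploit.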

In particular, the accessibility of ChatGPT is lowering the barriers to entry for bad actors, enabling inexperienced or amateur criminals to launch sophisticated cyber attacks and target organisations with ease; the workarounds needed to bend the tool to this purpose are straightforward. This means there will not only be more attacks, but also that more seasoned attackers will use phishing and other tactics as one part of more sophisticated campaigns.

In response, the Federal Trade Commission (FTC) launched an investigation into the potential harm ChatGPT could pose to consumers. While the tool’s capacity to amplify fraudulent strategies is recognised, the broader implications of generative AI, such as its ability to facilitate more sophisticated attacks, should also be a major concern for businesses aiming to protect their most sensitive assets and intellectual property.

Rising costs and complexity of cyber attacks

Given the increasing frequency and complexity of cyber attacks, it’s not surprising that the cost of a single security breach is on the rise. Enterprises face an array of potential threats that pose a significant challenge to IT security teams.

According to McKinsey, the number of distinct malware variants surged from under 10 million in 2010 to over 130 million a decade later. Newer, more complex types of malware have also emerged, such as “fileless” malware: malicious code embedded in a native scripting language or written directly into memory, where it can move laterally through the environment.

Not only this, but social engineering attacks prey upon human error and target unsuspecting users across the network. Organisations should have complete visibility of their network traffic patterns and be able to identify attacks in real time, ensuring they can mitigate both large volumetric attacks and smaller, resource-exhausting ones.
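
As a rough illustration of what real-time traffic visibility can look like in practice, the minimal sketch below flags samples that spike well above a rolling baseline. The window size, threshold and names are assumptions for the example; production DDoS detection combines many more signals than a single bandwidth series.

```python
from collections import deque
from statistics import mean, stdev

# Minimal sketch: flag traffic samples that spike above a rolling baseline.
# WINDOW and THRESHOLD_SIGMAS are illustrative values, not tuned guidance.
WINDOW = 60            # number of recent samples forming the baseline
THRESHOLD_SIGMAS = 4   # how far above normal a sample must be to alert

baseline = deque(maxlen=WINDOW)

def check_sample(bits_per_second: float) -> bool:
    """Return True if this traffic sample looks like a volumetric spike."""
    is_spike = False
    if len(baseline) == WINDOW:
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and bits_per_second > mu + THRESHOLD_SIGMAS * sigma:
            is_spike = True
    if not is_spike:
        baseline.append(bits_per_second)  # only learn from normal-looking traffic
    return is_spike
```

Running a check like this on every sample is what “always-on” monitoring amounts to at its simplest; the value of commercial mitigation lies in doing it across many dimensions at once and acting within seconds.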


Make every second count

As cyber attacks become more complex to defend against, every second counts. Time-to-mitigation is a critical factor in an organisation’s decision-making process, and an always-on solution capable of defending against even the largest of attacks provides companies with the best defence.

Cyber attacks can be detrimental to businesses, causing financial setbacks, disruptions in production, and a damaged reputation. Having the right solutions in place, however, can ward off these attacks and safeguard businesses from harm.

With always-on cyber protection, organisations can be assured that bad actors will be blocked, keeping businesses up and running.

Preventing cyber attacks is only possible if companies have complete control over what’s happening across their network and understand which traffic is going to which applications. Visibility of all network traffic and operations is essential to navigating the new challenges companies face in a world of rising generative AI tools and cyber attacks.

Tom Major is vice-president, solutions consulting EMEA at GTT.

Related:

16 cybersecurity predictions for 2024
Check Point Research has revealed its top predictions for cybersecurity in 2024, covering topics including AI developments, ransomware and cyber insurance.