How AI will shift the security landscape in 2024

The impact of AI within cybersecurity is expected to grow significantly this year, says Alex Holland, senior malware analyst at HP

One of the ways you can understand the constantly shifting dynamic between attackers and defenders is through the lens of external factors. When conducting this type of analysis, we usually abbreviate the main factors into ‘PESTLE’, an acronym that stands for political, economic, social, technological, legal and environmental forces. And without a doubt, the technological factor that we see causing the biggest waves through the cyber threat landscape is the rapid evolution of AI.

Cybercriminals are increasingly using AI to boost the effectiveness and efficiency of their attacks. Meanwhile for defenders, AI is proving to be a powerful ally for security teams, improving threat detection and remediation. The need for AI to automate security tasks couldn’t come sooner. A recent report from the British government found the UK has over 160,000 vacant cybersecurity roles that need to be filled.

We expect the impact of AI within cybersecurity to grow significantly this year. It will advance phishing techniques and accelerate vulnerability discovery in systems. AI will also expedite the development of – and response to – network intrusions. To stay ahead of adversaries who are already starting to adopt AI, security teams must understand the threat AI poses, but also the opportunities it promises to improve network defence.

In 2024, organisations should focus on three areas where AI is ripe to transform cybersecurity:

#1 – AI-enhanced social engineering attacks

AI will escalate the scale and effectiveness of social engineering attacks in 2024, creating highly convincing phishing lures in just a few moments. Cybercriminals will use AI to produce personalised phishing messages using data from social media and breached email accounts. These sophisticated lures will be tough to detect, even for trained employees.

What’s more, generative AI will turn cybercriminals into better con artists. AI will help attackers to craft well-written, convincing phishing emails and websites in different languages, enabling them to widen the nets of their campaigns across locales. We expect to see the quality of social engineering attacks improve, making lures more difficult for targets and security teams to spot. As a result, we may see an increase in the risks and harms associated with social engineering – from fraud to network intrusions.

We may also see large AI-powered phishing campaigns around major events like the UK general election, sporting events (e.g. the Paris Olympics, The Champions League), and shopping events (e.g. Black Friday, Boxing Day sales). With AI-generated emails becoming almost indistinguishable from legitimate ones, relying on employee training alone to protect users won’t be enough. Instead, security teams should consider isolation technologies, like micro-virtualisation, that don’t rely on detection to protect employees. This technology opens risky files and links inside isolated virtual environments, preventing malware and software exploits – even zero-day threats – from infecting devices.

#2 – Local large language models (LLMs)

As computational power increases, a new generation of ‘AI PCs’ will be able to run local LLMs without the need to rely on powerful external servers. This will allow PCs and users to benefit from AI fully, redefining how people interact with their devices.

These local LLMs promise enhanced efficiency and productivity, while offering security and privacy advantages by working independently of the internet. However, the local models and the sensitive data they process, if not properly secured, could make endpoints a bigger target for attackers.

What’s more, many businesses are deploying chatbots that are built upon LLMs to elevate and scale their customer service. But the underlying AI technology can create new information security and privacy risks like potentially exposing sensitive data. This year, we may see cybercriminals try to manipulate chatbots to bypass security measures to access confidential information.

#3 – Advanced firmware and hardware attacks

AI is driving the democratisation of technology by helping less skilled users to carry out more complex tasks more efficiently. But while AI improves organisations’ defensive capabilities, it also has the potential for helping malicious actors carry out attacks against lower system layers, namely firmware and hardware, where attack efforts have been on the rise in recent years.

Historically, such attacks required extensive technical expertise, but AI is beginning to show promise to lower these barriers. This could lead to more efforts to exploit systems at the lower level, giving attackers a foothold below the operating system and the industry’s best software security defences.

Over time, we expect to see an increase in the frequency of malware targeting system firmware, such as MoonBounce and CosmicStrand. These attacks rely on exploiting weaknesses below the operating system. Focusing on closing the hardware and firmware attack surface will be essential to counter these threats.

A new frontier for cybersecurity

AI is set to transform cybersecurity, offering substantial opportunities for security teams to improve their threat detection and response. For example, AI copilots will help defend users by automatically detecting targeted phishing attempts.

The introduction of AI PCs in 2024 will also offer security benefits, allowing users to use AI securely on-device without the risks of storing data in the cloud. These devices will also enhance data privacy with features like automatic locking and privacy screens.

However, to use AI effectively, enterprises should prioritise security first. This means knowing your data and environment, threat modelling to understand your risks, selecting a balance of prevention and detection technologies, and implementing zero trust principles to keep data secure. Finally, organisations should consider working with trusted AI security providers to get the most out of AI, while minimising their security and privacy risks.

Alex Holland is the senior malware analyst at HP.

More on AI and cybersecurity

NCSC releases guidelines for secure AI developmentNew guidance from the National Cyber Security Centre (NCSC) aims to help businesses securely innovate with AI and minimise risks