Social engineering attacks remain among the most prominent and threatening types of cyber attack facing organisations and their networks, with threat actors consistently evolving their techniques at scale. Rogue messages sent over corporate communication channels including email, LinkedIn and SMS are steadily becoming more authentic-looking, designed to catch recipients out, and attackers are no longer relying on intuition alone.
As well as examining tone and communication styles in detail, threat actors are increasingly turning to AI: Darktrace found a 135 per cent surge in ‘novel social engineering’ attacks in 2023, in line with the widespread availability and increasing usage of ChatGPT. Security teams and whole business workforces need to stay vigilant and continue adapting their training and strategies to keep up with the techniques of cyber criminals.
16 cybersecurity predictions for 2024 — Check Point Research has revealed its top predictions for cybersecurity in 2024, covering topics including AI developments, ransomware and cyber insurance.
Types of social engineering attacks
Here are some of the top types of social engineering attacks that security staff and individual business users should look out for, and how to prevent them leading to infiltration of the company network.
Ransomware

The threat of ransomware, in which attackers demand money (often in cryptocurrency such as Bitcoin) in return for stolen and encrypted company data, remains rife across business networks. Predicted to impact over 70 per cent of businesses worldwide, ransomware carried an average global payment of $1.54m this year, according to Sophos, almost double the 2022 figure of just over $800,000.
“Ransomware attacks are still number one, in terms of the types of attacks targeting the majority of the companies today. Ransomware attacks have become more orchestrated and organised over the past five years, but the headlines are the same,” said Bernard Montel, technical director EMEA at Tenable.
“With ransomware-as-a-service being used more today, cybercrime is now very much its own industry, with its own competitions between specialised groups. Some of these groups will target a specific sector, whether it’s education, financial services or healthcare.”
Ransomware-as-a-service (RaaS) is a subscription-based business model in which malicious developers sell or rent out malware to affiliates who carry out the attacks.
Phishing

Phishing attacks involve the mass distribution of messages that masquerade as coming from a colleague or a business offering a service, and that contain malicious links or files which, when clicked, can expose details such as log-in credentials or sensitive personal or financial information. These can arrive via email, text message, phone call or even social media.
Many kinds of phishing exist in cyberspace, ranging from widespread sending across a workforce, to more specific, targeted techniques.
The rising role of LLMs
Large language models (LLMs) are increasingly being utilised to generate more authentic-looking messages. Once a model has been exposed to digital communication and domain data revealing an organisation’s or individual’s tone, style, typical message presentation and other specific details, cyber attackers can prompt it to create messages that request something of the target in a manner close to how the real person would express that wish.
“Now, a threat actor doesn’t need to do the creative work in drafting convincing messaging – they can simply ask ChatGPT to create an email, for example, that sounds like it’s from a Microsoft employee asking about a password.
“As well as convincing phishing emails, it can now create very convincing domain names. This can make it incredibly difficult to spot a malicious site. It’s also not inconceivable for a layman to become proficient at creating whole malware sites.”
An IBM X-Force study revealed near-parity between human-written and AI-generated phishing attempts, demonstrating the capability of tools like LLMs to create authentic-looking messages that employees must stay vigilant against.
Protecting against cyber attacks backed by generative AI — Threat actors are turning to generative AI capabilities to evolve social engineering and other cyber attacks — here’s how businesses can stay protected.
Spear phishing and whaling
Two more targeted and common kinds of phishing social engineering attacks are spear phishing and whaling.
Spear phishing entails attempted account or financial credential theft from an individual or organisation. As with other phishing types, threat actors utilising this method will pretend to be a trusted individual like a colleague.
“The preparation for a spear phishing attack, compared to a regular phishing attack, is completely different,” explained Montel. “With spear phishing, you know who specifically in an organisation you are targeting, and this calls for a lot of reconnaissance – a mapping out of the organisation.
“When the attacker knows as much as they can about the target, the chances of them opening it will be much higher.”
Whaling, meanwhile, is reserved for the very top of the org chart – the CEO and other senior executives. The target in this situation is likely to have sensitive trade information that is of use to the threat actor, and messages tend to emphasise business importance while masquerading as a customer or partner firm.
Business Email Compromise
While phishing takes aim at individual employees or businesses to steal their sensitive information, business email compromise (BEC) targets whole business processes, with the aim of infiltrating and profiting from them. Aimed at senior managers and budget holders, BEC attackers impersonate a legitimate business email account holder, tricking recipients into transferring money. Spear phishing may be involved in such attacks.
“In the world of business email compromise, text messages along the lines of ‘I need you to buy me some Amazon gift cards, to give out to the staff’ are still seen time and time again,” said Kevin Breen, director of cyber threat research at Immersive Labs. “That leans into people’s good nature, and how we as humans want to please everybody, especially when you start to apply pressure.
“This is what threat actors are really good at. They understand the human condition, so in these social engineering attacks, you’ll see them craft those messages in special ways. The message could say they are on a plane, or a train, and can’t access the network, so would need the recipient to help them.
“They’re very particular in the language they use, and in this sense of urgency. Then, we get into the whole ‘quid pro quo’ of human psychology.”
Comparing different AI approaches to email security — Exploring and comparing the ways in which AI aids security of email infrastructure.
Voice cloning

An emerging kind of social engineering attack that also utilises AI is voice cloning. Carried out over phone calls, this attack attempts to steal company funds using cloned vocal biometrics, and “could redefine the landscape of social engineering,” according to CTERA CTO Aron Brand.
“Harnessing advancements in artificial intelligence, voice cloning can craft eerily accurate voice reproductions,” said Brand.
“Imagine a scenario where an attacker can mimic the voice of a CEO, a close colleague, or even a family member. The implications are profound: unsuspecting employees or family members could be lured into divulging sensitive information or conducting unauthorised transactions.
“That is why I anticipate voice cloning to become a major security concern in the near future. Its nuanced approach surpasses traditional methods and preys on the inherent human trust associated with familiar voices.”
How to prevent social engineering attacks
To avoid being caught out by social engineering attempts, security teams need to ensure that the whole workforce is properly trained to recognise the signs of fraudulent activity and prevent any possible infiltration. Users should never click on links without first hovering over them to verify that the address is valid, and the sender’s email address should always be double-checked for authenticity.
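The two manual checks above — comparing a link’s visible text with its real destination, and scrutinising the sender’s domain — can also be automated. Below is a minimal, hypothetical Python sketch; the `TRUSTED_DOMAINS` set and both helper functions are illustrative assumptions, not any real product’s API:

```python
import re
from urllib.parse import urlparse

# Illustrative assumption: the organisation's genuine sending domains
TRUSTED_DOMAINS = {"example.com"}

def link_text_mismatch(visible_text: str, href: str) -> bool:
    """Flag links whose visible text shows one domain but whose href points to another."""
    shown = re.search(r"https?://([^/\s]+)", visible_text)
    actual = (urlparse(href).hostname or "").lower()
    return bool(shown) and shown.group(1).lower() != actual

def lookalike_sender(sender: str) -> bool:
    """Flag sender domains that resemble, but don't exactly match, a trusted domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    # crude homoglyph check: '1' swapped for 'l', '0' swapped for 'o'
    normalised = domain.replace("1", "l").replace("0", "o")
    return normalised in TRUSTED_DOMAINS or any(t in domain for t in TRUSTED_DOMAINS)

print(link_text_mismatch("https://example.com/login", "https://evil.test/login"))  # True
print(lookalike_sender("it-support@examp1e.com"))  # True
```

Real mail-filtering products use far more robust techniques (reputation feeds, edit-distance and punycode checks), but the core idea — comparing what a link shows against where it actually goes — is the same.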
Design and support
“The first is in design of systems and making them human-centric. For this, we can break what is often seen as a trade-off between usability and security with a concept called ‘usable security’. This means working with established workflows and ensuring measures are practical following user experience and user interaction (UX/UI) principles.
“The second is in ensuring staff are well supported. A blame and shame culture leads to lack of early reporting; whereas clear lines of communication with the right support means early awareness and early mitigation. Training and awareness are also essential, and we are also seeing human-centric approaches (such as encouraging security behavioural change by using personalised engagement) having success.”
How IT operations can be more tied to end-user experience — Looking at how IT operations across the organisation can be more tied to UX.
Zero trust

A zero trust approach to cybersecurity, covering social engineering attempts across all communication channels, is a must for businesses of all sizes. Proactively and consistently verifying every user attempting to access the network, regardless of who or where they are, can make the difference between a successful defence and a breach.
“Businesses need to be proactive in their defence strategy. Educating employees about this impending threat is paramount. They must be trained to verify any unusual requests, even if they come from familiar sources,” said Brand.
“In essence, the adage ‘trust but verify’ will take on heightened significance in our increasingly digitised world, as we grapple with the implications of deepfake technologies.”
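The ‘trust but verify’ principle can be sketched in code: identity is checked on every request, and even a valid identity only reaches what policy explicitly allows. Everything here — the signing scheme, user names and policy table — is a hypothetical illustration, not a real framework:

```python
import hashlib
import hmac

SECRET = b"rotate-me-regularly"  # assumption: a shared signing key for this sketch

def sign(user: str) -> str:
    """Issue a token for a user (stand-in for a real identity provider)."""
    return hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()

def verify_request(user: str, token: str, resource: str) -> bool:
    """Zero trust: verify identity on EVERY request, never based on network location."""
    if not hmac.compare_digest(sign(user), token):
        return False  # invalid or forged token: deny, regardless of source
    # least privilege: even a valid identity only reaches what policy allows
    allowed = {"alice": {"reports"}, "bob": {"reports", "finance"}}
    return resource in allowed.get(user, set())

token = sign("alice")
print(verify_request("alice", token, "reports"))  # True
print(verify_request("alice", token, "finance"))  # False: valid identity, no entitlement
```

The point of the design is that no request is trusted by default: a stolen token for one user grants nothing when presented as another, and a legitimate user still cannot reach resources outside their policy.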
Embedding into company culture
Above all else, security protocols need to be embedded into organisational culture, with every employee keeping cybersecurity at the heart of their work. A proper security strategy for dealing with social engineering attacks is a continuous journey, with multiple steps to be taken and revisited regularly as threats evolve.
“Businesses can reduce the risk of network infiltration with a security-aware culture, establishing stringent protection policies, and ensuring policy adherence,” said Gary Orenstein, chief customer officer at Bitwarden.
“This involves continuous education on emerging threats, utilising multi-factor authentication, managing credentials from passwords to developer secrets, and regular security audits. By providing regular training and user-friendly security tools, businesses empower employees towards better security habits.
“Security teams should also stay updated on emerging social engineering trends through reliable sources and researchers. Finally, C-suite and board leadership should engage employees at all levels in regular simulated phishing attacks to maintain organisational awareness of threats and identify gaps in security measures.”
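One of the measures Orenstein mentions, multi-factor authentication, commonly relies on time-based one-time passwords. As a minimal sketch (not a production implementation), the underlying computation standardised in RFC 4226 and RFC 6238 fits in a few lines of Python:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte picks an offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(key: bytes, at_time: float, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second time counter."""
    return hotp(key, int(at_time // step))

# RFC 4226 Appendix D test vector: this key at counter 0 yields "755224"
print(hotp(b"12345678901234567890", 0))
```

In practice the shared key comes from enrolment (the QR code an authenticator app scans), and the server compares the submitted code against `totp(key, time.time())`, usually allowing a step of clock drift either side.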
Over two-thirds of IT leaders concerned about deepfake attacks — 68 per cent of IT decision makers surveyed by Integrity360 express concerns about threat actors using deepfakes to target their businesses.