Phishing attacks — can AI help provide a fix?

Phishing attacks rely on one common flaw we all suffer from — we are human.

As humans, we suffer from common frailties; across an organisation, people are susceptible. A spear phishing campaign may put the fear of God, or at least the fear of being fired, into us. An aggressive email from the CEO demanding that a junior member of staff transfer money, or send sensitive information, may reduce the unfortunate recipient to a bag of nerves. Rationally, why would the CEO do that? But rational thinking and an aggressive email from the boss’s boss don’t always go together.

“Depending on who you look at, 90-95% of the most serious breaches have come through spear phishing attacks,” says Paul Chapman from Cybershield.

Spear phishing emails “are the really targeted ones, where the cyber criminals or state actors have done some research: they know a few of your habits and hobbies, and have taken the time to craft an email that will appeal to you. It is very difficult for ordinary users to pick these out, because users can be distracted, they could be busy at work or on a mobile device and not see things, or they could just be helpful people. But in a cyber security sense, that can work against them. That is the really big problem.”

>Read more on the role of artificial intelligence in cyber security

The traditional fix?

“Most consumers can pick out the Nigerian 419 scams. Some will still get clicked, but as the scams get more sophisticated, they become harder to detect.

“What you really need is to design systems that can solve these things. You can go down the simulator route: send fake phishing emails to users, and then gather metrics on how many have clicked.”

Unfortunately, simulated phishing has come under a lot of criticism. As Paul says: “The evidence is that they don’t really help. It may seem like you are making progress; you can say, for example, ‘last week 65 people clicked on the email, but this week it is only 35’. But if you make them tough, no one will pick them out. The companies who make these simulated phishing emails have a vested interest in making sure they are not so hard that no one spots them. However, we don’t know if people have really learned how to spot the simulation; they may simply know that if they scroll down they will find a message saying this was a simulation. So they might think: ‘right, thought so, now I can get on.’”

>See also: Cyber security best practice: Training and technology

“Also, the systems are often used for blaming users, and get into sticky legal issues: is this a form of entrapment? You are sending fake phishing emails in order to catch people out, and saying ‘we are going to blame you for this, and three strikes and you are out.’”

But this approach doesn’t create trust.

“Even the very best experts will not pick out really good spear phishing emails; we can all be fooled. It depends on the time and place.”

The other approach is system recognition: software assigns a level of phishing threat based on known signatures, built from mass phishing campaigns that people have reported and the particular templates those campaigns used. This can be very good at mitigating threats that circulate again and again, but it struggles with the more carefully targeted attacks.
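
To make the signature idea concrete, here is a minimal Python sketch of template matching, assuming a simple normalise-and-hash scheme. It is an illustration of the general technique, not Cybershield’s or any vendor’s implementation, and every name and sample template in it is invented.

```python
import hashlib
import re

def template_signature(body: str) -> str:
    """Normalise an email body and hash it, so lightly varied copies of
    the same mass-mailed template collide on the same signature."""
    normalised = re.sub(r"https?://\S+", "<url>", body.lower())
    normalised = re.sub(r"\d+", "<num>", normalised)  # per-recipient numbers
    normalised = re.sub(r"\s+", " ", normalised).strip()
    return hashlib.sha256(normalised.encode()).hexdigest()

# Signatures of previously reported mass campaigns (hypothetical examples).
KNOWN_CAMPAIGNS = {
    template_signature("Your account 1234 is suspended, verify at http://x.example"),
}

def matches_known_campaign(body: str) -> bool:
    return template_signature(body) in KNOWN_CAMPAIGNS
```

The weakness is visible in the code: a one-off spear phishing email never matches a stored hash, which is exactly why targeted attacks slip through.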

>See also: Artificial intelligence: a force for good or bad?

The possible fix

Paul proposes a bold plan: “What we really need to do is think about how we provide a cyber security expert on people’s shoulder.”

It seems that AI, and machine learning in particular, could be the answer.

Cybershield applies three pillars, illustrated in the sketch after this list:

  • Meta — for example, looking at an email and asking: does this email appear to be sent from your domain but originate somewhere else? Have they changed the display name, or used a lookalike domain, so that, for example, instead of Google it says Gaggle? Whether you are looking at this on a desktop or a mobile, things like that are hard to pick out.
  • Technical — asking where the email originates from. If, say, it was sent from Russia at 4am, is that logical? It could be legitimate, but you now have a soft indicator to take into account.
  • Linguistic — this can involve looking at sentence structure: are there key words asking you to do something quickly or urgently? Is there a use of emotion? Is there a threat, such as ‘if you don’t do this you will be sacked’?
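
As a rough illustration of how the three pillars might feed a single score, here is a hedged Python sketch. The weights, keyword lists, field names, and example domains are all invented for the purpose of the example; Cybershield’s actual models are not public.

```python
from dataclasses import dataclass

@dataclass
class Email:
    from_domain: str      # domain shown in the From: header
    origin_domain: str    # domain the message actually came from
    origin_country: str
    sent_hour: int        # 0-23, time the message was sent
    body: str

URGENCY_WORDS = {"urgent", "immediately", "now", "today", "asap"}
THREAT_WORDS = {"sacked", "fired", "suspended", "legal action"}
LOOKALIKE_DOMAINS = {"gaggle.com"}  # e.g. Gaggle instead of Google

def meta_score(mail: Email, our_domain: str) -> float:
    # Spoofing check: claims to come from our domain but originated elsewhere,
    # or comes from a known lookalike domain.
    if mail.from_domain == our_domain and mail.origin_domain != our_domain:
        return 1.0
    if mail.origin_domain in LOOKALIKE_DOMAINS:
        return 1.0
    return 0.0

def technical_score(mail: Email, expected_countries: set[str]) -> float:
    # Soft indicators only: unusual origin country, or sent at an odd hour.
    score = 0.0
    if mail.origin_country not in expected_countries:
        score += 0.5
    if mail.sent_hour < 6:  # e.g. Russia at 4am
        score += 0.5
    return score

def linguistic_score(mail: Email) -> float:
    body = mail.body.lower()
    score = 0.0
    if set(body.split()) & URGENCY_WORDS:  # urgency key words
        score += 0.5
    if any(threat in body for threat in THREAT_WORDS):  # explicit threats
        score += 0.5
    return score

def threat_level(mail: Email) -> float:
    # Equal weights are arbitrary; a real system would learn them from data.
    return (meta_score(mail, "example.co.uk")
            + technical_score(mail, {"GB"})
            + linguistic_score(mail)) / 3

mail = Email(from_domain="example.co.uk", origin_domain="gaggle.com",
             origin_country="RU", sent_hour=4,
             body="Transfer the money immediately or you will be sacked")
print(threat_level(mail))  # 1.0: all three pillars fire
```

Notice that the technical pillar only ever adds soft indicators, as Paul describes: an email from an unexpected country at 4am raises the score, but never condemns the message on its own.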

>See also: Five cyber security trends for 2018

When it comes to those tricksy types we call people, one technique could be to get them out of panic mode: to divorce them from the ancient part of the brain, what Daniel Kahneman calls thinking fast, and instead get them to think slow, and logically. An AI assistant that warns that the threatening email causing so much stress might be a phishing attack can achieve this. Something like a coloured bar, red for a serious warning, amber for a possible threat, can help people apply a more logical thought process.
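
A coloured bar of this kind could sit on top of a score like the hypothetical `threat_level` above. The thresholds below are, again, invented purely for illustration.

```python
def warning_banner(score: float) -> str:
    """Map a 0-1 threat score to the traffic-light warning described above."""
    if score >= 0.66:
        return "RED: this looks like a targeted phishing attack. Slow down before acting."
    if score >= 0.33:
        return "AMBER: some suspicious indicators. Double-check the sender before replying."
    return "GREEN: no strong phishing indicators found."
```

The point is less the thresholds than the interruption: a visible warning gives the reader a reason to switch from thinking fast to thinking slow.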

Cybershield is part of LORCA, the UK’s dedicated space for industry-led cyber security innovation. LORCA supports the most promising cyber security innovators in scaling and growing solutions to meet the most pressing industry challenges.

LORCA brings together innovators, corporates, investors, academics and engineers to maximise the commercial potential of great cyber solutions, minimise the barriers to scale and increase speed to market. By 2021, it will have stimulated the growth of at least 72 high-potential companies, created up to 2,000 jobs, and secured £40m in investment.


Michael Baxter

Michael Baxter is a tech, economic and investment journalist. He has written four books, including iDisrupted and Living in the age of the jerk. He is the editor of Techopian.com and the host of the ESG...