Are there solutions to the AI threats facing businesses?

AI threats facing businesses are growing. How can businesses respond? Should they fight fire with fire, or is deception technology the answer?

The cyber security landscape is complex and constantly evolving, and AI threats are just one of many security considerations for business and IT leaders. First and foremost, organisations of all sizes are not getting the basics right: how many have been caught out by a failure to patch (or update) software? Equifax, among many others, springs to mind. ‘Doing the basics right’ would put organisations in a much stronger defensive position. Continued failings here, combined with the rise of AI threats, mean businesses “are getting themselves into a bit of a jam,” believes James Plouffe, strategic technologist at MobileIron and technical consultant on the hit TV show, Mr Robot.

“Increasingly, cyber adversaries are capitalising on AI and machine learning technologies, in order to use automation to exploit the consistent weaknesses that are found in a lot of organisations,” he says.


AI threats automate attacks

One of the great strengths of AI and machine learning is pattern recognition. And, in the not-so-distant past, one of the challenges, from a hacker’s point of view, was the manual nature of cyber attacks.

Plouffe explains: “They would have to recon on different networks, identify targets and create profiles of these organisations that they were targeting.”

Now, attackers can hand that pattern recognition over to machine learning. Organisations in the same vertical (healthcare or finance, for example) often have similar network structures and use the same technologies, which lets hackers scale attacks like never before. And, unfortunately for defenders, it is very difficult to tell whether AI or machine learning was a component of any given cyber attack.
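
To make the idea concrete, here is a minimal, hedged sketch of what “pattern recognition at scale” means in practice: simple clustering groups organisations with similar technology fingerprints together. The data, features and groupings below are entirely synthetic and invented for illustration; this is not drawn from Plouffe’s comments or from any real attack tooling.

```python
# Illustrative sketch: clustering synthetic "network fingerprints" to show how
# pattern recognition groups similar environments together. All data is made up;
# this demonstrates the underlying ML idea only, not an attack technique.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Each row is a fictional organisation; each column a crude feature, e.g.
# "exposed web services", "remote-access services", "legacy protocols in use".
healthcare_like = rng.normal(loc=[20, 5, 2], scale=1.0, size=(10, 3))
finance_like = rng.normal(loc=[8, 12, 6], scale=1.0, size=(10, 3))
fingerprints = np.vstack([healthcare_like, finance_like])

X = StandardScaler().fit_transform(fingerprints)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Organisations with similar technology footprints land in the same cluster,
# which is the property that lets automation treat them as one target class.
print(labels)
```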

Fighting fire with fire

Using AI to combat AI threats makes sense: if AI and machine learning can help a hacker scale their attacks, organisations can use the same technology to mount a scaled defence of their networks. Indeed, the machine learning being used in some of the newer endpoint detection and response (EDR) products is genuinely impressive.


Endpoint solutions using AI and ML

CrowdStrike Falcon Endpoint Protection
Symantec Endpoint Protection (SEP)
Palo Alto Networks Traps 6.0 and XDR
Trend Micro Apex One Endpoint Security
McAfee MVISION EDR Endpoint Security


Darktrace, the UK-based unicorn, is another good example of how organisations can use AI to fight AI-driven attacks (among others) through autonomous response.
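
Behind products like these, the machine learning is often some form of anomaly detection over endpoint or network telemetry: learn what “normal” looks like, then flag deviations for an automated response. The sketch below is a generic illustration using synthetic data and an off-the-shelf Isolation Forest; it does not describe any particular vendor’s actual model.

```python
# Generic anomaly-detection sketch (synthetic data, not any vendor's method):
# fit a model on "normal" telemetry, then flag events that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend features: bytes sent, connections per minute, distinct ports touched.
normal_traffic = rng.normal(loc=[500, 20, 3], scale=[50, 5, 1], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A burst of outbound data across many ports looks nothing like the baseline.
suspect = np.array([[5000, 200, 60]])
if model.predict(suspect)[0] == -1:
    print("anomaly: candidate for automated response (e.g. isolate the host)")
```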

However… nothing is perfect.

There is always a chance an organisation will be breached. So, while defence and prevention are of the highest importance, it’s also necessary to be able to contain a breach effectively: incident response. “You need to be able to keep the breach contained within certain parts of your network to make sure it doesn’t spread,” continues Plouffe. “An organisation’s network should be treated like a building, where areas can be automatically shut off in the event of an emergency.”
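
In software terms, “shutting off areas of the building” usually means automatically quarantining a suspect host or segment when a detection fires. The following is a minimal, hypothetical sketch, assuming a Linux firewall managed with nftables; the table and chain names and the offending address are invented purely for illustration.

```python
# Hypothetical containment sketch: block a suspect host at the firewall.
# The nft command uses standard nftables syntax, but the table/chain names
# ("inet filter"/"forward") and the triggering host are assumptions.
import subprocess

def quarantine_host(ip: str, dry_run: bool = True) -> None:
    """Drop all forwarded traffic from the given IP to contain lateral movement."""
    cmd = ["nft", "add", "rule", "inet", "filter", "forward",
           "ip", "saddr", ip, "drop"]
    if dry_run:
        print("would run:", " ".join(cmd))
    else:
        subprocess.run(cmd, check=True)

# Example: a detection system hands us an offending (fictional) address.
quarantine_host("10.0.42.17")
```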


Deception technology

Deception technology, an offensive countermeasure, is an emerging category of cyber security defence.

The idea is that these technologies, products and services act as a next-generation ‘honeypot’: systems designed specifically to confound the reconnaissance, lateral movement and other techniques that attackers rely on, whether they are working manually or in an automated fashion. This can nullify the AI threat, according to Plouffe, because, at the moment, “machine learning has a long way to go in dealing with things that change rapidly”.
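
At its simplest, a decoy is a service that no legitimate user or system should ever touch, so any connection to it is a high-confidence signal of reconnaissance or lateral movement. The toy listener below sketches that idea only; commercial deception platforms are considerably more sophisticated.

```python
# Minimal decoy listener: no legitimate system should ever talk to this port,
# so any connection is logged as likely reconnaissance. A toy sketch of the
# honeypot idea, not a description of how commercial deception products work.
import socket
import datetime

DECOY_PORT = 2222  # arbitrary unused port, chosen for illustration

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DECOY_PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            print(f"{datetime.datetime.now().isoformat()} decoy touched by {addr[0]}")
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # fake banner to keep the probe engaged
```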

He provides an analogy: “The golden rule of AI is that the thing that you train on has to roughly approximate the stuff you’re going to be doing in real life. Following this logic, if you’re practising and learning French, it does not put you in a particularly great position to learn Russian. From a defender’s point of view, if you know the attacker is expecting you to speak French, and then all of a sudden, it looks like you’re speaking Russian, that will hamper the attack, whether there’s a human at the other end or some sort of machine learning algorithm.”
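
In machine learning terms, Plouffe’s point is about distribution shift: a model trained in one environment degrades sharply when the environment no longer resembles the training data, which is exactly the mismatch deception aims to engineer. A toy illustration with synthetic data:

```python
# Toy illustration of distribution shift: a classifier trained on one "environment"
# loses accuracy when that environment is deliberately changed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(shift=0.0, n=500):
    # Two classes separated along one axis; "shift" moves the whole environment.
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=3.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(shift=0.0)
clf = LogisticRegression().fit(X_train, y_train)

X_same, y_same = make_data(shift=0.0)
X_shifted, y_shifted = make_data(shift=5.0)  # the "network" now looks different

print("accuracy on familiar data:", clf.score(X_same, y_same))
print("accuracy on shifted data: ", clf.score(X_shifted, y_shifted))
```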


5 deception technology vendors

TrapX
Illusive Networks
Attivo Networks
Acalvio
CyberTrap


