Using AI intelligently in cyber security

Bold claims have been made about the potential for cyber security solutions to detect and block attacks with little to no human involvement. Over the last 12 months in particular, the volume has been turned up on the idea that greater AI and automation can help win the ongoing battle against cybercrime.

What AI has to offer is undoubtedly impressive, but it shouldn’t be taken as an indication that AI can be left to its own devices, fixing problems and eliminating threats without us lifting a finger.


Even if these AI security solutions can work independently – a notion that should be taken with a huge pinch of salt – a machine-only approach to cyber security will still leave organisations vulnerable to attack. AI algorithms and neural nets are complex and revolutionary, but that doesn’t make them the remedy to every ill.

So what can security professionals realistically expect from AI in the battle against cyber criminals?

Rise of the machines?

Market interest in AI is explosive. Worldwide spending on AI systems is forecast to reach $12.5bn in 2017, and Gartner predicted that AI technologies would be used in almost every new software product by 2020.

However, this optimism needs to be tempered with a degree of realism: in its 2018 predictions, Forrester states that 75% of AI projects will underwhelm, driven by a failure to build these solutions holistically and to consider operational needs.


This means the notion that AI systems could identify and nullify all threats on their own should be approached with a healthy dose of scepticism. Furthermore, there is a strong argument that the question shouldn’t be “can an AI operate independently?” but rather “should it?”

The human touch

The answer is probably not. While AI will enable organisations to begin automating some of the network security workload, freeing security teams to focus on strategic, threat-reducing initiatives, this will be its limit.

After all, true intelligence is human, and the true victories in cyber security will emerge from human intelligence. Rather than acting as a replacement for cyber security expertise, AI will rationalise normal network activity against information provided by other expert monitoring systems, enabling professionals to make more intelligent decisions about where their attention should be focused at any given time.


As such, while AI will be helpful, it won’t solve the challenging shortage of cyber security professionals. So instead of fearing for our job prospects, it’s important to remember that skilled people are needed to work with AI solutions. In threat hunting, for example, AI can be paired with human intelligence to proactively identify and mitigate threats faster and more reliably.

Furthermore, the capabilities of AI solutions are only as good as the data they process – which will of course be largely provided by humans. This means that, in cyber security, AI will always be playing catch-up to a uniquely human understanding of the threat landscape. The truly intuitive solutions will be human-focused, blending AI and machine learning techniques with the skills of expert analysts.

Avoiding the pitfalls of an AI oracle

So, given AI will not be effective without expert input, what is required is a humanised approach. AI should help accelerate the capabilities of IT and security teams, taking on the tasks that slow teams down and leaving the experts to the higher reasoning. The idea that AI should be implemented as part of a semi-automated solution in support of a human expert isn’t new, but there’s more to it than that.

To support this, the cyber security community should resist trying to create a single, all-seeing AI oracle, and instead look to utilise a community of AI agents, each with its own expertise.


These agents can integrate with all other existing security solutions and infrastructures, feeding back intelligence to a ‘command’ agent that collates ongoing intelligence from across the network against normal network activity. This approach provides a range of AI voices working together to make the security team’s job easier.
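As an illustration only – the article prescribes no implementation – the pattern of specialist agents feeding a ‘command’ agent that collates their findings for a human analyst might be sketched like this. The agent names, alert format and threshold are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str    # which specialist agent raised it (e.g. a DNS or auth monitor)
    source: str   # host or network segment observed
    score: float  # anomaly score relative to that agent's learned baseline

class CommandAgent:
    """Collates findings from specialist agents and ranks them so a
    human analyst sees the most anomalous activity first."""

    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold
        self.findings: list[Finding] = []

    def report(self, finding: Finding) -> None:
        # Each specialist agent feeds its intelligence back here.
        self.findings.append(finding)

    def triage(self) -> list[Finding]:
        # Surface only findings above the threshold, worst first;
        # the final decision stays with the human analyst.
        hits = [f for f in self.findings if f.score >= self.threshold]
        return sorted(hits, key=lambda f: f.score, reverse=True)

command = CommandAgent()
command.report(Finding("dns-agent", "host-17", 0.92))
command.report(Finding("netflow-agent", "host-04", 0.35))
command.report(Finding("auth-agent", "host-17", 0.81))

for f in command.triage():
    print(f.agent, f.source, f.score)
```

The point of the sketch is the division of labour: each agent judges against its own baseline, and the command layer only filters and ranks – it recommends, rather than acts.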

In addition, AI’s ability to recall data will be instrumental in improving organisations’ security posture. Because AI can accurately record network activity over long periods, it can be used very effectively to trawl through historic data, identify previous attack trends and represent them concisely for human interpretation.
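To make the recall idea concrete, here is a minimal sketch of condensing a long alert history into recurring attack indicators for an analyst to review; the event log and indicator names are invented for illustration:

```python
from collections import Counter
from datetime import date

# Hypothetical historic alert log: (day, indicator) pairs an AI system
# might have recorded across months of network activity.
history = [
    (date(2017, 3, 2), "phishing-domain"),
    (date(2017, 5, 9), "phishing-domain"),
    (date(2017, 6, 1), "port-scan"),
    (date(2017, 9, 14), "phishing-domain"),
]

def trend_summary(events, top_n=3):
    """Condense long-term history into the recurring indicators
    a human analyst should review first."""
    counts = Counter(indicator for _, indicator in events)
    return counts.most_common(top_n)

print(trend_summary(history))
```

Real systems would of course mine far richer telemetry, but the principle is the same: the machine compresses months of activity into a concise trend summary; the human interprets it.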

The AI enabled human

However sophisticated, no single AI entity can handle the breadth of concerns necessary to act alone against the barrage of human and artificial cyber threats everyone faces. AI and automation may have huge potential, but rather than liberating us from work, the machines will be there to support organisations and strengthen existing teams.


As such the future doesn’t hold a place for an invisible horde of artificial defenders, crushing threats before they impact organisations. The reality will be more powerful, and more human. Machines will elevate human ability, accelerate productivity and fight alongside human cyber security hunters and responders.

This in turn will support faster and better decision making based on deeper insights: enabling threat hunters to find and eliminate threats before they can impact the business.


Sourced by Gene Stevens, co-founder and CTO of ProtectWise


Nick Ismail

Nick Ismail is a former editor for Information Age (from 2018 to 2022) before moving on to become Global Head of Brand Journalism at HCLTech. He has a particular interest in smart technologies, AI and...