Hyper-connected workplaces and the growth of cloud and mobile technologies have sparked a chain reaction of security risks. The vast volume of connected devices feeding into networks provides a dream scenario for cyber criminals: new and plentiful access points to target. Worse, security on these access points is often deficient.
For businesses, the desire to leverage IoT is tempered by the latest mega breach or DDoS attack creating splashy headlines and causing concern.
However, the convenience and automation IoT affords means it isn’t an ephemeral trend. Businesses need to look to new technologies, like AI, to effectively protect their customers as they broaden their perimeter.
The question becomes, how can enterprises work with, and not against, artificial intelligence?
The emergence of AI in cyber security
Machine learning and artificial intelligence (AI) are being applied more broadly across industries and applications than ever before as computing power, data collection and storage capabilities increase. This vast trove of data is valuable fodder for AI, which can process and analyse everything captured to understand new trends and details.
For cyber security, this means new exploits and weaknesses can be identified and analysed quickly, helping to mitigate further attacks. It also takes some of the pressure off human security teams: analysts are alerted when action is needed, and can otherwise spend their time on more creative, fruitful endeavours.
A useful analogy is to think about the best security professional in your organisation. If you use this star employee to train your machine learning and artificial intelligence programs, the AI will be as smart as your star employee.
Now, if you take the time to train your machine learning and artificial intelligence programs with your 10 best employees, the outcome will be a solution that is as smart as your 10 best employees put together. And AI never takes a sick day.
It becomes a game of scale, and leveraging these new tools can give enterprises the upper hand.
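The "ten best employees" idea can be made concrete with label aggregation: before training a model, combine each analyst's verdicts so the training data reflects the team's collective judgement. The sketch below is a minimal, hypothetical illustration using majority voting; the sample names and verdicts are invented, and real pipelines would weight analysts by expertise or track disagreement.

```python
# Hypothetical sketch: aggregate verdicts from several expert analysts by
# majority vote to produce training labels. All names are illustrative.
from collections import Counter

def aggregate_labels(expert_verdicts):
    """Return the majority verdict across analysts for each sample."""
    consensus = {}
    for sample, verdicts in expert_verdicts.items():
        # most_common(1) gives the single most frequent verdict
        consensus[sample] = Counter(verdicts).most_common(1)[0][0]
    return consensus

verdicts = {
    "sample_a": ["malicious", "malicious", "benign"],
    "sample_b": ["benign", "benign", "benign"],
}
labels = aggregate_labels(verdicts)
# labels now encodes the combined judgement of the whole team, which is
# what the trained model ultimately inherits.
```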
AI under attack
AI is by no means a cyber security panacea. When pitted directly against a human opponent with clear circumvention goals, AI can be defeated. This doesn't mean we shouldn't use AI; it means we should understand its limitations.
AI cannot be left to its own devices. It needs human interaction ("training", in AI-speak) to continue to learn and improve, correcting for false positives and for cyber criminals' innovations.
This hybrid approach has already proven itself a valuable asset in IT departments because it works efficiently alongside threat researchers.
Instead of highly talented personnel spending time on repetitive and mundane tasks, the machine takes away this burden and allows them to get on with the more challenging task of finding new and complex threats.
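The division of labour described above can be sketched as a simple triage loop: the machine auto-classifies events it scores with high confidence and queues everything ambiguous for a human researcher. This is a toy illustration, not any vendor's product; the indicator strings, event fields, and confidence threshold are all assumptions.

```python
# Hypothetical sketch of machine-assisted triage with a human review queue.
# Indicators, scores, and the threshold are illustrative assumptions.

def machine_score(event):
    """Toy scoring model: high score for events matching known-bad indicators."""
    known_bad = {"powershell -enc", "mimikatz", "rundll32 javascript"}
    return 0.95 if any(sig in event["cmdline"] for sig in known_bad) else 0.30

def triage(events, threshold=0.8):
    """Auto-flag high-confidence events; queue the rest for analysts."""
    auto_flagged, needs_human = [], []
    for event in events:
        if machine_score(event) >= threshold:
            auto_flagged.append(event)
        else:
            needs_human.append(event)
    return auto_flagged, needs_human

events = [
    {"cmdline": "powershell -enc aGVsbG8="},   # matches a known indicator
    {"cmdline": "notepad.exe report.txt"},      # ambiguous, sent to a human
]
flagged, queued = triage(events)
```

The point of the design is the queue: repetitive, high-confidence detections never reach a human, so researchers spend their time only on the genuinely ambiguous cases.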
Predictive analytics will build on this by giving security teams the insight needed to stop threats before they become an issue, rather than reacting after the fact. This approach is not only more cost effective in terms of resources, but also favourable for the business, given the huge reputational and financial damage a breach can cause in the long term.
Benefits of machine learning
Alongside AI, machine learning is becoming a vital tool in the threat hunter's toolbox. There is no doubt that machine learning has become more sophisticated in the past couple of years, and it will continue to do so as its learnings compound and computing power increases.
Organisations face millions of threats each day, so it would be impossible for threat researchers to analyse and categorise them all. As each threat is analysed by the machine, it learns and improves. This not only helps protect organisations now, but compiles this valuable data for use in predictive analytics.
However, simply keeping pace with hackers and the threats they pose is not enough to protect organisations; the new vulnerabilities and new devices that come online will make this more and more difficult.
Continued and enhanced standardisation of data formats and communication protocols is crucial to this effort. Once data flows and formats are clearly defined, not just technically but also semantically, machine learning systems will be far better placed to police the operations of such systems effectively.
The industry needs to work towards finding the sweet-spot between unsupervised and supervised machine learning so that we can fully benefit from our knowledge of current threat types and vectors and combine that with the ability to detect new attacks and uncover new vulnerabilities.
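That sweet-spot can be illustrated with a two-layer check: a lookup against known threat signatures (standing in for what supervised learning gives us) plus a statistical anomaly test (a very simple stand-in for unsupervised learning, which surfaces behaviour no signature yet covers). This is a minimal stdlib-only sketch; the hash values, traffic numbers, and z-score threshold are invented for illustration.

```python
# Minimal sketch: combine a known-signature lookup with a z-score anomaly
# detector. Signatures, field values, and thresholds are assumptions.
from statistics import mean, stdev

KNOWN_BAD_HASHES = {"e3b0c442", "a1b2c3d4"}  # hypothetical signature set

def is_known_threat(file_hash):
    """Supervised-style check: catches what we have already labelled."""
    return file_hash in KNOWN_BAD_HASHES

def is_anomalous(value, baseline, z_threshold=3.0):
    """Unsupervised-style check: flags values far outside the baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

# Known threats are caught instantly by the signature layer...
assert is_known_threat("e3b0c442")
# ...while the anomaly layer surfaces novel behaviour, e.g. a sudden
# spike in outbound traffic against a quiet historical baseline.
baseline_traffic = [100, 110, 95, 105, 98, 102]
assert is_anomalous(4000, baseline_traffic)
assert not is_anomalous(104, baseline_traffic)
```

Neither layer alone is sufficient: the signature lookup misses anything new, and the anomaly detector alone produces false positives that, as the next paragraph notes, still need a human to put in context.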
Much like AI, machine learning in threat hunting must be guided by humans. Human researchers are able to look beyond the anomalies that the machine may pick up and put context around the security situation to decide if a suspected attack is truly taking place.
For the security industry to get the most out of AI, it needs to recognise what machines do best and what people do best. Advances in AI can provide new tools for threat hunters, helping them protect new devices and networks even before a threat is classified by a human researcher.
Machine learning techniques such as unsupervised learning and continuous retraining can keep us ahead of the cyber criminals. However, hackers aren’t resting on their laurels. Let’s give our threat researchers the time to creatively think about the next attack vector while enhancing their abilities with machines.
Sourced by Hal Lonas, CTO of Webroot