Is the structure of AI holding back the true tackling of cybercrime?

The term ‘artificial intelligence’ is widely used and keenly contested in the cyber security sector at the moment. Or should that be ‘augmented intelligence’? Or ‘machine learning’?
The issue is that there are dozens upon dozens of cyber security companies utilising AI to various degrees – with most of these falling under the subcategory of machine learning systems. These systems can perfect a task under a particular set of rules, but they are not truly intelligent or capable of independent thinking.

Think about a robot designed to be a world-class driver. Specifically engineered to achieve this task, it will be able to analyse and adjust at a rate simply beyond that of a human and post unbeatable times.

However, ask it to load some shopping into the boot of said car, and it will either sit there unable to process the information, or in the best-case scenario, be poorly equipped for the task. AI systems are specialists, not all-rounders.

>See also: How AI has created an arms race in the battle against cybercrime

The problem is, cyber criminals are all-rounders. Those who seek to manoeuvre past security systems and help themselves to the data of others are not constrained by the rules that govern AIs.

This means that, quite often, new breakthroughs and tactics are being used to beat the system. And as important a breakthrough as AI is, it is constantly being tested by these attempts, all the while learning from past experience.

This is why the concept of a hive structure for AI systems is so interesting to many within the industry. If you think about attacks, hacks and breaches, it would be fair to estimate that around 80% of attacks follow the same patterns and executions: phishing, ransomware, Trojans and so on.

Although the attacks may improve in terms of sophistication and use different delivery methods, most fall into these tried and tested categories.

Therefore, an approach by which different organisations can share the information collected by their intelligent security systems would certainly be beneficial. Once a certain amount of data was collated and analysed centrally, you’d almost certainly be able to see patterns and common occurrences with attempted breaches and hacks. Knowledge is power, and being able to share this would undoubtedly give many companies a resource which would aid them in the fight against cybercrime.
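The central collation described above can be sketched very simply. The snippet below is a toy illustration, not a real threat-intelligence pipeline: the report fields and organisation names are hypothetical, standing in for whatever each company's security system would actually share.

```python
from collections import Counter

# Hypothetical attack reports pooled from three organisations.
# Field names ("org", "category") are illustrative assumptions.
reports = [
    {"org": "alpha", "category": "phishing"},
    {"org": "alpha", "category": "ransomware"},
    {"org": "beta",  "category": "phishing"},
    {"org": "beta",  "category": "trojan"},
    {"org": "gamma", "category": "phishing"},
]

def common_patterns(reports):
    """Collate reports centrally and rank attack categories by frequency."""
    return Counter(r["category"] for r in reports).most_common()

print(common_patterns(reports))
```

Even at this scale the point is visible: no single organisation sees the full picture, but the pooled view immediately shows which attack categories dominate.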

Separately, those attacks that seem to be brand new wouldn’t stay brand new for long; rather than businesses only finding out about these by keeping up to date with the findings of security researchers or being targeted by the methods themselves, their security systems would have already alerted them to the trend.

>See also: Cyber crime: an unprecedented threat to society?

The impact of the hive mind could have a substantial reach. As with many technologies, the system would need to be initially built on a smaller scale – between a few companies in the first instance perhaps – and then gradually built out. But it certainly has the potential to impact multiple countries, and may even uncover a deeper level of analysis through doing so.

Cyber security is no different to any other aspect of life, with different cultures and groups of people responding to it in different ways. What may seem a daily nuisance to one company could be seen as a major compliance and reputation issue for another in a different location. Clearly, an aggregated approach combined from different countries would result in best practice being shared on a far wider scale, with information being collated that encompasses many different approaches and techniques.

Of course, if it were as easy as drawing together this network of communicating AI systems, it would already have been constructed. As previously mentioned, there are dozens of security companies developing various levels of AI – and all without a universal language. This is the key missing component for a hive AI system: an interchangeable format or language in which the AIs can operate.
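To make the idea of an interchangeable format concrete, here is a minimal sketch of what a vendor-neutral envelope might look like. The field names and the `ThreatReport` structure are assumptions invented for illustration; they are not a real or proposed standard. The essential property is simply that any system can serialise a report and any other system can reconstruct it.

```python
import json
from dataclasses import dataclass, asdict

# A hypothetical common envelope that different vendors' systems could
# agree on; every field name here is an illustrative assumption.
@dataclass
class ThreatReport:
    source_system: str   # which vendor's AI produced the report
    category: str        # e.g. "phishing", "ransomware"
    indicator: str       # e.g. a file hash or suspicious domain
    confidence: float    # 0.0 to 1.0

def to_wire(report: ThreatReport) -> str:
    """Serialise a report into a vendor-neutral JSON string."""
    return json.dumps(asdict(report))

def from_wire(payload: str) -> ThreatReport:
    """Reconstruct a report received from another organisation's system."""
    return ThreatReport(**json.loads(payload))

sent = ThreatReport("vendor-a-ml", "phishing", "evil.example.com", 0.92)
received = from_wire(to_wire(sent))
```

In practice the industry would need to agree on far richer semantics than this, which is exactly why the article argues the market must mature first.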

Truth be told, the artificial intelligence market will need to mature before this becomes a possibility. But one only has to look at the way in which viruses that are tracked in the wild are afforded a unique code and uploaded to a central database to see that these kinds of systems aren’t a mere pipe dream.

>See also: The role of artificial intelligence in cyber security

Of course, even if this universal language is achieved and implemented, careful consideration needs to be given to the kind of information that is being analysed. It’s clear that even a network of a few different companies would generate a huge amount of data; a national network would quickly amass petabytes of data. It’s an old security phrase but one that very much still rings true: ‘garbage in, garbage out’.

So, the second technological hurdle to overcome would be to refine the way in which this data is captured and analysed. Far too many security systems already leave users with an insurmountable amount of data to sift through. However, in much the same way that combining AI systems would help them learn from each other about security threats, the way that they analyse data could itself be refined. It’s a lofty goal, but one that would represent a huge leap forward in network data analysis.

And of course, it will be vital to ensure that the data being analysed is held in a secure manner; with such a sharing of data comes great responsibility.

If we could overcome the varying levels of intelligence among the artificial intelligence systems currently in play, ensure that there was a common language and the network capability for these systems to share information, and make sure that analysis was refined into its most actionable state, then cyber security would benefit tremendously.

>See also: Top 10 predictions for low-level cybercrime in 2017

Of course, security technology, at the very heart of the matter, is simply a sticking plaster over the fundamental architecture issues that make up the bedrock of the internet and connectivity.

But what if a hive mind began to influence the very way that we have assembled our online world? Now that is a question worth asking. For now, security should be the aim for a system like this, but grander ambitions may well one day follow.

 

Sourced by Dr Jamie Graves, CEO, ZoneFox
