In response to the increasing use of artificial intelligence (AI) technologies to defend against cyber attacks, malicious actors are now discussing their potential application for criminal use.
Research from Control Risks, the global risk consultancy, shows that cyber threat actors are increasingly interested in developing techniques that use these technologies and tools to enhance their own capabilities.
Nicolas Reys, associate director and head of Control Risks’ cyber threat intelligence team, explained: “More and more organisations are beginning to employ machine learning and artificial intelligence as part of their defences against cyber threats. Cyber threat actors are recognising the need to advance their skills to keep up with this development. One application could be to use deep learning algorithms to improve the effectiveness of their attacks. This shows that AI and its subsets will play a larger role in facilitating cyber attacks in the near future.”
There are currently no known attacks using AI, according to the report, but these technologies could assist threat actors in a number of ways.
In the targeting phase of a criminal campaign, threat actors could use algorithms to generate spearphishing emails in victims’ native languages, expanding the reach of mass campaigns.
Similarly, larger amounts of data could be automatically gathered and analysed to improve social engineering techniques, and with it the effectiveness of spearphishing campaigns.
In the post-infection phase, clusters of compromised devices with the ability to self-learn, dubbed ‘hivenets’, could be used to automatically identify and target additional vulnerable systems.
Extensive, customised attacks
Based on its assessment of the target environment, AI technology could tailor the malware or attack to be unique to each system it encounters along the way. This would enable threat actors to conduct vast numbers of attacks, each uniquely adapted to its victim. Only bespoke mitigation or response would be effective for each infection, rendering traditional signature- or behaviour-based defence systems obsolete.
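To see why per-victim uniqueness undermines signature matching, consider a minimal, purely illustrative sketch of a hash-based signature check (the payload strings and signature set here are hypothetical, not drawn from any real malware or product):

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of previously observed samples.
KNOWN_SIGNATURES = {
    hashlib.sha256(b"sample_payload_v1").hexdigest(),
}

def signature_match(payload: bytes) -> bool:
    """Return True if the payload's hash matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_SIGNATURES

# The previously seen sample is flagged...
print(signature_match(b"sample_payload_v1"))  # True

# ...but a variant differing by even a single byte produces a completely
# different hash and slips past the signature database.
print(signature_match(b"sample_payload_v2"))  # False
```

If every infection carries a distinct payload, the defender's signature database never contains the hash it needs, which is the core of the argument above.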
Advanced obfuscation techniques
Threat actors could evade detection by developing and implementing advanced obfuscation techniques, using data from past campaigns and the analysis of security tools. Attackers may even be able to launch targeted misdirection or ‘noise generation’ attacks to disrupt intelligence gathering and mitigation efforts by automated defence systems.
“The use of AI is not likely to become widespread soon, given the financial investment that is currently needed,” continued Reys. “However, as more research is produced and AI technologies become more mature and more accessible to threat actors, this threat will evolve. Organisations should be aware of the potential for these types of attacks to emerge in the course of 2018. Staying informed and being able to identify relevant emerging attacks, technologies and vulnerabilities is therefore just as important as being prepared in the event of an attack.”