Automated hacking, deepfakes and weaponised AI – how much of a threat are they?

It is a vexing paradox that while emerging cyber technologies provide valuable benefits, their malicious use, in the form of automated hacking, deepfakes, and weaponised artificial intelligence (AI), among others, poses a serious threat. Alongside existing threats such as ransomware, botnets, phishing, and denial-of-service attacks, they make information security hard to maintain.

The challenge will only grow as more devices and systems connect to the internet, generating massive amounts of data that need securing, and as newer technologies such as the Internet of Things and 5G gain ground. The democratisation of powerful computing, such as distributed computing and the public cloud, only accentuates the issue.

Indeed, the World Economic Forum warns that cyber attacks could become a major, enduring threat to the world.

How real the threat is can be gleaned from the formation of the Joint Cybercrime Action Taskforce by Europol, the European Union's (EU) law enforcement agency, which facilitates cross-border collaboration to combat cyber crime among 16 EU member countries as well as the U.S., Canada, and Australia, among others.

A Forrester study found that 88% of respondents believe offensive AI is inevitable, with nearly half expecting AI-based attacks within the next year. With AI-powered attacks on the horizon, the study notes, it “will be crucial to use AI as a force multiplier.”

Automated hacking

Increasing automation, a reality of the modern age, provides advantages such as speed, accuracy, and relief from monotonous tasks. Perversely, it has also sparked automated hacking: hacking on an industrial scale, with multiple, more ‘efficient’ attempts that can cause massive financial losses and destroy an organisation’s reputation. Such attacks are fully automated, from reconnaissance to attack orchestration, and executed at speed, leaving organisations little time to detect and respond.

In a typical prelude to an attack, publicly available information from business websites and social media is collected rapidly using automated open source intelligence tools, building convincing profiles for hackers to exploit.
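How little code such reconnaissance takes is easy to demonstrate. The sketch below is a minimal illustration in Python that harvests e-mail addresses from a single public page; the URL is a placeholder, and genuine reconnaissance tooling crawls many sources and builds far richer profiles. Pointing it at your own site shows what an attacker’s tooling would see.

```python
# Illustrative sketch of automated reconnaissance: harvest e-mail
# addresses from one public web page. The URL is a placeholder; real
# OSINT tools crawl many sources and build far richer profiles.
import re
import requests

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest_emails(url: str) -> set[str]:
    """Fetch one page and return the e-mail addresses found in it."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return set(EMAIL_RE.findall(resp.text))

if __name__ == "__main__":
    # Placeholder target: point this at a page you are authorised to test.
    print(harvest_emails("https://example.com/contact"))
```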

Organisations are deploying automated tools that detect exploratory hacking and secure their digital footprint. Attackers, for their part, distribute attacks across different IP addresses and geolocations, leveraging the power of the cloud to make campaigns more sophisticated and to hide their real identity. Against such tactics, being proactive, including the use of decoys, counts for almost everything in mitigating and countering automated hacking.
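One simple defensive heuristic against such distributed campaigns, sketched below under assumed log formats and thresholds, is to aggregate failed logins per account across all source IPs in a sliding window, so that an attack spread over many addresses still trips an alert:

```python
# Sketch: spot a distributed (many-IP) brute-force attempt by counting
# failed logins per *account* rather than per source IP. The log format
# and thresholds are assumptions for illustration only.
from collections import defaultdict

WINDOW = 300          # seconds
MAX_FAILURES = 10     # failures per account per window
MIN_DISTINCT_IPS = 5  # many source IPs suggests a distributed attack

def find_distributed_attacks(events):
    """events: iterable of (timestamp, account, source_ip) failed-login
    records, sorted by timestamp. Yields (account, distinct_ip_count)
    each time the thresholds are tripped."""
    per_account = defaultdict(list)  # account -> [(ts, ip), ...]
    for ts, account, ip in events:
        # Keep only events inside the sliding window.
        recent = [(t, i) for t, i in per_account[account] if ts - t <= WINDOW]
        recent.append((ts, ip))
        per_account[account] = recent
        ips = {i for _, i in recent}
        if len(recent) >= MAX_FAILURES and len(ips) >= MIN_DISTINCT_IPS:
            yield account, len(ips)

# Example: 12 failures against 'alice' from 12 different IPs in 2 minutes.
events = [(i * 10, "alice", f"203.0.113.{i}") for i in range(12)]
for account, n_ips in find_distributed_attacks(events):
    print(f"possible distributed attack on {account} from {n_ips} IPs")
```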

Deepfakes

A blend of deep learning and fakes, deepfakes are a misuse of machine learning (ML) and AI: models trained on a person’s photos, audio, and video are used to fabricate convincing synthetic media in order to impersonate, threaten, misinform, defame, extort, and blackmail. Deepfakes are more convincing than shallowfakes, which began with simple morphing, but the motives remain the same.

In 2019, a U.K. energy firm was defrauded using an audio deepfake before executives grew suspicious and blocked further attempts at fraud. Understandably, organisations may not reveal they have been duped, which masks the true prevalence of deepfakes.

Automated security tools and AI-enabled products, especially forensic software, can detect and counter deepfakes. Together with cyber security protocols and employee education, they are the common protective measures against deepfakes, which gain vast reach and immediacy through social media.
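What an AI-based detector looks like in outline can be sketched, with the heavy caveat that the features below are random stand-ins; a real system learns subtle visual and audio artefacts from large labelled datasets of genuine and fabricated media:

```python
# Skeleton of an AI-based deepfake detection pipeline: extract per-clip
# features, train a classifier on labelled real/fake examples, score new
# media. The features here are synthetic stand-ins, not real artefacts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Stand-in "artefact features" (blending-boundary or spectral statistics
# in a real system): fakes are drawn from a shifted distribution.
real_feats = rng.normal(0.0, 1.0, size=(300, 8))
fake_feats = rng.normal(0.8, 1.0, size=(300, 8))
X = np.vstack([real_feats, fake_feats])
y = np.array([0] * 300 + [1] * 300)  # 0 = real, 1 = fake

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
print("P(fake) for one clip:", clf.predict_proba(X_te[:1])[0, 1])
```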

Weaponised AI

Here, the use of AI is pitted against its misuse, in a contest where the time taken to launch new attacks is very short and the time available to counter them even shorter.

Weaponised AI seeks out vulnerabilities in systems and networks on its own. It uses concealed malicious code or ‘intelligent’ malware that moves laterally within the network, executes at specific times, or acquires system knowledge and varies its attacks accordingly. It can generate seemingly legitimate traffic to create noise across an organisation’s systems, self-learn to evade security controls, and resort to AI or ML ‘poisoning’ so that malicious activity resembles benign behaviour. It can even use AI to learn why previous attacks failed, to evade AI-based monitoring solutions, or to discover when networks are most susceptible to attack.
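One of these behaviours, execution at specific times, leaves a measurable trace: beacon-like regularity in outbound connections. The sketch below is a minimal illustration of that idea, with thresholds that are assumptions rather than any product’s logic:

```python
# Illustrative sketch: flag beacon-like (suspiciously regular) outbound
# connections, one telltale of malware that executes on a schedule.
# Thresholds and data shapes are assumptions, not a product's logic.
import numpy as np

def looks_like_beacon(timestamps, max_cv=0.1, min_events=10):
    """Return True if connection times are near-periodic.

    timestamps: epoch seconds of outbound connections to one host.
    max_cv: maximum coefficient of variation (std/mean) of the gaps
            still considered 'machine-regular'; humans are burstier.
    """
    if len(timestamps) < min_events:
        return False
    gaps = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    mean = gaps.mean()
    if mean == 0:
        return False
    return (gaps.std() / mean) < max_cv

# Example: a 60-second beacon with slight jitter is flagged, while
# irregular, human-driven traffic is not (with high probability).
beacon = [i * 60 + np.random.uniform(-1, 1) for i in range(20)]
human = list(np.cumsum(np.random.exponential(60, size=20)))
print(looks_like_beacon(beacon))  # True
print(looks_like_beacon(human))   # False
```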

The use of AI to counter cyber attacks lies in modelling and monitoring user behaviour to flag deviations, in automated network monitoring and analysis, and in AI-enabled anomaly detection. To counter AI-based attacks, organisations need to understand how those attacks work, which is no easy task given the shortage of personnel trained in such techniques, or, for that matter, in AI itself.
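As a concrete, if simplified, illustration of behaviour-based anomaly detection, the sketch below trains an Isolation Forest on hypothetical per-session activity features; the feature set, thresholds, and data are assumptions for illustration, not a production design:

```python
# Minimal sketch of behaviour-based anomaly detection with an
# Isolation Forest. Feature choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features: [logins/hour, MB downloaded,
# distinct hosts contacted]. Normal behaviour, plus one outlier.
normal = rng.normal(loc=[2, 50, 5], scale=[0.5, 10, 1], size=(500, 3))
suspicious = np.array([[30, 900, 80]])  # e.g. a credential-stuffing burst

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns 1 for inliers and -1 for anomalies.
print(model.predict(suspicious))           # [-1] -> flag for review
print(model.predict(normal[:3]).tolist())  # mostly [1, 1, 1]
```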

However, a combination of IT hygiene, robust authorisation and authentication, and unified visibility and control of digital assets is a good defence against cyber threats. Keeping only the network assets that are necessary and decommissioning the rest, so as to expose a minimal digital footprint, also helps, as does continuous network monitoring. Clearly, battling attacks that are gaining in sophistication through AI means organisations must invest in AI-based solutions and products to detect and frustrate them.

Written by Vishal Salvi, chief information security officer & head of cyber security practice at Infosys
