Malware has continued to evolve.
As cyber defence technology becomes more intelligent, sophisticated and proactive, attackers continue to refine techniques that evade and penetrate these defence systems.
Malware is beginning to embrace low-level artificial intelligence (AI) techniques, so that the software can autonomously make basic decisions based on the environmental factors it encounters.
This is what we refer to as “evasive malware.”
Today’s attacks are often playing a game of deception.
A malware sample pretends to be a perfectly benign program when analysed by a defensive tool, and performs the malicious activities it was programmed for only when running on an actual user’s device.
These are programs with a split personality, a sort of Dr Jekyll and Mr Hyde of software.
They are programmed to display very different behaviours depending on the environment in which they are executing.
This is an effective technique for malware programs to “stay under the radar” for as long as possible.
Defences classify the program as benign so it does not get blocked; meanwhile it can continue to perform its malicious activities and, as a consequence, its authors continue to profit.
Interestingly, this is the area where most of the “innovation” in malware has gone in the last few years (if you exclude malware like Stuxnet or similar).
The actual malicious activities performed by malware have pretty much remained the same over time – they continue to steal financial data, to encrypt personal files and ask for ransom, to destroy systems.
Where we see new techniques being experimented with is in the area of beating defences, evading them by playing this deception game – acting a lot like the most sophisticated artificial intelligence that exists today.
How do they do it?
Modern malware detection is done by observing the “behaviour” of a program and determining if it behaves in a malicious way.
In practice, samples are executed inside a so-called “sandbox”, a specially instrumented system where (ideally) all the actions performed by the sample can be observed (what files they create, what kind of network communication they perform, etc.).
Malware authors want their programs to display a benign behaviour inside one of these systems, and to behave maliciously only when running outside of these systems (e.g., on a user’s laptop).
So, the problem for malware authors is to distinguish between a real user’s device and a sandbox.
To do that, they insert what we call “evasive checks” in their programs.
These are tricks that allow a program to determine whether it is likely running inside a sandbox; if so, the program does nothing bad (for example, it exits right away or simply wastes time); otherwise, it performs all of its malicious activity.
There are different types of checks. Environment checks probe the system to detect indications that it may be a sandbox.
For example, a malware sample may collect the serial number of the machine on which it executes and report it back to its operators.
If the same serial number is seen many times, it may correspond to one unlucky user who gets infected over and over, but more likely the machine is an analysis sandbox that processes a large number of samples.
Malware also checks the hardware configuration of the machine: if it is in any way unusual (for example, there is no mouse attached, or the installed memory is low), it may belong to a user with an old computer, but more likely it is a sandbox using a minimal hardware configuration to spare resources.
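An environment check of this kind can be sketched in a few lines. This is a minimal, illustrative Python sketch, not real malware code: the threshold values (minimum CPU cores, minimum disk size) are assumptions chosen for the example, and real samples probe many more properties.

```python
import os
import shutil

# Illustrative thresholds -- these exact numbers are assumptions for
# the sketch; real evasive samples tune them to common sandbox setups.
MIN_CPU_CORES = 2             # sandboxes often expose a single vCPU
MIN_DISK_BYTES = 60 * 10**9   # a very small disk is rare on real user machines

def environment_looks_like_sandbox(path="/"):
    """Return (is_suspicious, reasons) based on coarse hardware checks."""
    reasons = []
    cores = os.cpu_count() or 1
    if cores < MIN_CPU_CORES:
        reasons.append("few CPU cores")
    total_disk = shutil.disk_usage(path).total
    if total_disk < MIN_DISK_BYTES:
        reasons.append("small disk")
    return len(reasons) > 0, reasons
```

A sample using such a check would simply exit (or run a benign decoy routine) when the function reports a suspicious environment.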
A second group of checks relies on stalling the execution.
Sandboxes are used to analyse thousands of programs per day, so they usually monitor any given program for just a few minutes.
A simple evasion technique then consists of just waiting for, say, twenty minutes before launching all the malicious activities.
Malware authors have come up with creative ways of waiting or wasting time for several minutes.
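One reason authors prefer "creative" time-wasting over a plain sleep call is that sandboxes commonly fast-forward sleeps. A hedged Python sketch of the idea, burning wall-clock time with CPU-bound busy work that is harder to skip (the arithmetic inside the loop is arbitrary filler, chosen only for this example):

```python
import time

def stall(seconds):
    """Waste roughly `seconds` of wall-clock time with CPU-bound work.

    Sandboxes often patch sleep()-style calls to return immediately,
    but genuine computation cannot be skipped as easily, so a busy
    loop is a (crude) stalling tactic."""
    deadline = time.monotonic() + seconds
    x = 1
    while time.monotonic() < deadline:
        x = (x * 1103515245 + 12345) % 2**31  # pointless arithmetic
    return x

# e.g. stall(20 * 60) would burn twenty minutes before any payload runs
```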
A third group of checks we see very often is based on determining if an actual human being appears to be using the computer.
For example, malware may check if the mouse moves, if there are items in the clipboard (indicating that a user copied and pasted some text there), or if any document was recently opened in Word.
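The "recently opened documents" check, for instance, can be approximated by looking at file timestamps. The sketch below is an assumption-laden illustration: the directory to inspect is passed in because its location varies by OS (on Windows it would be the user's Recent folder), and the seven-day freshness threshold is invented for the example.

```python
import os
import time

def recent_user_activity(recent_docs_dir, max_age_days=7):
    """Heuristic: was any file in the 'recent documents' folder
    touched within the last few days?  A freshly built analysis
    sandbox typically has an empty or missing folder here."""
    cutoff = time.time() - max_age_days * 86400
    try:
        entries = os.listdir(recent_docs_dir)
    except OSError:
        return False  # folder missing: looks like a pristine sandbox
    return any(
        os.path.getmtime(os.path.join(recent_docs_dir, name)) > cutoff
        for name in entries
    )
```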
It really is a multitude of different techniques and tricks, where the only limit is the malware author’s creativity.
How can we keep up?
There is certainly an element of an arms race, with criminals coming up with new techniques and defensive systems playing catch-up.
The key is having a defence technology that operates low enough in the system architecture to give the defence system visibility that an evasion may be occurring.
This method of early detection allows you to put in place a countermeasure with relatively little effort.
You have a chance to see that something unexpected is happening (which may in itself be enough to conclude that the sample is suspicious).
Then, with relatively few modifications to your system, you can bypass the evasion attempt (that is, you can outsmart these intelligent features used by malware) and force the malware to show its real behaviour.
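A classic countermeasure of this kind is "sleep skipping": the sandbox intercepts a sample's attempts to stall, logs them as evidence of evasion, and fast-forwards instead of actually waiting. Real sandboxes do this at the OS or hypervisor level; the Python sketch below only illustrates the idea by temporarily replacing `time.sleep`.

```python
import time

class SleepSkipper:
    """Defensive-side sketch: intercept sleep() calls, record the
    requested delay as a possible stalling evasion, and skip the wait.
    Patching time.sleep is purely illustrative of the technique."""

    def __init__(self):
        self.skipped = 0.0
        self._real_sleep = time.sleep

    def __enter__(self):
        def fake_sleep(seconds):
            self.skipped += seconds  # note the attempt, waste no time
        time.sleep = fake_sleep
        return self

    def __exit__(self, *exc):
        time.sleep = self._real_sleep  # restore normal behaviour


# A sample that tries to stall for ten minutes finishes instantly,
# and the monitor records how much sleeping was requested -- itself
# a suspicious signal worth flagging.
with SleepSkipper() as monitor:
    time.sleep(600)
```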
What is important to remember is that new low-level AI malware techniques are not limited just to “advanced” malware.
There is a real trickle-down effect, where techniques first used only by sophisticated actors are shared among criminals much as code circulates in legitimate open-source communities.
You will see these malware components incorporated in more traditional, run-of-the-mill malware (for example, the type that is distributed in large spam campaigns) and more broadly disseminated in the years to come.
This is why it’s important to understand these advanced threats more quickly, and set up proper advanced malware detection and remediation strategies before bad actors target your organisation this year.
Sourced by Marco Cova, Senior Security Researcher at Lastline