Every cyber attack looks more sophisticated than the last, or so security teams would have us believe. Breaches may indeed be complex, but the complexity only becomes apparent when you try to rebuild the story of the attack, and that context, or storyline, is what matters. Those complicated storylines often start at the endpoints in a company’s systems.
Endpoints are where an employee might have plugged in a USB device they found in the parking lot, curious to know what’s on it. Or maybe an employee opened a malicious PDF attachment they got in an email. It makes sense to look to endpoints, where so many attacks happen, to gain visibility. Endpoints are where network and process activity are available, and where you can even do external device monitoring. Like, say, who was it that plugged in that USB, and when, and where?
We’ve got far more visibility into attacks than we had in the era of EPPs (Endpoint Protection Platforms): products that relied on virus signatures but were utterly blind to memory-based malware, lateral movement, fileless malware or zero-day attacks.
But here’s the problem: EPP may protect endpoints, but it doesn’t give organisations visibility into the threats. First-generation EDR (Endpoint Detection and Response) tools emerged precisely because of the visibility that EPPs simply didn’t offer. That generation of EDR – let’s call it Passive EDR – gives us data but no context. We have the pieces of the puzzle, but no overall picture to pull them together.
What runs through a CISO’s mind isn’t a hunger for each and every scrap of disconnected data from an attack. Rather, it’s more like a game of Clue: Was it Colonel Mustard in the drawing room? A contractor with a USB drive? A state-sponsored threat group? Has the threat been mitigated yet, and if so, how long was it active? Which of the SOC’s all-too-few analysts are analysing that tsunami of data flooding in from their passive EDR?
What is behavioural AI, and how can it help?
What happens after an attack? The story can go two ways, and most likely you’re familiar with the first, seriously problematic way: security analysts have to sift through all of the alerts and anomalies produced by passive EDR. Those investigations take time and skill, both in short supply, given how hard it is to find, train and retain personnel who have the expertise to operate the security platforms and the know-how to separate the wheat from the chaff: the real exploits from the random bugs.
There is another way the story can go, and, fittingly enough, it involves storylines: the contextualisation of all the disparate data points into a succinct narrative. A behavioural AI model does this work, freeing an organisation from relying solely on difficult-to-source analyst skills, and it does so around the clock, constantly recording and putting context around everything that happens on every device that touches the network.
Modern adversaries have cut out their former reliance on files, leaving no footprint: they use in-memory, fileless malware to evade all but the most sophisticated security solutions. Because the behavioural AI model tracks it all, it gives you a way to detect attackers who may already have credentials in your environment and who may be living off the land (LotL). LotL describes fileless, malware-less attacks that use a system’s own, perfectly legitimate, native tools to do their dirty work, blending into the network and hiding among legitimate processes to pull off a stealthy exploit.
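To make the idea concrete, here is a minimal sketch in Python of the kind of behavioural rule that can surface a living-off-the-land pattern: a document application spawning a native scripting tool with a stealthy command line. The event fields, the tool lists and the specific pairing are illustrative assumptions, not any vendor’s actual detection logic.

```python
from dataclasses import dataclass

# Illustrative event record; real EDR telemetry carries far more fields.
@dataclass
class ProcessEvent:
    parent_image: str   # e.g. "winword.exe"
    child_image: str    # e.g. "powershell.exe"
    command_line: str

# Native, legitimate binaries that LotL attacks commonly abuse (assumed list).
NATIVE_TOOLS = {"powershell.exe", "wscript.exe", "mshta.exe", "certutil.exe"}
# Document handlers that rarely have a good reason to launch those tools.
DOCUMENT_APPS = {"winword.exe", "excel.exe", "acrord32.exe", "outlook.exe"}

def looks_like_lotl(event: ProcessEvent) -> bool:
    """Flag a document application spawning a native scripting/admin tool."""
    suspicious_pair = (event.parent_image.lower() in DOCUMENT_APPS
                       and event.child_image.lower() in NATIVE_TOOLS)
    # Encoded or hidden-window command lines raise suspicion further.
    stealthy_flags = any(token in event.command_line.lower()
                         for token in ("-enc", "-encodedcommand", "-windowstyle hidden"))
    return suspicious_pair and stealthy_flags

print(looks_like_lotl(ProcessEvent("WINWORD.EXE", "powershell.exe",
                                   "powershell.exe -WindowStyle Hidden -enc SQBFAFgA")))
```

The point of the sketch is not the particular rule but the fact that it works on behaviour and relationships between processes, not on file signatures.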
Clearly, having an AI assistant on hand—in fact, an AI agent resident on every device that touches the network—saves a lot of time. It relieves an organisation of having to rely solely on people to analyse things that sometimes amount to nothing at all.
Isn’t it time to stop scrambling? Now, you can.
Behavioural AI can be used to mitigate automatically, and that is a seriously powerful gamechanger. The technology is capable of making a decision on the device itself, without relying on the cloud, or on humans, to tell it what to do.
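As a rough illustration of what on-device decisioning means in practice, the sketch below scores a storyline locally and triggers a mitigation without any cloud round-trip. The threshold, the behaviour names, the toy scoring function and the mitigation stub are all placeholders, not a real product API.

```python
# A hypothetical on-device decision loop: the verdict and the response both
# happen locally, with no cloud round-trip and no human in the loop.
MALICIOUS_THRESHOLD = 0.9  # assumed cut-off for autonomous mitigation

def score_storyline(storyline: list[str]) -> float:
    """Toy stand-in for a locally evaluated behavioural model."""
    weights = {"spawned_script_host": 0.4, "boot_persistence": 0.3,
               "keystroke_capture": 0.3}
    return sum(weights.get(behaviour, 0.0) for behaviour in storyline)

def mitigate(storyline: list[str]) -> None:
    """Placeholder response: a real agent would kill the offending processes
    and quarantine their artefacts on the device."""
    print(f"Mitigating storyline with behaviours: {storyline}")

def on_new_behaviour(storyline: list[str]) -> None:
    # Re-score the storyline each time a new behaviour is linked into it.
    if score_storyline(storyline) >= MALICIOUS_THRESHOLD:
        mitigate(storyline)

on_new_behaviour(["spawned_script_host", "boot_persistence", "keystroke_capture"])
```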
Monitoring behaviour is a tricky, complex problem, and you want to feed your algorithm robust, informative, context-rich data that really captures the essence of a program’s execution. To do this, you need to monitor the operating system at a very low level and, most importantly, link individual behaviours together to create full “storylines”. For example, if a program executes another program, or uses the operating system to schedule itself to run at boot, you don’t want to treat these as different, isolated executions, but as a single story.
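As a hedged sketch of what linking behaviours into a storyline can look like, the snippet below groups simplified events by process lineage, so that a child process or a boot-time persistence entry stays attached to the story of the program that started it. The event shape and the grouping key are simplified assumptions, not real agent telemetry.

```python
from collections import defaultdict

# Simplified low-level events; a real agent would hook the OS to collect these.
events = [
    {"type": "process_start", "pid": 100, "parent": None, "image": "invoice.exe"},
    {"type": "process_start", "pid": 104, "parent": 100,  "image": "powershell.exe"},
    {"type": "persistence",   "pid": 104, "detail": "Run key set to launch on boot"},
    {"type": "process_start", "pid": 300, "parent": None, "image": "chrome.exe"},
]

def build_storylines(events):
    """Attach every event to the root process that ultimately caused it."""
    root_of = {}                    # pid -> root pid of its storyline
    storylines = defaultdict(list)  # root pid -> ordered list of events
    for event in events:
        if event["type"] == "process_start":
            parent = event.get("parent")
            root = root_of.get(parent, event["pid"])  # new root if no known parent
            root_of[event["pid"]] = root
        else:
            root = root_of.get(event["pid"], event["pid"])
        storylines[root].append(event)
    return storylines

for root, story in build_storylines(events).items():
    print(root, [e["type"] for e in story])
```

Here the scheduled-to-run-at-boot event and the spawned PowerShell process end up in the same storyline as the original executable, rather than being treated as three unrelated observations.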
Training AI models on behavioural data is similar to training static models, but with the added complexity of the time dimension. In other words, instead of evaluating all features at once, you need to consider cumulative behaviours up to various points in time. Interestingly, if you have good enough data, you don’t really need an AI model to convict an execution as malicious. For example, if a program starts executing with no user interaction, then registers itself to start when the machine boots, then begins listening to keystrokes, it is very likely a keylogger and should be stopped. These types of expressive “heuristics” are only possible with a robust behavioural engine.
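The keylogger example can be written down as exactly that kind of expressive heuristic over cumulative behaviour. The sketch below re-evaluates the rule after each new observation, which is the time dimension described above; the behaviour names are illustrative assumptions rather than real telemetry fields.

```python
# Behaviours observed for one execution, in the order they occurred (assumed names).
observed = ["started_without_user_interaction",
            "registered_boot_persistence",
            "listening_to_keystrokes"]

KEYLOGGER_RULE = {"started_without_user_interaction",
                  "registered_boot_persistence",
                  "listening_to_keystrokes"}

def evaluate_over_time(behaviours):
    """Re-check the heuristic against the cumulative set of behaviours
    after every new observation, rather than only once at the end."""
    seen = set()
    for step, behaviour in enumerate(behaviours, start=1):
        seen.add(behaviour)
        if KEYLOGGER_RULE <= seen:  # all required behaviours seen so far
            return f"convicted as likely keylogger at step {step}"
    return "not convicted"

print(evaluate_over_time(observed))
```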
The reality of automatic mitigation with behavioural AI: no data exfiltration, no headlines, and no call from the FBI.