Turning the tables on computer hackers

With the holidays approaching, online merchants, banks, and government watchdogs are emphasising the need to be “cyber security aware” – but is there anyone left in the connected world who is not aware of the dangers of lax cyber security?

Consider the scale: the infamous Yahoo data breach of 2013, we now know, compromised some 3 billion accounts – a figure equal to roughly 40% of the world’s population, and close to the total number of people with internet access at the time. Of course, that doesn’t mean everyone online had a Yahoo account (some users had two, three, or more) – but there’s no question that the breach brought home to a great many people the importance of protecting oneself online, and the vulnerability of our collective online identities.

But as with many other scourges, the mounting damage goes unnoticed because it is incremental; only when one stands back and looks at the whole picture does one realise the scale of the problem.

>See also: Hackers: who are they and what drives them?

Nearly 1,100 major (and heaven knows how many minor) data breaches were reported in 2016, 40% more than a year earlier, according to the Identity Theft Resource Center. 2017 is on track to beat that; high-profile hacks this year included the compromising of huge troves of data from River City Media, Dun & Bradstreet, the IRS, OneLogin, Verizon, Equifax, the SEC and many others.

A study by Ponemon and IBM puts the cost of a data breach at some $4 million apiece; by 2019, Forbes believes, losses to cyber crime will exceed $2 trillion. And as the frequency and cost of attacks climb, so does spending on cyber defence, which is expected to reach some $90 billion in 2018.

Why is this happening? Why do hackers always seem to have the upper hand? There are only two possibilities: either the hackers are smarter and savvier than the people protecting systems from their attacks, or the victims are taking the wrong approach to protecting themselves. The former can be dismissed outright. A perusal of the requirements listed for the many open cyber security positions on any major online job site shows that applicants are expected to have advanced degrees, a slew of certifications, experience, management and leadership skills, the ability to work as part of a team, and self-direction; in short, they are expected to be highly professional, and certainly as competent as (if not more competent than) the hacker “geniuses” they will face off against.

>See also: Cyber security from a hacker’s perspective

If that is the case, then it must be the latter: the way “cyber security” is conceived and executed is missing some important element, and that missing element is undermining our efforts to stem the rising tide of data breaches. Rather than guessing at what that element might be, we should stop looking at the trees and examine the forest. The “tree” in this case is the detection model that many companies rely on to prevent attacks.

The detection/response model has been in use for years, and for many organisations it remains the primary approach to cyber security. Given the poor state of that security, it is fair to say that technologies like anti-virus, sandboxes, and even EDR (Endpoint Detection and Response) need a boost, if not a replacement. By definition, an attack can be detected and responded to only after it takes place, and that window of opportunity is all today’s sophisticated hackers need.

An anti-virus system, of course, needs to be aware of a piece of malware in order to block it. Even with a sandbox in place, new strains of malware can hide their true “intentions” while inside the sandbox (by detecting an environment where non-standard activity is taking place) and activate themselves only once they reach a working network. EDR is a major improvement over both approaches, but it too has drawbacks: even an EDR system equipped with sophisticated tools that sift through data and intelligently analyse whether a specific connection is a threat can be beaten by a cleverly enough designed zero-day. And the likelihood of false positives that eat up CISO resources grows.
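The signature problem can be illustrated with a toy scanner. The hashes and “malware” bytes here are invented for illustration; a real anti-virus engine is far more elaborate, but the blind spot is the same:

```python
import hashlib

# A toy "signature file": SHA-256 hashes of known malware samples.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"known-malware-payload-v1").hexdigest(),
}

def signature_scan(file_bytes: bytes) -> bool:
    """Return True if the file matches a known-malware signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MALWARE_HASHES

# The known strain is caught...
print(signature_scan(b"known-malware-payload-v1"))  # True
# ...but a trivially modified variant slips straight through.
print(signature_scan(b"known-malware-payload-v2"))  # False
```

One flipped byte produces an entirely different hash, which is exactly why purely signature-based defences lag behind new strains.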

What, then, will work? An enhanced prevention approach, together with the application of artificial intelligence and machine learning could do a great deal to keep hackers at bay. In a prevention approach, hackers are kept away from opportunities to attack systems; if they can’t get in, they can’t compromise security.

>See also: Nation State hacking: a long history?

A good example of this is web isolation, in which web content is rendered in an isolated area before being passed on to the endpoint. According to Gartner, “information security architects can’t stop attacks, but can contain damage by isolating end-user internet browsing sessions from enterprise endpoints and networks.”

Another prevention technology that could keep bad actors out altogether is CDR – Content Disarm and Reconstruction – which dissects and analyses the components of files before they are passed on to the endpoint. If any rogue code is found, it is eliminated; the file is then reconstructed and passed on to the system with all of its functionality intact.
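A toy sketch of the CDR idea, assuming an Office-style document (a zip archive) whose active content lives in a known entry such as `word/vbaProject.bin`. A production CDR engine inspects far more than a single entry name, but the disarm-then-rebuild flow looks like this:

```python
import io
import zipfile

# Zip entries that carry active content in an Office-style document
# (e.g. VBA macros); names are illustrative, not an exhaustive list.
ACTIVE_CONTENT = {"word/vbaProject.bin"}

def disarm_and_reconstruct(doc_bytes: bytes) -> bytes:
    """Rebuild the document, copying every component except active content."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(doc_bytes)) as src, \
         zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename not in ACTIVE_CONTENT:
                dst.writestr(item.filename, src.read(item.filename))
    return out.getvalue()
```

The user receives a document that opens and behaves normally; only the executable payload has been stripped out before it ever reaches the endpoint.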

On the detection side, machine learning could be used to enhance anti-virus systems; many zero-day attacks are derived from existing malware, so a system that can detect similarities between a new attack and one that has occurred in the past could make up its own “mind” and determine the new malware to be a threat – without having to wait for anyone to update its signature file.
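One simple stand-in for that kind of similarity detection is comparing byte n-grams between a new sample and previously seen malware. Real systems use far richer features and learned models; this sketch (with invented sample bytes) only shows the principle:

```python
def byte_ngrams(data: bytes, n: int = 4) -> set:
    """All length-n byte substrings of the sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity over byte n-grams, a crude code-reuse signal."""
    na, nb = byte_ngrams(a), byte_ngrams(b)
    return len(na & nb) / len(na | nb) if na | nb else 0.0

def looks_like_known_malware(sample: bytes, known: list, threshold: float = 0.5) -> bool:
    """Flag a new sample that substantially overlaps any known strain."""
    return any(similarity(sample, k) >= threshold for k in known)

known = [b"malicious-dropper-routine-AAAA"]  # previously seen malware
# A derivative strain shares most of its code with the original...
print(looks_like_known_malware(b"malicious-dropper-routine-BBBB", known))  # True
# ...while an unrelated file does not.
print(looks_like_known_malware(b"an ordinary holiday newsletter", known))  # False
```

The point is that no signature update is needed: the system flags the derivative strain the moment it appears, purely from its resemblance to what it has already seen.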

Right now, anti-virus companies collect data from users and evaluate it for anomalies to determine whether a zero-day attack has been discovered; once one is, it is added to the anti-virus signature file and distributed to customers.

In a machine learning scenario, the server would collect that data and automatically learn the anomalies, “educating” itself about the finer points of anomalous activity each time new data comes in.

After a few passes, the system will be smart enough to immediately update connected customers based on a possible virus that has shown up on even a single computer in its network. Microsoft is working to build out a network like this.
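A minimal sketch of that self-educating loop, using a running mean and variance (Welford’s algorithm) as a stand-in for a real anomaly model; the telemetry metric and threshold are assumptions for illustration:

```python
import math

class OnlineAnomalyDetector:
    """Learns the running mean/variance of a telemetry value and flags
    readings far outside what it has observed so far."""

    def __init__(self, z_threshold: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations (Welford's algorithm)
        self.z_threshold = z_threshold

    def observe(self, x: float) -> None:
        """Fold one new reading into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x: float) -> bool:
        """Flag readings more than z_threshold standard deviations out."""
        if self.n < 2:
            return False  # not enough history yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) / std > self.z_threshold

# e.g. outbound connections per minute reported by each endpoint
det = OnlineAnomalyDetector()
for rate in [10, 12, 11, 9, 13, 10, 11, 12]:
    det.observe(rate)
print(det.is_anomalous(500))  # True: a sudden spike stands out
print(det.is_anomalous(11))   # False: normal traffic does not
```

Each new observation refines the model, so a reading that is wildly out of line – even from a single machine – can trigger an immediate, network-wide response.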


Firewalls are a tried and true prevention strategy, but plenty of companies with top-drawer (as in expensive) firewalls get “hit” as well. That is often down to common administrative problems: failing to change default passwords, failing to patch firewall systems with the latest updates against known vulnerabilities, or simply misconfiguring the rule files – any of which can compromise a firewall.
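As a sketch, an automated check for those administrative problems might look like the following. The configuration schema, field names, and rules are hypothetical, not any vendor’s format:

```python
# Passwords that ship as factory defaults on (hypothetical) appliances.
DEFAULT_PASSWORDS = {"admin", "password", "firewall"}

def _version(v: str) -> tuple:
    """Parse '6.2.1' into (6, 2, 1) for safe comparison."""
    return tuple(int(part) for part in v.split("."))

def audit_firewall(config: dict) -> list:
    """Return the common administrative problems found in a
    (hypothetical) flattened firewall configuration."""
    findings = []
    if config.get("admin_password") in DEFAULT_PASSWORDS:
        findings.append("default password has not been changed")
    if _version(config["firmware_version"]) < _version(config["latest_version"]):
        findings.append("firmware is missing the latest security patches")
    for rule in config.get("rules", []):
        if rule.get("source") == "any" and rule.get("action") == "allow":
            findings.append("overly permissive allow-from-any rule")
    return findings

risky = {
    "admin_password": "admin",
    "firmware_version": "6.0",
    "latest_version": "6.2",
    "rules": [{"source": "any", "action": "allow"}],
}
for finding in audit_firewall(risky):
    print(finding)
```

An AI-assisted version of such an audit would go further, learning which configurations correlate with breaches rather than relying on a fixed checklist.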

An artificial intelligence-powered audit could help fix those problems. There are too many details in any security system for even a top administrator to keep track of, and besides firewalls there are likely a dozen other security systems IT teams must monitor. Here, too, AI will be able to identify anomalies and harden perimeters in real time, analysing the small details that could allow a breach as it learns what attacks and anomalies on a network look like.

The same goes for detection/response systems. EDR was mentioned above as an up-and-coming security solution with some drawbacks, but an EDR system that incorporates machine learning, allowing it to detect patterns of activity more easily and step in to thwart attacks before they get off the ground, could prove an effective roadblock to hacker activity. These are just a few ideas for how the security industry can turn the tables on hackers; the current tools, processes and strategies aren’t working.

 

Sourced by Itay Glick, CEO of Votiro


Nick Ismail

Nick Ismail is the editor for Information Age. He has a particular interest in smart technologies, AI and cyber security.