When Robert Tappan Morris became the first person convicted under the US Computer Fraud and Abuse Act in 1990, the world entered a new era of cyber threats that is now part of daily life.
While early hackers set out to demonstrate the inadequacies of then-current security measures, the objectives of their successors have become progressively more sinister, organised and covert.
Threats are now sophisticated, ubiquitous across organisations of every size, and targeted with intent – from small businesses to the White House.
Compounding the issue is that perpetrators continue to retain the ascendancy, at least in part, because successful attacks continue to deliver significant dividends. This is irrespective of whether the perpetrator is an inquisitive competitor, a sovereign state or simply a criminal.
The result is that organisations are constantly subjected to a barrage of new threats that evolve faster than security solutions and processes can adapt to defend against them.
Essentially, attackers only need to find one small crack in an organisation's defences to succeed. Security protection teams, on the other hand, must maintain constant vigilance across the entire enterprise – a highly significant asymmetry.
Preventing attacks has proven to be nearly impossible, so the security industry’s focus is turning to improved detection and response times. The challenge for security teams is spotting which of the veritable flood of potential threats, if any, pose the greatest risk to the organisation.
With so many of these attacks being completely new, and so unknown to SOC teams, detecting any sign of a breach and responding quickly is increasingly problematic. The longer a threat remains undetected, the longer the time at risk and the more damage it can potentially cause.
The tasks necessary to achieve acceptable response times are beyond the reach of many organisations. This is confirmed by a number of observations.
The Ponemon Institute / IBM 2015 Cost of Data Breach Study showed that malicious attacks can take an average of 170 days to identify. And managed security service providers note that it can take over 600 hours to collect and prioritise the amount of information needed to resolve just a single complex incident.
According to Verizon’s 2015 Data Breach Investigations Report, 99.9% of exploited vulnerabilities had been compromised more than a year after the associated vulnerability was published.
In short, even when threats are known and published, the threat information overload faced by security professionals limits their ability to effectively identify and protect against those threats in a timely manner.
At best then, evidence shows that analyst teams are dealing with so many warnings and false alarms that breaches can remain undetected and the organisation remains at risk for months before threats are satisfactorily identified and rectified.
The worst case is even more alarming, where the sheer flood of information facing security teams results in threats going entirely unnoticed – until attackers have stripped the key assets and moved on or slipped up and been caught some time down the track.
Roots of the problem
There are two major factors contributing to security analysts’ information overload. First is that threat information is still too often gathered in isolated silos – from both inside and outside the business. This can be due to separate security tools, independent business units or a desire to draw in intelligence from the outside world to aid detection.
As a result of monitoring and analysing these disparate information sources, threat investigation becomes a painstaking, manual and time-consuming exercise in data aggregation, which necessarily prolongs response times and, with them, the time at risk.
Second, dealing with such sheer volumes of information without the right tools is enormously labour-intensive, and demand for these skills is on the rise. In fact, despite the number of analysts growing “much faster than the average”, according to the U.S. Department of Labor, Cisco estimates a world-wide shortage of more than one million security specialists. Put simply, there just aren’t enough skilled analysts to go around.
It’s clear that the system is, if not broken, certainly creaking. Acquiring more technologies that have similar resourcing demands to the current technologies cannot solve the information overload problem.
Each time a new solution is deployed to detect another type of cyber threat, information volumes increase and work rates suffer. Security analysts need to be free to concentrate on what they do best: isolating a threat, interpreting the information around it and determining the risks posed and the root cause.
To achieve this in a timely manner, relevant information needs to be extracted from silos, collated and centralised for ease of comparative analysis and investigation.
Analysts need a single window that presents the available information from every relevant source. Manually collating threat information for subsequent investigation adds work and randomness.
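As a toy illustration of that single window – not any particular vendor's tooling, and with entirely hypothetical log formats – collating alerts from separate silos into one normalised, ordered stream might look like this:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # which silo the alert came from
    timestamp: float  # epoch seconds
    indicator: str    # e.g. an IP address or file hash
    severity: int     # 1 (low) .. 5 (critical)

def normalise_firewall(event: dict) -> Alert:
    # Hypothetical firewall log shape: {"ts": ..., "src_ip": ..., "level": ...}
    return Alert("firewall", event["ts"], event["src_ip"], event["level"])

def normalise_endpoint(event: dict) -> Alert:
    # Hypothetical endpoint-agent shape: {"time": ..., "sha256": ..., "sev": ...}
    return Alert("endpoint", event["time"], event["sha256"], event["sev"])

def collate(firewall_events, endpoint_events):
    """Merge alerts from separate silos into one queue, highest severity first."""
    alerts = [normalise_firewall(e) for e in firewall_events]
    alerts += [normalise_endpoint(e) for e in endpoint_events]
    return sorted(alerts, key=lambda a: (-a.severity, a.timestamp))
```

The point of the sketch is the shape of the workflow: each silo keeps its own format, a thin normalisation layer maps everything onto one record type, and the analyst works from a single prioritised queue rather than hopping between consoles.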
Big data, machine learning and automation of some of these routine tasks, on the other hand, delivers significant time savings (and reduced operational risk) by dramatically simplifying workflow processes for SOC operators.
The more that potential threats can be automatically analysed and prioritised, the fewer threats analysts will have to investigate and respond to in depth.
Further, if false positives can be eliminated and real threats automatically verified through high-speed matching of threat artifacts in the monitored environment, the analyst is able to focus their attention only on the threats that matter.
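A minimal sketch of that matching step, assuming hypothetical indicator feeds (a set of known-bad artifacts and a benign allow-list), could use set membership for the high-speed comparison:

```python
def verify_alerts(alerts, known_iocs, benign_indicators):
    """Split alerts into verified threats and unknowns; drop false positives.

    `known_iocs` and `benign_indicators` are hypothetical feeds: sets of
    indicator strings (hashes, IPs, domains). Constant-time set lookups
    provide the high-speed matching of threat artifacts.
    """
    verified, unknown = [], []
    for alert in alerts:
        indicator = alert["indicator"]
        if indicator in benign_indicators:
            continue                    # false positive: discard silently
        if indicator in known_iocs:
            verified.append(alert)      # confirmed threat artifact
        else:
            unknown.append(alert)       # needs human judgement
    # Verified threats first, highest severity at the top of the queue
    verified.sort(key=lambda a: -a["severity"])
    return verified, unknown
```

Real platforms do far more (fuzzy matching, behavioural analytics, machine-learned scoring), but even this simple triage shows how automation shrinks the pile an analyst must work through by hand.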
This doesn’t mean removing the human powers of deduction, insight and situational awareness from the security management process – rather, it ensures that analysts can focus on threat resolution and reducing the time at risk.
Sourced from Piers Wilson, Huntsman Security