Incident response – how late is too late?

Cyber threat awareness is not limited to an obscure crowd of coding geeks—and hasn’t been for a long time. In fact, just earlier this year, U.S. intelligence director James Clapper announced that cyber attacks top the list of threats facing the U.S.

The sheer number of breaches reported is testimony to the fact that IT teams are struggling to detect and fight off attacks despite increased efforts to keep up with the rising scale and complexity of threats. It appears that no business – regardless of its size and resources – is immune.

> See also: The growing threat of DDoS attacks on DNS

Why is it that many victims fail to successfully protect their assets despite a huge host of preventative measures? The answer is that, quite literally, time is of the essence. The following examines why time is critical for successfully fending off an attack, the reality most organisations face and most importantly, what tools and strategies are available to help.

A race against time

It is impossible to generalise how long a cyber-attack takes to pass the critical stages of compromise and data exfiltration – there are simply too many variables involved. Unfortunately, in most cases exfiltration begins within a few minutes of the initial infection. Ideally, catastrophic consequences are averted because threats are detected, investigated and stopped as soon as they appear.

According to the 2014 Verizon Data Breach Investigations Report, nearly 90% of point-of-sale intrusions saw data exfiltration in minutes or seconds after compromise—and more than 90% of web app attack incidents required days or longer to contain.

Any delay in incident response means more lost records, lost revenue and lost customer goodwill. Rapid response should clearly be a high priority, yet it is often difficult for organisations to achieve. My conversations with responsible IT managers indicate that the incident response processes carried out by large organisations can take up to 14 days to complete.

Delayed response time is due to the many steps required to move from detection to containment and resolution. Legacy incident response involves manual effort, manual data entry or transfer, and even variable human analysis that often requires double-checking for accuracy.

These steps include:

- security alert notification and centralised collection;
- data gathering about the targeted user and system;
- service desk appointment setting and local system data gathering for the targeted endpoint;
- unified analysis of system and target data;
- research across domain registration, antivirus detection systems and intelligence systems;
- response decision analysis; and
- the final enforcement action, which may also involve ticketing, change control and interdepartmental negotiation.
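The hand-offs above can be sketched as an automated pipeline. This is a minimal illustration with hypothetical stage names, not a description of any specific product – the point is that the same steps run back-to-back instead of waiting days between teams:

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    """Tracks an alert as it moves through the response steps."""
    alert_id: str
    user: str
    endpoint: str
    findings: dict = field(default_factory=dict)

# Each stage mirrors one step of the legacy process described above.
PIPELINE = [
    "collect_alert",         # centralised alert collection
    "gather_user_data",      # data about the targeted user and system
    "gather_endpoint_data",  # local system data for the endpoint
    "analyse",               # unified analysis of system and target data
    "enrich",                # domain registration, AV and intel lookups
    "decide",                # response decision analysis
    "enforce",               # containment, ticketing, change control
]

def run_pipeline(incident: Incident) -> Incident:
    for stage in PIPELINE:
        # Record that the stage ran; a real system would call out to
        # SIEM, directory services, ticketing APIs, etc. at each step.
        incident.findings[stage] = "done"
    return incident

incident = run_pipeline(Incident("A-1042", "jdoe", "LAPTOP-17"))
print(list(incident.findings))  # stages executed in order
```

In the manual version of this process, each stage is a ticket, an email or a meeting; automation collapses the idle time between them.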

For global organisations this legacy incident response process can vary depending on time differences across geographically separated locations as well as the availability of staff across different departments, such as infrastructure, messaging, firewall, etc.

If large organisations, which can afford the time and resources to put dedicated measures in place, are struggling, smaller businesses without the means to invest actively in protection fare even worse.

It’s no coincidence that some of the larger recent breaches were initiated through smaller partners of the targeted firms, allowing cyber criminals to gain a foothold before moving on to the larger target. Many businesses only realise their networks are compromised when law enforcement notifies them. Clean-up alone can take more than a month: the Ponemon Institute estimates the time needed to resolve an attack at 45 days.

Leaving networks open and vulnerable for the extended periods that clean-up requires is embarrassing at best and crippling at worst. Take the Sony breach, for example: even ahead of the big PlayStation Network breach, a further 25 million customer records were stolen undetected.

The Global Cyber Security Center (GCSEC) attributes this to the fact that both internal incident response plans and security assurance practices proved ineffective: too much time passed between intrusion detection and the acknowledgement that millions of records had been stolen.

The Target breach tells a similar story. In this case the intrusion was detected and security teams were alerted, yet because the initial alert was missed, the organisation stood by as 40 million credit card numbers left its network before anyone intervened.

Why do organisations struggle to contain threats?

At the core of the problem is the sheer scale, complexity and sophistication of the evolving threat landscape. The annual volume of new malware is quickly outpacing both defensive measures and the supply of skilled personnel. In addition, most organisations struggle to find the time and resources required to invest in and operationalise new security technologies effectively.

Once installed, these technologies can bring additional hidden challenges and costs. Businesses can also find themselves unwittingly overwhelmed by complex coding and unable to obtain meaningful output. In short, even if an organisation has spent hundreds of thousands, or even millions, on detection techniques, all the resulting information confirms is that they do have malware – along with 70-95% of other corporate networks across the globe.

This is not to say that prevention and detection aren’t necessary. In fact, tools such as encryption, blocking of known threats and employee training to recognise suspicious patterns (such as phishing emails) all contribute to the reduced likelihood of a successful attack. However, these measures need to be constant, 24/7 and always up-to-date with the latest attack vectors – a task virtually impossible to carry out manually or with limited in-house resources.

Third parties, such as SIEM and intelligence vendors, have contributed to the identification and monitoring of new and unknown threat vectors. Unfortunately, relying on third-party code for new functions and integration can leave IT teams vulnerable and overwhelmed if they lack the ability to customise and apply that code to their specific environment.

In fact, some companies report writing as many as 500 rules just to filter out the ‘noise’ from their security processes – and the end result still lacks fidelity and actionable output.
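To make the scale of that rule-writing concrete, here is a minimal sketch of the kind of rule-based noise filtering involved. The field names, rule conditions and example alerts are illustrative assumptions, not any vendor’s schema – real deployments accumulate hundreds of such predicates:

```python
# Each "rule" is a predicate that marks an alert as noise to discard.
NOISE_RULES = [
    lambda a: a["severity"] == "info",             # informational only
    lambda a: a["source"] == "vuln-scanner",       # expected scanner traffic
    lambda a: a["signature"].startswith("TEST-"),  # internal test signatures
]

def filter_noise(alerts):
    """Keep only alerts that no noise rule matches."""
    return [a for a in alerts if not any(rule(a) for rule in NOISE_RULES)]

alerts = [
    {"severity": "high", "source": "ids", "signature": "EXPLOIT-443"},
    {"severity": "info", "source": "ids", "signature": "SCAN-001"},
    {"severity": "high", "source": "vuln-scanner", "signature": "CVE-SWEEP"},
]
kept = filter_noise(alerts)
print(len(kept))  # 1 – only the genuine high-severity IDS alert survives
```

Multiply three rules up to 500 and the maintenance burden – and the risk of a rule silently discarding a real alert – becomes clear.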

The solution: actionable, automated, integrated intelligence

Even if all detection and prevention systems are working correctly, kept up to date, monitored and acted upon swiftly – it won’t be enough. Numerous reports establish that successful attacks will continue to happen, even against the strongest of defences.

Yes, a CEO might believe that investing in a number of costly prevention and detection systems should suffice to keep the business out of harm’s way; however, even the most timely alerts are useless if there is no clear path and no information to help the IT team launch effective countermeasures.

> See also: On-premise or cloud? Rewriting the rules of DDoS defence

For a defence to be successful, actionable insight has to be derived from each of the network’s multiple, disparate systems. Often, organisations lack the infrastructure and the volume of data required to derive the insight needed to determine appropriate counter-threat measures.

Moreover, solutions requiring custom module development, integration and maintenance can quickly become as costly as building and maintaining dedicated in-house solutions. The result may delay or even hamper an organisation’s ability to act immediately against bad actors.

Organisations need threat response technology that takes data from all threat detection tools and narrows down the alerts with enhanced, automated threat intelligence and context. Once threats are prioritised, this same system then confirms infections and helps IT teams focus resources on protecting the organisation against threats.
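The narrowing-down described above amounts to enriching each alert with intelligence context and ranking the result. The sketch below shows the idea only; the reputation feed, field names and scoring weights are assumptions for illustration, not a specific product’s API:

```python
# Hypothetical reputation scores from a threat-intelligence feed.
INTEL_FEED = {"evil.example": 90, "shady.example": 60}

def prioritise(alerts):
    """Score each alert by intel reputation plus confirmed-infection
    evidence, then rank so analysts see the worst cases first."""
    for a in alerts:
        reputation = INTEL_FEED.get(a["domain"], 0)
        confirmed = 50 if a.get("callback_observed") else 0
        a["score"] = reputation + confirmed
    return sorted(alerts, key=lambda a: a["score"], reverse=True)

alerts = [
    {"id": 1, "domain": "benign.example"},
    {"id": 2, "domain": "evil.example", "callback_observed": True},
    {"id": 3, "domain": "shady.example"},
]
top = prioritise(alerts)[0]
print(top["id"], top["score"])  # 2 140 – confirmed infection ranks first
```

The value is in the ordering: a confirmed callback to a known-bad domain goes to the top of the queue, so limited analyst time is spent where it matters.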

The bottom line is that intelligent threat response technology, which combines timely detection, verification and protection, is a necessary security layer for any organisation trying to keep up with today’s malicious threats.

Sourced from Kevin Epstein, VP, Advanced Security and Governance, Proofpoint
