What’s missing from the current discussion around incident response is the acknowledgment that security professionals still have to make decisions based on incomplete information.
This is not due to a lack of data. On the contrary, there is so much data created in a typical enterprise that there are no simple ways to make sense of the mountains of it.
Organisations have had the importance of big data and data collection drilled into them for more than a decade, operating under the assumption that gathering and storing data would be sufficient.
This assumption is flawed. If incident response is to evolve as a discipline, we need to shift to a new paradigm – one that emphasises knowledge over data.
For a while now, the playbook that most organisations have followed when it comes to data has had three principal directives:
1) Gather every piece of data you can.
2) Store that data in a security-specific database.
3) Give an analyst a username and password.
And so security professionals began to monitor everything, receiving alerts from across the network and ultimately trying to piece together a coherent picture they could act on.
Following the playbook’s directives was certainly better than doing nothing.
However, incident responders still remain some of the least equipped members of security organisations. Why is that?
For starters, they believed that investments in SIEM and other technologies (directives 1 and 2) would deliver the right data, quickly.
They assumed that the primary need was to acquire more data so responders could make decisions. In other words, just keep tossing more data into the IR soup.
While there is a need for knowledge-driven decision making, in order for that to happen, the data that is collected must be analysed instead of merely piled on. With that in mind, understanding the differences between data, information, and knowledge is critical.
Data is a single observation, one that may contain any number of fragments in combination. Data is disconnected, undistilled, and hopelessly fickle about what it tells us.
It is like having a single word drawn from a 26-volume encyclopaedia – far from definitive. Although data is a good place to begin, it is insufficient for making decisions.
Information is a compilation of data. Information gets closer to the goal, but still leaves much to be desired. Information can be misleading and can accentuate a bias.
This natural inclination (the confirmation bias) can keep people from seeing the truth of the matter. This giant pile of data called “information” does not yield the direction needed to make decisions.
It is like having pages pulled from the encyclopaedia at random and assuming they each tie together in a coherent frame.
Finally, there is knowledge. This elusive pot-of-gold is what to strive for. Knowledge provides purpose and direction.
Furthermore, knowledge is strongest when it disconfirms rather than confirms; it provides a way to rule out false positives. To continue the analogy, knowledge is the encyclopaedia read alongside an instructor who can guide understanding.
If organisations wish to develop incident response as a thorough discipline, they must become methodologically knowledge-driven.
Returning to the playbook’s third directive: it creates the false assurance that all that is needed is smart people with access to the ocean of assembled data.
This is where the opportunity to mature incident response can be choked out. Ask any incident responder or response team: “Do you have access to data?” Then: “What are you lacking?” The answer: a meaningful way to make sense of all that data.
They have been inundated with alerts, alarms, sirens, and notifications. They do not need any more of them. For knowledge to take the lead, intelligent ways to interrogate data, ask it questions, stitch together the picture and take action are required.
First, consider how incident response is presently conducted: events are emitted by security systems, those alerts carry various pieces of data, and based on those variables, responders determine whether action should be taken.
This seems pretty straightforward. However, it is important to note that this process of decision making is a highly manual process.
Looking closely at the process reveals countless links in the decision chain where an analyst or data scientist must “stitch” together each data element by hand.
But if organisations are to become knowledge-driven, that stitching work must be automated, reserving the decisions themselves for humans. There are three essential qualities involved in automating the stitching:
1) Natural language extraction
In the consumer world, there is Google. Since manually searching the web with accurate precision from page to page is a near impossible task, Google takes the work of mining the internet and gives consumers a useful way to find anything.
Google crawls billions of pages, indexing each page’s content, and intelligently presents those to the user.
Type in a few phrases and it brings back all the relevant content from the web. Better still, as each user interacts with the results (alongside millions of other users), Google’s search function gets tighter and tighter.
Security must take the same approach with natural language extraction (NLE). Any solution that will automate the stitching process must have an NLE function.
By deconstructing messages and finding the direct and implied content, data stitching becomes an automatic process.
Crawling through datasets and identifying actions (e.g. allow, deny, block, fail), subjects (e.g. address, username), metadata (data about data), and nuanced expressions (e.g. direct objects, prepositions, and adjectives) all give rise to a coherent picture buried in mountains of security data. If the Internet can be tamed, security data can as well.
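A minimal sketch of the kind of extraction described above, in Python: pulling actions and subjects out of a raw log message with pattern matching. The log format, field names, and patterns here are illustrative assumptions, not any real product’s schema; a production NLE engine would need far richer grammars.

```python
import re

# Hypothetical firewall-style log line; the format is an illustrative assumption.
LOG = "2023-04-01T12:00:00Z DENY tcp src=10.0.0.5 user=jdoe dst=203.0.113.9 port=443"

# Actions named in the text (allow, deny, block, fail) and simple subjects
# (addresses, usernames) expressed as key=value pairs.
ACTION_RE = re.compile(r"\b(ALLOW|DENY|BLOCK|FAIL)\b", re.IGNORECASE)
SUBJECT_RE = re.compile(r"\b(src|dst|user)=(\S+)")

def extract(line):
    """Deconstruct a raw log message into its action and subject fields."""
    action = ACTION_RE.search(line)
    return {
        "action": action.group(1).lower() if action else None,
        "subjects": dict(SUBJECT_RE.findall(line)),
    }

print(extract(LOG))
# e.g. {'action': 'deny', 'subjects': {'src': '10.0.0.5', 'user': 'jdoe', 'dst': '203.0.113.9'}}
```

Once every message is reduced to structured fields like these, the fields themselves become the variables that later stages can cluster and correlate automatically.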
2) Associations and clusters – triage
Exhaustive search is not feasible, so putting infinite sets into “chunks” is more practical. Chunking creates associations and clusters – a triage.
Triage can happen when NLE reduces variables to discrete clusters based on associative relationships. This is analogous to healthcare triage.
Imagine 500 patients dropped off at the emergency room at once, each with various presenting illnesses and symptoms.
Based on the combination of discrete variables, medical staff can carve up the group into associations with each group receiving specific treatments.
However, this discrete and combinatorial process is taxing on human reasoning faculties. For a computer, it is inherent.
Continuous and routine processing of metadata is serial, systematic, deliberate, and often mundane.
Humans get tired, lazy, and apathetic – computers do not. This combinatorial optimisation is algorithmic in nature, and we humans cannot compare with computers for speed and accuracy.
By coalescing relationships based on any number of variables, incident responders can find the needle in a stack of needles.
With associations, you can search for categories and their endless combinations with other categories (e.g. applications, geolocations, users, addresses, unknown entities).
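The chunking described above can be sketched in a few lines of Python: events grouped into clusters keyed on whichever variables the responder chooses. The event fields are illustrative assumptions, not a real schema.

```python
from collections import defaultdict

# Toy event stream; field names are illustrative assumptions.
events = [
    {"user": "jdoe", "geo": "DE", "action": "deny"},
    {"user": "jdoe", "geo": "DE", "action": "deny"},
    {"user": "asmith", "geo": "US", "action": "allow"},
    {"user": "jdoe", "geo": "CN", "action": "deny"},
]

def cluster(events, keys):
    """Chunk an event stream into associative clusters keyed on chosen variables."""
    groups = defaultdict(list)
    for event in events:
        groups[tuple(event[k] for k in keys)].append(event)
    return groups

# Triage by (user, geo): four raw alerts collapse into three clusters,
# and the repeated (jdoe, DE) denials surface as one association to examine.
for key, members in cluster(events, ("user", "geo")).items():
    print(key, len(members))
```

Swapping the key tuple swaps the triage axis – by application, geolocation, or any combination – which is exactly the kind of serial, combinatorial work a machine does tirelessly and a human does not.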
3) Collaboration
The final element in automating the stitching process is collaboration. An integrated “social” framework enables operators to tag interesting data, inject context, and work together directly with machine- and human-generated data.
This social framework produces continuously enriched data that becomes another variable for associative clustering.
For example, my interaction with an interesting event adds to your knowledge and your tagging a separate piece informs my investigations in turn.
This ongoing, recursive function adds attribution to the data being mined, delivering greater intelligence and, crucially, context.
Collaboration can become a new factor for situational awareness – an individual’s work within the data becomes part of the data itself.
Social media platforms have leveraged this, and now it is possible for security. Facebook takes activity, interactions, and connections, and has an uncanny ability to surface the content most likely to interest each user. Automated collaboration can do the same for security intelligence.
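One way to picture this enrichment loop is a shared tag store, where each analyst’s annotation becomes data that colleagues (and the clustering machinery) can consume. The structure and names below are illustrative assumptions, not any particular product’s API.

```python
# Shared annotation layer: event_id -> list of (analyst, label) pairs.
# A hypothetical sketch; real platforms would persist and index this.
tags = {}

def tag(event_id, analyst, label):
    """Record an analyst's annotation so it enriches everyone's view of the event."""
    tags.setdefault(event_id, []).append((analyst, label))

def labels_for(event_id):
    """Another responder's tags become context for my own investigation."""
    return [label for _, label in tags.get(event_id, [])]

tag("evt-42", "alice", "suspicious-login")
tag("evt-42", "bob", "confirmed-benign")
print(labels_for("evt-42"))  # ['suspicious-login', 'confirmed-benign']
```

Because the labels are just more fields on the event, they can feed straight back into the associative clustering above – the analyst’s work within the data literally becomes part of the data.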
Incident response, as a discipline, is primitive because all the work that leads to a decision to take action or investigate is done manually.
Information is pulled, sliced, and pieced together by hand. Solutions that eliminate that waste by taking data from separate sources and composing a coherent picture will give responders the information they need to make the right decision, avoid false positives, and act with the most informed judgment possible in a pressured environment.
Knowledge is often hidden in the data that is collected. That data is unstructured, loose and disconnected.
For incident response to mature, businesses need to automate the stitching process, guiding responders away from reactive, uninformed decisions and towards a knowledge-driven future.
Sourced by Josh Mayfield, platform specialist – Immediate Insight, FireMon