The benefits and pitfalls of implementing threat intelligence

As threat intelligence takes on a larger role, there needs to be a corresponding increase in its quality, context and resolution. Many recipients of threat intelligence will not know what to do with it unless it arrives with broader context and recommended actions.

Some recipients can easily be overloaded if the volume of data is too high. In fact, a Ponemon Institute study conducted in 2016 revealed that 70 percent of security industry professionals believe threat intelligence is often too voluminous and/or complex to provide actionable insights.

Before investing in threat intelligence tools, it’s essential to understand the benefits and pitfalls. There are seven categories of source to consider, and each should be ranked in terms of volume, quality, context and relevance.

Open Source

Volume=High, Quality=Low, Context + Relevance=Low

Open source intelligence is free and available to everyone, including malicious actors, which means it is open to contributions, use and poisoning from anyone. It also lacks context, making it difficult to determine its relevance to a particular organisation.

The volume of intelligence from open sources is high, far beyond what most organisations can apply to firewalls or other enforcement systems for each Indicator of Compromise (IOC).

But even with the capability to enforce at this scale, there needs to be confidence in the quality of the data. This is a big problem, as there are no standards or accountability, and feeds can be altered or duplicated without verification.
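As a sketch of why triage matters before enforcement, the snippet below deduplicates a raw IOC list and keeps only high-confidence entries before they would reach a blocklist. The feed format and its `confidence` field are assumptions for illustration; real open-source feeds vary widely and often carry no confidence score at all.

```python
# Minimal sketch: triage a raw open-source IOC feed before enforcement.
# The feed format and "confidence" field are invented for illustration.

def triage_feed(raw_entries, min_confidence=80, max_blocklist_size=10000):
    """Deduplicate IOCs and keep only high-confidence entries."""
    seen = {}
    for entry in raw_entries:
        ioc = entry["indicator"].strip().lower()
        conf = entry.get("confidence", 0)  # missing score -> lowest trust
        # Keep the highest confidence reported for a duplicated indicator
        if ioc not in seen or conf > seen[ioc]:
            seen[ioc] = conf
    keep = [ioc for ioc, conf in seen.items() if conf >= min_confidence]
    # Enforcement devices have hard capacity limits; trim to fit
    return sorted(keep)[:max_blocklist_size]

feed = [
    {"indicator": "198.51.100.7", "confidence": 95},
    {"indicator": "198.51.100.7", "confidence": 40},   # duplicate, lower score
    {"indicator": "203.0.113.9", "confidence": 20},    # low confidence, dropped
    {"indicator": "malicious.example.com", "confidence": 90},
]
print(triage_feed(feed))  # ['198.51.100.7', 'malicious.example.com']
```

Even this basic filtering assumes the confidence scores are trustworthy, which, as noted above, open-source feeds cannot guarantee.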

Client sourced

Volume=High, Quality=Medium, Context + Relevance=Low

Client sourced intelligence is integrated into most security products. Information gathered by the product is sent back to the vendor, processed and turned into signatures that are deployed back to the product.

This works well because it integrates without any client intervention. But it can be difficult to make contextual decisions from it, or to combine it with other intelligence to support specific decisions.

It falls short when information is needed outside of the product, for instance when correlating with other attack vectors, or when a deeper understanding of the intelligence’s origin, context and scoring process is desirable. To protect the privacy of other clients and intellectual property, this is usually not tracked.

Crowd sourced

Volume=High, Quality=Low, Context + Relevance=Low

Crowd sourced data is very similar to open source data but collected within specific communities or applications. It is typically not open for use but very open in collection. It shares many issues with open source data, with the added problem that participants are often unqualified to contribute constructively. The context and relevance therefore tend to be low, adding more ‘noise’.

Internal analysts

Volume=Low, Quality=High, Context + Relevance=High

Internal analysts produce the most relevant results. They are also essential in triaging intelligence from outside sources. The problem is that even the most heroic analysts have limits that fall short of the volume of information they need to deal with. And even if an organisation has the budget to hire a large team of analysts, they are difficult to recruit due to the skills shortage.

Automated harvesting

Volume=High, Quality=Low, Context + Relevance=Low

Automated harvesting is a process that involves honeynets, spamtraps, sandboxes, purpose-built crawlers and other tools assumed to attract a high percentage of malicious activity. The information is collected, triaged and investigated.

This is an excellent source of data to initially analyse and filter. However, most organisations whose primary business is not computer security-related usually don’t have the resources to invest in it.

Trusted interest groups

Volume=Medium, Quality=High, Context + Relevance=Medium

Security groups that share your context and have analyst teams to triage can be the best way to gather threat intelligence. The data is generally of higher quality, the context is relevant, and the volume is higher than what you can produce alone. There is also no limit to the number of threat sharing groups an organisation can join.


Purchased data

Volume=High, Quality=Medium, Context + Relevance=Medium

Purchased data is typically high volume, high quality, and comes with as much context as possible so you can make your own filtering decisions. The main drawback of purchased intelligence is the lack of information on how it applies to your specific business.

Information on what’s normal, what is relevant and the context of the information, as well as how to turn that into enforcement changes, is vital.

With the above in mind, it can be overwhelming to determine what combination of sources will offer the best and most valuable solutions.

One effective approach is to deploy multiple intelligence sources that offer comparison, corroboration, and variety. Comparing sources to each other can be revealing in a number of ways.

Having variety gives more coverage as long as there isn’t extensive overlap. An indicator found in multiple independent intelligence sources is likely to be both malicious and common.

Automation is also effective: machine learning can be applied to intelligence, but not with straightforward logic. It requires a set of classified examples and features to extract in order to train a model that can classify new items in the future.

>See also: Pay attention to your threat intelligence’s shelf life

This requires a combined effort between analysts and engineers to train the system and keep it in tune. Analysts are essential in providing training sets of data and in defining what they think the key classification features are. They will always help improve the quality of information and its relevance to an organisation.
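A minimal sketch of this analyst-plus-automation loop: analysts supply labeled examples and name the features, and a simple classifier learns to score new indicators. The features, domains and labels below are invented for illustration, and the classifier is a hand-rolled nearest-centroid model in plain Python rather than a production ML library:

```python
import math

# Analyst-chosen features for a domain name: length, digit ratio and
# character entropy (algorithmically generated domains tend to score
# high on all three). Features and labels here are illustrative.
def features(domain):
    name = domain.split(".")[0]
    counts = {}
    for ch in name:
        counts[ch] = counts.get(ch, 0) + 1
    entropy = -sum(
        (c / len(name)) * math.log2(c / len(name)) for c in counts.values()
    )
    digit_ratio = sum(ch.isdigit() for ch in name) / len(name)
    return [len(name) / 30.0, digit_ratio, entropy / 5.0]

# Analyst-labeled training set (invented examples; 1 = malicious)
training = [
    ("google.com", 0), ("github.com", 0), ("wikipedia.org", 0),
    ("x9f3k2q8z1.net", 1), ("qwrtpsdfgh.biz", 1), ("a1b2c3d4e5f6.com", 1),
]

# Nearest-centroid "training": average the feature vectors per class
def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

benign = centroid([features(d) for d, y in training if y == 0])
malicious = centroid([features(d) for d, y in training if y == 1])

def classify(domain):
    f = features(domain)
    # Assign to whichever class centroid the feature vector is closer to
    return "malicious" if math.dist(f, malicious) < math.dist(f, benign) else "benign"

print(classify("k8j2x9q4w7.info"))
```

The division of labour matches the text: the analyst owns the training set and the feature definitions, and the engineer owns the model that applies them at volume.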

The industry should aim to achieve a level of interactive integration and cooperation between analysts and their tools, so that they seamlessly play off each other’s strengths and become greater than the sum of their parts.

The current places where analyst and automation meet are the SIEM and the threat intelligence platform. The SIEM is the centre of events; the threat intelligence platform (TIP) is where the analyst manages intelligence.

Your SIEM and TIP should work well enough together that any events that already correlate to threat intelligence can be viewed in the SIEM while the TIP can still be used to research any probable future threats.
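A minimal sketch of that correlation step (the event and indicator formats are assumptions for illustration, not any particular product’s schema): the TIP exports an indicator set with analyst context, and events flowing through the SIEM are tagged when they match:

```python
# Minimal sketch of SIEM/TIP correlation. Event and indicator formats
# are invented for illustration; real systems use their own schemas.

# Indicators exported from the (hypothetical) threat intelligence
# platform, keyed by value, carrying the analyst's context
tip_indicators = {
    "198.51.100.7": {"type": "ip", "campaign": "example-campaign", "score": 90},
    "bad.example.net": {"type": "domain", "campaign": "example-campaign", "score": 75},
}

siem_events = [
    {"id": 1, "src_ip": "192.0.2.10", "dest": "198.51.100.7"},
    {"id": 2, "src_ip": "192.0.2.11", "dest": "intranet.local"},
    {"id": 3, "src_ip": "192.0.2.12", "dest": "bad.example.net"},
]

def correlate(events, indicators):
    """Tag SIEM events that touch a known indicator, attaching TIP context."""
    hits = []
    for event in events:
        match = indicators.get(event["dest"])
        if match:
            # The analyst sees the event plus the intelligence context
            hits.append({**event, "intel": match})
    return hits

for hit in correlate(siem_events, tip_indicators):
    print(hit["id"], hit["intel"]["campaign"], hit["intel"]["score"])
```

The point of attaching the TIP context to the event, rather than just flagging it, is that the analyst can triage the hit without pivoting between tools.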

The experienced analyst is central to the process for the steps that require their intuition, given all of the possible information, to make a decision. Once they make or review decisions they can quickly deploy any changes to the appropriate systems or channels.

This, combined with the right threat intelligence sources, maximises a company’s investment in people, systems and products, and achieves the ultimate goal: disseminating threat data that provides actionable insights to bolster security defences.


Sourced by Anthony Aragues, vice president of product management at Anomali

Nick Ismail

Nick Ismail is a former editor for Information Age (from 2018 to 2022) before moving on to become Global Head of Brand Journalism at HCLTech. He has a particular interest in smart technologies, AI and...