Organizations are consuming an increasingly diverse set of Threat Intel feeds from more and more suppliers to counter progressively more sophisticated and numerous threats. They do this for good reasons: it works, and it’s cost-effective.
The problem is that this creates a lot of extra complexity, and as a consequence it is getting harder to understand, measure and communicate how threat intel represents value for money. How should users measure the effectiveness (or otherwise) of their Threat Intel program?
Threat Intel works – but make sure you measure success
We know that Threat Intel works, and that companies are avid consumers. According to a 2024 survey by Forrester, organizations paid for an average of 26 commercial threat intel services. Yet many of the Threat Intelligence providers (including ESET) interviewed by Forrester said that fewer than half of their collective customers had excellent or good threat intelligence metrics programs.
This points to a significant problem on the horizon: technical leaders and teams have convinced their business leaders to unlock the budget for threat intel spend, but fewer than half of them are in a position to demonstrate exactly how effective that spending is.
How to overcome measurement challenges
There is, of course, a massive catch familiar to anyone with experience using intelligence in any capacity: how do you attribute the success of an outcome to one or more sources of information?
Threat Intel is often a key ingredient in an effective defense, and so narrowing down its contribution to a positive outcome is both time-consuming and highly subjective. It’s a bit like trying to value the contribution of salt, garlic or paprika to a really tasty dish. You know these ingredients transformed the experience of the dish, but explaining what each one contributed is quite tricky – and varies according to everyone’s tastebuds.
Qual and Quant need to work together
Forrester recommends breaking this down into two distinct disciplines. Firstly, measure effectiveness with quantitative measurements. Secondly, demonstrate value with qualitative insight.
This approach is particularly helpful because it does two things: it helps identify areas for investment and improvement at a tactical level, and it demonstrates the impact of good threat intel to business decision makers.
Quantitative measurement and CART
Data quality and effectiveness can be measured using the common CART (Complete, Accurate, Relevant, Timely) mnemonic – although it’s worth noting this doesn’t include a measurement of correct or accurate application. Instead of measuring the amount of data collected, consider (and measure) its impact on decision making. The catch is that what looks good for one organization doesn’t work for another – it’s dependent on your organization’s appetite for risk and level of maturity.
Completeness is a measure of how useful the threat intelligence is – does it correctly identify threats, and provide context? Measure whether the requirements you set for actionable intelligence are met – and that they conform to the stated objectives of the program. Your threat intelligence providers should share their roadmaps with you, particularly around how they anticipate and respond to the cyber security threat landscape.
Look also to measure the richness of your source mix. Multiple sources of information and insight are always a good thing, but pay attention to how diverse those sources actually are.
Finally, keep an eye on the number of objects in your central threat intelligence repository: a growing collection of IOCs (Indicators of Compromise), signatures and other data is a sign of healthy coverage. Keep in mind, though, that sheer volume does not signal success.
Accuracy is key in this context: a huge volume of misleading or incorrect information is worse than a small number of accurate sources. The leading metric here is the number of false positives and negatives your team handles. False positives – the classic Type 1 Error – are a distraction and annoyance. These false alarms clutter analysts’ desks, lead to alert fatigue and cloud the picture. False negatives – Type 2 Errors – are even more of a hindrance, creating a false sense of security. Neither of these errors can ever be entirely eliminated, but weeding them out regularly is vital.
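To make this concrete, false-positive and false-negative rates can be computed directly from triage counts. The sketch below is a minimal illustration only – the function name and all of the alert numbers are hypothetical, not taken from any particular tool or the Forrester report:

```python
# Minimal sketch: false-positive and false-negative rates from triage
# counts. Function name and all numbers are hypothetical illustrations.

def alert_error_rates(true_pos, false_pos, false_neg):
    """Return (share of alerts that were noise, share of real threats missed)."""
    total_alerts = true_pos + false_pos      # everything that raised an alarm
    total_threats = true_pos + false_neg     # everything that was actually malicious
    fp_rate = false_pos / total_alerts if total_alerts else 0.0
    fn_rate = false_neg / total_threats if total_threats else 0.0
    return fp_rate, fn_rate

# Example: one month of (hypothetical) triage numbers
fp_rate, fn_rate = alert_error_rates(true_pos=180, false_pos=120, false_neg=20)
print(f"False-positive rate: {fp_rate:.0%}")  # Type 1 errors: noise in the queue
print(f"False-negative rate: {fn_rate:.0%}")  # Type 2 errors: threats that slipped by
```

Tracked month over month, these two percentages give a simple trend line for the “weeding out” the paragraph above recommends.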
Relevance is specific to each organization and its unique profile: the sector it does business in, the environment that surrounds it, its capabilities and effectiveness and the types of threats it faces. Look to measure the number of events and incidents detected and (hopefully) prevented, the volume of compromised assets discovered and how many malicious services and profiles your threat intel providers take down.
Timeliness is a critical facet of threat intelligence, and one that can be measured: timely information is essential to success. Look to put a number to the frequency with which data is gathered, combined with the number of sources. Look, too, to put a number to how often IOCs and alerts are delivered by your providers and your own team. A further measure is the mean time it takes to resolve disruptions – a number your threat intel and managed security service providers should have to hand. The culmination of this, of course, is measuring incident response time. However, this last number should be carefully defined, as a kneejerk response can be counterproductive. Instead, look to measure both the time to identification of an incident and the time to successful resolution.
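The two numbers the paragraph above ends on – time to identification and time to resolution – are commonly reported as mean time to detect (MTTD) and mean time to resolve (MTTR). A minimal sketch of that calculation follows; the incident records and timestamps are invented for illustration:

```python
# Minimal sketch: MTTD and MTTR from incident timestamps.
# All incident data below is hypothetical.
from datetime import datetime
from statistics import mean

incidents = [
    # (compromise began, detected, resolved) – hypothetical timestamps
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 11, 0), datetime(2024, 3, 1, 17, 0)),
    (datetime(2024, 3, 8, 2, 0), datetime(2024, 3, 8, 8, 0),  datetime(2024, 3, 9, 2, 0)),
]

# Mean time to detect: from compromise to identification
mttd_hours = mean((detected - began).total_seconds() / 3600
                  for began, detected, _ in incidents)
# Mean time to resolve: from identification to successful resolution
mttr_hours = mean((resolved - detected).total_seconds() / 3600
                  for _, detected, resolved in incidents)

print(f"MTTD: {mttd_hours:.1f} h")
print(f"MTTR: {mttr_hours:.1f} h")
```

Separating the two, rather than reporting a single “response time”, avoids rewarding the kneejerk responses the text warns against.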
This quantitative approach puts a measure to effectiveness, but should be combined with a second set of qualitative measures that demonstrate the value of threat intelligence.
Qualitative measurement: making the case for value
Don’t waste time and effort putting a dollar value on your program: it causes two problems. Firstly, measuring savings is always a finite exercise, with rapidly diminishing returns. Secondly, it can be hugely misleading if handled poorly or with bias.
Instead, look to provide qualitative insight that demonstrates value, rather than stretching to give a specious monetary measurement.
Tell stories, linking threat intelligence to successful outcomes. That might be describing a particular threat actor or TTPs (Tactics, Techniques and Procedures) rife in your sector, and showing how your threat intel headed off a potential compromise. Frameworks such as MITRE ATT&CK can be put to work here to show how a strong threat intel practice allows your organization to identify and deal with attacks at the early stages, before they become incidents.
Measure stakeholder satisfaction – not just IT, but marketing, business leadership and physical security teams all benefit from timely and effective threat intelligence. Bottle this success in the form of customer satisfaction surveys, and you can make a strong case.
Conclusion
It’s easy to fall into the trap of chasing a perfect anecdote or metric to demonstrate the value of Threat Intelligence. Avoid this if at all possible. Threat Intelligence is, by its nature, imperfect, incomplete and subject to error. Despite this, it is, and will always be, an effective tool in the security team’s armory, and one that can and should be relied on to reveal and help defeat cyber threats. Making the case for threat intelligence investment – existing or new – can only happen when you combine quantitative and qualitative measurements, and when you have a frank, honest and factual conversation with its funders and customers.
For more insight, get access to Forrester’s survey and report, integrate ESET Threat Intelligence into your existing threat architecture, or get in touch with ESET’s own Threat Intelligence practice to find out more.