
Herding the Cybersecurity Market - Part 1: The Problem

One of the first big challenges we set our sights on at ESPROFILER is the difficulty of understanding and defining what security products actually do.


Our first blog post

Hello everyone, I am Louis, the CEO and one of the co-founders of ESPROFILER. Welcome to our first official blog post! I know, everyone and their cat has a blog these days, but I promise to make our content worth your time—engaging, insightful, and educational. I'm incredibly proud of the quality of our engineering, and when relevant, I'll share our challenges, approaches, and even dive into the technical nitty-gritty.

This first blog post begins a series dedicated to outlining a critical challenge in cybersecurity. I’ll start by defining the problem, then explore how it manifests for customers based on our experiences, and finally, share our approach to solving it—not just for individual customers but for the broader cybersecurity market.

The Cybersecurity Market is Flawed

So, let's dive in. One of the first big challenges we set our sights on at ESPROFILER is the difficulty of understanding and defining what security products actually do. Some of you might say, “I know exactly what <product x> does!” But I'd challenge you to go a bit deeper. It’s like knowing you have a toolbox in your garage. Sure, you have a toolbox, but do you know what specific tools are in it? Do you know their dimensions, their fit, and how well they perform when you need them most?

For example, it’s one thing to know you have a spanner in your toolbox, but it's a whole other thing to know that this spanner will reliably fit a 20mm bolt and won't snap under pressure. Now imagine 130 of those toolboxes, the average number of security products at an enterprise company, and keep that figure in the back of your mind as you read on (more on this in part 2).

Yes, AI images are here, and I will shamefully use them.

The Aviation Analogy

Enough with the toolbox analogy. Let me give you another one. I have a love for aviation, and over time, I’ve managed to acquire my pilot’s license. One of the incredible things about aviation is how precise everything is. For any given aircraft, you know its fuel capacity, fuel burn rate, maximum takeoff weight, turn rate, approach speed, cruise speed, range at different airspeeds—the list goes on.

As a pilot, this allows me to plan a flight and know exactly where I can go and when I'll need to refuel. Based on expected fuel consumption, I can even estimate the remaining fuel at any given moment and check if I'm on track. It’s a reassuring system that gives you confidence that everything is performing as expected.

Now, imagine if cybersecurity products were described with the same clarity as aircraft. Sadly, that’s not the case. If the cybersecurity market worked like aviation, we'd likely be flying blind. In fact, I’d argue that the cybersecurity product landscape is more akin to the health supplement or skincare market: full of claims and promises, but with very little transparency about what’s actually inside the tin and what it does.

Unpacking the Problem Layers

The problem is complex, and one blog post can only scratch the surface. Here are some key issues:

  1. Lack of Standardization: There’s no agreed-upon standard for product categories. Analysts like Gartner, Forrester, G2, and even the big four consultancies all have wildly different taxonomies.
  2. Marketing Confusion: Spend five minutes at a security conference, and you'll be bombarded with enough buzzwords to play “security bingo.” Currently, “AI” in all its various forms seems to be everyone’s favorite catchphrase.
  3. Complex Sales Approaches: Many vendors now adopt a “platform” approach, bundling multiple products and features together. What used to be a straightforward product is now a maze of bundled capabilities, making it hard to tell what you’re even buying.
  4. Rapid Change and Evolution: Combine this with the rapid pace of technological change and evolving threats, and you end up with a constant churn of new solutions and rebranding of old ones. Many products are continuously adding features, meaning that what you looked at a few months ago may now be something entirely different.

The issue, simply put, is that there’s no label on the side of the tin. No consistent aisles in the cybersecurity supermarket. Vendors continually reposition their products, and those products constantly evolve. There’s no clear way to quantify what a product is supposed to do, let alone measure whether it’s actually doing it.

This is an Old, Unsolved Problem

Economists have even coined a term for markets like this—a “market for lemons.” When consumers can’t tell the difference between a high-quality product and a lemon, the incentive to produce quality products vanishes. A report titled “Cybersecurity Technology Efficacy: Is cybersecurity the new 'market for lemons'?” highlighted this connection back in 2020. Now, I personally wouldn't go so far as to say that the cybersecurity market is entirely a lemons market. I genuinely believe that most security vendors are striving to build great products for their customers. But the challenges still stand.

The Silver Bullet Fallacy

Prior to the lemons report, an even older paper from 2008 titled "The Market for Silver Bullets," authored by Ian Grigg, explored similar challenges in the cybersecurity market. Grigg critiqued the widespread belief in "silver bullets"—solutions marketed as quick fixes to complex security problems. He argued that these solutions often oversimplify threats, leaving critical gaps in an organization’s defenses.

Grigg highlighted several related problems: vendors promoting tools as one-size-fits-all solutions, buyers over-relying on these tools, and regulatory pressures driving checkbox compliance instead of meaningful risk management. The result? A fragmented and often ineffective approach to security, where marketing frequently trumps actual efficacy.

Both the "lemons" and "silver bullets" analogies point to the same systemic issues: lack of transparency, misaligned incentives, and oversimplification of complex security needs. If we were to draw a line through the last two decades of cybersecurity, a troubling trend emerges: despite billions invested and countless new technologies introduced, the fundamental challenge of understanding cybersecurity products and measuring their efficacy remains unresolved.

The Goodhart's Law Challenge

Ross Haleliuk and Mayank Dhiman, over at Venture in Security, wrote a great article covering both the "lemons" and "silver bullets" aspects, adding depth to some of the issues I describe. It's worth a read. However, Ross and Mayank conclude the article by stating, "Trying to quantify security is going to be a losing battle." I understand this argument: it centers on the idea that any metric built for measurement and comparison will likely fall victim to Goodhart's law. Once a measure becomes a target, or in our case, once consumers use it to compare products, vendors will game it, reducing its usefulness.

I believe we've seen this recently with MITRE Engenuity ATT&CK Evaluations. Despite the amazing work by MITRE, the evaluations have often been spun by vendors, each claiming "100% coverage," which obscures the evaluations' actual objective of modeling specific threat actors.

Learning from Dieselgate

Despite this, I disagree with Ross and Mayank's closing comment, as I feel it oversimplifies a complex, layered problem. Let’s take a look at a recent example of Goodhart's law playing out in the automotive market.

If you haven't heard about the Volkswagen emissions scandal, also known as Dieselgate, the short story is that Volkswagen modified their Engine Control Unit (ECU) to switch from its normal high-NOx operating mode to a low-emission compliant mode whenever it detected an emissions test. As a result, their engines complied with US NOx standards when tested but emitted NOx far above those limits in daily operation. In the real world, emissions were up to forty times higher than in the lab. In the automotive world, this practice even has a name—a "defeat device."

Those familiar with red team tools like Metasploit will know that, for a long time, antivirus vendors spent significant effort ensuring their software could detect red teams using these tools. The same is likely occurring with Atomic Red Team, and I wouldn’t be surprised if this was the case for most commercial breach and attack simulation tools. You could argue this is similar to the defeat device and evidence of Goodhart's law.

But here is the difference: the Dieselgate scandal was uncovered when West Virginia University conducted an on-road test outside of the expected test conditions—something the defeat device wasn't programmed to handle. A key factor in exposing the 40x higher emissions was the availability of foundational units of measurement. For example, NOx is a gas, and as part of the EURO standards, it can be measured in grams per kilometer (g/km).

The Importance of Foundational Units of Measurement

I’m making this comparison for a reason: the cybersecurity market, much like the supermarket analogy earlier, lacks defined aisles. Before we can even begin quantifying product efficacy, we need foundational units of measurement—basic definitions and metrics to establish what products are supposed to do and whether they’re meeting those expectations.

Without these foundations, customers can’t answer critical questions like, "Should this product have stopped this attack at this stage?" This inability to correlate capabilities with outcomes undermines their ability to conduct meaningful post-incident reviews, plan strategies, negotiate renewals, or assess product value.

Yes, metrics can be gamified, as seen in Dieselgate. Volkswagen’s defeat device manipulated emissions tests, but the key to uncovering the truth was clear units of measurement, like NOx emissions in grams per kilometer. These metrics exposed the 40x higher real-world emissions, ultimately costing Volkswagen $34.8 billion and dissuading future manipulation.

The cybersecurity market lacks such transparency. Like the foundational elements of a building, these metrics are critical for creating accountability. Without them, we perpetuate the issues described in the "lemons" and "silver bullets" papers—misaligned incentives, opaque claims, and ineffective solutions.

Establishing these foundations won’t eliminate attempts to bend the truth. But as transparency grows, customers will start to see the delta—the gap between promises and performance. Understanding this delta will help align the market, empower customers, and create better outcomes for everyone.
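To make that "delta" concrete, here is a deliberately simple sketch in Python. It borrows the Dieselgate framing: once you have an agreed unit of measurement (NOx in g/km), comparing a certified figure against real-world observations is plain arithmetic. All numbers below are hypothetical placeholders for illustration, not real test data, and the function name is my own invention.

```python
# Hypothetical illustration: with a shared unit of measurement (g/km of NOx),
# the gap between a vendor's claim and observed reality reduces to arithmetic.
# The certified figure and road-test readings below are made-up placeholders.

LAB_CERTIFIED_NOX_G_PER_KM = 0.08          # hypothetical certified lab figure
on_road_readings_g_per_km = [3.0, 3.2, 3.4]  # hypothetical on-road samples


def real_world_delta(certified: float, readings: list[float]) -> float:
    """Return how many times higher the average observed value is
    than the certified figure."""
    average = sum(readings) / len(readings)
    return average / certified


ratio = real_world_delta(LAB_CERTIFIED_NOX_G_PER_KM, on_road_readings_g_per_km)
print(f"On-road NOx is {ratio:.0f}x the certified figure")
# → On-road NOx is 40x the certified figure
```

The point is not the code itself but the precondition it exposes: without an agreed unit in the first place, there is nothing to divide, and the delta between promises and performance stays invisible.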


Why focus on this problem?

The best way to explain why we chose to tackle this challenge is by linking it directly to the customer experience and the real-world consequences we’ve observed—and continue to see—on the ground. These tangible impacts are what drive our work, and they’ll be the central focus of the next part of this blog.