The average cost of a data breach in 2022 reached a record $4.35 million, yet the average time to identify and contain a breach was 277 days.
How is it possible that events that are so consequential are routinely missed for the better part of a year, given that more and better cybersecurity tools are in use today than ever before?
Too Many Tools Doing Too Much Work
It’s conventional to point to the “arms race” between malicious actors and cybersecurity teams, but there’s another factor contributing to this problem that too often sees little serious action in the industry: alert fatigue.
As early as 2015, Ponemon found that the average organization sees 17,000 internal malware alerts in a week. Seven years later, IDC found that 30 percent of alerts in mid-sized organizations are simply ignored. Placed next to each other, these stats and the time gap between them suggest that the alerts problem has not been solved.
It’s no wonder that alerts are ignored when the same IDC report finds that it takes around 30 minutes, on average, to investigate an alert—the vast majority of which will turn out to be false alarms.
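To make the scale of the problem concrete, here is a quick back-of-envelope calculation combining the two figures above. The 40-hour analyst week is an assumption; the alert volume and investigation time come from the Ponemon and IDC findings cited here.

```python
# Back-of-envelope: what the Ponemon and IDC figures imply about analyst load.
ALERTS_PER_WEEK = 17_000        # Ponemon (2015): weekly malware alerts per org
MINUTES_PER_ALERT = 30          # IDC: average time to investigate one alert
ANALYST_HOURS_PER_WEEK = 40     # assumed full-time analyst schedule

total_hours = ALERTS_PER_WEEK * MINUTES_PER_ALERT / 60
analysts_needed = total_hours / ANALYST_HOURS_PER_WEEK

print(f"{total_hours:,.0f} analyst-hours per week")   # 8,500 analyst-hours
print(f"~{analysts_needed:.1f} full-time analysts")   # ~212.5 analysts
```

Even if these averages are off by an order of magnitude, no mid-sized SOC can staff its way through that volume by hand, which is why so many alerts simply go uninvestigated.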
Yet this alert load stems from tools that are generally doing precisely what they were purchased to do—spotting and reporting on anomalies in their respective domains.
In 2020, IBM reported that the average organization was using over 45 distinct cybersecurity tools, yet at the high end of the scale, the number of deployed tools was inversely correlated with success in attack detection and response.
In a nutshell, information is only useful if you’re able to ingest and act on it as an organization—and organizations appear to find themselves caught between a rock and a hard place in the “arms race.”
Deploy fewer tools and generate fewer alerts? Risks and attacks go unnoticed. Deploy enough tools to cover the attack, risk, and operational surfaces? There appears to be too much information to process.
But is this appearance misleading?
The Right Kinds of Solutions
As a full-service cybersecurity and technology provider, Plurilock routinely sees the lists of cybersecurity capabilities that organizations are seeking, often with budget, authority, need, and timelines already established.
A significant proportion of these sought capabilities involve “the ability to detect and respond to undesired circumstance X,” whether this circumstance is related to authentication workflows, account use, network traffic, service behavior, or some other facet of technology operation that represents a risk surface.
Of course, “the ability to detect” often means, quite simply, more alerts. And ironically, it’s common for organizations to be seeking a tool that is essentially an alerts generator—the data in question is already being stored or logged by one or many systems but simply isn’t being consumed or surveyed. In some cases, organizations aren’t even aware that this data is already being collected.
If this sounds as though it could be your organization, it’s worth asking whether a new set of alerts will actually reduce overall risk, or whether—as IBM found—you may be entering territory in which the sheer quantity and variety of alerts begins to correlate inversely with effective detection and response.
In cases like this, it may not be that a net-new tool is needed—or at least not of the kind being sought.
In many cases, what’s actually needed are other kinds of solutions able to better make use of, enhance, or contextualize the already overwhelming amount of cybersecurity data being generated in-house. Examples include:
Properly used SIEM or SOAR solutions that are able to begin to synthesize and make use of the data and alerts already being generated, in bulk and at computational speed, lightening some of the load on human SOC team members
Professional services or cybersecurity engineering work to usefully configure and deeply integrate a company’s existing security stack, so that the whole of the cybersecurity organization can be far more than the sum of its parts, massively reducing duplicate alerts and alerts that are “obviously” false once information from across all systems is taken into account
Simple SOC and cybersecurity service outsourcing via MSPs and MSSPs, who are actually experts at many of the above items and can relieve particularly mid-sized companies and small enterprises of the need to operate outside of core competencies by trying to maintain a cybersecurity function
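As a toy illustration of the first item, the sketch below shows the kind of synthesis a SIEM or SOAR layer performs before a human ever sees an alert: collapsing repeated alerts from the same source, rule, and host into a single incident. The alert schema, field names, and ten-minute window are illustrative assumptions, not any vendor’s actual format.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative correlation window: repeats within 10 minutes of an
# incident's first alert are folded into that incident (assumption).
WINDOW = timedelta(minutes=10)

def deduplicate(alerts):
    """Collapse alerts sharing (source, rule, host) within WINDOW into one incident."""
    groups = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        key = (alert["source"], alert["rule"], alert["host"])
        bucket = groups[key]
        if bucket and alert["time"] - bucket[-1]["time"] <= WINDOW:
            bucket[-1]["count"] += 1              # fold into the open incident
        else:
            bucket.append({**alert, "count": 1})  # start a new incident
    return [incident for bucket in groups.values() for incident in bucket]

# Four raw alerts: three rapid-fire repeats, then one hours later.
raw = [
    {"source": "edr", "rule": "bruteforce", "host": "web01",
     "time": datetime(2023, 1, 1, 9, m)} for m in (0, 2, 4)
] + [
    {"source": "edr", "rule": "bruteforce", "host": "web01",
     "time": datetime(2023, 1, 1, 13, 0)},  # outside the window: new incident
]

incidents = deduplicate(raw)
print(len(raw), "->", len(incidents))  # 4 -> 2
```

Real SIEM correlation rules are far richer than this, of course, but even this trivial collapse cuts the queue in half, which is exactly the load reduction the bullet above describes.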
Toward a World With Less Alert Fatigue
The problem, of course, is that the items above aren’t as glamorous or compelling as yet another highly technical detection solution that dazzles with jargon and promises to shine a bright light on bad guys caught “in the act.”
But alert fatigue remains a real problem—and one that will only accelerate as new technology verticals continue to emerge and the pervasiveness of technology across more venerable industries continues to increase. We increasingly know that detection isn’t merely a technical problem, but is in fact—and especially—a human factors problem.
We also know in 2023 that there is more data of higher quality around us than ever before—including in security functions. In a sense, this is precisely the problem.
In 2023 and beyond, it may be time for organizations to begin asking themselves whether they need another edge, on-surface, or detection and response tool that will generate yet more alerts and false alarms, or whether they really need to invest in making the tools they already have work together and work better.
This investment may take the form of additional, often-neglected integration work; solutions for data enrichment, correlation, and automation rather than just generation; or specialist services able to make alerts meaningful rather than merely overwhelming. ■