Lost in the shuffle of the "zero trust" marketing melee over the last several years is the very basic idea that nobody should be trusted.
Nobody.
Yes, many companies today believe themselves to be zero trust companies—but most still make fundamental errors in zero trust authentication practices.
Worse, they lose the thread entirely by doing one simple thing: trusting anyone who can successfully authenticate.
Why Trusting Authenticated Users is Bad
Though the per-study data varies, virtually every study agrees that a double-digit percentage of data breaches today results from insider threats. In many cases, these breaches are caused by previously good employees with authorized credentials who:

- Decided to become bad employees
- Made serious, uncharacteristic mistakes or showed lapses in judgment
- Silently delegated their access or login to others
Whatever the reason, these kinds of threats are amongst the most worrying for any organization. After all, at least a few employees need at least some regular access to critical systems. Without such access, no work can actually be done.
Yet no amount of credential control can prevent the worst from occurring when a legitimate user—a recognized employee with valid permission and login credentials—does something seriously wrong for the very first time.
Whether that something is willfully malicious, a careless mistake made under unrelated duress, or just cutting corners by having someone else "fill in" for them, the results can be equally catastrophic.
The obvious solution? Stop trusting them. But how might you go about doing this?
Finding the Right Signal in the Noise
To avoid placing trust in legitimate users without hamstringing their productivity, what's needed is a technology that can evaluate their ongoing activity in real time and detect things like:
- Significant changes in mood or agitation level
- Uncharacteristically careless behavior
- Valid credentials that are being used by an unintended third party
None of the typical authentication technologies can spot these things. In fact, passwords, hardware tokens, mobile SMS codes, fingerprint scans, and other conventional authentication strategies are entirely blind to them.
The same is true for simple logging or macro-behavioral tracking.
Monitoring the tasks that a user performs or the websites that they visit won't enable you to anticipate the moment at which they suddenly deviate from typical behavior and bring down a critical system, or the moment at which they steal critical data—even if they've been planning an attack for months.
The right signal for this kind of intelligence would capture information about pre-conscious mental states in some way—whether a user is uncharacteristically nervous, for example, or is not in their right mind at the moment, or is indeed not the same mind at all, but actually someone else.
As it turns out, such a signal is available in today's cybersecurity world. Where?
Detecting Strange and Out-of-Character Behavior
Authentication technologies that rely on behavioral biometrics—the technology that we specialize in at Plurilock™—provide the right kind of signal to address these sorts of risks.
Behavioral biometrics analyzes micro-patterns and variations in user movement in the background, as users work. Recognition is based on a user's evolving typing cadence and unconscious patterns in pointer movement, rather than on passwords, codes, tokens, or fingerprints.
But what does a technology for authenticating users have to do with finding ways to avoid trusting them once recognized?
In fact, stress, mental incapacitation or intoxication, and the substitution of one user for another all change the characteristic movement patterns that behavioral biometric systems are designed to identify. These changes are often noticed within seconds—and the user in question can immediately be blocked.
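To make the idea concrete, here is a deliberately minimal sketch of the underlying principle: compare a user's recent keystroke cadence against an enrolled baseline and revoke the session when the drift grows too large. Everything in it is a simplified illustration—the dwell-time feature, the statistics, the sample numbers, and the threshold are all hypothetical—and it is not a description of Plurilock's actual models, which draw on far richer behavioral data.

```python
import statistics

def anomaly_score(baseline: list[float], recent: list[float]) -> float:
    """Z-score-style distance between recent typing cadence and the
    user's enrolled baseline. Higher means "less like this user"."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0  # guard against zero spread
    return abs(statistics.mean(recent) - mu) / sigma

def should_block(baseline: list[float], recent: list[float],
                 threshold: float = 3.0) -> bool:
    """Block when recent behavior drifts too far from the baseline.
    `threshold` is the tunable sensitivity knob discussed later."""
    return anomaly_score(baseline, recent) > threshold

# Enrolled key "dwell times" (milliseconds a key is held) -- sample values
# invented purely for illustration.
baseline = [92.0, 88.0, 95.0, 90.0, 87.0, 93.0, 91.0, 89.0]

# A nervous, impaired, or substituted user often types with a very
# different cadence; these numbers are likewise invented to show that case.
recent = [140.0, 152.0, 133.0, 147.0, 138.0]

if should_block(baseline, recent):
    print("Cadence deviates from enrolled profile: revoke session, re-authenticate")
```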
Recall the list presented a few paragraphs ago:
- Significant changes in mood or agitation level (such as when a user is about to commit a crime)
- Uncharacteristically careless behavior (as might be caused by intoxication or mental incapacitation)
- The presence of an unintended third party (for example, due to user substitution)
Behavioral biometrics can often spot such cases and quickly end access. In a very real and practical way, the user is behaving strangely, and strange behavior and critical systems should never accompany one another.
Plurilock clients report cases of just such detection and rapid remediation in the wild.
Once they get over the surprise, they're pleased to know that this kind of protection is possible—and that it's ongoing.
The Right Level of Trust
It’s true that as a fundamentally statistical technology, behavioral biometrics is unable to state with certainty that—for example—a user is intoxicated or up to no good. It's not a breathalyzer or a mind-reader, after all—and for that reason it's not an evidentiary tool.
It is, however, a stunningly good preventative one.
On one hand, it can detect and act astonishingly quickly on slim indications of "odd" behavior. On the other hand, administrators who deploy it can tweak threshold levels to suit their organization's own needs.
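Continuing the hypothetical sketch from earlier, that tuning can be pictured as nothing more than choosing how far from baseline a user may drift before the system acts. The policy names and values below are illustrative only:

```python
# Reusing anomaly_score, baseline, and recent from the earlier sketch.
# A stricter policy acts on slimmer indications of odd behavior; a more
# lenient one tolerates more day-to-day variance.
POLICIES = {"strict": 2.0, "lenient": 4.0}

score = anomaly_score(baseline, recent)
for name, threshold in POLICIES.items():
    action = "block session" if score > threshold else "allow"
    print(f"{name} policy (threshold={threshold}): {action}")
```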
So it is that Plurilock products in particular offer a way to set an organization's "just-right" level of trust (or, indeed, studied lack of trust) for known users.
For this reason, we argue that products like ours represent the only true path to a deep zero trust implementation—one that doesn't trust users even after they've passed through multiple layers of authentication, yet can maintain this skepticism without preventing everyday work from getting done.
At Plurilock, we think this is what zero trust really ought to mean. ■