Robotic Process Automation (RPA) is taking enterprise computing by storm.
Something like a cross between machine learning and old-fashioned macros or scripts, RPA bots automate complex, multi-faceted tasks without relying on APIs, scripting languages, or software engineering techniques.
Naturally, RPA has also created new cybersecurity concerns. Much of the discussion thus far has been focused on how to manage the credentials, authentication workflows, and privilege escalations that the RPA bots themselves must navigate as they carry out their duties.
As is too often the case, however, the human side of the cybersecurity equation is less well considered. What do the human cybersecurity issues look like when it comes to robotic process automation?
Let's take a look.
- Human users creating bots inappropriately. Barriers to RPA creation are disappearing as RPA software platforms evolve and iterate toward ever greater user-friendliness. But as the pool of users able to create bots grows, so too does the risk of unauthorized automation that negatively impacts cybersecurity, resources, policy compliance, or critical business areas.
- Human users secretly turning good bots bad. Insider threats are bad enough when renegade users have to point and click their way through sabotage, data exfiltration, or other malicious tasks. When renegade users have the privileges necessary to create or modify bots, the risks and potential damage are orders of magnitude greater.
- Human attribution being lost in the fog of automation. With long chains of operations now being carried out by interlinked "teams" of RPA bots, each relying on bot-exclusive credentials for privileged tasks, audit trails can rapidly get confusing. But it's as important as ever to know who's ultimately responsible for what happens to critical data.
Each of these items points to an RPA security difficulty that concerns the human users around bots, rather than the bots themselves—and because of this, each is likely amenable to a behavioral-biometric solution.
Inappropriate or Unauthorized RPA Deployment
There's a growing buzz about CIOs or CISOs discovering and then having to put the brakes on ad-hoc automation projects, implemented by well-meaning go-getters, in places where automation either doesn't yet belong or won't ever belong, for whatever reason.
With RPA technology rapidly trending toward a world in which almost any UI can be automated without extensive technical training, what's needed in a growing number of cases is a way to cordon off particular systems or UIs and prevent them from being automated without approval and planning.
Continuous behavioral biometrics agents likely have a role to play here, as behavioral biometrics technologies are in many ways well-suited to differentiating between known human users and anyone or anything else—including a bot.
Under this approach, security staff would use behavioral biometrics agents to "cordon off" particular accounts or workflows both from unauthorized users and from automated RPA bots. If a bot were then detected trying to use them, the protected accounts or workflows would lock and simply refuse to operate until an authorized human user returned to the controls.
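To make the idea concrete, here is a minimal Python sketch of the kind of signal such an agent might act on. It is an assumption-laden simplification: real behavioral biometrics products model far richer features (keystroke cadence, mouse dynamics, navigation habits), and the threshold and event format below are invented purely for illustration.

```python
import statistics

# Illustrative sketch only: scripted RPA input tends to arrive with far more
# uniform timing than human input. The threshold value is an assumption.
BOT_JITTER_THRESHOLD_MS = 5.0

def looks_automated(event_timestamps_ms):
    """Return True if inter-event timing is suspiciously uniform (bot-like)."""
    if len(event_timestamps_ms) < 10:
        return False  # too little signal to decide either way
    gaps = [b - a for a, b in zip(event_timestamps_ms, event_timestamps_ms[1:])]
    return statistics.stdev(gaps) < BOT_JITTER_THRESHOLD_MS

# A cordoned-off account or workflow would run this kind of check before every
# sensitive step and refuse to proceed whenever the session scores as automated.
human_session = [0, 180, 390, 530, 810, 1020, 1150, 1420, 1600, 1890, 2100]
bot_session = [i * 50 for i in range(12)]  # perfectly regular 50 ms cadence

print(looks_automated(human_session))  # False: human-like variability
print(looks_automated(bot_session))    # True: lock the protected resource
```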
Protecting RPA Bot Integrity
No matter how many layers of automation are in place in a workforce of bots, at some fundamental level, human users are creating, managing, and maintaining all of them.
The efficiency, productivity, and presumed authorization of deployed bots, however, create unusually large reservoirs of risk. In particular, a privileged human insider with malicious intent could modify a bot in some small way to carry out an illicit step or action, potentially thousands or even millions of times.
By deploying behavioral-biometric authentication to protect the "workshop" environments where bots are trained, built, modified, or deployed, organizations can protect the bots and their integrity.
Thanks to the properties of behavioral biometrics, bot modifications and other key administrative actions can be made unavailable to anyone other than the specific individuals with the proper clearance and expertise, rather than merely to anyone who gets hold of administrative account credentials.
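As a rough sketch of what that gating could look like, the snippet below permits a bot modification only when a hypothetical behavioral-biometric verdict matches the claimed identity to a cleared RPA editor with high confidence. The verdict fields, confidence threshold, and editor list are illustrative assumptions, not any particular product's API.

```python
from dataclasses import dataclass

# Assumed values for illustration only.
CONFIDENCE_THRESHOLD = 0.9
AUTHORIZED_BOT_EDITORS = {"alice", "priya"}  # cleared, trained RPA developers

@dataclass
class BiometricVerdict:
    claimed_user: str   # who the credentials say is at the keyboard
    matched_user: str   # who the behavioral profile actually matches
    confidence: float   # 0.0 - 1.0 score from the continuous agent

def may_modify_bot(verdict: BiometricVerdict) -> bool:
    """Allow bot changes only when the verified person is a cleared editor."""
    return (
        verdict.confidence >= CONFIDENCE_THRESHOLD
        and verdict.matched_user == verdict.claimed_user
        and verdict.matched_user in AUTHORIZED_BOT_EDITORS
    )

# Stolen admin credentials alone no longer suffice: the session must also
# behave like the person those credentials belong to.
print(may_modify_bot(BiometricVerdict("alice", "alice", 0.97)))    # True
print(may_modify_bot(BiometricVerdict("alice", "unknown", 0.35)))  # False
```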
Human Attribution for Robotic Process Automation
Just as importantly, behavioral biometrics can be used to ensure that any modifications or key administrative actions that do take place are authoritatively tied to real human individuals.
This creates a kind of biometric chain of custody for RPA bots themselves—enabling both compliance and an incident audit trail if anything goes wrong. Similar attribution can exist for the initiation of bot workflows, if the users triggering them are authenticated with behavioral biometrics.
Deployed carefully in this way, behavioral biometrics can concisely illuminate all of the human touchpoints and actions that underlie RPA behavior, resulting in a clear trail of RPA responsibility rather than difficult-to-traverse oceans of forensic logging data.
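One way to picture such a chain of custody is a tamper-evident, hash-chained log in which every bot modification, deployment, or workflow trigger records the biometrically verified human behind it. The sketch below is an illustrative assumption rather than a prescribed design; the field names and scores are invented.

```python
import hashlib, json, time

def append_custody_record(chain, bot_id, action, verified_user, confidence):
    """Append a record tying a bot action to a biometrically verified human.

    Each record includes the hash of the previous one, so tampering with any
    earlier entry breaks the chain and is detectable on audit.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "timestamp": time.time(),
        "bot_id": bot_id,
        "action": action,                # e.g. "modified", "deployed", "triggered"
        "verified_user": verified_user,  # identity confirmed by behavioral biometrics
        "confidence": confidence,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

chain = []
append_custody_record(chain, "invoice-bot-7", "modified", "alice", 0.96)
append_custody_record(chain, "invoice-bot-7", "triggered", "priya", 0.94)
# An auditor can now walk the chain and answer "which human was behind this?"
# for every step, instead of sifting through undifferentiated forensic logs.
```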
More than Macros, Less than Human
The buzz about RPA is that these aren't "just macros" any longer—they're smart robots, able to automate complex, UI-driven tasks in ways that are adaptive and intelligent.
And this may well be so. But however fast their population explodes, RPA bots remain unable to exercise judgment, protect themselves from misuse, take responsibility for their actions, or be held liable for damage.
Those things fall to human users. And for identifying human users in day-to-day tasks, including RPA tasks, behavioral biometrics remains the best available technology. ■