The first half of 2023 has seen explosive growth in AI adoption. Employees across every sector of the economy are finding ways to use platforms like ChatGPT to accomplish rote tasks in record time—but this growth has run well ahead of governance and guardrails at most organizations.
Our customers have told us that AI is too pervasive and too promising to simply block, but that the leakage of confidential data into AI systems as employees work is a major concern.
At Plurilock, we’ve heard these concerns and have been working on a way for businesses to put guardrails around generative AI use without negatively affecting productivity.
Introducing Plurilock AI PromptGuard
Plurilock AI PromptGuard is a new kind of security tool for generative AI platforms—one that:
Detects sensitive data items in prompts
Anonymizes or redacts these data items before the prompt is delivered to the AI platform
Unredacts these data items when the answer is returned, before it is shown to the user
The result is a way for employees to continue to engage in productive, back-and-forth work sessions with generative AI systems—without leaking confidential data, and without disruption to their workflow or user experience.
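To make the detect → redact → unredact flow concrete, here is a minimal sketch in Python. Plurilock's actual detection logic is not public; this illustration uses a simple regular expression for email addresses as a stand-in for sensitive-data detection, and the function and token names are hypothetical.

```python
import re

# Illustrative only: match email addresses as an example of "sensitive data."
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace each detected item with a placeholder token, and keep a
    mapping so the AI platform's answer can be restored later."""
    mapping: dict[str, str] = {}

    def _sub(match: re.Match) -> str:
        token = f"<<ITEM_{len(mapping)}>>"
        mapping[token] = match.group(0)
        return token

    return EMAIL_RE.sub(_sub, prompt), mapping

def unredact(answer: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the returned answer before it is
    shown to the user."""
    for token, original in mapping.items():
        answer = answer.replace(token, original)
    return answer

# Round trip: the AI platform only ever sees the placeholder token.
safe, mapping = redact("Email alice@example.com about the merger.")
# safe == "Email <<ITEM_0>> about the merger."
answer_from_ai = safe.replace("merger", "deal")  # simulated AI response
restored = unredact(answer_from_ai, mapping)
# restored == "Email alice@example.com about the deal."
```

The key design point is that the placeholder mapping never leaves the user's side, so the AI platform processes a fully anonymized prompt while the user sees a fully restored answer.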
PromptGuard’s ability to hide confidential data from AI systems without hiding data from the user is the basic guardrail that businesses and government agencies have been searching for.
Early Access to Plurilock AI PromptGuard
Excitement around PromptGuard is already high. As a result, we're onboarding customers to a closed beta via the Plurilock Early Access Program (EAP) to manage both load and our ability to collaborate with customers on beta testing and refinement.
If you’re interested in PromptGuard, here’s how to get started:
Customers: Visit https://plurilock.com/ai-beta/ to request a beta invitation
Existing partners: Inquire with your Plurilock representative about becoming EAP-certified
Potential partners: Visit https://plurilock.com/partner/
We are excited to add AI safety capability to the Plurilock AI platform, and to support businesses seeking strong guardrails for AI that don't obstruct its use by their employees. ■