Pangea Unveils Suite of AI Security Guardrails to Address LLM Software Risks and Accelerate AI Development; Debuts $10,000 Jailbreak Competition
Pangea introduces AI Security Guardrails to mitigate LLM software risks and boost AI development; launches $10,000 Jailbreak Competition to enhance model robustness.

Pangea, a leading provider of security guardrails, today announced the general availability of AI Guard and Prompt Guard to secure AI, defending against threats like prompt injection and sensitive information disclosure. Alongside the company’s existing AI Access Control and AI Visibility products, Pangea now offers the industry’s most comprehensive suite of guardrails to secure AI applications.
“As companies race to build and deploy AI apps via RAG and agentic frameworks, integrating LLMs with users and sensitive data introduces substantial security risks,” said Oliver Friedrichs, CEO and Founder of Pangea. “New attacks surface daily, requiring countermeasures to be rolled out equally fast. As a proven and trusted partner in the cybersecurity industry, Pangea constantly identifies and responds to new generative AI threats, before they can cause harm.”
“I’ve seen firsthand how vulnerabilities in computer systems can lead to damaging real-world impacts if left unchecked. AI’s potential for autonomous action could amplify these consequences,” said Kevin Mandia, Founder of Mandiant, and Strategic Partner at Ballistic Ventures. “Pangea’s security guardrails draw from decades of cybersecurity expertise to deliver essential defenses for organisations building AI software.”
Accelerating Secure AI Software Delivery
Pangea AI Guard prevents sensitive data leakage and blocks malicious and unwanted content like profanity, self harm, and violence. Pangea employs over a dozen different detection technologies to inspect and filter AI interactions, including over 50 types of confidential and personally identifiable information. Threat intelligence is provided by partners Crowdstrike, DomainTools, and ReversingLabs, providing millions of threat intelligence data points to scan files, IPs, and domains.
The system can redact, block or disarm offending content and also offers a unique format preserving encryption feature that protects data while maintaining its data structure and schema so it does not break database formats.
Pangea Prompt Guard analyzes user and system prompts to block jailbreak attempts and organisational limit violations. Using a defense-in-depth approach, it detects prompt injection attacks through heuristics, classifiers, and custom-trained large language models that can reliably detect attack techniques such as token smuggling, alternate language attacks, and indirect prompt injection with over 99% efficacy.
Grand Canyon Education chose Pangea to secure its internal AI chatbot platform from the risk of sensitive data leakage. “What I love about Pangea is I can provide an API centric solution out of the box to developers that automatically redacts sensitive information at machine speed without any end user impact or user experience change,” said Mike Manrod, Chief Information Security Officer at Grand Canyon Education. “If you try to put a fence around AI to block its use people will find workarounds, so instead we created a path of least resistance with Pangea to make secure AI software development an easy and obvious choice.”
“The introduction of Pangea’s new offerings is a significant development in the field of AI security, particularly given the increasing importance of robust guardrails,” said Karim Faris, General Partner at GV. “The team has taken a comprehensive approach to the OWASP Top Ten Risks for LLM Applications and has established expertise in security innovation, including the creation of SOAR. We are highly optimistic about Pangea’s future.”
Registration Now Open for Pangea’s AI Virtual Escape Room Challenge
To showcase the complexity of generative AI security threats, Pangea is launching “The Great AI Escape’ Virtual Escape Room Challenge, an online competition featuring three virtual escape rooms in which players must cajole an AI room supervisor to reveal a series of passcodes using prompt engineering techniques to evade controls placed in each room. The first escape room will unlock on March 3rd, 2025.
The online challenge features:
- Three themed escape rooms with increasingly difficult levels
- Multiple security challenges within each room
- Scoring based on successful key unlocks and player prompt efficiency
- $10,000 in total prize money awarded, split by room
- Prizes awarded to the highest scoring player in each room who escapes
Registration opens today at: https://pangea.cloud/landing/ai-escape-room