Privacy Q&A

Q: Can Escape access customer data to train its model?

A: Every scan starts with a completely clean slate. Our algorithm initiates a fresh session every time it runs—no data is stored or carried over from previous scans. This design ensures that no customer data is ever used to train or update our model.
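
As a mental model only (the names below are illustrative, not Escape's actual code), the clean-slate behaviour can be pictured like this: all working state for a scan lives in one in-memory object that is created when the scan starts and discarded when it ends, so there is nothing left to carry over or to train on.

```python
from dataclasses import dataclass, field


# Illustrative only: all state for one scan lives in a single in-memory
# object, created fresh per run and discarded when the scan ends.
@dataclass
class ScanSession:
    target_url: str
    observations: list = field(default_factory=list)  # exists only for this run

    def record(self, observation: dict) -> None:
        self.observations.append(observation)


def run_scan(target_url: str) -> list:
    session = ScanSession(target_url)  # clean slate: nothing inherited from earlier scans
    session.record({"endpoint": "/health", "status": 200})
    return list(session.observations)  # results go to the caller; the session itself is
                                       # garbage-collected, never persisted, never used
                                       # to train or update any model


if __name__ == "__main__":
    print(run_scan("https://api.example.com"))
```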

Q: How do we ensure the algorithm isn’t vulnerable to hallucination-based attacks? For example, could someone intentionally manipulate inputs/outputs to trigger lots of false positives?

A: Our scanning process does not use large language models (LLMs) during active scans, eliminating any risk of hallucination. LLMs are only used in a controlled R&D setting to refine the underlying graph model. This clear separation keeps the scanning process robust against this kind of manipulation and against the hallucination-driven false positives it could otherwise produce.

Q: Since we’re “feedback-driven”, how do we prevent someone from injecting a bunch of malicious or low-quality feedback that could degrade the model’s performance?

A: The “feedback” used in our scans is strictly for real-time understanding of how the API responds. It guides the next action during a single scan and is never stored or used to retrain our model. This approach safeguards our system from degradation caused by low-quality or maliciously injected feedback.
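
To illustrate what “feedback” means here, the hypothetical sketch below (the function, endpoints, and status handling are made up for illustration and use the requests library) shows a loop where each response only influences the next request within the same run, and all of that state vanishes when the function returns.

```python
import requests


# Hypothetical sketch of a single-scan feedback loop: each response only
# influences the next request within this run; nothing is persisted or
# fed back into any model afterwards.
def feedback_driven_probe(base_url: str, paths: list) -> list:
    findings = []
    rate_limited = False

    for path in paths:
        if rate_limited:
            break  # feedback from an earlier response changes the next action

        resp = requests.get(f"{base_url}{path}", timeout=10)
        if resp.status_code == 429:
            rate_limited = True  # observed behaviour, held in memory only
        elif resp.ok:
            findings.append({"path": path, "status": resp.status_code})

    return findings  # all loop state is discarded when the function returns


if __name__ == "__main__":
    print(feedback_driven_probe("https://api.example.com", ["/health", "/users"]))
```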

Q: Are we relying on any open-source libraries that might introduce risks, like prompt injection backdoors?

A: No. Our AI model is entirely proprietary and closed source. This gives us full control over the design and security of our technology and eliminates the risks that external open-source dependencies can introduce, such as prompt-injection backdoors.

Q: Does our AI have the ability to call APIs or access customer data, even if it treats the application as a black box? How do we securely associate the analyzed data with the correct customer?

A: Each scan is executed in its own dedicated environment, completely independent of all others. There is no ongoing, global AI that aggregates data or continues learning between scans. This isolation guarantees that all data analyzed during a scan is securely linked only to that specific customer, preventing any crossover or unintended access.
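
A simplified, hypothetical picture of this isolation (the class, IDs, and fields are illustrative): each scan gets its own workspace bound to exactly one customer ID, and there is no shared global store that a later scan could read from.

```python
import uuid
from dataclasses import dataclass, field


# Hypothetical illustration of per-scan isolation: every scan gets its own
# workspace bound to one customer ID, with no shared global store.
@dataclass
class ScanWorkspace:
    customer_id: str
    scan_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    results: list = field(default_factory=list)


def run_isolated_scan(customer_id: str) -> ScanWorkspace:
    workspace = ScanWorkspace(customer_id=customer_id)
    # Scanning happens here; anything found is written only into this
    # workspace, so every result is attributable to exactly one customer.
    workspace.results.append({"finding": "example", "customer": customer_id})
    return workspace


if __name__ == "__main__":
    a = run_isolated_scan("customer-a")
    b = run_isolated_scan("customer-b")
    assert a.results is not b.results  # no crossover between scans
    print(a.scan_id, b.scan_id)
```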

Q: How did we implement RBAC for interacting with the model (configuration, training, etc.)? Any tips for others?

A: Role-Based Access Control (RBAC) is managed as a standard part of our overall SaaS security framework and is fully documented within our platform. It is separate from the internal workings and training of the AI model. For detailed information, please refer to our RBAC documentation, which thoroughly outlines our implementation and best practices.
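
As a general tip for others rather than a description of our internal code, a deny-by-default RBAC check can be as simple as a role-to-permission map that is consulted before any model-related action runs. The sketch below is purely illustrative; role names and permission strings are assumptions.

```python
# Hypothetical RBAC sketch: map roles to permissions and check the caller's
# role before any model-related action is executed.
ROLE_PERMISSIONS = {
    "admin":    {"model.configure", "scan.run", "scan.read"},
    "operator": {"scan.run", "scan.read"},
    "viewer":   {"scan.read"},
}


def authorize(role: str, permission: str) -> None:
    allowed = ROLE_PERMISSIONS.get(role, set())  # unknown roles get nothing
    if permission not in allowed:
        raise PermissionError(f"role {role!r} lacks permission {permission!r}")


def configure_model(role: str, settings: dict) -> dict:
    authorize(role, "model.configure")  # deny by default
    return {"applied": settings}


if __name__ == "__main__":
    print(configure_model("admin", {"scan_depth": 3}))  # allowed
    try:
        configure_model("viewer", {"scan_depth": 3})    # denied
    except PermissionError as exc:
        print(exc)
```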