AI Policy
Escape uses AI models across AI Pentesting, Escape Copilot, and AI Remediation. Our AI Policy spells out which models run where, which data they see, whether that data is ever used for model training (it's not), and what retention controls apply.
Read the Full Policy
The signed, dated, full-length AI Policy is available as a PDF:
Download the Escape AI Policy (PDF)
The PDF is the authoritative version. Any discrepancy between the summary on this page and the PDF is resolved in favor of the PDF.
Summary
- Models used: frontier models from OpenAI, Anthropic, and Google, plus internal models for specific tasks. The current list, version numbers, and their hosting locations are in the PDF.
- Data isolation: customer scan data is not used to train any model Escape consumes or operates. No customer data is sent as a training sample to any third-party provider.
- Prompt handling: prompts sent to third-party providers are covered by each provider's zero-retention policy where available (for example OpenAI's Zero Data Retention terms). Where zero-retention isn't offered for a given model, we name the model in the PDF and describe what is retained.
- Model output review: AI-generated output (remediations, agent-chosen attack paths, chat responses) is checked by deterministic validators against the live scanner state before it surfaces to users, so hallucinated findings are caught.
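The validation step described above can be pictured as a deterministic membership check: an AI-generated artifact only surfaces if every finding it references exists in the live scanner state. This is an illustrative sketch only, not Escape's internal implementation; the names `Finding` and `validate_remediation` and the field layout are hypothetical.

```python
# Hedged sketch: Escape's actual validators are internal and unpublished.
# All names and fields here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    finding_id: str
    endpoint: str

def validate_remediation(ai_output: dict, live_findings: list[Finding]) -> bool:
    """Deterministically check that an AI-generated remediation references
    a finding and endpoint that actually exist in the live scanner state."""
    known = {(f.finding_id, f.endpoint) for f in live_findings}
    return (ai_output.get("finding_id"), ai_output.get("endpoint")) in known

live = [Finding("F-101", "/api/users")]
validate_remediation({"finding_id": "F-101", "endpoint": "/api/users"}, live)   # valid
validate_remediation({"finding_id": "F-999", "endpoint": "/admin"}, live)       # hallucinated, rejected
```

The point of the sketch is that the gate is a plain lookup with no model in the loop: the same input always passes or fails the same way, which is what makes the review deterministic.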
How This Connects to Your Compliance Program
- Privacy and Security documents the broader data-handling posture; the AI Policy covers the AI-specific subset of it.
- Rotating Keys documents the encryption that protects any AI-generated artifact at rest.
- Private Tenant pins model invocation to a tenant scope so outputs don't flow into shared infrastructure.
Questions
Policy questions go to privacy@escape.tech. Legal questions should be routed through your procurement contact.