Injection: LLM Insecure Output Handling¶
Identifier:
`llm_insecure_output_handling`
Scanner(s) Support¶
| GraphQL Scanner | REST Scanner | WebApp Scanner | ASM Scanner |
|---|---|---|---|
Description¶
Insecure output handling vulnerabilities arise when downstream consumers blindly trust LLM-generated content. If the model is asked to emit HTML, JavaScript, or template fragments, those payloads can land verbatim in a browser or template engine and trigger XSS, template injection, or code injection.
How we test: We ask the LLM to emit a small set of unsanitised payloads verbatim: a `<script>` tag, a Jinja-style `{{7*7}}` expression, an `<img onerror=...>` payload, and a code-injection block. We then inspect the response body for the literal markup and for the canonical "rendered" markers (e.g. the literal `49` produced by server-side template evaluation of `{{7*7}}`). When found, we flag the endpoint at Medium severity. We do not drive the agentic crawler to render the response in a browser; this is a pure DAST pattern-match check.
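The pattern-match step above can be sketched as a simple substring check. This is a minimal illustration only: `PAYLOADS`, `RENDERED_MARKERS`, and `classify` are hypothetical names, not the scanner's actual implementation or payload set.

```python
from typing import Optional

# Hypothetical payload set mirroring the probes described above
# (illustrative; the scanner's real payloads are not published here).
PAYLOADS = [
    "<script>alert(1)</script>",     # classic XSS probe
    "{{7*7}}",                       # Jinja-style template expression
    "<img src=x onerror=alert(1)>",  # attribute-based XSS probe
]

# If a payload was *evaluated* server-side rather than merely echoed,
# we expect its rendered marker instead of the literal markup.
RENDERED_MARKERS = {"{{7*7}}": "49"}

def classify(response_body: str) -> Optional[str]:
    """Label the response if it echoes a payload verbatim or contains
    the marker produced by server-side evaluation; None otherwise."""
    for payload in PAYLOADS:
        if payload in response_body:
            return f"reflected:{payload}"
        marker = RENDERED_MARKERS.get(payload)
        if marker and marker in response_body:
            return f"rendered:{payload}"
    return None
```

A `reflected:` hit means the literal markup survived into the response (potential XSS on render); a `rendered:` hit means the template engine already evaluated the payload server-side, which is the stronger signal.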
Every probe emits a `context.info` event with the full prompt, the redacted response excerpt, and the raw HTTP request/response as attachments, so customers can independently audit what was sent.
References:
- https://genai.owasp.org/llmrisk/llm02-insecure-output-handling/
- https://cwe.mitre.org/data/definitions/79.html
- https://cwe.mitre.org/data/definitions/94.html
Configuration¶
Example¶
Example configuration:
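The exact top-level layout depends on your scanner's configuration schema; a minimal sketch using the documented `skip` option might look like:

```yaml
# Hypothetical nesting; adapt the surrounding keys to your scanner's schema.
checks:
  llm_insecure_output_handling:
    skip: false  # set to true to disable this test
```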
Reference¶
skip¶
Type : boolean
Skip the test if set to `true`.