Injection: LLM Insecure Output Handling¶
Identifier: `llm_insecure_output_handling`
Scanner(s) Support¶
| GraphQL Scanner | REST Scanner | WebApp Scanner |
|---|---|---|
| ✓ | ✓ | ✓ |
Description¶
LLM insecure output handling means that generated content isn't carefully checked before it is used or displayed. If outputs aren't properly validated, sanitized, or encoded, malicious code or data can slip through, potentially letting attackers inject harmful scripts, redirect requests, or steal sensitive data. Developers might assume the model's output is safe by default, but without careful checks these oversights can open up vulnerabilities such as XSS or SSRF. Simply put, failing to properly handle what the model generates can lead to significant security risks, so it's crucial to treat every output as untrusted.
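As a concrete illustration, the Python snippet below is a minimal sketch of two mitigations this check probes for: HTML-encoding model output before display (against XSS), and allow-listing model-suggested URLs before the backend fetches them (against SSRF). It is not part of the scanner; the helper names, the `ALLOWED_HOSTS` set, and `api.example.com` are all hypothetical.

```python
import html
from urllib.parse import urlparse

# Hypothetical allow-list of hosts the backend may contact.
ALLOWED_HOSTS = {"api.example.com"}

def render_llm_output(llm_text: str) -> str:
    """Encode model output before inserting it into an HTML page,
    so any <script> tags it contains are displayed, not executed."""
    return html.escape(llm_text)

def safe_fetch_url(llm_suggested_url: str) -> str:
    """Validate a model-suggested URL against the allow-list before
    the backend requests it, mitigating SSRF."""
    parsed = urlparse(llm_suggested_url)
    if parsed.scheme != "https" or parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"Refusing to fetch untrusted URL: {llm_suggested_url!r}")
    return llm_suggested_url

# A malicious completion is neutralized before display:
print(render_llm_output('<script>alert("xss")</script>'))
# -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The underlying design point is to treat model output like any other untrusted user input: encode it at every sink and validate it before it drives server-side actions.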
References:
- https://genai.owasp.org/llmrisk/llm02-insecure-output-handling/
- https://owasp.org/www-project-top-10-for-large-language-model-applications/
Configuration¶
Example¶
Example configuration:
```yaml
---
security_tests:
  llm_insecure_output_handling:
    assets_allowed:
      - REST
      - GRAPHQL
      - WEBAPP
    skip: false
```
Reference¶
assets_allowed¶

Type : `List[AssetType]`

List of assets that this check will cover.

skip¶

Type : `boolean`

Skip the test if true.
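For instance, to turn this check off entirely, setting `skip` to true is sufficient (a sketch based on the parameters above):

```yaml
---
security_tests:
  llm_insecure_output_handling:
    skip: true
```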