# Injection: LLM Sensitive Information Disclosure

**Identifier:** `llm_sensitive_information_disclosure`
## Scanner(s) Support

| GraphQL Scanner | REST Scanner | WebApp Scanner |
|---|---|---|
| ✓ | ✓ | ✓ |
## Description

LLMs can inadvertently disclose sensitive data through their responses, exposing private or confidential information. This happens when input data is not properly validated, or when a model recalls details from its training data that include proprietary or confidential material. The risk is that attackers, or simple mishandling, can trigger leaks that compromise privacy or reveal secrets. Developers often overlook input sanitization and output monitoring, and the risk compounds when models are fed sensitive data without adequate safeguards.
**References:**

- https://genai.owasp.org/llmrisk/llm06-sensitive-information-disclosure/
- https://owasp.org/www-project-top-10-for-large-language-model-applications/
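The output-monitoring mitigation mentioned above can be sketched as a simple redaction pass over model responses. The patterns and the `redact_sensitive` helper below are illustrative assumptions, not part of the scanner; a real deployment would use a dedicated PII/secret-detection library with broader coverage.

```python
import re

# Illustrative patterns for a few common sensitive tokens (assumed examples).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_sensitive(text: str) -> tuple[str, list[str]]:
    """Replace matches with a placeholder and report which categories fired."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

redacted, hits = redact_sensitive(
    "Contact alice@example.com, key sk-abcdef1234567890XY"
)
```

Running such a filter on every response before it reaches the client catches leaks regardless of whether they originated from user input or from training data recall.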
## Configuration

### Example

Example configuration:
```yaml
---
security_tests:
  llm_sensitive_information_disclosure:
    assets_allowed:
      - REST
      - GRAPHQL
      - WEBAPP
    skip: false
```
## Reference

**assets_allowed**

Type: `List[AssetType]`

List of assets that this check will cover.

**skip**

Type: `boolean`

Skip the test if `true`.
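A minimal sketch of validating one check's configuration against the reference above. The field names (`assets_allowed`, `skip`) come from this page; the `validate_check_config` helper and the dict standing in for the parsed YAML are illustrative assumptions, not scanner code.

```python
# Asset types accepted by this check, per the example configuration.
ALLOWED_ASSETS = {"REST", "GRAPHQL", "WEBAPP"}

def validate_check_config(config: dict) -> list[str]:
    """Return a list of problems found in one check's configuration."""
    problems = []
    for asset in config.get("assets_allowed", []):
        if asset not in ALLOWED_ASSETS:
            problems.append(f"unknown asset type: {asset}")
    if not isinstance(config.get("skip", False), bool):
        problems.append("skip must be a boolean")
    return problems

# Mirrors the YAML example above, as it would look after parsing.
cfg = {
    "assets_allowed": ["REST", "GRAPHQL", "WEBAPP"],
    "skip": False,
}
```

Validating early, before the scan starts, surfaces typos such as an unknown asset type instead of silently skipping coverage.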