# Security Test: LLM Sensitive Information Disclosure
## Description
Default Severity:
LLM applications can inadvertently disclose sensitive data through their responses, exposing private or confidential information. This happens when user-supplied input is not properly validated, or when the model reproduces details memorized from its training data, which may include proprietary or confidential material. Attackers can deliberately probe for such leaks, and careless handling can surface them unintentionally, compromising privacy and revealing secrets. Developers often neglect to sanitize inputs and monitor outputs, and the risk compounds when sensitive data is fed to the model without proper precautions.
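As a rough illustration of the input-sanitization and output-monitoring side of this, the sketch below wraps a generic LLM call with a simple pattern-based redaction step. The patterns, the `call_llm` placeholder, and the `guarded_completion` helper are illustrative assumptions for this page, not part of this test or of any particular SDK.

```python
import re

# Illustrative patterns for common sensitive-data shapes; a real deployment
# would tune these to its own data and likely pair them with a DLP service.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact_sensitive(text: str) -> tuple[str, list[str]]:
    """Replace matches of known sensitive patterns and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings


def guarded_completion(prompt: str, call_llm) -> str:
    """Wrap an arbitrary LLM call with input and output screening.

    `call_llm` is a placeholder for whatever client function produces a
    completion; it does not refer to a specific vendor API.
    """
    # Screen the prompt so secrets pasted by users are not forwarded verbatim.
    clean_prompt, prompt_findings = redact_sensitive(prompt)
    # Screen the response so memorized or leaked data is not returned verbatim.
    response, response_findings = redact_sensitive(call_llm(clean_prompt))
    if prompt_findings or response_findings:
        # Monitoring hook: log or alert on the categories that were redacted.
        print(f"redacted categories: {sorted(set(prompt_findings + response_findings))}")
    return response


if __name__ == "__main__":
    # Stand-in model that leaks a fabricated secret, to exercise the filter.
    fake_llm = lambda p: "Sure, the admin key is sk-ABCDEF1234567890XYZ."
    print(guarded_completion("What is the admin key?", fake_llm))
```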
Reference:
## Configuration
Identifier:
`injection/llm_sensitive_information_disclosure`
## Examples
All available configuration options:
## Compliance and Standards
| Standard | Value |
| --- | --- |
| OWASP API Top 10 | API8:2023 |
| OWASP LLM Top 10 | LLM06:2023 |
| PCI DSS | 6.5.1 |
| GDPR | Article-32 |
| SOC2 | CC6 |
| PSD2 | Article-95 |
| ISO 27001 | A.12.2 |
| NIST | SP800-53 |
| FedRAMP | SI-3 |
| CWE | 200 |
| CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:N |
| CVSS Score | 6.5 |