Skip to main content

LLM Sensitive Information Disclosure

Description

Large Language Models (LLMs) are powerful tools that can be used to generate text, code, and other content. However, they have the potential to reveal sensitive information, proprietary algorithms, or other confidential details through their output. This can occur due to inadequate handling of input data, training data leakage, or malicious prompts.

Remediation

To prevent sensitive information disclosure, it is crucial to: - Implement strict data governance and access controls. - Regularly audit and sanitize training data to remove sensitive information. - Use differential privacy techniques to protect data during training. - Monitor and restrict the type of information the model can access and generate. - Conduct regular security assessments to identify and mitigate risks.

Configuration

Identifier: injection/llm_sensitive_information_disclosure

Examples

Ignore this check

checks:
injection/llm_sensitive_information_disclosure:
skip: true

Score

  • Escape Severity: HIGH

Compliance

  • OWASP: API8:2023
  • OWASP LLM: LLM06:2023
  • pci: 6.5.1
  • gdpr: Article-32
  • soc2: CC6
  • psd2: Article-95
  • iso27001: A.12.2
  • nist: SP800-53
  • fedramp: SI-3

Classification

  • CWE: 200

Score

  • CVSS_VECTOR: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:N
  • CVSS_SCORE: 6.5

References