Skip to main content

LLM Prompt Injection

Description

Large Language Models (LLMs) are powerful tools that can be used to generate text, code, and other content. However, they are vulnerable to prompt injection attacks. This occurs when an attacker manipulates an LLM through crafted inputs, causing the model to perform unintended actions. This can lead to the generation of malicious content, data leakage, or biased outputs.

Remediation

To prevent prompt injection attacks, it is crucial to: - Implement robust input validation and sanitization to filter out malicious prompts. - Use input/output encoding to prevent injection. - Incorporate monitoring and anomaly detection to identify and mitigate suspicious activities. - Regularly update and patch the LLM software to address known vulnerabilities. - Conduct thorough security testing to identify and fix potential issues.

Configuration

Identifier: injection/llm_prompt_injection

Examples

Ignore this check

checks:
injection/llm_prompt_injection:
skip: true

Score

  • Escape Severity: HIGH

Compliance

  • OWASP: API8:2023
  • OWASP LLM: LLM01:2023
  • pci: 6.5.1
  • gdpr: Article-32
  • soc2: CC6
  • psd2: Article-95
  • iso27001: A.12.2
  • nist: SP800-53
  • fedramp: SI-3

Classification

  • CWE: 200

Score

  • CVSS_VECTOR: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:N
  • CVSS_SCORE: 5.3

References