Security Test: LLM Prompt Injection¶
Description¶
Default Severity:
LLM prompt injection is when someone crafts input specifically to trick a language model into doing something unintended. An attacker might include hidden commands within regular input so that the model reveals sensitive information, creates harmful content, or behaves against its design. This vulnerability is dangerous because it leverages the model's trust in its input, and if left unchecked, it can lead to data breaches, misinformation, or other serious security issues. Developers often fall into pitfalls like trusting all user input or not sanitizing data fed to the model, leaving the door open for these kinds of exploits.
Reference:
Configuration¶
Identifier:
injection/llm_prompt_injection
Examples¶
All configuration available:
Compliance and Standards¶
Standard | Value |
---|---|
OWASP API Top 10 | API8:2023 |
OWASP LLM Top 10 | LLM01:2023 |
PCI DSS | 6.5.1 |
GDPR | Article-32 |
SOC2 | CC6 |
PSD2 | Article-95 |
ISO 27001 | A.12.2 |
NIST | SP800-53 |
FedRAMP | SI-3 |
CWE | 200 |
CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:N |
CVSS Score | 5.3 |