Security Test: LLM JailBreak¶
Description¶
Default Severity:
Basically, jailbreaking is when someone figures out a way to trick a large language model into doing things it shouldn't do. The vulnerability comes from crafty inputs that bypass built-in restrictions, allowing the model to generate harmful or unintended content. This is dangerous because it can lead to misuse—like spreading misinformation or even aiding in cyberattacks—and it undermines the safety measures you rely on when deploying these models. Developers should watch out for ineffective input validations and overly trusting model safeguards, ensuring they thoroughly test against manipulative inputs.
Reference:
Configuration¶
Identifier:
injection/llm_jail_break
Examples¶
All configuration available:
Compliance and Standards¶
Standard | Value |
---|---|
OWASP API Top 10 | API8:2023 |
OWASP LLM Top 10 | LLM01:2023 |
PCI DSS | 6.5.1 |
GDPR | Article-32 |
SOC2 | CC6 |
PSD2 | Article-95 |
ISO 27001 | A.12.2 |
NIST | SP800-53 |
FedRAMP | SI-3 |
CWE | 200 |
CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:N |
CVSS Score | 5.3 |