# Security Test: LLM Overreliance
## Description
Default Severity:
LLM overreliance occurs when applications or developers trust the output of a language model without questioning or verifying it. This is risky because generated content can be incorrect, biased, or insecure, and hard-coding that unchecked output into code or decision processes can introduce significant errors and vulnerabilities. The risk grows when model output drives critical functions or design choices: misinformation, security flaws, or unintended behavior can slip through unnoticed. Any suggestion a model produces should be critically evaluated and validated rather than assumed to be correct or free of bias.
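As an illustration of the mitigation described above (not part of the test itself), the sketch below shows one way to gate model output before it reaches application logic: parse it into an expected structure and accept only values from a fixed allowlist. The `call_llm` helper, the allowlist, and the JSON shape are all hypothetical placeholders for whatever client and schema your application actually uses.

```python
import json

# Hypothetical allowlist of values the application is willing to accept.
ALLOWED_SORT_FIELDS = {"created_at", "price", "name"}


def call_llm(prompt: str) -> str:
    """Placeholder for the application's LLM client (assumption).

    A real system would call the model here; this stub returns a canned reply
    so the example stays self-contained and runnable.
    """
    return '{"sort_field": "price", "descending": false}'


def build_sort_clause(user_request: str) -> str:
    """Turn a natural-language request into a sort clause without trusting the model blindly."""
    raw = call_llm(
        f"Extract the sort field and direction from {user_request!r} as JSON"
    )

    # 1. Validate structure: the reply must be well-formed JSON with the expected keys.
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("LLM reply was not valid JSON; refusing to use it")

    field = parsed.get("sort_field", "")
    descending = bool(parsed.get("descending", False))

    # 2. Validate content: accept only allowlisted values, and never interpolate
    #    free-form model output into a query or command.
    if field not in ALLOWED_SORT_FIELDS:
        raise ValueError(f"LLM suggested an unexpected sort field: {field!r}")

    return f"ORDER BY {field} {'DESC' if descending else 'ASC'}"


if __name__ == "__main__":
    print(build_sort_clause("show me the cheapest items first"))
```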
Reference:
## Configuration
Identifier: `injection/llm_overreliance`
### Examples
All available configuration options:
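The exact schema depends on the scanner version, so the snippet below is only a sketch: it assumes a YAML configuration in which checks are keyed by the identifier above and accept a `skip` flag and a severity override. Consult your scanner's configuration reference for the authoritative keys.

```yaml
checks:
  injection/llm_overreliance:
    skip: false       # assumed flag: set to true to disable this test
    severity: MEDIUM  # assumed override; supported keys may differ by version
```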
## Compliance and Standards
| Standard | Value |
|---|---|
| OWASP API Top 10 | API8:2023 |
| OWASP LLM Top 10 | LLM09:2023 |
| PCI DSS | 6.5.1 |
| GDPR | Article-32 |
| SOC2 | CC6 |
| PSD2 | Article-95 |
| ISO 27001 | A.12.2 |
| NIST | SP800-53 |
| FedRAMP | SI-3 |
| CWE | 200 |
| CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:N |
| CVSS Score | 4.7 |