Injection: LLM Overreliance

Identifier: llm_overreliance

Scanner(s) Support

GraphQL Scanner, REST Scanner, WebApp Scanner

Description

LLM overreliance is the practice of trusting language-model output without questioning or verifying it. This is risky because generated content can be incorrect, biased, or insecure, and hardcoding unchecked output into code or decision processes can introduce serious errors and vulnerabilities. The risk grows when developers let these outputs drive critical functions or design choices: misinformation, security flaws, or unintended behavior can slip through unnoticed. Critically evaluate and validate every suggestion a model produces instead of assuming the output is correct or free of bias.
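
As an illustration, the following minimal sketch (not part of the scanner) treats model output as untrusted input: it parses a suggestion and validates it against an allowlist before letting it drive any action. The call_llm helper and the action names are hypothetical placeholders for whatever client and operations your application uses.

import json
from typing import Any

ALLOWED_ACTIONS = {"create_ticket", "close_ticket", "assign_ticket"}

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real system would call your model provider here.
    return '{"action": "close_ticket", "ticket_id": 42}'

def parse_and_validate(raw: str) -> dict[str, Any]:
    """Reject malformed or out-of-policy model output instead of trusting it."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}") from exc
    if data.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"disallowed action: {data.get('action')!r}")
    if not isinstance(data.get("ticket_id"), int):
        raise ValueError("ticket_id must be an integer")
    return data

if __name__ == "__main__":
    suggestion = call_llm("Which ticket operation should we run next?")
    action = parse_and_validate(suggestion)  # raises instead of blindly executing
    print(f"validated action: {action}")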

Configuration

Example

Example configuration:

---
security_tests:
  llm_overreliance:
    assets_allowed:
    - REST
    - GRAPHQL
    - WEBAPP
    skip: false
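
For reference, here is one way the configuration above could be loaded and sanity-checked with PyYAML. The field names mirror the example; the scanner's actual loader may differ, so treat this as a sketch.

import yaml

CONFIG = """
security_tests:
  llm_overreliance:
    assets_allowed:
    - REST
    - GRAPHQL
    - WEBAPP
    skip: false
"""

VALID_ASSETS = {"REST", "GRAPHQL", "WEBAPP"}

# Extract the settings for this specific test and verify the field values.
settings = yaml.safe_load(CONFIG)["security_tests"]["llm_overreliance"]
assert set(settings["assets_allowed"]) <= VALID_ASSETS
assert isinstance(settings["skip"], bool)
print(settings)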

Reference

assets_allowed

Type: List[AssetType]

List of assets that this check will cover.

skip

Type: boolean

Skip the test if true.