Injection: LLM Model Denial of Service

Identifier: llm_model_dos

Scanner Support

GraphQL Scanner, REST Scanner, WebApp Scanner

Description

LLM Model Denial of Service is a risk where attackers overwhelm a language model by sending extremely long or complex inputs that consume excessive computational resources. This can slow the system down or crash it outright, leaving it unresponsive to legitimate requests. The risk is amplified because many applications place no strict limits or validation on input size and complexity, leaving the door open to resource-exhaustion attacks. If these weaknesses aren't addressed, the resulting performance degradation or outages can cause major disruptions to service availability and user trust.
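
Beyond detection, the root-cause fix is to bound input size before requests reach the model. The sketch below is a minimal, hypothetical illustration of that idea, assuming a Flask-based API; the route, limits, and payload shape are made-up examples, not part of this scanner or any framework's defaults.

# Minimal sketch: reject oversized prompts before they reach the model.
# The route, limits, and payload shape below are hypothetical examples.
from flask import Flask, request, jsonify

app = Flask(__name__)

MAX_PROMPT_CHARS = 4_000                    # hypothetical per-prompt cap
app.config["MAX_CONTENT_LENGTH"] = 16_384   # Flask rejects larger bodies with 413

@app.post("/v1/complete")
def complete():
    payload = request.get_json(silent=True) or {}
    prompt = payload.get("prompt", "")
    # Refuse inputs long enough to consume disproportionate compute.
    if len(prompt) > MAX_PROMPT_CHARS:
        return jsonify(error="prompt too long"), 413
    # A bounded prompt can now be forwarded to the model.
    return jsonify(result="accepted")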

Configuration

Example

Example configuration:

---
security_tests:
  llm_model_dos:
    assets_allowed:
    - REST
    - GRAPHQL
    - WEBAPP
    skip: false

Reference

assets_allowed

Type: List[AssetType]

List of assets that this check will cover.
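
For example, to limit this check to REST assets only (an illustrative narrowing of the example above):

---
security_tests:
  llm_model_dos:
    assets_allowed:
    - REST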

skip

Type: boolean

Skip the test if true.
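
For example, to disable this check entirely (using the same format as the example above):

---
security_tests:
  llm_model_dos:
    skip: true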