Injection: LLM Model Denial of Service¶
Identifier:
llm_model_dos
Scanner(s) Support¶
| GraphQL Scanner | REST Scanner | WebApp Scanner |
|---|---|---|
Description¶
LLM Model Denial of Service is a risk where attackers overwhelm a language model by feeding it extremely complex or lengthy inputs that consume excessive computational resources. This can slow the system down or crash it entirely, leaving it unresponsive to genuine requests. The danger lies in the fact that many developers do not enforce strict limits or proper validation on input size and complexity, opening the door to such resource-exhaustion attacks. If these vulnerabilities aren't addressed, the resulting performance degradation or complete outages can lead to major service disruptions and loss of user trust.
References:
- https://genai.owasp.org/llmrisk/llm04-model-denial-of-service/
- https://owasp.org/www-project-top-10-for-large-language-model-applications/
Configuration¶
Example¶
Example configuration:
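A minimal sketch of what such a configuration might look like, assuming the scanner reads a YAML checks file keyed by the check identifier; the exact file layout and the asset type values shown (REST, GRAPHQL) are assumptions for illustration, not confirmed by this page:

```yaml
checks:
  # Assumed layout: checks are keyed by their identifier.
  llm_model_dos:
    # Restrict the check to specific asset types (example values; adjust to your setup).
    assets_allowed:
      - REST
      - GRAPHQL
    # Set to true to skip the test entirely.
    skip: false
```

The two fields shown correspond to the options documented in the Reference section below.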
Reference¶
assets_allowed¶
Type: List[AssetType]
List of assets that this check will cover.
skip¶
Type: boolean
Skip the test if true.