Injection: LLM Prompt Injection

Identifier: llm_prompt_injection

Scanner(s) Support

GraphQL Scanner, REST Scanner, WebApp Scanner

Description

LLM prompt injection occurs when an attacker crafts input specifically to make a language model do something unintended. By embedding hidden instructions inside otherwise ordinary input, an attacker can get the model to reveal sensitive information, generate harmful content, or behave contrary to its design. The vulnerability is dangerous because it exploits the model's implicit trust in its input; left unchecked, it can lead to data breaches, misinformation, or other serious security issues. Common developer pitfalls include trusting all user input and failing to sanitize data fed to the model, both of which leave the door open to these exploits.
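
As a rough illustration of that pitfall (not the scanner's own logic; all names and patterns below are hypothetical), the following Python sketch contrasts concatenating untrusted input directly into a prompt with keeping it in a clearly delimited section and screening it with a simple heuristic:

import re

SYSTEM_PROMPT = "You are a support assistant. Only answer questions about orders."

# Naive heuristic screen for common injection phrasing; real defenses
# (structured message roles, output filtering, least-privilege tools)
# go well beyond a pattern list like this.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"you are now",
    r"reveal .* (system prompt|secret|password)",
]

def looks_like_injection(user_input: str) -> bool:
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def build_prompt_unsafe(user_input: str) -> str:
    # Pitfall: untrusted text is mixed straight into the instructions,
    # so any hidden command in it is read as part of the prompt.
    return SYSTEM_PROMPT + "\n" + user_input

def build_prompt_safer(user_input: str) -> str:
    # Keep instructions and untrusted input in clearly delimited sections
    # and reject obvious injection attempts before the model sees them.
    if looks_like_injection(user_input):
        raise ValueError("Input rejected: possible prompt injection")
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"--- Untrusted user input ---\n{user_input}\n--- End of user input ---"
    )

if __name__ == "__main__":
    print(build_prompt_safer("Where is my order #1234?"))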

Configuration

Example

Example configuration:

---
security_tests:
  llm_prompt_injection:
    assets_allowed:
    - REST
    - GRAPHQL
    - WEBAPP
    skip: false

Reference

assets_allowed

Type : List[AssetType]*

List of assets that this check will cover.

skip

Type : boolean

Skip the test if true.
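
For example, assuming the same schema as the configuration above, the check can be disabled entirely by setting skip to true:

---
security_tests:
  llm_prompt_injection:
    skip: true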