Injection: LLM Prompt Injection¶
Identifier:
llm_prompt_injection
Scanner(s) Support¶
GraphQL Scanner | REST Scanner | WebApp Scanner |
---|---|---|
Description¶
LLM prompt injection is when someone crafts input specifically to trick a language model into doing something unintended. An attacker might include hidden commands within regular input so that the model reveals sensitive information, creates harmful content, or behaves against its design. This vulnerability is dangerous because it leverages the model's trust in its input, and if left unchecked, it can lead to data breaches, misinformation, or other serious security issues. Developers often fall into pitfalls like trusting all user input or not sanitizing data fed to the model, leaving the door open for these kinds of exploits.
References:
- https://genai.owasp.org/llmrisk/llm01-prompt-injection/
- https://owasp.org/www-project-top-10-for-large-language-model-applications/
Configuration¶
Example¶
Example configuration:
Reference¶
assets_allowed
¶
Type : List[AssetType]
*
List of assets that this check will cover.
skip
¶
Type : boolean
Skip the test if true.