Skip to content

Injection: LLM Training Data Poisoning

Identifier: llm_training_data_poisoning

Scanner(s) Support

GraphQL Scanner REST Scanner WebApp Scanner

Description

LLM training data poisoning happens when an attacker subtly manipulates the data used to train a language model so that it learns harmful or biased behavior. The problem arises when the training data is not properly vetted, giving an attacker a chance to sneak in misleading or skewed information. This is dangerous because once the model is deployed, it might produce biased outputs, spread misinformation, or even behave in harmful ways, possibly affecting both end users and business processes. Developers often overlook the quality and security of the training data, making it an easy target for adversaries. If left unchecked, this vulnerability can undermine trust in AI systems and cause significant downstream risks.

References:

Configuration

Example

Example configuration:

---
security_tests:
  llm_training_data_poisoning:
    assets_allowed:
    - REST
    - GRAPHQL
    - WEBAPP
    skip: false

Reference

assets_allowed

Type : List[AssetType]*

List of assets that this check will cover.

skip

Type : boolean

Skip the test if true.