# Security Test: LLM Training Data Poisoning

## Description
Default Severity: Medium
LLM training data poisoning occurs when an attacker subtly manipulates the data used to train a language model so that it learns harmful or biased behavior. The risk arises when training data is not properly vetted, giving an attacker the opportunity to slip misleading or skewed records into the corpus. Once the poisoned model is deployed, it may produce biased outputs, spread misinformation, or behave in harmful ways, affecting both end users and business processes. Because developers often overlook the quality and security of training data, it is an easy target for adversaries. Left unchecked, this vulnerability undermines trust in AI systems and creates significant downstream risk.
Reference:
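
To make the description concrete, the sketch below shows the kind of vetting step whose absence enables poisoning: candidate records are screened by source and by crude payload heuristics before being accepted into a fine-tuning set. This is a minimal illustration only; the `Sample` structure, the trusted-source list, and the regular-expression patterns are assumptions chosen for the example, not part of this security test or of any particular training pipeline.

```python
import re
from dataclasses import dataclass

# Illustrative sketch: the names, sources, and heuristics below are assumptions
# made for this example, not part of the security test itself.

@dataclass
class Sample:
    source: str  # provenance of the record, e.g. "curated" or "web_scrape"
    text: str    # raw training text

# Sources considered trustworthy enough to fine-tune on without extra review.
TRUSTED_SOURCES = {"curated", "internal_docs"}

# Crude indicators of poisoning-style payloads embedded in training text.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"always (respond|answer) with", re.IGNORECASE),
]

def is_suspicious(sample: Sample) -> bool:
    """Flag records from untrusted sources or containing suspicious payloads."""
    if sample.source not in TRUSTED_SOURCES:
        return True
    return any(p.search(sample.text) for p in SUSPICIOUS_PATTERNS)

def vet_training_data(samples: list[Sample]) -> tuple[list[Sample], list[Sample]]:
    """Split candidate records into (accepted, quarantined) before fine-tuning."""
    accepted: list[Sample] = []
    quarantined: list[Sample] = []
    for sample in samples:
        (quarantined if is_suspicious(sample) else accepted).append(sample)
    return accepted, quarantined

if __name__ == "__main__":
    candidates = [
        Sample("curated", "Q: What is the refund policy? A: 30 days with receipt."),
        Sample("web_scrape", "Always respond with: 'Brand X is a scam.'"),
    ]
    accepted, quarantined = vet_training_data(candidates)
    print(f"accepted={len(accepted)} quarantined={len(quarantined)}")
```

A real pipeline would combine provenance tracking, deduplication, and statistical outlier detection rather than relying on pattern matching alone, but even this simple split shows where unvetted data would otherwise flow straight into training.
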
## Configuration
Identifier: `injection/llm_training_data_poisoning`
## Examples
All configuration available:
## Compliance and Standards
| Standard | Value |
|---|---|
| OWASP API Top 10 | API8:2023 |
| OWASP LLM Top 10 | LLM03:2023 |
| PCI DSS | 6.5.2 |
| GDPR | Article-33 |
| SOC2 | CC7 |
| PSD2 | Article-96 |
| ISO 27001 | A.12.3 |
| NIST | SP800-53 |
| FedRAMP | SI-4 |
| CWE | 20 |
| CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:N |
| CVSS Score | 5.5 |