
Injection: LLM Model Theft

Identifier: llm_model_theft

Scanner(s) Support

GraphQL Scanner, REST Scanner, WebApp Scanner

Description

Model theft occurs when an attacker gains unauthorized access to a language model and exfiltrates its weights, architecture, or other internal workings. The vulnerability typically stems from weak access controls or insecure data handling, and its consequences are serious: a stolen model can be misused to damage a company's reputation or to appropriate valuable intellectual property. Developers must be cautious: failing to secure model access, neglecting to encrypt model data, or skipping regular security reviews can all open the door to this kind of theft. Left unaddressed, these weaknesses can result in financial losses, an eroded competitive edge, and unauthorized use of sensitive data.


Configuration

Example

Example configuration:

---
security_tests:
  llm_model_theft:
    assets_allowed:
    - REST
    - GRAPHQL
    - WEBAPP
    skip: false

Reference

assets_allowed

Type: List[AssetType]

List of assets that this check will cover.
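
For instance, to run this check only against REST assets, the list can be narrowed. This is a minimal sketch that follows the schema of the example configuration above:

---
security_tests:
  llm_model_theft:
    assets_allowed:
    - REST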

skip

Type: boolean

Skip the test if true.
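
For example, to disable this check entirely, again following the schema of the example configuration above:

---
security_tests:
  llm_model_theft:
    skip: true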