Skip to content

Injection: LLM Endpoint Detection

Identifier: llm_detection

Scanner(s) Support

GraphQL Scanner REST Scanner WebApp Scanner

Description

LLM Endpoint Detection is about finding when an application exposes a way to interact with a language model, which can be a hidden door for potential attackers. If developers arent careful, these endpoints may allow malicious input that tricks the system into doing unexpected or harmful things, like revealing sensitive data or running unauthorized code. Often, the issue arises when endpoints arent properly secured or validated, letting attackers use injection attacks to manipulate how the underlying model behaves. This can lead not only to data breaches but also to broader misuse of the application, especially when developers make assumptions about what kind of input will be received. The danger lies in these overlooked spaceswhat seems like a harmless feature can become a gateway for more significant security problems if not treated with caution.

References:

Configuration

Example

Example configuration:

---
security_tests:
  llm_detection:
    assets_allowed:
    - REST
    - GRAPHQL
    - WEBAPP
    skip: false

Reference

assets_allowed

Type : List[AssetType]*

List of assets that this check will cover.

skip

Type : boolean

Skip the test if true.