Information Disclosure: LLM-Powered Endpoint Detected¶
Identifier: `llm_detected`
Scanner(s) Support¶
| GraphQL Scanner | REST Scanner | WebApp Scanner | ASM Scanner |
|---|---|---|---|
Description¶
Informational finding raised when the DAST LLM Security module identifies an endpoint backed by a Large Language Model (chatbot, AI assistant, RAG endpoint, code-generation service, ...).
How we test: We analyse traffic captured during the DAST scan (HAR for WebApp scans, recorded BLST exchanges for REST/GraphQL scans, and JavaScript bundles for WebApp scans) and look for deterministic LLM signals:

- Server-Sent Events (SSE) streaming;
- OpenAI / Anthropic / Gemini response shapes;
- OpenAI-style request bodies (`messages[]`, `prompt`, `stream: true`);
- token-usage fields;
- common path patterns (`/chat`, `/completions`, `/messages`);
- JavaScript-source imports of LLM SDKs (`openai`, `@anthropic-ai/sdk`, `langchain`, `@vercel/ai`, ...).
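As an illustration only (not the scanner's actual implementation), the deterministic signals above can be sketched as simple pattern checks; every name in this sketch is hypothetical:

```python
import re

# Hypothetical sketch of the deterministic LLM signals described above;
# the real detection logic is internal to the scanner.
LLM_PATH_PATTERN = re.compile(r"/(chat|completions|messages)(/|$)")
LLM_SDK_IMPORTS = ("openai", "@anthropic-ai/sdk", "langchain", "@vercel/ai")


def looks_like_llm_endpoint(path: str, request_body: dict,
                            response_headers: dict) -> bool:
    """Return True if any deterministic LLM signal matches."""
    # Common path patterns: /chat, /completions, /messages
    if LLM_PATH_PATTERN.search(path):
        return True
    # OpenAI-style request bodies: messages[], prompt, stream: true
    if "messages" in request_body or "prompt" in request_body:
        return True
    if request_body.get("stream") is True:
        return True
    # Server-Sent Events streaming responses
    if response_headers.get("content-type", "").startswith("text/event-stream"):
        return True
    return False


def imports_llm_sdk(js_source: str) -> bool:
    """Static discovery: LLM SDK imports found in a JavaScript bundle."""
    return any(sdk in js_source for sdk in LLM_SDK_IMPORTS)
```

In practice a single match on any of these signals is enough to flag the endpoint as an LLM candidate for the profiling probes described below.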
For each detected endpoint we send four benign profiling probes through the existing authenticated BLST replay client and emit a single informational issue containing:

- inferred purpose (chatbot / code-gen / Q&A);
- tech stack (provider, framework, streaming format);
- authentication posture (none / cookie / bearer / API key);
- the fingerprinted model, if leaked;
- tool / function-calling exposure;
- the anticipated risk surface for the OWASP LLM checks that run next.
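A hypothetical example of such an issue payload; the field names below are illustrative, not the scanner's exact output schema:

```json
{
  "identifier": "llm_detected",
  "severity": "informational",
  "inferred_purpose": "chatbot",
  "tech_stack": {
    "provider": "openai",
    "framework": "langchain",
    "streaming": "sse"
  },
  "authentication": "bearer",
  "fingerprinted_model": null,
  "tool_calling_exposed": false
}
```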
JS-only candidates (LLM SDK imports detected in source but never exercised by the crawler) also emit this issue, marked "static discovery, not probed".
Configuration¶
Example¶
Example configuration:
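A minimal sketch, assuming a YAML configuration keyed by the check identifier; the exact schema of your scanner configuration file may differ:

```yaml
checks:
  llm_detected:
    skip: true
```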
Reference¶
skip¶
Type: `boolean`

If set to `true`, this check is skipped.