WebApp DAST — Frequently Asked Questions¶
Scan scope¶
How can I restrict and narrow down my scan to a certain pattern of page URLs?¶
In some cases you might want to scan only a sub-part of your web application, for example focusing on Escape's ASM pages (https://app.escape.tech/asm/assets/).
To achieve this, use the following config:
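A minimal sketch, assuming an allowlist_patterns option under scope.pages that mirrors the blocklist_patterns parameter used later in this FAQ (verify the exact key name in the configuration reference):

frontend_dast:
  scope:
    pages:
      # allowlist_patterns is assumed here as the counterpart of blocklist_patterns
      allowlist_patterns:
        - "https://app.escape.tech/asm/assets/.*"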
This can be leveraged in your CI/CD pipelines, for example to trigger scans on deployments and generate a URL pattern that matches only your newly deployed webapp pages, letting you execute differential scans (diff-scanning).
How can I restrict and narrow down the scanner to only interact with certain parts of the page?¶
You might want to configure the scanner to avoid interacting with menus and focus purely on the main content of the web application.
To achieve this, you can use the following config:
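A minimal sketch, assuming a hypothetical allowlisted_element_selectors option that mirrors the blocklisted_element_selectors parameter used below (the key name and selector are assumptions):

frontend_dast:
  scope:
    pages:
      # hypothetical key, counterpart of blocklisted_element_selectors
      allowlisted_element_selectors:
        - "main#content" # only interact with elements inside the main content area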
You can also do the opposite and avoid interacting with certain buttons, dialogs, or chat bots with blocklisted_element_selectors:
frontend_dast:
  scope:
    pages:
      blocklisted_element_selectors:
        - div[class="PylonChat"] # Escape's own help chat bubble
How can I show all URLs the scanner has seen during crawling?¶
In a frontend scan, the scanner reports only in-scope crawling logs. If you want to see more details, you have two options:
- Expand the exploration scope by adding more assets.
- Toggle the only_inscope_crawling_logs configuration option, which will make the scanner report all URLs it comes across.
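A minimal sketch, assuming the option lives at the top level of frontend_dast and that setting it to false reports all URLs:

frontend_dast:
  # assumed placement; false reports all URLs, not only in-scope ones
  only_inscope_crawling_logs: false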
Agentic capabilities¶
The scanner has experimental features for Agentic, LLM-driven testing.
These cover both pure crawling of complex web applications and security testing.
Agentic crawling¶
Crawling complex web applications, on top of fine-grained detection of unique and similar web pages, requires natural language processing and reasoning to properly chain actions and supply realistic input values.
For that, we have added an agentic crawling capability to the engine, which will execute on all discovered pages.
It will leverage tools like typing text, clicking, inspecting the page, and executing JavaScript to achieve its main goal of covering your webapp's features.
The underlying goal of this is to maximize collection of API traffic for fuzzing and security testing.
You can enable this via:
agentic_crawling:
  enabled: true
  instructions: |
    If you are on the page for managing Escape private locations, try to create one named "Escape Agentic" and then delete it.
You can then review your scan logs, searching for [Agentic Page Crawler], to view its reasoning and tool calls.
Agentic security testing¶
The scanner has experimental features for pentesting and for more flexible security tests that can adapt to your particular web application.
You can enable these through the following options:
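A hypothetical sketch; the key names below are assumptions, so check the configuration reference for the exact options:

frontend_dast:
  # hypothetical key names for enabling experimental agentic security testing
  agentic_security_testing:
    enabled: true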
This will enable two kinds of testing approaches.
Page-based agentic pentesting¶
Page-based agentic pentesting is a specialized step in the scanner: it takes each crawled page and runs a specialized pentesting agent with tools to interact with that page.
The agent writes its reasoning to your scan logs, prefixed with [Agentic Page Scanner], along with its tool calls to click, type text, execute JavaScript, and inspect the page.
Agentic traffic interceptor¶
This agent runs purely on the API traffic interception already present in the scanner engine.
When a never-before-seen API request is intercepted, the agent calls sub-agents specialized in:
- XSS
- SQLi
- Business Logic
- Command Injection
- IDOR
- SSTI (Server-Side Template Injection)
These agents can start from the original intercepted HTTP request, modify it, and replay it.
You can review their logs by searching for Agentic to see their tool calls and reasoning.
Performance & Scan Duration¶
Is there a solution when scans are taking too long or timing out?¶
Yes, scan duration can be optimized through several configuration adjustments, applied in the following order:
- The parallel_workers parameter should be reduced from 3 to 1 to decrease system load
- Persistence should be enabled by setting use_persistence: true to resume interrupted scans
- Problematic pages should be added to the blocklist to avoid time-consuming routes
- Security checks should be limited to essential types only, as in the sketch below:
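A minimal sketch, using the security_checks_enabled parameter shown later in this FAQ; which types count as essential is an example choice, and the placement of use_persistence is assumed:

frontend_dast:
  parallel_workers: 1
  use_persistence: true # placement assumed; resumes interrupted scans
  security_checks_enabled:
    - PASSIVE_PAGE_CHECKS # safe, low-cost analysis
    - NETWORK_CHECKS # infrastructure analysis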
How can application coverage be improved when not all pages are discovered?¶
Coverage can be enhanced through the following configuration adjustments:
- The max_duration parameter should be increased to 180-240 minutes to allow more exploration time
- Known URLs should be added to the hotstart list to seed the crawler with important entry points
- The prefetch_sitemap: true setting should be enabled to leverage existing sitemap data
- The max_fragments_visits and max_query_params_visits parameters should be increased to explore more URL variations; a combined sketch follows
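A combined sketch of these coverage options; the nesting of each key is assumed, and the values and URL are examples:

frontend_dast:
  max_duration: 240 # minutes; example value in the 180-240 range
  prefetch_sitemap: true # placement assumed
  scope:
    pages:
      hotstart: # placement assumed; seeds the crawler with known entry points
        - "https://app.example.com/dashboard" # example URL
      max_fragments_visits: 10 # example value
      max_query_params_visits: 10 # example value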
Why do scans repeatedly visit similar pages with different parameters?¶
This behavior occurs when parameter exploration is not properly constrained. Parameter exploration should be limited using:
frontend_dast:
  scope:
    pages:
      max_parameterized_url_variations: 5
      max_unique_fragments_per_page: 5
      max_unique_values_per_query_param: 5
These settings prevent the scanner from exhaustively testing every parameter combination, which is particularly important for applications with dynamic URLs.
Security Checks Configuration¶
What are the differences between security check types?¶
Each security check type targets distinct security aspects:
- ACTIVE_PAGE_CHECKS: Interactive vulnerability testing (XSS, SQL injection, etc.)
- PASSIVE_PAGE_CHECKS: Safe analysis (DOM security, browser storage, console errors)
- NETWORK_CHECKS: Infrastructure analysis (headers, cookies, SSL, dependencies)
- API_CHECKS: Security testing of captured API traffic from the frontend
The ALL option enables comprehensive testing, while specific types can be combined for targeted analysis based on security requirements.
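For example, a targeted combination for infrastructure and API analysis might look like this (which types to combine depends on your requirements):

frontend_dast:
  security_checks_enabled:
    - NETWORK_CHECKS
    - API_CHECKS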
How can crawling-only mode be configured without running security tests?¶
Crawling-only mode can be enabled using the read_only parameter, which provides a shorthand configuration:
frontend_dast:
  read_only: true # Shorthand to enable crawling-only mode (no security checks)
  parallel_workers: 1 # Minimal load
  max_duration: 60 # Short duration
This mode is useful for discovering application structure and validating scan configuration before running full security assessments.
Why are form inputs being filled during scans when ACTIVE_PAGE_CHECKS is disabled?¶
The observed input filling is part of the standard crawling process, not active security testing. For the scanner to discover pages and map application functionality, forms must be filled and submitted, multi-step forms must be progressed, and various user interactions must be simulated.
The scanner automatically detects input formats and generates relevant data to enable effective crawling. This behavior is distinct from the fuzzing and injection testing performed by ACTIVE_PAGE_CHECKS.
I want to disable the API scanning of my authentication flow¶
By default, the scanner extracts and fuzzes/injects payloads into all API traffic seen during the scan, including during the authentication procedure.
In some cases this can break authentication and create inconsistent logins. You can turn this off with the following:
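One approach, using the skipped_api_checks_url_patterns parameter described in the Advanced Features section; the patterns are examples, so adjust them to your authentication endpoints:

frontend_dast:
  skipped_api_checks_url_patterns:
    - ".*/auth/.*" # example pattern for your authentication endpoints
    - ".*/login.*" # example pattern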
Authentication & Complex Applications¶
What configuration options are available when authentication breaks during scanning?¶
Authentication issues can be mitigated through several configuration parameters:
- The integrated_authentication: true setting should be enabled to maintain authentication state throughout the scan
- Logout buttons should be added to blocklisted_element_selectors to prevent accidental session termination
- The single_page_worker: true parameter should be used if the application cannot handle page reloads without losing state
- API injection testing should be disabled if it interferes with authentication mechanisms (a combined sketch follows)
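A combined sketch of these options; key placement and the logout selector are assumptions. For disabling API injection testing itself, see the skipped_api_checks_url_patterns approach above.

frontend_dast:
  integrated_authentication: true # placement assumed
  single_page_worker: true # placement assumed
  scope:
    pages:
      blocklisted_element_selectors:
        - "button#logout" # example selector for your logout button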
How should scans be configured for large e-commerce sites with thousands of product pages?¶
Large-scale applications require efficient crawling strategies combined with targeted blocklists:
frontend_dast:
  max_duration: 240 # Longer duration needed
  parallel_workers: 3
  security_checks_enabled:
    - PASSIVE_PAGE_CHECKS # Efficient for large sites
    - API_CHECKS # Capture API traffic patterns
  scope:
    pages:
      blocklist_patterns:
        - ".*/product/[0-9]+/reviews.*" # Skip review pages
        - ".*/category/.*/page/[0-9]+.*" # Skip deep pagination
      max_parameterized_url_variations: 5 # Limit product variations
This configuration balances comprehensive coverage with practical scan duration by avoiding repetitive content patterns.
Advanced Features¶
Are iframes supported, and is a separate scan profile required for iframe-embedded applications?¶
Yes, iframe content is automatically scanned when the iframe source URL originates from the same application (same-origin policy). No additional scan profile configuration is required for same-origin iframe content.
However, if iframe content is loaded from a different domain (cross-origin), a separate scan profile must be created for that domain to enable security testing of the embedded content.
How can specific API calls be excluded during the crawling phase?¶
API calls can be excluded from analysis using the skipped_api_checks_url_patterns parameter:
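A minimal sketch, assuming the parameter sits at the top level of frontend_dast; the patterns are examples:

frontend_dast:
  skipped_api_checks_url_patterns:
    - ".*analytics.*" # example: third-party analytics endpoints
    - ".*/tracking/.*" # example: tracking endpoints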
This configuration prevents the scanner from analyzing specific API endpoints that match these patterns during both crawling and security testing phases. This is particularly useful for excluding third-party analytics, tracking endpoints, or other API calls that should not be tested.