Frontend DAST Configuration¶
Most of your scan configuration follows the existing API DAST scan configuration (authentication, for example).
However, a few additional settings are specific to SPA DAST scans.
Authentication¶
Just like API scans, you can currently configure a simple header-based authentication preset.
presets:
  - type: headers
    users:
      - headers:
          Authorization: Bearer user1Token
        username: user1
validation: false
Blocklist configuration¶
Just like API scans, you can configure blocklisted paths in your frontend scans. These support regex patterns, which lets you save scanner time by skipping low-value pages such as /faq/ and articles.
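As an illustrative sketch, a path blocklist might look like this; frontend_blocklisted_paths is an assumed key name modeled on the other frontend settings on this page, so check the configuration reference for the exact name:

scan:
  frontend_blocklisted_paths: # hypothetical key name
    - "/faq/.*"
    - "/articles/.*"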
Scope Configuration¶
In the Expert Configuration section of your scan settings, you can configure the domains in scope for your scan. This lets the frontend scanner capture underlying traffic to subdomains, enabling both the generation of OpenAPI schemas for all the APIs used by the frontend and security analysis of that traffic. The domain scope defaults to your organization's domains. Set it to "self" to allow only the current frontend domain.
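If your setup also exposes this in the configuration file, a sketch could look like the following (frontend_scopes is a hypothetical key name):

scan:
  frontend_scopes: # hypothetical key name
    - "app.example.com"
    - "*.api.example.com"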
Hotstart Base URLs¶
In the Expert Configuration section of your scan settings, you can add more base URLs for your scan. Base URLs are a list of URLs that the scanner should visit. You can pre-seed the scanner with a list of URLs to start the scan from, enriching the crawling process by boosting known URLs.
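A possible sketch, assuming a hypothetical frontend_base_urls key:

scan:
  frontend_base_urls: # hypothetical key name
    - "https://app.example.com/dashboard"
    - "https://app.example.com/admin/settings"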
Maximum scan duration¶
You can configure your scan duration in minutes to achieve better coverage for larger web applications.

- Maximum: 480 minutes (8 hours)
- Default for frontend scans: 120 minutes
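For example, a four-hour scan might be configured as below; max_scan_duration is an assumed key name, and the value is in minutes:

scan:
  max_scan_duration: 240 # hypothetical key name; value in minutes, capped at 480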
Single page worker mode¶
For specific use cases where your web application is used entirely within a single page, without any reloads, navigations, or page refreshes (F5, etc.), this mode forces the scanner to stay on that single page and interact only with its elements, never navigating to other URLs.
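A sketch of enabling this mode, assuming a boolean frontend_single_page_mode key:

scan:
  frontend_single_page_mode: true # hypothetical key name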
Worker Parallelism¶
To speed up or slow down your scan, you can configure the number of pages the scanner opens simultaneously. The default is 3 and the maximum is 5, due to memory constraints. If you observe stability issues such as failed scans, try lowering this value.
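For instance, lowering parallelism for a memory-constrained target could look like this (frontend_parallel_workers is an assumed key name):

scan:
  frontend_parallel_workers: 2 # hypothetical key name; default 3, maximum 5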
Integrated authentication¶
For specific use cases where browser-based authentication MUST happen in the same browser as the scanner engine, this option replays the authentication procedure directly inside the scanner. This can help with authentication mechanisms that rely on in-memory values, web workers, and so on.
In most cases, the default browser-based login is sufficient: cookies, local storage, and session storage are extracted automatically and injected into the engine.
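A sketch, assuming a boolean frontend_integrated_authentication key:

scan:
  frontend_integrated_authentication: true # hypothetical key name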
Crawling only mode¶
For faster scan times, we provide a special mode that disables heavier security checks, while still performing frontend crawling. Note that by default, API traffic will still be sent for security checks (see API Traffic Analysis section below).
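Enabling it might look like the following, with frontend_crawl_only as an assumed key name:

scan:
  frontend_crawl_only: true # hypothetical key name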
API Traffic Analysis¶
By default, the frontend scan will send captured API traffic to security checks. You can disable this if needed:
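(Sketch only; frontend_api_traffic_analysis is a hypothetical key name.)

scan:
  frontend_api_traffic_analysis: false # hypothetical key name; true by default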
Scan Persistence¶
To speed up the scan, you can use persistence mode, which saves URLs from previous scans and loads them into the scanner engine. It is enabled by default to enhance crawling stability.
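To turn it off, a sketch might be (frontend_persistence is an assumed key name):

scan:
  frontend_persistence: false # hypothetical key name; true by default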
X-Escape-User header¶
This header is disabled by default to avoid breaking web applications. If enabled, the scanner attaches the name of the currently logged-in user from your authentication configuration to every request in an X-Escape-User header.
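Enabling it could look like this, with frontend_x_escape_user_header as an assumed key name:

scan:
  frontend_x_escape_user_header: true # hypothetical key name; false by default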
Blocked Element Selectors¶
You can configure a list of element selectors that the scanner should not interact with during the frontend scan. This is useful for avoiding interactions with elements that could disrupt the scan or the application state, such as logout buttons, help widgets, chat boxes, etc.
scan:
  frontend_blocklisted_element_selectors:
    - "#logout-button"
    - ".help-widget"
    - "#chat-box"
    - ".lock-account-button"
Sitemap Prefetching¶
By default, the scanner will attempt to prefetch and use any available sitemaps (robots.txt, sitemap.xml, etc.) as seeds for the crawler. This helps improve coverage by starting with a known set of URLs. You can disable this feature if needed:
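(Sketch only; frontend_sitemap_prefetching is a hypothetical key name.)

scan:
  frontend_sitemap_prefetching: false # hypothetical key name; true by default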
Fragment and Query Parameter Limits¶
You can control how many times the scanner visits pages with the same fragments or query parameters to optimize the crawling process:
scan:
  frontend_max_fragments_visits: 3 # Maximum visits to a page with same fragment
  frontend_max_query_params_visits: 3 # Maximum visits to a page with same query parameters
  frontend_max_parameter_occurence: 5 # Maximum occurrences of a parameter in a URL
User Agent Configuration¶
You can specify a custom user agent string for the frontend scanner:
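(Sketch only; frontend_user_agent is a hypothetical key name.)

scan:
  frontend_user_agent: "Mozilla/5.0 (compatible; MyScanner/1.0)" # hypothetical key name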