WebApp DAST — Frequently Asked Questions

Scan scope

How can I restrict my scan to a certain pattern of page URLs?

In some cases you might want to scan only a subset of your web application, for example Escape's ASM pages (https://app.escape.tech/asm/assets/).

To achieve this, use the following config:

frontend_dast:
  scope:
    pages:
      allowlist_url_patterns:
        - https://app.escape.tech/asm.*

This can be leveraged in your CI/CD pipelines, for example to trigger scans on deployments and generate a URL pattern that only matches your newly deployed webapp pages, enabling differential scans (diff-scanning). A sketch of such a pipeline-generated config follows.
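
As a minimal sketch, assuming a preview environment whose hostname embeds a CI-provided identifier (the domain and the CI_COMMIT_SHA variable below are hypothetical placeholders), the pipeline could generate a config such as:

frontend_dast:
  scope:
    pages:
      allowlist_url_patterns:
        # Hypothetical preview deployment URL; substitute your own pattern
        - https://preview-${CI_COMMIT_SHA}.example.com/.*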

How can I restrict the scanner to interact with only certain parts of the page?

You might want to configure the scanner to avoid interacting with menus and focus purely on the main content of the web application.

To achieve this, you can use the following config:

frontend_dast:
  scope:
    pages:
      allowlist_element_selectors:
        - div[class="main-content"]

You can also do the opposite and avoid interacting with certain buttons, dialogs, or chat bots with blocklist_element_selectors:

frontend_dast:
  scope:
    pages:
      blocklist_element_selectors:
        - div[class="PylonChat"] # Escape's own help chat bubble

How can I show all URLs the scanner has seen during crawling?

In a frontend scan, the scanner reports only in-scope crawling logs by default. If you want to see more details, you have two options:

  • Expand the exploration scope by adding more assets.

  • Disable the only_inscope_crawling_logs configuration option:

frontend_dast:
  scope:
    pages:
      only_inscope_crawling_logs: false

This will make the scanner report all URLs it comes across.

Agentic capabilities

The scanner has experimental features for agentic, LLM-driven testing.

These cover both crawling of complex web applications and security testing.

Agentic crawling

Crawling complex web applications, on top of fine-grained detection of unique and similar web pages, requires natural-language processing and reasoning to properly chain actions and generate realistic input values.

For that, we have added an agentic crawling capability to the engine, which executes on all discovered pages.

It leverages tools such as typing text, clicking, inspecting the page, and executing JavaScript to achieve its main goal of covering your webapp's features.

The underlying goal is to maximize the collection of API traffic for fuzzing and security testing.

You can enable this via:

frontend_dast:
  agentic_crawling:
    enabled: true
    instructions: |
      If you are on the page for managing Escape private locations, try to create one named "Escape Agentic" and then delete it.

For more complex scenarios with multiple instructions, use XML prompting to structure your instructions clearly:

frontend_dast:
  agentic_crawling:
    enabled: true
    instructions: |
      <tasks>
        <task priority="high">
          If you find a page for managing team members or users, try to:
          1. Create a new user with email "test-agentic@escape.tech"
          2. Assign them the "Viewer" role
          3. Remove the user you just created
        </task>
        <task priority="medium">
          If you encounter any forms for creating resources (projects, assets, locations), 
          fill them out with realistic test data and submit them.
          After successful creation, try to delete or clean up the created resource.
        </task>
        <task priority="low">
          Explore all tabs and navigation menus to discover hidden features.
          Try clicking on settings icons and configuration panels.
        </task>
      </tasks>

      <constraints>
        <avoid>Do not click on logout buttons or sign out links</avoid>
        <avoid>Do not modify or delete any existing production data</avoid>
        <avoid>Skip any payment or billing-related actions</avoid>
      </constraints>

      <conditions>
        If you encounter a multi-step wizard or onboarding flow, complete all steps.
        If a page requires specific data formats (emails, URLs, dates), use realistic examples.
      </conditions>

Why XML prompting? XML provides clear structure for complex, multi-part instructions while remaining highly readable for LLMs. It allows you to separate different instruction types (tasks, constraints, priorities) and helps the agent understand context and hierarchy.

You can then search your scan logs for the [Agentic Page Crawler] prefix to view the agent's reasoning and tool calls.

Agentic security testing

The scanner has experimental pentesting features: more flexible security tests that can adapt to your particular web application.

You can enable these through the following options:

experimental:
  agentic_pentesting: true

This will enable two kinds of testing approaches.

Page-based agentic pentesting

Page-based agentic pentesting is a specialized step in the scanner that takes each crawled page and runs a dedicated pentesting agent with tools to interact with it.

The agent writes its reasoning to your scan logs, prefixed with [Agentic Page Scanner], along with its tool calls to click, type text, execute JavaScript, and inspect the page.

Agentic traffic interceptor

This agent runs purely on the API traffic interception already present in the scanner engine.

When a never-before-seen API request is intercepted, the agent calls sub-agents specialized in:

  • XSS
  • SQLi
  • Business Logic
  • Command Injection
  • IDOR
  • SSTI (Server-Side Template Injection)

These agents start from the original intercepted HTTP request, then modify and replay it.

You can review their logs by searching for Agentic to see their tool calls and reasoning.

Performance & Scan Duration

Is there a solution when scans are taking too long or timing out?

Yes, scan duration can be optimized through several configuration adjustments, applied in the following order (a combined sketch follows the list):

  1. The parallel_workers parameter should be reduced from 3 to 1 to decrease system load

  2. Persistence should be enabled by setting use_persistence: true to resume interrupted scans

  3. Problematic pages should be added to the blocklist to avoid time-consuming routes

  4. Security checks should be limited to essential types only:

    frontend_dast:
      security_checks_enabled:
        - API_CHECKS  # Fast mode: only analyze captured API traffic
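
A combined sketch of adjustments 1-3, assuming use_persistence sits at the top level of frontend_dast like the other parameters in this FAQ (the blocklist pattern is a hypothetical placeholder):

frontend_dast:
  parallel_workers: 1    # 1. Reduce system load (default: 3)
  use_persistence: true  # 2. Resume interrupted scans
  scope:
    pages:
      blocklist_patterns:
        # 3. Hypothetical time-consuming route to skip
        - ".*/reports/generate/.*"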
    

How can application coverage be improved when not all pages are discovered?

Coverage can be enhanced through the following configuration adjustments (a combined sketch follows the list):

  1. The max_duration parameter should be increased to 180-240 minutes to allow more exploration time
  2. Known URLs should be added to the hotstart list to seed the crawler with important entry points
  3. The prefetch_sitemap: true setting should be enabled to leverage existing sitemap data
  4. The max_fragments_visits and max_query_params_visits parameters should be increased to explore more URL variations
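
A combined sketch of all four adjustments; the parameter names are taken from this FAQ, but their exact placement in the config schema (notably the hotstart list and the two visit limits) may differ, and the seeded URL is a hypothetical placeholder:

frontend_dast:
  max_duration: 240        # 1. Allow more exploration time (minutes)
  prefetch_sitemap: true   # 3. Leverage existing sitemap data
  hotstart:                # 2. Seed the crawler with known entry points
    - https://app.example.com/dashboard  # hypothetical URL
  scope:
    pages:
      max_fragments_visits: 10      # 4. Explore more fragment variations
      max_query_params_visits: 10   # 4. Explore more query-param variations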

Why do scans repeatedly visit similar pages with different parameters?

This behavior occurs when parameter exploration is not properly constrained. Parameter exploration should be limited using:

frontend_dast:
  scope:
    pages:
      max_parameterized_url_variations: 5
      max_unique_fragments_per_page: 5
      max_unique_values_per_query_param: 5

These settings prevent the scanner from exhaustively testing every parameter combination, which is particularly important for applications with dynamic URLs.

Security Checks Configuration

What are the differences between security check types?

Each security check type targets distinct security aspects:

  • ACTIVE_PAGE_CHECKS: Interactive vulnerability testing (XSS, SQL injection, etc.)
  • PASSIVE_PAGE_CHECKS: Safe analysis (DOM security, browser storage, console errors)
  • NETWORK_CHECKS: Infrastructure analysis (headers, cookies, SSL, dependencies)
  • API_CHECKS: Security testing of captured API traffic from the frontend

The ALL option enables comprehensive testing, while specific types can be combined for targeted analysis based on security requirements.
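
For example, to restrict a scan to non-invasive analysis, the passive and network check types can be combined (the same pairing appears in the production-safe example later in this FAQ):

frontend_dast:
  security_checks_enabled:
    - PASSIVE_PAGE_CHECKS  # Safe, non-invasive page analysis
    - NETWORK_CHECKS       # Headers, cookies, SSL, dependencies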

How can crawling-only mode be configured without running security tests?

Crawling-only mode can be enabled using the read_only parameter, which provides a shorthand configuration:

frontend_dast:
  read_only: true  # Shorthand to disable active page checks; API, passive page, and network checks still run
  parallel_workers: 1  # Minimal load
  max_duration: 60  # Short duration

This mode is useful for discovering application structure and validating scan configuration before running full security assessments.

How can I run a safe, low-impact scan on production environments?

When scanning production environments, minimizing noise and system impact is critical to avoid triggering security alerts, overwhelming infrastructure, or affecting end-user experience. A production-safe configuration should combine multiple safety measures:

Minimal Security Testing

The read_only parameter should be enabled to disable active vulnerability testing while still collecting valuable security insights:

frontend_dast:
  read_only: true  # Disables active injection testing

Rate Limiting and Traffic Control

Request rates should be significantly reduced to minimize server load and avoid rate limiting or WAF triggers:

frontend_dast:
  max_requests_per_second: 2  # Conservative request rate (default: 10)
  max_concurrent_requests: 1  # Single request at a time (default: 3)
  parallel_workers: 1  # Single browser instance (default: 3)

Scan Duration and Scope

The scan scope should be limited to reduce overall system impact:

frontend_dast:
  max_duration: 30  # Shorter scan window (30 minutes)
  scope:
    pages:
      max_parameterized_url_variations: 2  # Reduce URL permutations

Firewall and Security System Configuration

To prevent triggering security alerts and blocks:

  1. IP Allowlisting: The scanner's IP address should be allowlisted in WAF, CDN, and firewall configurations

  2. User-Agent Identification: A custom User-Agent header can be configured for easy identification and allowlisting:

    frontend_dast:
      user_agent: "EscapeSecurity-ProductionScan/1.0 (Authorized-Scan)"
    
  3. Rate Limit Exemption: Scanner requests should be exempted from rate limiting rules when possible

  4. SIEM Alert Suppression: Security monitoring systems should have filters configured to suppress alerts from known scanner activities

Request Headers and Authentication

Scanner requests can be identified with custom headers for monitoring and filtering:

frontend_dast:
  custom_headers:
    X-Security-Scanner: "Escape"
    X-Scan-Purpose: "Production-Safety-Test"

Sensitive Areas Exclusion

High-risk endpoints should be explicitly excluded from scanning:

frontend_dast:
  scope:
    pages:
      blocklist_patterns:
        - ".*/admin/delete/.*"  # Exclude destructive admin actions
        - ".*/payment/process/.*"  # Exclude payment processing
        - ".*/data/export/.*"  # Exclude bulk data operations
    api:
      skipped_url_patterns:
        - ".*/api/admin/.*"  # Exclude admin APIs
        - ".*/api/delete/.*"  # Exclude deletion endpoints

Monitoring Exclusions

Elements that trigger monitoring or analytics should be avoided:

frontend_dast:
  scope:
    pages:
      blocklist_element_selectors:
        - 'button[data-action="delete"]'  # Avoid deletion buttons
        - 'a[href*="/admin/"]'  # Skip admin area links
        - '[data-analytics-critical="true"]'  # Skip critical analytics triggers

Complete Production-Safe Configuration Example

frontend_dast:
  # Minimal security testing
  read_only: true
  security_checks_enabled:
    - PASSIVE_PAGE_CHECKS  # Safe, non-invasive checks only
    - NETWORK_CHECKS  # Infrastructure analysis

  # Rate limiting and performance
  parallel_workers: 1
  max_duration: 30

  # Identification headers
  user_agent: "EscapeSecurity-ProductionScan/1.0 (Authorized-Scan)"
  custom_headers:
    X-Security-Scanner: "Escape"
    X-Scan-Purpose: "Production-Safety-Test"

  # Scope limitations
  scope:
    pages:
      max_parameterized_url_variations: 2
      max_unique_fragments_per_page: 2

      # Exclude high-risk areas
      blocklist_patterns:
        - ".*/admin/delete/.*"
        - ".*/payment/process/.*"
        - ".*/api/admin/.*"

      blocklist_element_selectors:
        - 'button[data-action="delete"]'
        - 'button[type="submit"][class*="danger"]'
        - '[data-critical-action="true"]'

    api:
      skipped_url_patterns:
        - ".*/api/delete/.*"
        - ".*/api/admin/.*"

Pre-Scan Validation

Before running production scans, the following validations should be performed:

  1. Staging Test: The same configuration should be tested on a staging environment first
  2. Off-Peak Scheduling: Scans should be scheduled during low-traffic periods
  3. Monitoring Setup: Application and infrastructure monitoring should be active to detect any issues
  4. Rollback Plan: A plan to terminate the scan quickly should be established if issues arise

Post-Scan Verification

After completing a production scan:

  1. Log Review: Application logs should be reviewed for errors or unusual patterns
  2. Metric Analysis: Performance metrics should be analyzed for any degradation
  3. Alert Review: Security alerts should be checked to identify any triggered rules
  4. User Impact: End-user experience should be verified through monitoring or feedback

Production Scanning Risks

Even with conservative configuration, production scanning carries inherent risks. Always coordinate with operations, security, and development teams before scanning production environments. Consider using a Private Location within your infrastructure for better control.

Progressive Approach

Start with the most conservative configuration and gradually increase scope and depth as you gain confidence in the scanner's behavior within your specific environment. Monitor each scan's impact before expanding coverage.

Why are form inputs being filled during scans when ACTIVE_PAGE_CHECKS is disabled?

The observed input filling is part of the standard crawling process, not active security testing. To discover pages and map application functionality, the scanner must fill and submit forms, progress through multi-step flows, and simulate various user interactions.

The scanner automatically detects input formats and generates relevant data to enable effective crawling. This behavior is distinct from the fuzzing and injection testing performed by ACTIVE_PAGE_CHECKS.

I want to disable the API scanning of my authentication flow

By default, the scanner fuzzes and injects payloads into all API traffic seen during the scan, including during the authentication procedure.

In some cases, this can break authentication and create inconsistent logins. You can turn it off with the following:

frontend_dast:
  api_checks_during_auth: false

Authentication & Complex Applications

What configuration options are available when authentication breaks during scanning?

Authentication issues can be mitigated through several configuration parameters (a combined sketch follows the list):

  1. The integrated_authentication: true setting should be enabled to maintain authentication state throughout the scan

  2. Logout buttons should be added to blocklist_element_selectors to prevent accidental session termination

  3. The single_page_worker: true parameter should be used if the application cannot handle page reloads without losing state

  4. API injection testing should be disabled if it interferes with authentication mechanisms:

    frontend_dast:
      security_checks_enabled:
        - ACTIVE_PAGE_CHECKS
        - PASSIVE_PAGE_CHECKS
        - NETWORK_CHECKS
        # API_CHECKS excluded
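
A combined sketch of options 1-3, assuming integrated_authentication and single_page_worker sit at the top level of frontend_dast (the logout selector is a hypothetical placeholder; adapt it to your application's own logout button):

frontend_dast:
  integrated_authentication: true  # 1. Maintain authentication state
  single_page_worker: true         # 3. Avoid page reloads that lose state
  scope:
    pages:
      blocklist_element_selectors:
        # 2. Hypothetical logout button selector
        - 'button[data-testid="logout"]'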
    

How should scans be configured for large e-commerce sites with thousands of product pages?

Large-scale applications require efficient crawling strategies combined with targeted blocklists:

frontend_dast:
  max_duration: 240  # Longer duration needed
  parallel_workers: 3
  security_checks_enabled:
    - PASSIVE_PAGE_CHECKS  # Efficient for large sites
    - API_CHECKS          # Capture API traffic patterns
  scope:
    pages:
      blocklist_patterns:
        - ".*/product/[0-9]+/reviews.*"   # Skip review pages
        - ".*/category/.*/page/[0-9]+.*"  # Skip deep pagination
      max_parameterized_url_variations: 5  # Limit product variations

This configuration balances comprehensive coverage with practical scan duration by avoiding repetitive content patterns.

Advanced Features

Are iframes supported, and is a separate scan profile required for iframe-embedded applications?

Yes, iframe content is automatically scanned when the iframe source URL originates from the same application (same origin). No additional scan profile configuration is required for same-origin iframe content.

However, if iframe content is loaded from a different domain (cross-origin), a separate scan profile must be created for that domain to enable security testing of the embedded content.

How can specific API calls be excluded during the crawling phase?

API calls can be excluded from analysis using the skipped_url_patterns parameter under scope.api:

frontend_dast:
  scope:
    api:
      skipped_url_patterns:
        - ".*/api/analytics/.*"
        - ".*/api/tracking/.*"

This configuration prevents the scanner from analyzing specific API endpoints that match these patterns during both crawling and security testing phases. This is particularly useful for excluding third-party analytics, tracking endpoints, or other API calls that should not be tested.