Rate Limiting Scans That Run Through a Private Location
Private Locations often target internal infrastructure that is sensitive to bursts of scanner traffic — corporate monitoring agents (Datadog, New Relic, …), SIEMs, WAFs, or lower-tier staging databases. The default Escape scan rate (up to 100 requests/second per profile) is tuned for public, production-grade targets and can overwhelm these environments.
This page explains how to enforce a lower, consistent rate limit on every scan that runs through a given Private Location, without having to edit each profile by hand.
TL;DR
There is no per-location rate-limit setting in the UI today, but the Public API already makes this fully automatable. Set `network.requests_per_second` on each profile using the Private Location — once — via `PUT /v3/profiles/{profileId}/configuration`, and re-apply it whenever new profiles are created. A 30-line script is enough. See Recipe 1 below.
Why Rate Limiting Lives on the Profile, Not on the Location
Escape's scan rate is a property of the profile configuration (`network.requests_per_second`), not of the Private Location that runs the scan. A Private Location is a transport — it decides where the scanner runs from and which network it can reach. The traffic volume itself is set by the scan profile and applies uniformly to every request the scan issues, whether the profile runs through an Escape-managed location or a Private Location.
This keeps behavior explicit and reproducible:
- Two profiles pointed at the same asset but run from different locations have the same traffic shape.
- A scan's request budget is visible in the profile configuration — no hidden override based on which location picked it up.
- Per-profile rate limits can diverge (slow scans for internal monitoring-sensitive assets, fast scans for public staging).
The trade-off is that a customer who wants all scans going through a specific Private Location to be throttled must apply the setting to each of those profiles. The recipes below automate that.
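Concretely, the knob is a single field inside the profile's configuration object. A profile throttled to 10 requests/second carries a fragment like this (sketch; the rest of the configuration is omitted):

```json
{
  "network": {
    "requests_per_second": 10
  }
}
```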
Prerequisites
- An API key with profile write access. Generate one from your user profile.
- The ID of the Private Location you want to throttle, either from `escape-cli locations list` or from the Locations settings page in the UI.
- A way to identify the profiles bound to that location. Escape currently does not expose `proxyId` on `GET /v3/profiles`, so pick one of these conventions before starting — choose once and stick with it:
  - Tag-based (recommended): tag every profile you create against the Private Location with a stable tag such as `location:msc-internal`. This makes the set explicit, searchable in the UI, and trivial to filter on via the API.
  - Name-based: prefix the profile name (for example `[Internal]` or `[MSC]`). Filter with the `search` query parameter on `GET /v3/profiles`.
  - Domain-based: filter with the `domains` query parameter when all internal assets live under a distinct domain (for example `*.internal.msc.com`).
  - Exhaustive: apply the rate limit to every profile in the organization. Appropriate when the Private Location is the dominant scan path (for example, the whole organization scans only internal assets).
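Whichever convention you pick, it maps to a single query parameter on `GET /v3/profiles`. A small helper makes the mapping explicit (`build_query` is a hypothetical name, not part of `escape-cli`):

```shell
# Map each selection convention to the matching GET /v3/profiles
# query parameter (tagIds, search, or domains, as named above).
build_query() {
  case "$1" in
    tag)    printf 'tagIds=%s' "$2" ;;
    name)   printf 'search=%s' "$2" ;;
    domain) printf 'domains=%s' "$2" ;;
    *)      return 1 ;;
  esac
}

build_query tag "00000000-0000-0000-0000-000000000000"
```

Pass the resulting string through `curl -G --data-urlencode`, as Recipe 1 does for the tag-based case.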
Recipe 1: One-shot rate limit across profiles
Applies `network.requests_per_second` to every profile matching the selection convention you chose. Run it once after the initial rollout.
The script below assumes tag-based selection — replace `TAG_ID` and `RATE` to fit your environment. Both filtering variants (by `search` or `domains`) work with the same pattern; only the query string changes.
```bash
#!/usr/bin/env bash
set -euo pipefail

API_KEY="${ESCAPE_API_KEY:?export ESCAPE_API_KEY first}"
API="https://public.escape.tech/v3"
TAG_ID="00000000-0000-0000-0000-000000000000"  # tag marking "internal" profiles
RATE=10                                        # requests_per_second to enforce

# 1. List every profile carrying the tag, walking the cursor.
cursor=""
while :; do
  page=$(curl -fsS -G "$API/profiles" \
    -H "X-ESCAPE-API-KEY: $API_KEY" \
    --data-urlencode "tagIds=$TAG_ID" \
    --data-urlencode "size=100" \
    --data-urlencode "cursor=$cursor")

  echo "$page" | jq -r '.data[].id' | while read -r profile_id; do
    # 2. Read the current configuration, override requests_per_second, write it back.
    current=$(curl -fsS "$API/profiles/$profile_id" \
      -H "X-ESCAPE-API-KEY: $API_KEY" | jq '.configuration')
    updated=$(jq --argjson rate "$RATE" \
      '.network = (.network // {}) | .network.requests_per_second = $rate' \
      <<< "$current")
    curl -fsS -X PUT "$API/profiles/$profile_id/configuration" \
      -H "X-ESCAPE-API-KEY: $API_KEY" \
      -H "Content-Type: application/json" \
      -d "$(jq -n --argjson cfg "$updated" '{configuration: $cfg}')" \
      > /dev/null
    echo "Updated $profile_id -> requests_per_second=$RATE"
  done

  cursor=$(echo "$page" | jq -r '.nextCursor // empty')
  [[ -z "$cursor" ]] && break
done
```
What the script does:
- Pages through `GET /v3/profiles` filtered by a tag that identifies profiles bound to the Private Location.
- For each profile, fetches the full configuration with `GET /v3/profiles/{profileId}`.
- Merges `network.requests_per_second` into the existing configuration (preserving every other setting — custom headers, authentication, scope, etc.).
- Writes the new configuration back with `PUT /v3/profiles/{profileId}/configuration`.
Merge, don't overwrite
`PUT /v3/profiles/{profileId}/configuration` replaces the entire configuration body with what you send. Always read the current configuration first, patch only the fields you want to change, and write the merged object back. The `jq` step in the script above is the minimal safe pattern.
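To see the merge in isolation, here it is applied to a hypothetical configuration (the `timeout` and `scan` keys stand in for whatever the profile already carries):

```shell
# Hypothetical configuration, as returned by GET /v3/profiles/{profileId}.
current='{"network":{"timeout":30},"scan":{"blacklist":["/health"]}}'
RATE=10

# Create .network if it is absent, then set requests_per_second,
# leaving every other key of the configuration untouched.
updated=$(printf '%s' "$current" | jq -c --argjson rate "$RATE" \
  '.network = (.network // {}) | .network.requests_per_second = $rate')

echo "$updated"
```

The output keeps `timeout` and the entire `scan` object while adding `requests_per_second: 10` to `network` — nothing outside the patched path changes.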
Valid range for `requests_per_second`
The default is 100. In the Escape app, the scanner configuration schema allows integers from 1 to 1000. The Public API OpenAPI description does not declare a numeric maximum for this field. The MCP tool `orchestration_update_profile_rate_limit` accepts 1 through 5000 — use the upper part of that range only when the target can sustain it. For monitoring-sensitive internal infrastructure, values between 5 and 20 are typical. Start low and raise only after confirming the monitoring pipeline is not impacted.
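If the rate is computed or user-supplied, it is worth clamping to the 1 to 1000 window the app schema accepts before writing it. A minimal sketch (`clamp_rate` is a hypothetical helper):

```shell
# Clamp a requested rate to the 1..1000 range the scanner configuration
# schema allows (see the note above); adjust the bounds if the contract changes.
clamp_rate() {
  local r=$1
  if (( r < 1 ));    then r=1;    fi
  if (( r > 1000 )); then r=1000; fi
  printf '%s\n' "$r"
}

clamp_rate 5000   # prints 1000
clamp_rate 0      # prints 1
clamp_rate 10     # prints 10
```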
Recipe 2: Keep it enforced for newly-created profiles
Running Recipe 1 once is enough for existing profiles, but any new profile created afterward will start at the default 100 requests/second. Two approaches keep the rate limit in place as the environment evolves.
Option A — Enforce at creation time
Set the field inline when the profile is created, either via the UI (Advanced settings → Network → Requests per second) or via the API:
```bash
API_KEY="${ESCAPE_API_KEY:?export ESCAPE_API_KEY first}"
API="https://public.escape.tech/v3"

curl -X POST "$API/profiles/dast_rest" \
  -H "X-ESCAPE-API-KEY: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "assetId": "<ASSET_ID>",
    "schemaId": "<SCHEMA_ID>",
    "name": "[Internal] Customer Service API",
    "proxyId": "<PRIVATE_LOCATION_ID>",
    "tagIds": ["<INTERNAL_TAG_ID>"],
    "mode": "read_only",
    "configuration": {
      "network": {
        "requests_per_second": 10
      }
    }
  }'
```
Any team that creates profiles against the Private Location should use this shape. If profile creation is scripted or templated, update the template once.
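If creation is scripted, centralizing the body construction keeps the field from being forgotten. A sketch — `build_profile_payload` is a hypothetical helper and the angle-bracket IDs are placeholders, matching the `curl` example above:

```shell
RATE=10  # requests_per_second enforced for every internal profile

# Build the POST /v3/profiles/dast_rest body with the rate limit baked in,
# so no caller can create an unthrottled internal profile by accident.
build_profile_payload() {  # usage: build_profile_payload NAME ASSET_ID SCHEMA_ID PROXY_ID
  jq -n --arg name "$1" --arg asset "$2" --arg schema "$3" --arg proxy "$4" \
        --argjson rate "$RATE" \
    '{assetId: $asset, schemaId: $schema, name: $name, proxyId: $proxy,
      mode: "read_only",
      configuration: {network: {requests_per_second: $rate}}}'
}

build_profile_payload "[Internal] Example API" "<ASSET_ID>" "<SCHEMA_ID>" "<PRIVATE_LOCATION_ID>"
```

Pipe the result into the `curl -d @-` form of the POST above, or reuse the function from CI.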
Option B — Periodic re-enforcement job
For organizations where profiles are created through the UI and team discipline around the tag/name convention is imperfect, wrap Recipe 1 in a scheduled job (GitHub Actions, GitLab CI cron, Kubernetes CronJob, etc.) that runs nightly or hourly. It is idempotent — profiles already at the target rate are PUT with the same value and nothing changes functionally.
```yaml
# .github/workflows/enforce-private-location-rate-limit.yml
name: Enforce private-location rate limit

on:
  schedule:
    - cron: "0 2 * * *"  # every night at 02:00 UTC
  workflow_dispatch:

jobs:
  enforce:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Apply rate limit
        env:
          ESCAPE_API_KEY: ${{ secrets.ESCAPE_API_KEY }}
        run: ./scripts/enforce-rate-limit.sh  # the script from Recipe 1
```
Recipe 3: One-off rate limit update via the MCP server
If you only need to adjust the rate on a single profile occasionally — for example when a DevOps engineer temporarily tightens the budget for a maintenance window — use the orchestration tool exposed by the Escape MCP server:
```json
{
  "tool": "orchestration_update_profile_rate_limit",
  "arguments": {
    "profileName": "Customer Service API",
    "requestsPerSecond": 5
  }
}
```
The tool resolves the profile by fuzzy name match, reads its current configuration, updates `network.requests_per_second`, and writes it back — equivalent to the per-profile body of Recipe 1 but driven from natural language through any MCP-compatible client.
What This Approach Does Not Cover
- Global caps at the Private Location level. The Private Location agent does not, today, enforce its own rate ceiling independent of the profile config. If two scans with 100 requests/second each run concurrently through the same agent, the agent will forward all 200 requests/second to the target. The solution is to configure `network.requests_per_second` on each profile (which the recipes above automate) or to run fewer scans concurrently.
- Per-endpoint / per-domain rate limits. `network.requests_per_second` applies uniformly to every request the scan issues. To protect a specific sensitive path, use the API Testing blocklist rather than a rate limit.
- Rate limiting the agentic crawler. Crawling traffic (the browser fetching pages) is governed by `frontend_dast.parallel_workers` and related settings, not by `network.requests_per_second`. See WebApp Testing — Performance Tuning for the full model.
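Because the agent forwards the sum of all concurrent scans, a per-profile value can be derived from an aggregate budget. A sketch, assuming you know the total rate the target tolerates and the worst-case number of concurrent scans through the agent (both numbers below are illustrative):

```shell
AGGREGATE_BUDGET=20  # total requests/second the internal target tolerates (assumed)
MAX_CONCURRENT=4     # worst-case concurrent scans through this Private Location (assumed)

# Integer division floors the result, so even the worst case stays under budget.
RATE=$(( AGGREGATE_BUDGET / MAX_CONCURRENT ))
echo "$RATE"  # prints 5; use this as requests_per_second in Recipe 1
```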
Related Documentation
- Public API — authentication, base URL, and other recipes
- Profiles Management — creating and listing profiles via the CLI
- API DAST — Rate Limiting — scanner-side rate limiting reference
- WebApp DAST — Performance Tuning — WebApp-specific throttling
- Private Location Logging & Monitoring — observing Private Location behavior in production