Practical Recipes¶
This guide provides complete, ready-to-adapt examples for common Escape CLI workflows. Each recipe is a self-contained shell script with brief notes you can tailor to your own environment.
Running a Scan and Saving Results¶
A complete workflow for executing a scan and capturing results in JSON format.
#!/bin/bash
set -e
# Verify CLI installation
escape-cli version
# Configuration
PROFILE_ID="00000000-0000-0000-0000-000000000000"
# Start a scan and capture the scan ID
echo "Starting security scan..."
SCAN_ID=$(escape-cli scans start "${PROFILE_ID}" -o json | jq -r '.id')
echo "Scan ID: ${SCAN_ID}"
# Save scan metadata
echo "${SCAN_ID}" > scan-id.txt
# Monitor the scan until completion
echo "Monitoring scan progress..."
escape-cli scans watch "${SCAN_ID}"
# Retrieve and save issues
echo "Retrieving scan results..."
escape-cli scans issues "${SCAN_ID}" -o json | tee issues.json
# Generate summary
TOTAL_ISSUES=$(jq 'length' issues.json)
CRITICAL_ISSUES=$(jq '[.[] | select(.severity=="CRITICAL")] | length' issues.json)
HIGH_ISSUES=$(jq '[.[] | select(.severity=="HIGH")] | length' issues.json)
echo "Scan complete:"
echo " Total issues: ${TOTAL_ISSUES}"
echo " Critical: ${CRITICAL_ISSUES}"
echo " High: ${HIGH_ISSUES}"
Creating a REST Profile and Starting a Scan¶
A complete workflow for setting up a new REST API profile, from schema upload to the first scan.
#!/bin/bash
set -e
# Upload OpenAPI schema
echo "Uploading API schema..."
UPLOAD_ID=$(escape-cli upload schema -o json < openapi.json | jq -r '.')
echo "Upload ID: ${UPLOAD_ID}"
# Create schema asset
echo "Creating schema asset..."
cat <<EOF > schema-asset.json
{
"asset_type": "SCHEMA",
"upload": {
"temporaryObjectKey": "$UPLOAD_ID"
}
}
EOF
SCHEMA_ASSET_ID=$(escape-cli asset create -o json < schema-asset.json | jq -r '.id')
echo "Schema Asset ID: ${SCHEMA_ASSET_ID}"
# Create service asset
echo "Creating service asset..."
cat <<EOF > service-asset.json
{
"asset_class": "API_SERVICE",
"asset_type": "REST",
"url": "https://api.example.com",
"framework": "REST_FASTAPI"
}
EOF
SERVICE_ASSET_ID=$(escape-cli asset create -o json < service-asset.json | jq -r '.id')
echo "Service Asset ID: ${SERVICE_ASSET_ID}"
# Get available location
echo "Fetching available location..."
PROXY_ID=$(escape-cli locations list -o json | jq -r '.[0].id')
echo "Location ID: ${PROXY_ID}"
# Create profile
echo "Creating REST profile..."
cat <<EOF > profile.json
{
"assetId": "$SERVICE_ASSET_ID",
"name": "Production REST API",
"proxyId": "$PROXY_ID",
"schemaId": "$SCHEMA_ASSET_ID",
"tagsIds": []
}
EOF
PROFILE_ID=$(escape-cli profiles create-rest -o json < profile.json | jq -r '.id')
echo "Profile ID: ${PROFILE_ID}"
echo "Profile created successfully!"
# Note: A scan is automatically started when the profile is created
# Optionally start a manual scan
echo "Starting additional scan..."
escape-cli scans start "$PROFILE_ID"
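As an optional follow-up, you can fetch the new profile back as a sanity check and remove the temporary payload files created above. This is a minimal sketch; it assumes the profile object exposes id and name fields, as used elsewhere in this guide.
# Optional: confirm the profile resolves, then clean up temporary payload files
escape-cli profiles get "$PROFILE_ID" -o json | jq '{id, name}'
rm -f schema-asset.json service-asset.json profile.json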
Bulk Asset Import from File¶
Import multiple assets in bulk from a plain URL list, a CSV export, or a JSON array.
Import REST APIs from URL List¶
#!/bin/bash
# File: urls.txt (one URL per line)
# https://api1.example.com
# https://api2.example.com
# https://api3.example.com
while read -r url; do
echo "Creating asset for: $url"
cat <<EOF | escape-cli asset create -o json
{
"asset_class": "API_SERVICE",
"asset_type": "REST",
"url": "$url"
}
EOF
echo "Created asset: $url"
done < urls.txt
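A slightly hardened variant of the same loop, assuming urls.txt may contain blank lines or # comments:
# Skip blank lines and comments before creating assets
grep -vE '^[[:space:]]*(#|$)' urls.txt | while read -r url; do
echo "Creating asset for: $url"
cat <<EOF | escape-cli asset create -o json
{
"asset_class": "API_SERVICE",
"asset_type": "REST",
"url": "$url"
}
EOF
done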
Import DNS Hosts from CSV¶
#!/bin/bash
# CSV format: date,domain,comment
# 2025-01-01,example.com,Production domain
# 2025-01-02,staging.example.com,Staging environment
# Skip header line with sed
sed 1d domains.csv | while IFS=',' read -r date domain comment; do
echo "Creating asset for domain: $domain"
cat <<EOF | escape-cli asset create -o json
{
"asset_class": "HOST",
"asset_type": "DNS",
"address": "$domain"
}
EOF
echo "Created: $domain ($comment)"
done
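Before importing, a quick dry run (plain shell, no CLI calls) previews the domains that would be created:
# Dry run: print only the domain column, skipping the CSV header
sed 1d domains.csv | cut -d',' -f2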
Import from JSON Array¶
#!/bin/bash
# Import assets from a JSON array
# File: assets.json
# [
# {"url": "https://api1.example.com", "type": "REST"},
# {"url": "https://api2.example.com", "type": "REST"}
# ]
jq -c '.[]' assets.json | while read -r asset; do
URL=$(echo "$asset" | jq -r '.url')
TYPE=$(echo "$asset" | jq -r '.type')
echo "Creating $TYPE asset: $URL"
cat <<EOF | escape-cli asset create
{
"asset_class": "API_SERVICE",
"asset_type": "$TYPE",
"url": "$URL"
}
EOF
done
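An optional pre-flight check, sketched below, stops the import early if any entry in assets.json is missing a url or type field:
# Fail fast if assets.json contains incomplete entries
jq -e 'all(.[]; has("url") and has("type"))' assets.json > /dev/null || {
echo "assets.json contains entries without url or type" >&2
exit 1
}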
Automated Scan with Quality Gate¶
Run a scan and fail if critical or high-severity issues are found.
#!/bin/bash
set -e
PROFILE_ID="${1:-}"
if [ -z "$PROFILE_ID" ]; then
echo "Usage: $0 <profile-id>"
exit 1
fi
# Start scan
echo "Starting security scan for profile: $PROFILE_ID"
SCAN_ID=$(escape-cli scans start "$PROFILE_ID" -o json | jq -r '.id')
echo "Scan ID: $SCAN_ID"
# Watch scan progress
echo "Monitoring scan..."
if ! escape-cli scans watch "$SCAN_ID"; then
echo "ERROR: Scan failed or was cancelled"
exit 1
fi
# Get results
echo "Retrieving results..."
escape-cli scans issues "$SCAN_ID" -o json > results.json
# Analyze results
CRITICAL_COUNT=$(jq '[.[] | select(.severity=="CRITICAL")] | length' results.json)
HIGH_COUNT=$(jq '[.[] | select(.severity=="HIGH")] | length' results.json)
MEDIUM_COUNT=$(jq '[.[] | select(.severity=="MEDIUM")] | length' results.json)
LOW_COUNT=$(jq '[.[] | select(.severity=="LOW")] | length' results.json)
# Print summary
echo "===== Scan Results ====="
echo "Critical: $CRITICAL_COUNT"
echo "High: $HIGH_COUNT"
echo "Medium: $MEDIUM_COUNT"
echo "Low: $LOW_COUNT"
echo "======================="
# Quality gate: fail on critical or high severity
if [ "$CRITICAL_COUNT" -gt 0 ] || [ "$HIGH_COUNT" -gt 0 ]; then
echo "FAILURE: Found $CRITICAL_COUNT critical and $HIGH_COUNT high severity issues"
# Print critical/high issues
echo ""
echo "Critical and High Severity Issues:"
jq -r '.[] | select(.severity=="CRITICAL" or .severity=="HIGH") |
" - [\(.severity)] \(.name) - \(.url)"' results.json
exit 1
else
echo "SUCCESS: No critical or high severity issues found"
exit 0
fi
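If a zero-tolerance gate is too strict, the final check can be made configurable. A sketch using hypothetical MAX_CRITICAL and MAX_HIGH environment variables in place of the fixed gate above:
# Configurable quality gate: tolerate up to MAX_CRITICAL / MAX_HIGH findings (default 0)
MAX_CRITICAL="${MAX_CRITICAL:-0}"
MAX_HIGH="${MAX_HIGH:-0}"
if [ "$CRITICAL_COUNT" -gt "$MAX_CRITICAL" ] || [ "$HIGH_COUNT" -gt "$MAX_HIGH" ]; then
echo "FAILURE: $CRITICAL_COUNT critical (max $MAX_CRITICAL) and $HIGH_COUNT high (max $MAX_HIGH) severity issues"
exit 1
fi
echo "SUCCESS: issue counts are within the configured thresholds"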
Daily Security Report¶
Generate a daily security status report and email it to stakeholders.
#!/bin/bash
# Daily security report generator
OUTPUT_FILE="security-report-$(date +%Y-%m-%d).txt"
{
echo "======================================"
echo "Escape Security Report"
echo "Date: $(date)"
echo "======================================"
echo ""
# Asset Summary
echo "## Asset Summary"
TOTAL_ASSETS=$(escape-cli assets list -o json | jq 'length')
MONITORED_ASSETS=$(escape-cli assets list --statuses MONITORED -o json | jq 'length')
echo "Total Assets: $TOTAL_ASSETS"
echo "Monitored Assets: $MONITORED_ASSETS"
echo ""
# Profile Summary
echo "## Profile Summary"
TOTAL_PROFILES=$(escape-cli profiles list -o json | jq 'length')
echo "Total Profiles: $TOTAL_PROFILES"
echo ""
# Issues by Severity
echo "## Current Security Issues"
escape-cli issues list -o json > /tmp/all-issues.json
CRITICAL=$(jq '[.[] | select(.severity=="CRITICAL" and .ignored==false)] | length' /tmp/all-issues.json)
HIGH=$(jq '[.[] | select(.severity=="HIGH" and .ignored==false)] | length' /tmp/all-issues.json)
MEDIUM=$(jq '[.[] | select(.severity=="MEDIUM" and .ignored==false)] | length' /tmp/all-issues.json)
LOW=$(jq '[.[] | select(.severity=="LOW" and .ignored==false)] | length' /tmp/all-issues.json)
echo "Critical: $CRITICAL"
echo "High: $HIGH"
echo "Medium: $MEDIUM"
echo "Low: $LOW"
echo ""
# Top 10 Critical/High Issues
if [ "$CRITICAL" -gt 0 ] || [ "$HIGH" -gt 0 ]; then
echo "## Top Priority Issues"
jq -r '.[] | select(.severity=="CRITICAL" or .severity=="HIGH") |
select(.ignored==false) |
" [\(.severity)] \(.name) - \(.url)"' /tmp/all-issues.json | head -10
echo ""
fi
# Recent Scans
echo "## Recent Activity"
echo "Latest scans across all profiles:"
escape-cli profiles list -o json | jq -r '.[0:5] | .[] | .id' | while read -r profile_id; do
PROFILE_NAME=$(escape-cli profiles get "$profile_id" -o json | jq -r '.name')
LATEST_SCAN=$(escape-cli scans list "$profile_id" -o json |
jq -r 'if length > 0 then "\(.[0].status) - \(.[0].createdAt)" else "No scans" end' 2>/dev/null || echo "No scans")
echo " $PROFILE_NAME: $LATEST_SCAN"
done
echo ""
echo "======================================"
echo "Report generated by Escape CLI"
echo "======================================"
} > "$OUTPUT_FILE"
# Display the report
cat "$OUTPUT_FILE"
# Email the report (requires mail command)
if command -v mail &> /dev/null; then
cat "$OUTPUT_FILE" | mail -s "Daily Security Report - $(date +%Y-%m-%d)" \
security-team@example.com
echo "Report emailed to security-team@example.com"
fi
# Cleanup
rm -f /tmp/all-issues.json
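If email is not an option, the same report can be posted to a chat webhook instead. A sketch assuming a hypothetical SLACK_WEBHOOK_URL environment variable pointing at an incoming webhook:
# Optional: post the report text to an incoming webhook instead of email
if [ -n "${SLACK_WEBHOOK_URL:-}" ]; then
jq -Rs '{text: .}' < "$OUTPUT_FILE" | \
curl -s -X POST -H 'Content-Type: application/json' -d @- "$SLACK_WEBHOOK_URL"
fi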
Automated Asset Discovery¶
Discover and import assets from various sources.
Discover APIs from Kubernetes¶
#!/bin/bash
# Discover APIs running in Kubernetes and create Escape assets
NAMESPACE="${1:-default}"
echo "Discovering services in namespace: $NAMESPACE"
kubectl get services -n "$NAMESPACE" -o json | \
jq -c '.items[] | select(.spec.type=="LoadBalancer" or .spec.type=="NodePort") |
{name: .metadata.name, ip: .status.loadBalancer.ingress[0].ip, port: .spec.ports[0].port}' | \
while read -r service; do
NAME=$(echo "$service" | jq -r '.name')
IP=$(echo "$service" | jq -r '.ip')
PORT=$(echo "$service" | jq -r '.port')
if [ "$IP" != "null" ]; then
URL="http://${IP}:${PORT}"
echo "Creating asset for service: $NAME ($URL)"
cat <<EOF | escape-cli asset create
{
"asset_class": "API_SERVICE",
"asset_type": "REST",
"url": "$URL"
}
EOF
fi
done
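On clusters where LoadBalancer services expose a hostname rather than an IP (common with AWS load balancers), a variant can fall back to the hostname. This is a sketch of that adjustment:
# Variant: prefer the load balancer hostname, fall back to the IP
kubectl get services -n "$NAMESPACE" -o json | \
jq -c '.items[] | select(.spec.type=="LoadBalancer") |
{name: .metadata.name, host: (.status.loadBalancer.ingress[0].hostname // .status.loadBalancer.ingress[0].ip), port: .spec.ports[0].port}' | \
while read -r service; do
NAME=$(echo "$service" | jq -r '.name')
HOST=$(echo "$service" | jq -r '.host')
PORT=$(echo "$service" | jq -r '.port')
if [ "$HOST" != "null" ]; then
echo "Creating asset for service: $NAME (http://${HOST}:${PORT})"
cat <<EOF | escape-cli asset create
{
"asset_class": "API_SERVICE",
"asset_type": "REST",
"url": "http://${HOST}:${PORT}"
}
EOF
fi
done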
Import Assets from AWS API Gateway¶
#!/bin/bash
# Discover AWS API Gateway endpoints and create Escape assets
AWS_REGION="${1:-us-east-1}"
echo "Discovering API Gateway endpoints in region: $AWS_REGION"
aws apigateway get-rest-apis --region "$AWS_REGION" --output json | \
jq -c '.items[] | {id: .id, name: .name}' | while read -r api; do
API_ID=$(echo "$api" | jq -r '.id')
API_NAME=$(echo "$api" | jq -r '.name')
# Get stages
aws apigateway get-stages --rest-api-id "$API_ID" --region "$AWS_REGION" --output json | \
jq -r '.item[] | .stageName' | while read -r stage; do
URL="https://${API_ID}.execute-api.${AWS_REGION}.amazonaws.com/${stage}"
echo "Creating asset: $API_NAME ($stage)"
cat <<EOF | escape-cli asset create
{
"asset_class": "API_SERVICE",
"asset_type": "REST",
"url": "$URL"
}
EOF
done
done
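get-rest-apis only lists REST (v1) APIs; HTTP APIs managed by API Gateway v2 come from a separate command. A sketch, assuming the same asset payload applies and the default endpoint is the one you want to scan:
# Also import HTTP APIs (API Gateway v2) using their default endpoints
aws apigatewayv2 get-apis --region "$AWS_REGION" --output json | \
jq -c '.Items[] | {name: .Name, endpoint: .ApiEndpoint}' | while read -r api; do
API_NAME=$(echo "$api" | jq -r '.name')
URL=$(echo "$api" | jq -r '.endpoint')
echo "Creating asset: $API_NAME"
cat <<EOF | escape-cli asset create
{
"asset_class": "API_SERVICE",
"asset_type": "REST",
"url": "$URL"
}
EOF
done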
Continuous Monitoring Script¶
Monitor your application portfolio and alert on new critical findings.
#!/bin/bash
# Continuous monitoring script - run via cron every hour
ALERT_EMAIL="security-team@example.com"
STATE_FILE="/var/lib/escape-monitor/last-run.json"
# Ensure state directory exists
mkdir -p "$(dirname "$STATE_FILE")"
# Get current critical issues
CURRENT_CRITICAL=$(escape-cli issues list --severity CRITICAL --status OPEN -o json)
CURRENT_COUNT=$(echo "$CURRENT_CRITICAL" | jq 'length')
# Load previous state
if [ -f "$STATE_FILE" ]; then
PREVIOUS_COUNT=$(jq -r '.count' "$STATE_FILE")
else
PREVIOUS_COUNT=0
fi
echo "Current critical issues: $CURRENT_COUNT"
echo "Previous critical issues: $PREVIOUS_COUNT"
# Check for new critical issues
if [ "$CURRENT_COUNT" -gt "$PREVIOUS_COUNT" ]; then
NEW_COUNT=$((CURRENT_COUNT - PREVIOUS_COUNT))
echo "ALERT: $NEW_COUNT new critical issue(s) detected!"
# Generate alert email
{
echo "Security Alert: New Critical Issues Detected"
echo ""
echo "New critical issues: $NEW_COUNT"
echo "Total critical issues: $CURRENT_COUNT"
echo ""
echo "Critical Issues:"
echo "$CURRENT_CRITICAL" | jq -r '.[] | " - \(.name): \(.url)"'
} | mail -s "SECURITY ALERT: $NEW_COUNT New Critical Issues" "$ALERT_EMAIL"
fi
# Save current state
echo "{\"count\": $CURRENT_COUNT, \"timestamp\": \"$(date -Iseconds)\"}" > "$STATE_FILE"
# Exit with error if there are any critical issues
if [ "$CURRENT_COUNT" -gt 0 ]; then
exit 1
else
exit 0
fi
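A hypothetical crontab entry to run the monitor hourly, assuming the script is installed at /usr/local/bin/escape-monitor.sh:
# m h dom mon dow  command
0 * * * * /usr/local/bin/escape-monitor.sh >> /var/log/escape-monitor.log 2>&1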
Scan Result Comparison¶
Compare security findings between two scans to track progress.
#!/bin/bash
# Compare issues between two scans
SCAN_ID_1="${1:-}"
SCAN_ID_2="${2:-}"
if [ -z "$SCAN_ID_1" ] || [ -z "$SCAN_ID_2" ]; then
echo "Usage: $0 <scan-id-1> <scan-id-2>"
exit 1
fi
# Get issues from both scans
echo "Fetching scan results..."
escape-cli scans issues "$SCAN_ID_1" -o json > scan1.json
escape-cli scans issues "$SCAN_ID_2" -o json > scan2.json
# Count issues by severity
echo ""
echo "===== Scan Comparison ====="
echo ""
echo "Scan 1 (${SCAN_ID_1:0:8}...):"
jq 'group_by(.severity) | map({severity: .[0].severity, count: length})' scan1.json
echo ""
echo "Scan 2 (${SCAN_ID_2:0:8}...):"
jq 'group_by(.severity) | map({severity: .[0].severity, count: length})' scan2.json
# Find new issues (in scan 2 but not in scan 1)
echo ""
echo "New Issues in Scan 2:"
jq -s '
(.[1] | map(.name)) as $scan2_names |
(.[0] | map(.name)) as $scan1_names |
($scan2_names - $scan1_names) as $new_issues |
.[1] | map(select(.name as $n | $new_issues | index($n)))
' scan1.json scan2.json | jq -r '.[] | " [\(.severity)] \(.name)"'
# Find resolved issues (in scan 1 but not in scan 2)
echo ""
echo "Resolved Issues:"
jq -s '
(.[0] | map(.name)) as $scan1_names |
(.[1] | map(.name)) as $scan2_names |
($scan1_names - $scan2_names) as $resolved |
.[0] | map(select(.name as $n | $resolved | index($n)))
' scan1.json scan2.json | jq -r '.[] | " [\(.severity)] \(.name)"'
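# Optional stricter comparison: matching on the name alone can merge distinct
# findings that share a name across endpoints, so this sketch keys on "name|url"
# (field names follow the issue objects used elsewhere in this guide)
echo ""
echo "New Issues in Scan 2 (keyed on name and URL):"
jq -s '(.[0] | map(.name + "|" + .url)) as $seen |
.[1] | map(select((.name + "|" + .url) as $k | $seen | index($k) | not))
' scan1.json scan2.json | jq -r '.[] | " [\(.severity)] \(.name) - \(.url)"'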
# Cleanup
rm -f scan1.json scan2.json
Multi-Profile Scanning¶
Start scans for several profiles, monitor each one, and save the results per profile; a concurrent-watch variant is sketched after the script.
#!/bin/bash
# Scan multiple profiles
PROFILES=(
"00000000-0000-0000-0000-000000000001"
"00000000-0000-0000-0000-000000000002"
"00000000-0000-0000-0000-000000000003"
)
echo "Starting scans for ${#PROFILES[@]} profiles..."
SCAN_IDS=()
# Start all scans
for profile_id in "${PROFILES[@]}"; do
PROFILE_NAME=$(escape-cli profiles get "$profile_id" -o json | jq -r '.name')
echo "Starting scan for: $PROFILE_NAME"
SCAN_ID=$(escape-cli scans start "$profile_id" -o json | jq -r '.id')
SCAN_IDS+=("$SCAN_ID")
echo " Scan ID: $SCAN_ID"
done
echo ""
echo "All scans started. Monitoring progress..."
# Monitor all scans
for i in "${!SCAN_IDS[@]}"; do
SCAN_ID="${SCAN_IDS[$i]}"
PROFILE_ID="${PROFILES[$i]}"
PROFILE_NAME=$(escape-cli profiles get "$PROFILE_ID" -o json | jq -r '.name')
echo ""
echo "Monitoring scan for: $PROFILE_NAME"
escape-cli scans watch "$SCAN_ID"
# Get results
escape-cli scans issues "$SCAN_ID" -o json > "results-${PROFILE_NAME// /-}.json"
echo "Results saved to: results-${PROFILE_NAME// /-}.json"
done
echo ""
echo "All scans completed!"
Next Steps¶
- CI/CD Integration - Integrate these recipes into your CI/CD pipelines
- Scans Management - Learn more about scan operations
- Issues Management - Deep dive into issue handling