# Reading our findings.csv for executive reporting
Every completed scan exposes a CSV at `/v1/scans/{id}/findings.csv`. It's
designed to drop straight into a spreadsheet, a Google Data Studio dashboard, or a
compliance workbook. CISOs who rely on it tend to converge on the same patterns; here's the playbook.
## The 11 columns
| Column | Meaning |
|---|---|
| `checker` | The internal display name (e.g., "TLS deep-dive") |
| `category` | DNS / TLS / WEB / EMAIL |
| `severity` | info / low / medium / high / critical |
| `title` | One-line description |
| `evidence` | What we observed (e.g., "TLS 1.0 negotiated with cipher AES_128_CBC_SHA") |
| `description` | The longer explanation of *why* this is a finding |
| `remediation` | The fix summary (one paragraph) |
| `effort_minutes` | Estimated effort to fix (for a median dev who knows their stack) |
| `effort_human` | Human-readable effort: "30 min" / "2h" / "1 day" |
| `risk_if_not_fixed` | Curated description of the threat if ignored |
| `risk_categories` | security / legal / reputational / operational / financial / privacy |
Note: we don't include the `slug` column in the CSV (it's noise for non-engineering audiences); it's available in the JSON export.
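Before wiring up spreadsheet formulas, it's worth sanity-checking the column order straight from the shell. A minimal sketch (the file name and the header below are stand-ins assumed to match a fresh download):

```shell
# Header row matching the 11 documented columns (sample stand-in for a real download).
cat > findings.csv <<'EOF'
checker,category,severity,title,evidence,description,remediation,effort_minutes,effort_human,risk_if_not_fixed,risk_categories
EOF

# Print each header column on its own line, numbered: this maps spreadsheet
# letters to names (1=A=checker, 2=B=category, 3=C=severity, ...).
head -1 findings.csv | tr ',' '\n' | nl
```

Confirming that C is `severity` and H is `effort_minutes` before building formulas saves a round of silent off-by-one-column mistakes.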
## Pattern 1: priority dashboard
Pivot the CSV by severity, then sort by effort. The HIGH-with-30-min-effort findings are your sweet spot: biggest impact for the least dev time. CISOs love the `effort_minutes` column because it lets them argue for "let's fix these 8 things this sprint" with concrete time estimates.
```
=QUERY(A1:K500, "SELECT D, F, H, I WHERE C='high' ORDER BY H ASC")
# → title, description, effort_minutes, effort_human
```
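The same quick-win cut works without a spreadsheet. A hedged command-line sketch over synthetic sample rows (real exports may contain quoted commas, which a plain `-F,` split won't handle):

```shell
# Synthetic findings.csv with the documented column order.
cat > findings.csv <<'EOF'
checker,category,severity,title,evidence,description,remediation,effort_minutes,effort_human,risk_if_not_fixed,risk_categories
TLS deep-dive,TLS,high,Legacy TLS 1.0 enabled,TLS 1.0 negotiated,Old protocol,Disable TLS 1.0,30,30 min,Downgrade attacks,security
DNS check,DNS,low,Missing CAA record,No CAA record found,CAA limits issuance,Add a CAA record,15,15 min,Certificate mis-issuance,security
Web headers,WEB,high,No CSP header,Header absent,CSP blocks XSS,Add a CSP,120,2h,XSS exposure,security
EOF

# HIGH findings sorted by effort_minutes ascending: the quick-win list.
awk -F, 'NR>1 && $3=="high" {print $8 " min\t" $4}' findings.csv | sort -n
# → 30 min   Legacy TLS 1.0 enabled
# → 120 min  No CSP header
```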
## Pattern 2: trend tracking
Schedule a monthly Extended scan and archive the CSV. Diff month-over-month finding counts by severity. The headline metric for the board: "we have 4 fewer HIGH findings this month than last".
```bash
# Count HIGH findings in each month's archived CSV
# (naive comma split: fine as long as fields contain no embedded commas)
for m in 2026-01 2026-02 2026-03 2026-04; do
  echo -n "$m: "
  awk -F, 'NR>1 && $3=="high" {c++} END{print c+0" HIGH"}' "$m.csv"
done
```
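Turning two archived snapshots into the board's headline delta can be sketched the same way (file names and rows here are synthetic; only the severity-in-column-3 layout matches the real export):

```shell
# Two synthetic monthly snapshots (severity in column 3, as in the real CSV).
printf 'checker,category,severity,title\na,TLS,high,one\nb,WEB,high,two\nc,DNS,high,three\n' > 2026-01.csv
printf 'checker,category,severity,title\na,TLS,high,one\nb,WEB,high,two\n' > 2026-02.csv

count_high() { awk -F, 'NR>1 && $3=="high" {c++} END{print c+0}' "$1"; }

prev=$(count_high 2026-01.csv)
curr=$(count_high 2026-02.csv)
echo "HIGH findings: $curr this month ($((prev - curr)) fewer than last)"
# → HIGH findings: 2 this month (1 fewer than last)
```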
## Pattern 3: compliance evidence
For PCI-DSS / NIS 2 / SOC 2, paste the CSV into an "Open findings" column in your workbook. Run a re-scan after remediation; each finding either disappears (resolved) or persists. Auditors love being able to see the chronology.
For deeper compliance work, use the JSON export instead — it includes a
compliance array per finding mapping each to PCI / ISO / SOC 2 / NIS 2 /
GDPR controls. The CSV is for narrative; the JSON is for the audit binder.
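As a sketch of what pulling those mappings might look like with `jq`, here's one row per finding/control pair. The `findings` and `compliance` field names are assumptions for illustration, so check them against a real JSON export:

```shell
# Synthetic JSON mimicking the export shape (field names are assumptions).
cat > scan.json <<'EOF'
{"findings": [
  {"title": "Legacy TLS 1.0 enabled", "compliance": ["PCI-DSS 4.2.1", "NIS 2 Art. 21"]},
  {"title": "No CSP header", "compliance": ["SOC 2 CC6.6"]}
]}
EOF

# One line per finding/control pair, ready to paste into the audit binder.
jq -r '.findings[] | .title as $t | .compliance[] | "\($t)\t\(.)"' scan.json
```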
## Pattern 4: sprint planning
Filter the CSV by `risk_categories` to route findings to the right owner:

- `risk_categories` = "security" → triage as tech debt with engineering.
- `risk_categories` = "legal" or "privacy" → escalate to the compliance team.
- `risk_categories` = "reputational" → loop in marketing/comms (e.g., a missing security.txt finding fits here).
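Those routing rules reduce to a filter on column 11. A minimal sketch over synthetic rows (again assuming no embedded commas in fields):

```shell
cat > findings.csv <<'EOF'
checker,category,severity,title,evidence,description,remediation,effort_minutes,effort_human,risk_if_not_fixed,risk_categories
Web headers,WEB,low,security.txt missing,No file at /.well-known/security.txt,Researchers cannot reach you,Publish a security.txt,15,15 min,Missed vulnerability reports,reputational
TLS deep-dive,TLS,high,Legacy TLS 1.0 enabled,TLS 1.0 negotiated,Old protocol,Disable TLS 1.0,30,30 min,Downgrade attacks,security
EOF

# The comms-team bucket: risk_categories (column 11) == "reputational".
awk -F, 'NR>1 && $11=="reputational" {print $4}' findings.csv
# → security.txt missing
```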
## Building a dashboard
Most teams pipe the CSV into a Google Sheet via a simple cron + curl:
```bash
#!/usr/bin/env bash
# scripts/scan-and-publish.sh
set -euo pipefail

# Kick off an Extended scan and capture its ID
SCAN_ID=$(curl -s -X POST -H "Authorization: Bearer $UNVEILSCAN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"domain":"example.com","profile":"extended"}' \
  https://unveilscan.com/v1/scans | jq -r .id)

# Wait for completion (Extended takes ~3-5 min)
while true; do
  STATUS=$(curl -s -H "Authorization: Bearer $UNVEILSCAN_TOKEN" \
    https://unveilscan.com/v1/scans/$SCAN_ID | jq -r .status)
  [ "$STATUS" = "completed" ] && break
  [ "$STATUS" = "failed" ] && { echo "scan $SCAN_ID failed" >&2; exit 1; }
  sleep 30
done

# Pull CSV, append to Google Sheets via Sheets API
curl -s -H "Authorization: Bearer $UNVEILSCAN_TOKEN" \
  https://unveilscan.com/v1/scans/$SCAN_ID/findings.csv \
  > latest-findings.csv
# (your Sheets-append script here)
```
Or use the GitHub Action with `output: csv` and pipe the artifact to your analytics tool of choice.
## Limitations
The CSV is a snapshot, not a stream:
- Resolved findings (re-checked + fixed) don't appear in the CSV — they're audit-only history in the SPA.
- Suppressed findings (operator marked as accepted-risk) don't appear either.
- Compare across scans manually — the CSV alone doesn't tell you what changed since last time.
For change-tracking specifically, use the `delta` field in the JSON export, or the per-domain history endpoint. The CSV is for "this is the current state" snapshots.
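When only the CSVs are on hand, a rough manual diff is still possible by comparing title columns. A sketch that treats the title as a stable key (an assumption, since titles can be reworded between scans):

```shell
# Two synthetic snapshots; title is column 4 as in the real export.
printf 'checker,category,severity,title\na,TLS,high,Legacy TLS 1.0 enabled\nb,WEB,high,No CSP header\n' > last-month.csv
printf 'checker,category,severity,title\na,TLS,high,Legacy TLS 1.0 enabled\n' > this-month.csv

awk -F, 'NR>1 {print $4}' last-month.csv | sort > last.txt
awk -F, 'NR>1 {print $4}' this-month.csv | sort > this.txt

# Titles present last month but gone now: resolved (or newly suppressed).
comm -23 last.txt this.txt
# → No CSP header
```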
