what we measure

Every dashboard panel, the data source behind it, and what writes the rows.

Every panel maps to one of three storage layers: live span ingest into D1, 1-minute pre-aggregates in Workers Analytics Engine (WAE), or the daily roll-up table (rollups_daily), which the cron worker refreshes every 15 minutes.

| Panel | Source | Updated by |
| --- | --- | --- |
| Cost per user · 7d | rollups_daily | cron */15 |
| Cost per repo · 7d | rollups_daily | cron */15 |
| Subagent tree (by prompt_id) | spans (parent / child walk) | live ingest |
| Cost trend · 30d (sparkline + run-rate) | rollups_daily | cron */15 |
| Cache hit % per user · 7d | rollups_daily.cache_* | cron */15 |
| Tool approval rate · 7d | tool_decisions from log events | live ingest |
| Active developers (DAU/WAU/MAU) · 30d | rollups_daily distinct users | cron */15 |
| Top cost prompts · 7d | prompt_summary | live ingest |
| Subagent fanout / depth per repo · 7d | prompt_summary | live ingest |
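The subagent tree panel is the one view built directly from spans rather than a roll-up: it walks parent / child span links under a prompt_id. A minimal sketch of that walk, assuming each span carries span_id and parent_span_id fields (field names here are illustrative, not the real schema):

```python
from collections import defaultdict

def build_subagent_tree(spans):
    """Index spans by parent_span_id so the tree can be walked top-down.
    Root spans (no parent) are grouped under the key None."""
    children = defaultdict(list)
    for span in spans:
        children[span.get("parent_span_id")].append(span)
    return children

def walk(children, parent_id=None, depth=0):
    """Yield (depth, span) pairs in depth-first order, roots first.
    Depth doubles as the subagent nesting level shown in the panel."""
    for span in children.get(parent_id, []):
        yield depth, span
        yield from walk(children, span["span_id"], depth + 1)
```

Fanout per node is just `len(children[span_id])`, and the panel's max depth is the largest depth the walk yields.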

what we don’t store

  • Raw prompt text. The claude_code.interaction span carries the prompt id, not the body.
  • Raw response bodies. Forensics tier renders these via signed URLs from your own bucket.
  • Per-token detail beyond aggregate input / output / cache_read / cache_creation counts.
  • Trace data older than the tier retention window. Aggregates persist 90 days; spans 30.
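The two retention windows above translate into a straightforward purge cutoff per table. A sketch, assuming the table names rollups_daily and spans (the names are illustrative):

```python
from datetime import datetime, timedelta, timezone

# Retention windows from the doc: aggregates persist 90 days, spans 30.
RETENTION_DAYS = {"rollups_daily": 90, "spans": 30}

def purge_cutoff(table, now=None):
    """Oldest timestamp a row in `table` may carry; anything earlier
    falls outside the tier retention window and is purged."""
    now = now or datetime.now(timezone.utc)
    return now - timedelta(days=RETENTION_DAYS[table])
```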

why pre-aggregate

A 12-hour Claude Code session emits ~50,000 spans. Indexing every span at full fidelity is the cost driver behind generic OTel backends. We collapse to ~200 WAE rows keyed by (org, user, repo, model, tool, status) per 1-minute bucket. The dashboards you actually look at weekly only need that aggregate.
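The collapse described above can be sketched as a grouping pass: floor each span's timestamp to a 1-minute bucket, key it by the tuple from the text, and sum the counters. Field names on the span dict are assumptions for illustration, not the real ingest schema:

```python
from collections import defaultdict

def preaggregate(spans):
    """Collapse raw spans into 1-minute rows keyed by
    (bucket, org, user, repo, model, tool, status), summing token counts.
    This is how ~50,000 spans shrink to a few hundred WAE rows."""
    rows = defaultdict(lambda: {"count": 0, "input_tokens": 0, "output_tokens": 0})
    for s in spans:
        bucket = s["ts"] - (s["ts"] % 60)  # floor epoch seconds to the minute
        key = (bucket, s["org"], s["user"], s["repo"],
               s["model"], s["tool"], s["status"])
        row = rows[key]
        row["count"] += 1
        row["input_tokens"] += s["input_tokens"]
        row["output_tokens"] += s["output_tokens"]
    return dict(rows)
```

Cardinality of the output is bounded by distinct key tuples per minute, not by span volume, which is why the dashboard queries stay cheap.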