what we measure
Every dashboard panel, the data source behind it, and what writes the rows.
Every panel maps to one of three storage layers: live span ingest into D1, 1-minute pre-aggregates in Workers Analytics Engine (WAE), or the `rollups_daily` table, which a cron worker refreshes with day-level roll-ups every 15 minutes.
| Panel | Source | Updated by |
|---|---|---|
| Cost per user · 7d | rollups_daily | cron */15 |
| Cost per repo · 7d | rollups_daily | cron */15 |
| Subagent tree (by prompt_id) | spans (parent / child walk) | live ingest |
| Cost trend · 30d (sparkline + run-rate) | rollups_daily | cron */15 |
| Cache hit % per user · 7d | rollups_daily.cache_* | cron */15 |
| Tool approval rate · 7d | tool_decisions from log events | live ingest |
| Active developers (DAU/WAU/MAU) · 30d | rollups_daily distinct users | cron */15 |
| Top cost prompts · 7d | prompt_summary | live ingest |
| Subagent fanout / depth per repo · 7d | prompt_summary | live ingest |
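The `cron */15` rows above come from a scheduled Worker that rebuilds `rollups_daily`. A minimal sketch, assuming hypothetical binding, table, and column names (`DB`, `spans.ts`, `rollups_daily.cost_usd`) that are illustrative rather than the real schema:

```typescript
// Minimal D1 surface so the sketch is self-contained; the real binding
// comes from @cloudflare/workers-types.
type D1Database = {
  prepare(sql: string): { bind(...args: unknown[]): { run(): Promise<unknown> } };
};

// Collapse a timestamp to its UTC day key, e.g. "2024-05-01".
export function dayBucket(epochMs: number): string {
  return new Date(epochMs).toISOString().slice(0, 10);
}

export default {
  // Runs on the */15 cron trigger. Upserting today's partial totals
  // makes reruns idempotent: each pass overwrites the same day row.
  async scheduled(_event: unknown, env: { DB: D1Database }) {
    const day = dayBucket(Date.now());
    await env.DB.prepare(
      `INSERT INTO rollups_daily (day, org, user, repo, cost_usd)
       SELECT ?, org, user, repo, SUM(cost_usd)
       FROM spans WHERE date(ts) = ?
       GROUP BY org, user, repo
       ON CONFLICT (day, org, user, repo)
       DO UPDATE SET cost_usd = excluded.cost_usd`
    ).bind(day, day).run();
  },
};
```

Because the upsert keys on `(day, …)`, running every 15 minutes converges on the same final row the day would have produced in one pass.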
what we don’t store
- Raw prompt text. The `claude_code.interaction` span carries the prompt id, not the body.
- Raw response bodies. The forensics tier renders these via signed URLs from your own bucket.
- Per-token detail beyond aggregate input / output / cache_read / cache_creation counts.
- Trace data older than your tier's retention window: aggregates persist 90 days, spans 30.
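The retention bullet above implies a periodic purge. A sketch of how the cutoffs could be derived (table and column names `rollups_daily.day` and `spans.ts` are assumptions, not the real schema):

```typescript
// Illustrative retention policy: aggregates keep 90 days, raw spans 30.
const RETENTION_DAYS = { rollups_daily: 90, spans: 30 } as const;

// Build one DELETE per table, cutting at now minus the retention window.
export function purgeStatements(nowMs: number): string[] {
  return Object.entries(RETENTION_DAYS).map(([table, days]) => {
    const cutoff = new Date(nowMs - days * 86_400_000).toISOString();
    const col = table === "spans" ? "ts" : "day"; // assumed column names
    return `DELETE FROM ${table} WHERE ${col} < '${cutoff}'`;
  });
}
```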
why pre-aggregate
A 12-hour Claude Code session emits ~50,000 spans. Indexing every span at full fidelity is the cost driver behind generic OTel backends. We collapse to ~200 WAE rows keyed by (org, user, repo, model, tool, status) per 1-minute bucket. The dashboards you actually look at weekly only need that aggregate.
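The collapse described above can be sketched as a pure function over incoming spans. Field names on `Span` are assumptions; the real ingest writes the equivalent keyed rows into WAE rather than an in-memory map:

```typescript
// Hypothetical span shape for illustration.
interface Span {
  org: string; user: string; repo: string;
  model: string; tool: string; status: string;
  tsMs: number;    // span timestamp, epoch ms
  costUsd: number; // attributed cost of this span
}

// Collapse spans into rows keyed by (org, user, repo, model, tool,
// status) per 1-minute bucket — the shape the dashboards query.
export function collapse(spans: Span[]): Map<string, { count: number; costUsd: number }> {
  const rows = new Map<string, { count: number; costUsd: number }>();
  for (const s of spans) {
    const minute = Math.floor(s.tsMs / 60_000); // 1-minute bucket
    const key = [s.org, s.user, s.repo, s.model, s.tool, s.status, minute].join("|");
    const row = rows.get(key) ?? { count: 0, costUsd: 0 };
    row.count += 1;
    row.costUsd += s.costUsd;
    rows.set(key, row);
  }
  return rows; // tens of thousands of spans collapse to a few hundred rows
}
```

The row count is bounded by the key cardinality times the number of active minutes, not by span volume, which is why a 50,000-span session lands at a few hundred rows.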