cost per dev, per repo, per prompt.
cloudflare-native. cost attribution per dev / repo / prompt. subagent trees nobody else renders. ~5% the price of Datadog.
what we measure
cost
tokens × model price, rolled up by user, repo, and prompt id. surface the top 1% of prompts burning your budget. weekly run-rate against last month.
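the rollup above can be sketched as a pure function. the event shape and the per-million-token prices here are illustrative assumptions, not the real schema or price sheet.

```typescript
// sketch: roll token usage up into cost per (user, repo, prompt id).
type UsageEvent = {
  user: string;
  repo: string;
  promptId: string;
  inputTokens: number;
  outputTokens: number;
  model: string;
};

// hypothetical per-million-token prices, keyed by model name
const PRICE_PER_MTOK: Record<string, { input: number; output: number }> = {
  "claude-sonnet": { input: 3, output: 15 },
};

function rollupCost(events: UsageEvent[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of events) {
    const p = PRICE_PER_MTOK[e.model];
    if (!p) continue; // unknown model: skip rather than guess a price
    const cost = (e.inputTokens * p.input + e.outputTokens * p.output) / 1e6;
    const key = `${e.user}/${e.repo}/${e.promptId}`;
    totals.set(key, (totals.get(key) ?? 0) + cost);
  }
  return totals;
}
```

sorting the resulting map by value and slicing the head gives the "top 1% prompts" view.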
cache hit
prompt caching only pays off if the rates are visible. we expose cache_read / cache_creation / input ratios per developer so coaching actually has a target.
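a minimal sketch of that ratio. the field names mirror anthropic's usage fields (input_tokens, cache_read_input_tokens, cache_creation_input_tokens), but the row shape and aggregation are assumptions.

```typescript
// sketch: per-developer cache read ratio =
//   cache_read / (input + cache_read + cache_creation)
type UsageRow = {
  dev: string;
  input_tokens: number;
  cache_read_input_tokens: number;
  cache_creation_input_tokens: number;
};

function cacheReadRatio(rows: UsageRow[]): Map<string, number> {
  const sums = new Map<string, { read: number; total: number }>();
  for (const r of rows) {
    const s = sums.get(r.dev) ?? { read: 0, total: 0 };
    s.read += r.cache_read_input_tokens;
    s.total +=
      r.input_tokens + r.cache_read_input_tokens + r.cache_creation_input_tokens;
    sums.set(r.dev, s);
  }
  // developers with no tokens yet get 0 rather than NaN
  return new Map([...sums].map(([dev, s]) => [dev, s.total ? s.read / s.total : 0]));
}
```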
subagent depth
render the parent / child span chain inside one prompt id. see when claude waits for an approval, which tool calls fanned out, and how deep the recursion ran.
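rebuilding that chain is a tree walk over spans. the span shape here is an assumption; real spans would carry timings and tool metadata too.

```typescript
// sketch: group spans by parent, then walk from the root to find
// how deep the subagent recursion ran for one prompt id.
type Span = { id: string; parentId: string | null; name: string };

function maxDepth(spans: Span[]): number {
  const byParent = new Map<string | null, Span[]>();
  for (const s of spans) {
    const kids = byParent.get(s.parentId) ?? [];
    kids.push(s);
    byParent.set(s.parentId, kids);
  }
  const walk = (parentId: string | null, depth: number): number => {
    const kids = byParent.get(parentId) ?? [];
    return kids.length === 0
      ? depth
      : Math.max(...kids.map((k) => walk(k.id, depth + 1)));
  };
  return walk(null, 0);
}
```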
why us
cloudflare only
workers, analytics engine, d1, queues, r2. ingest at the edge, roll up to thin aggregates, dashboard reads from d1. EU / UK residency on day one.
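the ingest leg of that pipeline can be sketched as a worker writing one thin row per event via workers analytics engine's `writeDataPoint`. the binding name (USAGE) and the event fields are assumptions.

```typescript
// sketch: edge ingest. accept a usage event, write a thin aggregate
// row, keep nothing else. no raw bodies are stored.
interface AnalyticsEngineDataset {
  writeDataPoint(point: {
    blobs?: string[];
    doubles?: number[];
    indexes?: string[];
  }): void;
}
interface Env {
  USAGE: AnalyticsEngineDataset;
}

async function ingest(request: Request, env: Env): Promise<Response> {
  const e = (await request.json()) as {
    user: string;
    repo: string;
    promptId: string;
    inputTokens: number;
    outputTokens: number;
  };
  env.USAGE.writeDataPoint({
    blobs: [e.user, e.repo],                  // dimensions for rollups
    doubles: [e.inputTokens, e.outputTokens], // metrics
    indexes: [e.promptId],                    // sampling/index key
  });
  return new Response("ok", { status: 202 });
}

export default { fetch: ingest };
```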
no raw bodies
the forensics tier issues signed urls against a bucket you own. we never custody prompt or response payloads. the compliance burden stays where it belongs.
claude-shaped
built for the agent loop: subagent trees, tool approval rates, hook timings, prompt-id correlation across cli + sdk + cowork. generic backends don't model this.