Free audit report
Everyone says "make Claude talk like a caveman, save 75% of your tokens." We ran the numbers on 1.29 billion real tokens. Here's what actually happened.
PDF · No email required · Real data from 20 sessions
The problem
Different repo every week, same headline number. "Caveman prompt saves 75% of tokens."
The benchmarks behind these claims use a single-line system prompt and ten isolated one-shot prompts. No tool use, no file reads, no conversation history. That's not how anyone actually uses Claude Code.
This report runs a real audit. 20 sessions, 5,499 assistant messages, every token categorized by source. It shows where your tokens actually go.
What's inside
THE BENCHMARK PROBLEM
What the benchmarks measure vs. what actually happens in real Claude Code sessions. Different inputs, different math.
REAL TOKEN BREAKDOWN
The caveman rule can only touch the last 9% of session tokens — your own prompts. Everything else (system prompt, tool output, file reads, conversation history) goes straight to the model, untouched.
REALISTIC SAVINGS
The real session-wide savings, broken down by session type. The math, not the marketing.
WHAT ACTUALLY WORKS
The lines to paste into your project config that address the real problem without breaking Claude's ability to think.
The full audit: token breakdown, real benchmarks, and the three CLAUDE.md lines that actually work. One PDF, no gate.
Want to run your own audits? Join Agent-J+. We do this every week.