Claude Token Counter, now with model comparisons

Simon Willison's Blog / 4/20/2026


Key Points

  • Simon Willison updated his Claude Token Counter tool to compare token counts across multiple Claude models using the same input.
  • Because Claude Opus 4.7 introduced a new tokenizer, meaningful comparisons are primarily between Opus 4.7 and Opus 4.6; the other supported models share the same tokenizer as Opus 4.6.
  • Opus 4.7 is expected to produce more tokens for the same text (Anthropic cites roughly 1.0–1.35×), and Willison’s test with an Opus 4.7 system prompt found about 1.46× versus Opus 4.6.
  • Since per-token pricing is unchanged, the increased token usage implies Opus 4.7 could cost roughly 40% more than Opus 4.6 for comparable workloads. The tool also accepts image inputs, and Opus 4.7 has improved image handling.

20th April 2026 - Link Blog

Claude Token Counter, now with model comparisons. I upgraded my Claude Token Counter tool to add the ability to run the same count against different models in order to compare them.

As far as I can tell, Claude Opus 4.7 is the first Claude model to change the tokenizer, so it's only worth running comparisons between 4.7 and 4.6. The Claude token counting API accepts any Claude model ID, though, so I've included options for all four of the notable current models (Opus 4.7 and 4.6, Sonnet 4.6, and Haiku 4.5).
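The comparison the tool runs can be sketched against the Anthropic Python SDK's `messages.count_tokens()` endpoint. This is a hedged sketch, not the tool's actual code: it assumes the official `anthropic` package, an `ANTHROPIC_API_KEY` in the environment, and the illustrative model IDs named in this post. The `vs_lowest()` helper is my own name for the "vs. lowest" column in the results table.

```python
MODELS = ["claude-opus-4-7", "claude-opus-4-6"]

def count_for_models(text, models=MODELS):
    """Return {model_id: input_tokens} for the same text (makes network calls)."""
    import anthropic  # pip install anthropic
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    counts = {}
    for model in models:
        resp = client.messages.count_tokens(
            model=model,
            messages=[{"role": "user", "content": text}],
        )
        counts[model] = resp.input_tokens
    return counts

def vs_lowest(counts):
    """Express each count as a multiple of the smallest, like the tool's table."""
    lowest = min(counts.values())
    return {model: round(n / lowest, 2) for model, n in counts.items()}
```

With the system-prompt counts reported later in this post, `vs_lowest({"claude-opus-4-7": 7335, "claude-opus-4-6": 5039})` reproduces the 1.46x figure.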

In the Opus 4.7 announcement Anthropic said:

Opus 4.7 uses an updated tokenizer that improves how the model processes text. The tradeoff is that the same input can map to more tokens—roughly 1.0–1.35× depending on the content type.

I pasted the Opus 4.7 system prompt into the token counting tool and found that the Opus 4.7 tokenizer used 1.46x the number of tokens that Opus 4.6 did.

Screenshot of a token comparison tool. Models to compare: claude-opus-4-7 (checked), claude-opus-4-6 (checked), claude-opus-4-5, claude-sonnet-4-6, claude-haiku-4-5. Note: "These models share the same tokenizer". Blue "Count Tokens" button. Results table — Model | Tokens | vs. lowest. claude-opus-4-7: 7,335 tokens, 1.46x (yellow badge). claude-opus-4-6: 5,039 tokens, 1.00x (green badge).

Opus 4.7 uses the same pricing as Opus 4.6 - $5 per million input tokens and $25 per million output tokens - but this token inflation means we can expect it to be around 40% more expensive.

The token counter tool also accepts images. Opus 4.7 has improved image support, described like this:

Opus 4.7 has better vision for high-resolution images: it can accept images up to 2,576 pixels on the long edge (~3.75 megapixels), more than three times as many as prior Claude models.

I tried counting tokens for a 3456 × 2234 pixel 3.7MB PNG and got an even bigger increase in token counts - 3.01x the number of tokens for 4.7 compared to 4.6:

Same UI, this time with an uploaded screenshot PNG image. claude-opus-4-7: 4,744 tokens, 3.01x (yellow badge). claude-opus-4-6: 1,578 tokens, 1.00x (green badge).
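The long-edge limit quoted above is easy to check for a given image. A minimal sketch, assuming only the 2,576-pixel long-edge figure from the announcement quote (`fits_without_downscale` is a hypothetical helper name, not part of any Anthropic API):

```python
# Long-edge limit for Opus 4.7 images, per the announcement quote
# ("up to 2,576 pixels on the long edge, ~3.75 megapixels").
LONG_EDGE_LIMIT = 2576

def fits_without_downscale(width, height, limit=LONG_EDGE_LIMIT):
    """True if the image's longer side is within the stated limit."""
    return max(width, height) <= limit

def megapixels(width, height):
    """Image area in megapixels."""
    return width * height / 1_000_000

# The PNG from this post: 3456 x 2234 is about 7.72 MP and exceeds
# the long edge limit, so it would presumably be scaled down before
# tokenization - prior Claude models would scale it down further still.
```

Under this assumption, the 3456 × 2234 test image is over the limit on its long edge even for Opus 4.7.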

Posted 20th April 2026 at 12:50 am


Tags: ai, generative-ai, llms, anthropic, claude, llm-pricing, tokenization
