Token Simulator

See how AI turns text into chunks.

Type or paste anything, and this simplified calculator will break it into token-like pieces. It is designed to build intuition: common English tends to be cheap, while rare strings, code, URLs, numbers, and non-English text often fragment into more tokens.
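The intuition above can be sketched in a few lines. This is a hypothetical heuristic, not the simulator's actual algorithm: split the text into word-like and symbol pieces, then assume each piece costs roughly one token per four characters, which mimics how subword tokenizers fragment rare strings and URLs.

```python
import re

def estimate_tokens(text: str) -> int:
    """Rough token estimate: words and digits stay grouped, symbols stand alone."""
    pieces = re.findall(r"[A-Za-z]+|\d+|\S", text)
    count = 0
    for piece in pieces:
        # Short common words usually map to one token; longer pieces tend
        # to split into several subword chunks (~4 characters each here).
        count += max(1, -(-len(piece) // 4))  # ceil(len(piece) / 4)
    return count

# Plain English stays compact; a URL fragments into many pieces.
print(estimate_tokens("Hello world"))
print(estimate_tokens("https://example.com/a/b"))
```

Under this rule of thumb, every slash, dot, and colon in a URL becomes its own piece, which is why structured strings come out so much more expensive than ordinary prose.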

Some content is token-heavy even before you type much:

Image: visual context
Screenshot: UI + text
PDF: document context
Code block: symbols + structure
Long URL: fragmented parts
Token count
0
Approximate chunks the model would process.
Character count
0
Visible characters, including spaces and punctuation.
Tokens per character
0.00
A higher ratio means the text is relatively expensive.
Cost rating
Cheap
Simple, common language usually compresses well.
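The stats above combine into one signal. As a sketch, the ratio is just tokens divided by characters, and the rating buckets that ratio; the cutoffs and the "Moderate"/"Expensive" labels here are illustrative assumptions, since the page itself only shows "Cheap".

```python
def cost_rating(tokens: int, chars: int) -> str:
    """Bucket the tokens-per-character ratio into a rough cost label."""
    ratio = tokens / chars if chars else 0.0
    if ratio < 0.25:
        return "Cheap"       # dense, common language compresses well
    if ratio < 0.5:
        return "Moderate"    # hypothetical middle bucket
    return "Expensive"       # fragmented strings: URLs, code, rare words

# 10 tokens over 50 characters is a 0.20 ratio: cheap.
print(cost_rating(10, 50))
# 30 tokens over 50 characters is a 0.60 ratio: expensive.
print(cost_rating(30, 50))
```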

Visual token breakdown

Each pill is one simulated token.
Why this matters in chat

You send it once. The model may reread it for the rest of the conversation.

In a long chat, old messages stay in context. That means a short prompt is not just cheaper once; it can be cheaper again and again as the model keeps carrying it forward.

Later replies that still carry this message
20 turns
Cumulative token reads
0
Write something above to see how repeated rereading adds up across a longer conversation.
Across 20 later replies, this would use 0% of a 200,000-token context window.
0.00%
Simple idea: each message has input tokens, and in chat those tokens can stay alive. The longer the conversation runs, the more often the model may need to read them again.
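The counter above is back-of-envelope arithmetic. Assuming the page simply sums token reads across turns, a message that stays in context is reread once per later reply, so cumulative reads are message tokens times turns, and the window share divides that total by the 200,000-token window (both the summing behavior and the function names here are assumptions for illustration).

```python
def cumulative_reads(message_tokens: int, later_replies: int = 20) -> int:
    # If the message stays in context, the model rereads it on every later reply.
    return message_tokens * later_replies

def window_share(message_tokens: int, later_replies: int = 20,
                 window: int = 200_000) -> float:
    # Fraction of the context window consumed across all those rereads.
    return cumulative_reads(message_tokens, later_replies) / window

# A 150-token message reread over 20 later replies: 150 * 20 = 3000 reads,
# which is 3000 / 200000 = 1.50% of the window.
print(cumulative_reads(150))
print(f"{window_share(150):.2%}")
```

This is why trimming a prompt pays off more than once: every token you cut is cut from every future turn that still carries the message.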