LLM token counter — tools.voiddo/tokcount vs OpenAI Tokenizer
Both tools count LLM tokens. tokcount goes further: 60+ models, cost estimates, and a CLI that works in pipelines. The OpenAI Tokenizer is browser-only and covers one model family.
tools.voiddo/tokcount
- 60+ models: GPT-5, Claude Opus 4.7, Gemini 2.5, Grok 4, Llama 4, Mistral, DeepSeek, Qwen…
- Shows input cost, estimated output cost, and total
- CLI-first — pipe text in, use in scripts and CI
- No API key, no account, works fully offline
- MIT licensed and open source
- Multi-model cost comparison in one command
OpenAI Tokenizer
- Covers GPT-3 (p50k) and GPT-4 (cl100k) tokenizers only
- Shows token count and character count — no cost estimate
- Browser-only — no CLI, no scriptability
- Cannot compare across Claude, Gemini, or other providers
- Official and authoritative for OpenAI tokenizer byte offsets
Feature comparison
| Feature | tools.voiddo/tokcount | OpenAI Tokenizer |
|---|---|---|
| Model coverage | 60+ models, all major providers | GPT-3 & GPT-4 tokenizers only |
| Claude / Anthropic models | ✓ Opus 4.7, Sonnet 4.6, Haiku 4.5 | ✗ |
| Gemini / Google models | ✓ 2.5 Pro, Flash, Flash Lite | ✗ |
| Token count | ✓ | ✓ |
| Input cost estimate | ✓ | ✗ |
| Output cost estimate | ✓ (configurable multiplier) | ✗ |
| Multi-model comparison | ✓ side-by-side cost table | ✗ |
| CLI / terminal usage | ✓ pipe-friendly, script-safe | ✗ browser only |
| Works offline | ✓ fully local tokenizer | requires browser + network |
| No API key required | ✓ | ✓ |
| Open source | ✓ MIT licensed | closed source |
| GPT-4 byte-level token offsets | count only | ✓ visual offset highlight |
| CI / Makefile integration | ✓ | ✗ |
| Ad-free | ✓ | ✓ |
Comparison based on publicly observable behavior as of 2026-05. For visual token-by-token offset highlighting of GPT-4 inputs, the OpenAI Tokenizer remains the reference tool. For multi-model cost estimation and CLI scripting, tokcount is the better fit.
FAQ
Can tokcount estimate cost, not just count tokens?
Yes. tokcount shows input token cost, estimated output cost (at a configurable output multiplier), and the total for the selected model. Pricing is a 2026 snapshot across 60+ models. The OpenAI Tokenizer shows only the count — no cost breakdown.
Does tokcount work from a terminal or CI pipeline?
Yes. tokcount is CLI-first — pipe text in, pass file paths as arguments, or embed it in scripts, Makefiles, and GitHub Actions.
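A minimal sketch of that pipeline usage. The `--model` flag appears elsewhere on this page; the `--count-only` flag, the output format, and the budget number are illustrative assumptions, so check `tokcount --help` for the real interface:

```sh
# Pipe a prompt in and count tokens for one model (flag from this page)
cat prompt.md | tokcount --model gpt-4o

# Hypothetical CI gate: fail the job when a prompt exceeds its token budget.
# Assumes a --count-only mode that prints a bare number; adjust to the
# tool's actual output.
TOKENS=$(tokcount prompt.md --model gpt-4o --count-only)
if [ "$TOKENS" -gt 8000 ]; then
  echo "prompt.md is $TOKENS tokens (budget: 8000)" >&2
  exit 1
fi
```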
`cat prompt.md | tokcount --model gpt-4o` returns a one-line result. The OpenAI Tokenizer is browser-only with no scripting interface.
Which models does tokcount support?
60+ models across all major providers: OpenAI (GPT-5, GPT-4o, o3, o4-mini), Anthropic (Claude Opus 4.7, Sonnet 4.6, Haiku 4.5), Google (Gemini 2.5 Pro, Flash, Flash Lite), xAI (Grok 4.1, Grok 3), Meta (Llama 4 Scout, Maverick), Mistral, DeepSeek, Alibaba Qwen, and more.
Does tokcount require an API key or account?
No. tokcount tokenizes locally using each model's tokenizer specification — no API call, no key, nothing billed. It works fully offline once installed.
How do I compare cost across Claude, GPT-4o, and Gemini in one run?
Use `tokcount prompt.md --compare`, or pass multiple `--model` flags. tokcount outputs a cost table showing all selected models side by side. The OpenAI Tokenizer does not support cross-provider comparison.
When would I still use the OpenAI Tokenizer instead?
If you need to see GPT-4 token boundaries highlighted visually in the input text — useful for understanding tokenization of specific strings — the OpenAI Tokenizer's visual offset display is unique. For everything else (multi-model, cost, CLI, offline, non-OpenAI models), tokcount is more capable.
Try tools.voiddo/tokcount
Count tokens and estimate API cost across 60+ models — from the browser or CLI. No account. No API key. Works offline.
open tokcount →

Competitor names and trademarks belong to their respective owners. This comparison reflects publicly observable tool behavior.