Count LLM tokens and estimate cost across 60+ models — locally, privately, no API keys. GPT-5, Claude Opus 4.7, Gemini 2.5, Grok 4, Llama 4, Mistral, DeepSeek, Qwen. Pipe-friendly.
```sh
npm i -g @v0idd0/tokcount
```
```sh
$ tokcount prompt.md
model: gpt-4o · tokens: 1,842 · cost: $0.0184 (in) + $0.0737 (out @ 4×)

$ tokcount prompt.md --model claude-opus-4-7
model: claude-opus-4-7 · tokens: 1,842 · cost: $0.0277 (in) + $0.1107 (out @ 4×)

$ tokcount prompt.md --model gemini-2.5-flash
model: gemini-2.5-flash · tokens: 1,842 · cost: $0.0006 (in) + $0.0022 (out @ 4×)

# show a cost table for all cheap/budget models at once
$ tokcount prompt.md --all --tag cheap

# pipe text from stdin; the alias resolves to claude-sonnet-4-6
$ cat system-prompt.md context.md | tokcount --model claude
```
Covers 60+ models with an April 2026 pricing snapshot, updated as providers change their rates.
tokcount is also available as a browser extension: see live token counts directly in the ChatGPT, Claude, Gemini, and Grok interfaces as you type. Install it from extensions.voiddo.com →