tokcount

Count LLM tokens and estimate cost across 60+ models — locally, privately, no API keys. GPT-5, Claude Opus 4.7, Gemini 2.5, Grok 4, Llama 4, Mistral, DeepSeek, Qwen. Pipe-friendly.

$ npm i -g @v0idd0/tokcount
no api keys · no telemetry · pipe-friendly · 60+ models · cost estimate · MIT license

how it works.

$ tokcount prompt.md
model: gpt-4o · tokens: 1,842 · cost: $0.0184 (in) + $0.0737 (out @ 4×)

$ tokcount prompt.md --model claude-opus-4-7
model: claude-opus-4-7 · tokens: 1,842 · cost: $0.0277 (in) + $0.1107 (out @ 4×)

$ tokcount prompt.md --model gemini-2.5-flash
model: gemini-2.5-flash · tokens: 1,842 · cost: $0.0006 (in) + $0.0022 (out @ 4×)

$ tokcount prompt.md --all --tag cheap
# shows cost table for all cheap/budget models at once

$ cat system-prompt.md context.md | tokcount --model claude
# pipe text from stdin, alias resolves to claude-sonnet-4-6
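The cost lines above are simple arithmetic over a per-model rate table. A minimal sketch of that math (the rate values and the 4× output multiplier here are illustrative assumptions, not tokcount's actual pricing tables):

```python
# Hypothetical sketch of tokcount-style cost estimation.
# Rates are dollars per million tokens; the values are made up for illustration.
RATES = {
    "gpt-4o": {"in": 10.0, "out": 10.0},
}

OUTPUT_MULTIPLIER = 4  # assume the reply is ~4x the prompt, as in "out @ 4x"

def estimate(model: str, tokens: int) -> tuple[float, float]:
    """Return (input_cost, output_cost) in dollars for a token count."""
    r = RATES[model]
    in_cost = tokens * r["in"] / 1_000_000
    out_cost = tokens * OUTPUT_MULTIPLIER * r["out"] / 1_000_000
    return in_cost, out_cost

in_c, out_c = estimate("gpt-4o", 1_842)
print(f"cost: ${in_c:.4f} (in) + ${out_c:.4f} (out @ {OUTPUT_MULTIPLIER}x)")
```

With these assumed rates, 1,842 tokens works out to $0.0184 in and $0.0737 out, matching the example output above.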

model coverage.

60+ models with a 2026-04 pricing snapshot, updated as providers change rates.

OpenAI: gpt-5 · gpt-4o · o3 · o4-mini
Anthropic: claude-opus-4-7 · sonnet-4-6 · haiku-4-5
Google: gemini-2.5-pro · flash · flash-lite
xAI: grok-4.1-fast · grok-3
Meta: llama-4-scout · llama-4-maverick
Mistral: mistral-large-3 · magistral · nemo
DeepSeek: deepseek-v3.2 · deepseek-r2
Alibaba: qwen3-max · qwen3-235b-a22b
Cohere: command-a · command-r7b
Amazon: nova-pro · nova-lite · nova-micro
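A pricing snapshot plus per-model tags is also what makes a filter like `--all --tag cheap` work. A sketch of that lookup (model names are from the coverage list above; the rates and tags are invented for illustration, not tokcount's real data):

```python
# Hypothetical shape of a pricing snapshot with tags for filtering.
# Rates are dollars per million tokens; values and tags are illustrative.
SNAPSHOT = {
    "gemini-2.5-flash": {"in": 0.30, "out": 2.50, "tags": ["cheap"]},
    "nova-micro":       {"in": 0.04, "out": 0.14, "tags": ["cheap"]},
    "claude-opus-4-7":  {"in": 15.0, "out": 75.0, "tags": ["frontier"]},
}

def by_tag(tag: str) -> list[str]:
    """Models matching a tag filter, as in `tokcount --all --tag cheap`."""
    return [name for name, m in SNAPSHOT.items() if tag in m["tags"]]

print(by_tag("cheap"))  # → ['gemini-2.5-flash', 'nova-micro']
```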

tokcount is also available as a browser extension — see live token counts directly in the ChatGPT, Claude, Gemini, and Grok interfaces as you type. Install from extensions.voiddo.com →