Stop bundling 500KB just to count tokens.
tiktoken (and ports like gpt-tokenizer and js-tiktoken) is the standard way to count tokens client-side. But it's heavy: ~500KB minified, plus you have to maintain encoding-per-model logic (cl100k_base for the GPT-4 / GPT-3.5 family, o200k_base for the GPT-4o family, etc) and update it every time OpenAI ships a new model. Claude uses Anthropic's own tokenizer, so tiktoken counts are only approximations there anyway. Promptibus MCP's `count_tokens` tool does the routing for you and adds an automatic USD cost projection when the model has token-based pricing in our catalog.
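The DIY routing described above boils down to a hand-maintained table like the sketch below. The model slugs and encodings shown are illustrative, not exhaustive; this is the glue code you own forever when bundling a tokenizer yourself.

```typescript
// Hand-maintained model → encoding map (illustrative, not exhaustive).
const ENCODING_BY_MODEL: Record<string, string> = {
  "gpt-4": "cl100k_base",
  "gpt-3.5-turbo": "cl100k_base",
  "gpt-4o": "o200k_base",
  "gpt-4o-mini": "o200k_base",
};

function encodingForModel(model: string): string {
  // Prefix match so dated snapshots like "gpt-4o-2024-08-06" resolve too;
  // longest prefix wins so "gpt-4o-mini" doesn't fall through to "gpt-4".
  const hit = Object.keys(ENCODING_BY_MODEL)
    .sort((a, b) => b.length - a.length)
    .find((slug) => model.startsWith(slug));
  if (!hit) throw new Error(`Unknown model: ${model}. Update the map.`);
  return ENCODING_BY_MODEL[hit];
}

console.log(encodingForModel("gpt-4o-mini")); // "o200k_base"
```

Every new model family means another entry here, shipped in your bundle.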
Bundle tiktoken if you need offline, zero-latency counts. Use Promptibus `count_tokens` if you'd rather skip the dependency and get cost projection for free.
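Over MCP, a `count_tokens` call rides the standard JSON-RPC 2.0 `tools/call` envelope. Here's a minimal sketch of the request shape; the `model` and `text` argument names are an assumption for illustration, not documented Promptibus API:

```typescript
// Standard MCP tools/call envelope (JSON-RPC 2.0).
// The "arguments" shape is an assumption, shown for illustration only.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "count_tokens",
    arguments: {
      model: "gpt-4o", // encoding routing happens server-side
      text: "Stop bundling 500KB just to count tokens.",
    },
  },
};

console.log(JSON.stringify(request, null, 2));
```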
| Feature | Promptibus | Raw tiktoken (DIY token counting) |
|---|---|---|
| Bundle size impact | 0 KB (server-side) | ~500 KB (tiktoken-node) or ~1 MB (full WASM) |
| Encoding selection | Auto — based on model slug (cl100k_base / o200k_base) | Manual — you maintain the map |
| Cost projection | Auto — when model has token-based pricing | DIY — query pricing separately |
| Latency | ~50-200ms (HTTP) | <1ms (in-process) |
| New model support | Auto-updated when we add models | Wait for tiktoken release + manual update |
| Cost | Free (no plan gate) | Free (MIT license) |
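The cost projection row above is just tokens times unit price. A sketch of the arithmetic, using a made-up $2.50 per 1M input tokens (the price is a placeholder, not a catalog value):

```typescript
// USD projection from a token count and a per-million-token price.
// The price used below is a placeholder, not a real catalog entry.
function projectCostUSD(tokens: number, usdPerMillionTokens: number): number {
  return (tokens / 1_000_000) * usdPerMillionTokens;
}

const cost = projectCostUSD(12_000, 2.5); // 12k tokens at $2.50/M
console.log(cost.toFixed(4)); // "0.0300"
```

This is what `count_tokens` folds in automatically when the model has token-based pricing in the catalog; doing it yourself means fetching and tracking prices separately.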
Free tier: 100 calls/day. No credit card.