I use the OpenAI tokenizer UI a lot when prompt engineering.

Token counts for inputs let you compare different data formats (YAML, JSON, TS) and serve as a crude measure of prompt importance weighting. For outputs, they give a relative measure of generation speed between prompts (tok/s varies by time of day) and a crude measure of the compute spent on a response (part of why “think step-by-step” works). Token count also determines what a prompt costs.

Since there’s no equivalent UI for other providers, I built one for Mistral & Anthropic. If it’s useful, I can add other providers too - let me know which you’d like.