
科技回声

A tech news platform built with Next.js, providing global tech news and discussion.


Show HN: Token price calculator for 400+ LLMs

268 points | by Areibman | 11 months ago
Hey HN! Tokencost is a utility library for estimating LLM costs. There are hundreds of different models now, and they all have their own pricing schemes. It's difficult to keep up with the pricing changes, and it's even more difficult to estimate how much your prompts and completions will cost until you see the bill.

Tokencost works by counting the number of tokens in prompt and completion messages and multiplying that number by the corresponding model cost. Under the hood, it's really just a simple cost dictionary and some utility functions for getting the prices right. It also accounts for different tokenizers and float precision errors.

Surprisingly, most model providers don't actually report how much you spend until your bills arrive. We built Tokencost internally at AgentOps to help users track agent spend, and we decided to open source it to help developers avoid nasty bills.
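The mechanics the post describes (a price dictionary plus token counting) can be sketched roughly as follows. This is a minimal illustration, not Tokencost's actual API: the prices and the chars-per-token heuristic are made-up assumptions, and a real implementation would use each model's own tokenizer.

```python
from decimal import Decimal

# Hypothetical per-token prices in USD; real price tables change often.
MODEL_PRICES = {
    "gpt-4o": {"prompt": Decimal("0.000005"), "completion": Decimal("0.000015")},
    "gpt-4o-mini": {"prompt": Decimal("0.00000015"), "completion": Decimal("0.0000006")},
}

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (e.g. tiktoken):
    # roughly 1 token per 4 characters of English text.
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, completion: str, model: str) -> Decimal:
    # Multiply token counts by the model's per-token prices.
    prices = MODEL_PRICES[model]
    return (count_tokens(prompt) * prices["prompt"]
            + count_tokens(completion) * prices["completion"])

cost = estimate_cost("Explain tokenization." * 10, "Tokens are subword units." * 5, "gpt-4o")
```

Using Decimal rather than float is what avoids the precision errors the post mentions: prices like 0.00000015 are not exactly representable in binary floating point.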

15 comments

yelnatz | 11 months ago
Can you add a column and normalize them?

Too many zeroes for my blind ass, making it hard to compare.
simonw | 11 months ago
I don't understand how the Claude functionality works.

As far as I know, Anthropic haven't released the tokenizer for Claude (unlike OpenAI's tiktoken), but your tool lists the Claude 3 models as supported. How are you counting tokens for those?
J_Shelby_J | 11 months ago
Would anybody be interested in this for Rust? I already do everything this library does, with the exception of returning the price, in my LLM utils crate [1]. I do this just to count tokens to ensure prompts stay within limits, and I also support non-OpenAI tokenizers. So adding a price calculator function would be trivial.

[1] https://github.com/ShelbyJenkins/llm_utils
Lerc | 11 months ago
With all the options, there seems to be an opportunity for a single-point API that can take a series of prompts, a budget, and a quality hint, and distribute batches for the most bang for the buck.

Maybe a small triage AI could decide how effectively models handle certain prompts, to preserve spending for the difficult tasks.

Does anything like this exist yet?
Ilasky | 11 months ago
I dig it! Kind of related, but I made a comparison of LLM API costs vs. their leaderboard performance to gauge which models give more bang for the buck [0].

[0] https://llmcompare.net
sakex | 11 months ago
An interesting parameter that I don't read about a lot is vocab size. A larger vocab means you will need to generate fewer tokens for the same word on average, and the effective context window will be larger. This means that a model with a large vocab might be more expensive on a per-token basis but generate fewer tokens for the same sentence, making it cheaper overall. This should be taken into consideration when comparing API prices.
pamelafox | 11 months ago
Are you also accounting for the costs of sending images and function calls? I didn't see that when I looked through the code. I developed this package so that I could count those sorts of calls as well: https://github.com/pamelafox/openai-messages-token-helper
oopsallmagic | 11 months ago
Can we get conversions for kg of CO2 emitted, too?
zackfield | 11 months ago
Very cool! Is this cost directory you're using the best source for historical cost per 1M tokens? https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json
Karrot_Kream | 11 months ago
A whole bunch of the costs are listed as zeroes with multiple decimal places. I noticed y'all used the Decimal library and tried to hold onto precision, so I'm not sure what's going on, but certainly some of the cheaper models just show up as "free".
ilaksh | 11 months ago
Nice. Any plans to add calculations for image input for the models that allow that?
yumaueno | 11 months ago
What a nice product! I think the way tokens are counted depends on the language, but is this only supported for English?
armen99 | 11 months ago
This is a great project! I would love to see something that calculates training costs as well.
jaredliu233 | 11 months ago
Wow, this is really useful! Just the price list alone has given me a lot of inspiration. Thank you!
jacobglowbom | 11 months ago
Nice. Does it add Vision costs too?