Show HN: TokenFlow – Visualize LLM inference speed

1 point | by davely | 3 months ago
How fast are your favorite LLMs? I recently saw a Reddit post where someone was able to get a distilled version of Deepseek R1 running on a Raspberry Pi. It could generate output at a whopping 1.97 tokens per second. That sounds slow. Is that even usable? I don't know!

Meanwhile, Mistral announced that their Le Chat platform can output tokens at 1,100 per second! That sounds pretty fast? How fast? I don't know!

So, that's why I put together TokenFlow. It's a (very!) simple webpage that lets you see the (theoretical) speed of different LLMs in action. You can select from a few preset models / services or enter a custom speed in tokens per second. You can then watch it spit out tokens in real time, showing you exactly how fast a given inference speed is and how it impacts user experience.

Check it out: https://dave.ly/tokenflow/

Github: https://github.com/daveschumaker/tokenflow
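
For a rough sense of how this kind of pacing simulation could work, here is a minimal TypeScript sketch (illustrative only, not taken from the TokenFlow repo; names like streamAt and sampleTokens are made up). A fixed timer tick emits however many tokens have become "due" at the chosen tokens-per-second rate, so fast rates like 1,100 tok/s aren't capped by timer resolution.

// Minimal sketch (illustrative, not the actual TokenFlow source):
// emit tokens at a target tokens-per-second rate using a fixed tick,
// releasing every token that is "due" each tick so high rates keep up.
const sampleTokens: string[] =
  "The quick brown fox jumps over the lazy dog and keeps running".split(" ");

function streamAt(
  tokensPerSecond: number,
  emit: (token: string) => void,
  tickMs: number = 50
): () => void {
  const start = Date.now();
  let emitted = 0;
  const timer = setInterval(() => {
    // Number of tokens that should have appeared by now at this rate.
    const due = Math.floor(((Date.now() - start) / 1000) * tokensPerSecond);
    while (emitted < due) {
      emit(sampleTokens[emitted % sampleTokens.length]);
      emitted += 1;
    }
  }, tickMs);
  return () => clearInterval(timer); // call the returned function to stop
}

// Usage: contrast the ~1.97 tok/s Raspberry Pi anecdote with a faster rate.
const stop = streamAt(1.97, (t) => process.stdout.write(t + " "));
setTimeout(stop, 5000); // stop after five seconds

At 1.97 tok/s a new token lands roughly every half second, while at 1,100 tok/s the whole sample line flashes by in well under a second; that contrast in perceived speed is what the page is meant to make visible.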

No comments yet
