PyTorch Library for Running LLM on Intel CPU and GPU

308 points by ebalit, about 1 year ago

7 comments
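For orientation: the library being discussed appears to be Intel's ipex-llm, which wraps Hugging Face transformers models with low-bit quantization and runs them on Intel CPUs or on Intel GPUs through PyTorch's "xpu" device. A minimal sketch of the usual load-and-generate flow, assuming the ipex-llm transformers-compatible API; the model name and prompt are placeholders, not taken from the post:

import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement for the HF class

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model

# load_in_4bit quantizes weights to INT4 at load time, which is what lets
# 7B-class models fit in the modest VRAM discussed in the comments below.
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model = model.to("xpu")  # "xpu" = Intel GPU; omit this line to stay on CPU

with torch.inference_mode():
    input_ids = tokenizer("What is an LLM?", return_tensors="pt").input_ids.to("xpu")
    output = model.generate(input_ids, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))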

vegabook, about 1 year ago
The company that did 4-cores-forever has the opportunity to redeem itself, in its next consumer GPU release, by disrupting the "8-16GB VRAM forever" that AMD and Nvidia have been imposing on us for a decade. It would be poetic to see 32-48GB at a non-eye-watering price point.

Intel definitely seems to be doing all the right things on software support.
Hugsun, about 1 year ago
I'd be interested in seeing benchmark data. The speed seemed pretty good in those examples.
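Benchmark numbers of the kind asked about here usually come from a simple timing loop around generate(). A rough sketch, assuming a model and tokenizer loaded as in the snippet above, and assuming XPU support is loaded so torch.xpu is available; the warm-up size and token count are arbitrary choices, not figures from the project:

import time
import torch

def decode_tokens_per_second(model, tokenizer, prompt, max_new_tokens=128, device="xpu"):
    """Rough decode throughput: newly generated tokens / wall-clock seconds."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

    with torch.inference_mode():
        # Warm-up so one-time kernel compilation does not skew the measurement.
        model.generate(input_ids, max_new_tokens=8)

        start = time.perf_counter()
        output = model.generate(input_ids, max_new_tokens=max_new_tokens)
        if device == "xpu":
            torch.xpu.synchronize()  # wait for the Intel GPU before stopping the clock
        elapsed = time.perf_counter() - start

    new_tokens = output.shape[1] - input_ids.shape[1]
    return new_tokens / elapsed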
captaindiego, about 1 year ago
Are there any Intel GPUs with a lot of vRAM that someone could recommend that would work with this?
DrNosferatu, about 1 year ago
Any performance benchmark against 'llamafile'[0] or others?

[0] - https://github.com/mozilla-Ocho/llamafile
donnygreenberg, about 1 year ago
Would be nice if this came with scripts which could launch the examples on compatible GPUs on cloud providers (rather than trying to guess?). Would anyone else be interested in that? Considering putting it together.
antonp, about 1 year ago
Hm, no major cloud provider offers Intel GPUs.
tomrod, about 1 year ago
Looking forward to reviewing!