LLM-D: Kubernetes-Native Distributed Inference

120 points | by smarterclayton | 19 days ago

4 comments

rdli | 19 days ago
This is really interesting. For SOTA inference systems, I've seen two general approaches:

* The "stack-centric" approach such as vLLM production stack, AIBrix, etc. These set up an entire inference stack for you including KV cache, routing, etc.

* The "pipeline-centric" approach such as NVidia Dynamo, Ray, BentoML. These give you more of an SDK so you can define inference pipelines that you can then deploy on your specific hardware.

It seems like LLM-d is the former. Is that right? What prompted you to go down that direction, instead of the direction of Dynamo?
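For readers unfamiliar with the distinction rdli draws, "pipeline-centric" roughly means composing inference stages in an SDK and deploying that code, rather than installing a packaged serving stack. The sketch below illustrates that style using Ray Serve's deployment/bind API; the stage names and the fake "generation" step are assumptions for illustration only, not anything from llm-d, Dynamo, or BentoML.

```python
# A minimal sketch of the "pipeline-centric" style described above, using
# Ray Serve. A real deployment would wrap an actual engine (e.g. vLLM)
# instead of the placeholder generation step.
from ray import serve
from starlette.requests import Request


@serve.deployment
class Tokenizer:
    def tokenize(self, text: str) -> list[str]:
        # stand-in preprocessing stage
        return text.split()


@serve.deployment(num_replicas=2)
class Generator:
    def __init__(self, tokenizer):
        # `tokenizer` arrives as a deployment handle at runtime
        self.tokenizer = tokenizer

    async def __call__(self, request: Request) -> dict:
        prompt = (await request.json())["prompt"]
        tokens = await self.tokenizer.tokenize.remote(prompt)
        # an actual model forward pass would replace this line
        return {"generated": f"placeholder output for {len(tokens)} tokens"}


# The pipeline is composed in code, then deployed onto whatever cluster Ray
# targets -- the "SDK" flavor the comment contrasts with stack-centric
# projects that ship the whole serving stack as manifests.
app = Generator.bind(Tokenizer.bind())
serve.run(app)
```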
Kemschumam | 19 days ago
What would be the benefit of this project over hosting vLLM in Ray?
dzr0001 | 19 days ago
I did a quick scan of the repo and didn't see any reference to Ray. Would this indicate that llm-d lacks support for pipeline parallelism?
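For context on the question: in vLLM itself, pipeline parallelism is a per-engine setting alongside tensor parallelism. The snippet below is a rough sketch of those knobs at the vLLM level (the model name is an arbitrary placeholder); whether and how llm-d surfaces them is exactly what the commenter is asking, and this snippet does not answer that.

```python
# Rough sketch of vLLM's parallelism settings, as background for the question
# above. The model identifier is a placeholder.
from vllm import LLM

llm = LLM(
    model="some-org/some-model",   # placeholder model identifier
    tensor_parallel_size=2,        # shard each layer across 2 GPUs
    pipeline_parallel_size=2,      # split the layer stack into 2 sequential stages
)
```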
anttiharju | 19 days ago
I wonder if this is preferable to KServe.