LLM-D: Kubernetes-Native Distributed Inference

119 points by smarterclayton | 3 days ago

4 comments

rdli | 3 days ago
This is really interesting. For SOTA inference systems, I've seen two general approaches:

* The "stack-centric" approach, such as the vLLM production stack, AIBrix, etc. These set up an entire inference stack for you, including KV cache, routing, etc.

* The "pipeline-centric" approach, such as NVIDIA Dynamo, Ray, and BentoML. These give you more of an SDK so you can define inference pipelines that you can then deploy on your specific hardware.

It seems like LLM-d is the former. Is that right? What prompted you to go down that direction, instead of the direction of Dynamo?
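For readers unfamiliar with the distinction, here is a rough sketch of the "pipeline-centric" style using Ray Serve's deployment composition. The component names and placeholder logic are purely illustrative and are not taken from llm-d, Dynamo, or any of the other projects mentioned.

```python
# A minimal, illustrative "pipeline-centric" sketch with Ray Serve:
# the user composes their own stages rather than deploying a prebuilt stack.
from ray import serve
from ray.serve.handle import DeploymentHandle


@serve.deployment
class Preprocess:
    def __call__(self, text: str) -> str:
        # Trivial placeholder for prompt cleanup / templating.
        return text.strip()


@serve.deployment(ray_actor_options={"num_gpus": 1})
class Generate:
    def __call__(self, text: str) -> str:
        # Placeholder for a real model call (e.g. an inference engine).
        return f"echo: {text}"


@serve.deployment
class InferencePipeline:
    def __init__(self, pre: DeploymentHandle, gen: DeploymentHandle):
        # Handles to the upstream stages are injected at deploy time.
        self._pre = pre
        self._gen = gen

    async def __call__(self, text: str) -> str:
        cleaned = await self._pre.remote(text)
        return await self._gen.remote(cleaned)


# Compose the stages into one application; deploy with serve.run(app).
app = InferencePipeline.bind(Preprocess.bind(), Generate.bind())
```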
Kemschumam | 2 days ago
What would be the benefit of this project over hosting VLLM in Ray?
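For reference, "hosting vLLM in Ray" typically means wrapping the vLLM engine in a Ray Serve deployment. A minimal sketch follows; the model name, replica count, and resource settings are illustrative assumptions, not a recommended configuration.

```python
# Sketch: one vLLM engine per Ray Serve replica, each pinned to a single GPU.
from ray import serve
from vllm import LLM, SamplingParams


@serve.deployment(num_replicas=2, ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self) -> None:
        # Placeholder model; swap in the model you actually serve.
        self._llm = LLM(model="facebook/opt-125m")
        self._params = SamplingParams(max_tokens=64)

    def generate(self, prompt: str) -> str:
        outputs = self._llm.generate([prompt], self._params)
        return outputs[0].outputs[0].text


app = VLLMDeployment.bind()
# On a running Ray cluster:
#   handle = serve.run(app)
#   print(handle.generate.remote("Hello").result())
```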
dzr0001 | 3 days ago
I did a quick scan of the repo and didn't see any reference to Ray. Would this indicate that llm-d lacks support for pipeline parallelism?
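For context, vLLM itself can use Ray as its distributed executor when pipeline parallelism is enabled, independent of whether llm-d references Ray directly. A minimal sketch below; the model name and parallel sizes are illustrative assumptions.

```python
# Sketch: vLLM with tensor + pipeline parallelism coordinated by Ray workers.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # example model, not prescriptive
    tensor_parallel_size=4,                     # shard each layer across 4 GPUs
    pipeline_parallel_size=2,                   # split layers into 2 pipeline stages
    distributed_executor_backend="ray",         # Ray schedules the worker processes
)

outputs = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```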
anttiharju | 3 days ago
I wonder if this is preferable to kServe