
TScale – Distributed training on consumer GPUs

130 points by zX41ZdbW, 11 days ago

6 comments

zitterbewegung, 11 days ago
I'm trying to run this but fo.cpp doesn't exist in the repository. I made an issue; see https://github.com/Foreseerr/TScale/issues/1
fizx, 11 days ago
What is this 1T index technique they seem so hyped about?
TYMorningCoffee, 11 days ago
Can the inference piece be partitioned over multiple hosts?

Edit: algorithmed or partitioned in a way that overcomes the network bottleneck
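For context on the question above: splitting inference over multiple hosts usually means pipeline parallelism, where each host owns a contiguous slice of the model's layers and only the activation vector crosses the network between stages. A minimal toy sketch (all names and the layer structure are hypothetical, not TScale's actual design):

```python
# Toy pipeline-parallel inference: a hypothetical 4-layer model split across
# two "hosts". Only the activations cross the host boundary, so per-token
# network traffic scales with the hidden size, not with the parameter count.

def make_layer(scale):
    # Stand-in for a real layer: multiply every element by `scale`.
    return lambda xs: [v * scale for v in xs]

# Host A owns layers 0-1, host B owns layers 2-3.
host_a = [make_layer(2.0), make_layer(3.0)]
host_b = [make_layer(5.0), make_layer(7.0)]

def run_stage(layers, activations):
    for layer in layers:
        activations = layer(activations)
    return activations

x = [1.0, 1.0]
hidden = run_stage(host_a, x)   # computed on host A
# ... `hidden` is the only data that would be sent over the network ...
out = run_stage(host_b, hidden) # computed on host B
print(out)  # [210.0, 210.0]
```

The network bottleneck the commenter mentions is exactly that inter-stage transfer: with slow links, overlapping the send of one token's activations with compute on the next is the usual mitigation.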
gitroom, 11 days ago
tbh i never get why people keep reinventing config parsers, but i guess old habits die slow
revskill, 11 days ago
Interesting that you put code in code folder, not src.
ArtTimeInvestor, 11 days ago
Even with consumer GPUs, the AI stack is completely dependent on ASML, isn't it?

Thought experiment: What would happen if the Dutch government decided that AI is bad for mankind and shut down ASML? Would the world be stuck in terms of AI? For how long?