
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

50+ hot takes on the current and future of AI/ML

3 points by chse_cake about 1 year ago

1 comment

sci-genie about 1 year ago
TBH I kinda agree with the argument that distributed training is too hard. It's so architecture-, compute-resource-, and network-topology-dependent that when people open that can of worms, they quickly realize that the cost/benefit tradeoff is limited unless you're doing large-scale pre-training. It's just so much easier to train as much as possible on a single node.