
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

I Have a Stockpile of Servers and GPUs

2 points, by BracketMaster, over 1 year ago
I have a stockpile of 22 servers, each with 8 AMD MI50 GPUs (see notes about the MI50s below). I've been able to get PyTorch working on these GPUs and have run inference for several different large language models. I originally wanted to use these GPUs to serve up LLMs, but vLLM's CUDA kernels don't work out of the box on the MI50s, and llama.cpp has a bug where it only supports up to 4 AMD GPUs at once.

So TL;DR: I don't want these servers sitting around, and if anybody has any creative, useful ideas for them, I'm happy to grant SSH access to piddle around.

MI50 specs:
- 16 GB VRAM
- 1 TB/s VRAM bandwidth
- 25 TFLOPs
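For a sense of scale, here is a quick back-of-the-envelope sketch (Python, using only the per-GPU numbers and server counts given in the post) of what the whole fleet adds up to:

```python
# Aggregate capacity of the fleet described above:
# 22 servers x 8 MI50s each, with the per-GPU specs from the post.
SERVERS = 22
GPUS_PER_SERVER = 8
VRAM_GB_PER_GPU = 16     # per-GPU VRAM
BW_TBPS_PER_GPU = 1.0    # per-GPU VRAM bandwidth, TB/s
TFLOPS_PER_GPU = 25.0    # per-GPU compute, as quoted

total_gpus = SERVERS * GPUS_PER_SERVER              # 176 GPUs
total_vram_gb = total_gpus * VRAM_GB_PER_GPU        # 2816 GB of VRAM
total_bw_tbps = total_gpus * BW_TBPS_PER_GPU        # 176 TB/s aggregate bandwidth
total_pflops = total_gpus * TFLOPS_PER_GPU / 1000   # 4.4 PFLOPs aggregate

print(f"{total_gpus} GPUs, {total_vram_gb} GB VRAM, "
      f"{total_bw_tbps:.0f} TB/s, {total_pflops:.1f} PFLOPs")
```

In other words, roughly 2.8 TB of pooled VRAM across 176 GPUs — ample headroom for the kind of multi-GPU LLM serving the post has in mind, if the software stack cooperated.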

1 comment

shortrounddev2, over 1 year ago
Consider contacting a university and donating your server time for medical research.