
We are self-hosting our GPUs

67 points by adityapatadia 8 months ago

18 comments

godelski 8 months ago

As an ML person who's also worked on HPC stuff, you will almost certainly save money by doing this, and there are plenty of other benefits. It is generally a good idea, but there is more of a barrier to entry and you need in-house expertise.

So, an important piece of advice: if you can, hire an admin with HPC experience. If you can't, find ML people with HPC experience. Things you can ask about: Slurm, environment modules (a clear sign!), what a flash buffer is, ZFS, what they know about PyTorch DDP, their Linux experience, whether they've built a cluster before, Linux administration, and so on. If you need a test, ask them to write a simple bash script to run some task, then check whether everything is in functions and whether they know how to do variable defaults. These people won't know everything, but they'll be able to pick up the slack and will probably enjoy it. Just make sure you have more than one: adminning is a shitty job, and if you only have one they'll hate their life.

There are plenty of ML people who have this experience[0], and you'll really reap rewards from having a few people with even a bit of this knowledge. Without it, it's easy to buy the wrong things or to run your system far below peak efficiency, and you end up with frustrated engineers/researchers. Even with only a handful of people running experiments, a scheduler (like Slurm) still has huge benefits: you can do more complicated sweeps than wandb offers, batch-submit jobs, track usage, allocate usage, easily cut your nodes (or even a single machine) into {dev,prod,train,etc} spaces, and much more. Most importantly, a scheduler will help keep your admin from quitting, since it prevents them from going into a spiral of frustration.

[0] In my experience these also tend to be higher-quality ML people, though not always. I think we can infer why there would be a correlation.
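A minimal sketch of the kind of answer this hiring test looks for. The task (archiving a results directory) and all paths are invented for illustration; the point is the use of functions and ${VAR:-default} variable defaults:

    #!/usr/bin/env bash
    # Hypothetical hiring-test answer: archive results with configurable paths.
    # Signals to look for: set -euo pipefail, functions, variable defaults.
    set -euo pipefail

    # Variable defaults: use the environment value if set, else fall back.
    SRC_DIR="${SRC_DIR:-/scratch/results}"
    DEST_DIR="${DEST_DIR:-/archive/results}"
    LOG_FILE="${LOG_FILE:-/tmp/archive.log}"

    log() {
        # Timestamped logging helper.
        echo "[$(date '+%F %T')] $*" | tee -a "$LOG_FILE"
    }

    archive_results() {
        # Mirror the scratch results into the archive.
        log "syncing $SRC_DIR -> $DEST_DIR"
        mkdir -p "$DEST_DIR"
        rsync -a "$SRC_DIR/" "$DEST_DIR/"
    }

    main() {
        archive_results
        log "done"
    }

    main "$@"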
perryh2 8 months ago

> We however found that our co-working space - WeWork has an excellent server hosting solution. We could put the servers on the same floor as our office and they would provide redundant power supply, cooling and internet connection. This entire package is available at a much cheaper rate and we immediately jumped on this. Right now all servers are securely running in our office.

Nice! How much does this cost?
CommieBobDole 8 months ago

I think the benefit of cloud generally applies when your demands are very elastic, or when you are essentially a fractional user for whom even a single server or GPU would be overkill.

Once you have heavy and/or unconventional compute needs, it's likely cheaper to self-host or to colocate purchased hardware.
ThinkBeat 8 months ago

This does not make sense to me.

They are processing 2.5 billion images and videos in a single day, and they decided to self-host their GPUs.

The solution uses off-the-shelf hardware, with a GPU per "server", all added together into a single rack? And that is the GPU compute needed to process all the videos 24/7?

Then they have this rack in the office, but they can't find a place to put it. That is something you'd want to settle before the build: where do we put it?

But no. Planning for multiple network links, redundant power, cooling, security, monitoring, backup generators, backups, fire suppression, and failover to a different region if something fails was deemed unnecessary.

Because of the Google book?

Instead, their (insert ad here) WeWork let them put the servers in a room on the same floor (its data-center-ish capabilities seem limited).

There are so many additional costs that are not factored into the article.

I am sure that once they accrue serious downtime a few times, and irate customers, paying for hosting in a proper data center will start to make sense.

I am basing this comment on the assumption that the company provides continuous real-time operations for its clients. If it is more batch-oriented, where downtime is fine as long as results are delivered within, say, 12 hours, the calculus changes.
dangoodmanUT 8 months ago

How did you expose the servers to the internet, if at all?

I'd personally have these on Tailscale, not exposed to the internet, but at some point in self-hosting, clients have to be able to talk to something.

I know Tailscale has their endpoints, but I can't expect that to be able to serve a production API at scale.
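One common way to square this (not necessarily what the article's authors did) is to keep the GPU hosts reachable only over the tailnet and put a small internet-facing reverse proxy in front of them. A rough sketch, assuming tailscale and nginx are already installed; gpu-1/gpu-2 are hypothetical MagicDNS names for backends serving HTTP on port 8000:

    # On the proxy node: join the tailnet (TS_AUTHKEY is a placeholder).
    sudo tailscale up --authkey "$TS_AUTHKEY"

    # /etc/nginx/conf.d/gpu-api.conf -- forwards public traffic to the
    # tailnet-only GPU backends:
    upstream gpu_backends {
        server gpu-1:8000;
        server gpu-2:8000;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://gpu_backends;
        }
    }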
rorra 8 months ago

It would be nice if you could add numbers: what the cost would be with your cloud provider, what the total investment was, how much you are saving, and which other options you considered and why they were discarded. Still, it was a nice post to read.
teaearlgraycold 8 months ago
At my last job we did the same thing but for AI training hardware. It was definitely the right call cost-wise, with our little cluster breaking even after 8 months. We found a cheap data center in Texas.
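The break-even arithmetic behind a claim like this is simple: capex divided by the monthly savings versus cloud. A toy example with invented numbers (none of these figures come from the post or the comment; they're chosen so the example happens to land on the same 8 months):

    #!/usr/bin/env bash
    # Toy break-even calculation; all numbers are made up.
    CAPEX=80000          # one-time hardware purchase ($)
    CLOUD_MONTHLY=14000  # equivalent cloud rental ($/month)
    COLO_MONTHLY=4000    # colo space, power, bandwidth ($/month)

    SAVINGS=$(( CLOUD_MONTHLY - COLO_MONTHLY ))
    echo "Break-even after $(( CAPEX / SAVINGS )) months"   # -> 8 months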
not_your_vase 8 months ago

    > AMD 5700x processor

I find it an odd choice. The CPU itself is perfectly fine (I'm typing this on a 5600G myself, which I like very much), but the AM4 socket is pretty much over: there is no upgrade path once it starts getting long in the tooth. (Unlike the other parts, which can be bumped: RAM, GPU, storage...)
p0w3n3d 8 months ago

Shouldn't they be named VPUs (vector processing units), since they are no longer used to produce graphics?
rurban 8 months ago

We do this too, and you'd need to add a couple more zeros to the cost. For administration, it paid off that I'm a trained architect, because all the work is in cooling the room: lots of temperature shielding, air and water flow, monitors, ...
kendallgclark 8 months ago

https://www.stardog.com/blog/skathe-is-a-private-gpu-cloud/
rkwasny 8 months ago

RTX 4000 Ada? That's a very underpowered card: https://github.com/mag-/gpu_benchmark
BonoboIO 8 months ago
Hetzner has RTX 4000 for 185€ per month. Is your solution cheaper?
qmarchi 8 months ago

Tangential to the post:

I was going to toss an application your way, since it sounds like interesting work, but it looks like the Google Form on your Careers page was deleted.
drio 8 months ago
Do you mind sharing the details of the rack mount you use?
LarsDu88 8 months ago
How many GPU servers are we talking about here exactly?
erichileman 8 months ago

Why not run something like 8 x L40s for $4,750 a month from a bare-metal provider like latitude.sh? That seems far more cost-efficient and flexible.
briandilley 8 months ago

I skimmed to the part about "We host it in our WeWork office" and thought: WTF?