Forget LINPACK and friends. Jack Dongarra is going to need to switch to the new metric for supercomputers: kilograms of H100 GPUs, about 3,300 give or take a few grams for this system.
> For use by startup investments of Nat Friedman and Daniel Gross

> Reach out if you want access

I'm confused by the last two bullet points. Is this website only meant to be used by these "startup investments", or can anyone fill out the linked form?
Can the creators explain in more detail: how is this different from (for example) the OpenAI cluster that MSFT built in Azure? Is it hosted in an existing cloud provider, or in a data center? Which data center? Who admins the system, and is there an SRE team in case it goes down during training? And can you attempt to run the same benchmarks that Top500 uses, to determine what your *double precision* flops are, and give that number in addition to your "10 exaflops" (which I believe is single precision)?
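For context on how much the headline number depends on precision, here is a back-of-the-envelope sketch assuming roughly 2,500 H100 SXM GPUs and NVIDIA's published dense (non-sparse) per-GPU peaks; both the GPU count and the spec values are my assumptions, not figures from the announcement:

    NUM_GPUS = 2_500  # assumed cluster size, not an official figure

    # Approximate per-GPU peak TFLOPS for an H100 SXM, dense (no 2:4 sparsity)
    PEAK_TFLOPS_PER_GPU = {
        "FP64 (vector)":            34,
        "FP64 (tensor core)":       67,
        "TF32 (tensor core)":      495,
        "BF16/FP16 (tensor core)": 989,
        "FP8 (tensor core)":      1979,
    }

    for precision, tflops in PEAK_TFLOPS_PER_GPU.items():
        exaflops = NUM_GPUS * tflops * 1e12 / 1e18
        print(f"{precision:<24} ~{exaflops:.2f} EFLOPS peak")

    # A "10 exaflops" headline based on FP8/FP16 (with sparsity) collapses to
    # well under 1 EFLOPS at the FP64 precision Top500's HPL benchmark measures.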
Emad from Stability estimates this at ~$4M/month.
<a href="https://twitter.com/emostaque/status/1668666509298745344" rel="nofollow noreferrer">https://twitter.com/emostaque/status/1668666509298745344</a>
lmao, are they trolling with the naming?

https://www.cerebras.net/andromeda/
Same guys behind https://aigrant.org; maybe it's mainly a way to get deal flow?
Looks like they've reserved a bunch of compute from Lambda Labs?

Edit: Based on this tweet, it looks very similar: https://twitter.com/LambdaAPI/status/1668676838044868620
> Big enough to train llama 65B in ~10 days

Y'all could totally eat Meta's lunch and train an open LLM with all the innovations that have come since LLaMA's release. Other startups are trying, but they all seem bottlenecked by training time/resources.

This could be where the next Stable Diffusion 1.5 comes from.
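For what it's worth, a rough sanity check of the "~10 days" figure using the common ~6 × params × tokens estimate for training FLOPs; the GPU count, peak throughput, and utilization below are my assumptions, not numbers from the announcement:

    # Rough training-time estimate using FLOPs(train) ~ 6 * params * tokens
    PARAMS = 65e9        # LLaMA 65B parameter count
    TOKENS = 1.4e12      # tokens LLaMA 65B was reportedly trained on
    train_flops = 6 * PARAMS * TOKENS              # ~5.5e23 FLOPs

    NUM_GPUS = 2_500                               # assumed cluster size
    PEAK_BF16 = 989e12                             # dense BF16 tensor-core peak per H100
    MFU = 0.40                                     # assumed model FLOPs utilization

    seconds = train_flops / (NUM_GPUS * PEAK_BF16 * MFU)
    print(f"~{seconds / 86_400:.1f} days")         # ~6-7 days; closer to ~10 at lower MFU

So the "~10 days" claim is in the right ballpark under these assumptions.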