
Show HN: Shadeform – Single Platform and API for Provisioning GPUs

62 points by edgoode, almost 2 years ago
Hi HN, we are Ed, Zach, and Ronald, creators of Shadeform (https://www.shadeform.ai/), a GPU marketplace to see live availability and prices across the GPU market, as well as to deploy and reserve on-demand instances. We have aggregated 8+ GPU providers into a single platform and API, so you can easily provision instances like A100s and H100s where they are available.

From our experience working at AWS and Azure, we believe that cloud could evolve from all-encompassing hyperscalers (AWS, Azure, GCP) to specialized clouds for high-performance use cases. After the launch of ChatGPT, we noticed GPU capacity thinning across major providers and emerging GPU and HPC clouds, so we decided it was the right time to build a single interface for IaaS across clouds.

With the explosion of Llama 2 and open-source models, we are seeing individuals, startups, and organizations struggling to access A100s and H100s for model fine-tuning, training, and inference.

This encouraged us to help everyone access compute and increase flexibility with their cloud infra. Right now, we've built a platform that allows users to find GPU availability and launch instances from a unified platform. Our long-term goal is to build a hardwareless GPU cloud where you can leverage managed ML services to train and infer in different clouds, reducing vendor lock-in.

We shipped a few features to help teams access GPUs today:

- a "single pane of glass" for GPU availability and prices;
- a "single control plane" for provisioning GPUs in any cloud through our platform and API;
- a reservation system that monitors real-time availability and launches GPUs as soon as they become available.

Next up, we're building multi-cloud load-balanced inference, streamlining self-hosting of open-source models, and more.

You can try our platform at https://platform.shadeform.ai. You can provision instances in your accounts by adding your cloud credentials and API keys, or you can leverage "ShadeCloud" and provision GPUs in our accounts. If you deploy in your account, it is free. If you deploy in our accounts, we charge a 5% platform fee.

We'd love your feedback on how we're approaching this problem. What do you think?
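For readers curious what this unified flow could look like in code, here is a minimal sketch against a hypothetical REST API. The base URL, endpoint paths, payload fields, and the SHADEFORM_API_KEY environment variable are all assumptions made for illustration; they are not Shadeform's documented API.

```python
# Hypothetical sketch of a cross-cloud GPU provisioning flow. All endpoints,
# fields, and response shapes below are invented for illustration; consult the
# real Shadeform documentation for the actual API.
import os
import requests

API_BASE = "https://api.example-gpu-marketplace.com/v1"  # placeholder base URL
HEADERS = {"Authorization": f"Bearer {os.environ['SHADEFORM_API_KEY']}"}

# 1. "Single pane of glass": list live availability and prices across providers.
offers = requests.get(
    f"{API_BASE}/instances/availability",
    params={"gpu_type": "A100", "min_gpu_count": 8},
    headers=HEADERS,
    timeout=30,
).json()

# 2. Pick the cheapest offer that is actually available right now.
cheapest = min(
    (o for o in offers if o["available"]),
    key=lambda o: o["hourly_price_usd"],
)

# 3. "Single control plane": launch the instance in whichever cloud won.
resp = requests.post(
    f"{API_BASE}/instances",
    json={"offer_id": cheapest["id"], "name": "llama2-finetune-node"},
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()
print("Provisioned", resp.json()["instance_id"], "on", cheapest["provider"])
```

The reservation feature described above would presumably wrap step 1 in a polling or callback loop on the platform side, firing step 3 as soon as capacity appears.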

9 comments

thecupisblue, almost 2 years ago

First off, the color and the font of the hero look so neat together. Just giving straight up simple, professional but modern vibes. Good job whoever picked it!

Now, regarding the product - this is amazing. From the perspective of saving time and money digging through providers, to the part I actually find the most impactful - the simplification of the AWS console mess down to a niche use case. While I understand GPUs are the hot thing now and there is a scramble for every last FLOP, if you ever decide to pivot, I'd gladly pay more money each month to use such a simplified niche AWS/generic cloud console.

Can't wait to have a chance to play with this more, keep up the good work and good luck!
alando46, almost 2 years ago

The problem for our use case is that saving on GPUs is pointless if we have to keep paying egress fees for our 250 TB training dataset.

The single interface for any cloud GPU is cool, but it's hard to imagine it taking off without some additional features.

I think for lots of shops the hardest part isn't the compute but moving the data around. E.g., for us, we use S3, some Lustre caching, and spot instance nodegroups. We are a deep learning research team that spends roughly $40-50k/month on AWS compute for training jobs. I imagine this is somewhat mid-tier: maybe more than some, but certainly far less than others.

For inference, data egress costs could be less of an issue, but your service would really need to be almost invisible. It would probably be pretty complicated for a number of reasons, but if you could design a "virtual on-demand nodegroup"™ that I could add to my existing clusters and then map to whatever k8s stuff I want, that would probably be useful. I would need to be able to auto-deploy a base image to the machine and then provision the node and register it with my cluster.

Just some unorganized thoughts. Good luck and have fun.
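To put the egress concern in rough numbers, here is a back-of-the-envelope sketch; the $0.05-0.09/GB range is an assumed ballpark for hyperscaler internet egress pricing, not a quoted rate.

```python
# Back-of-the-envelope: moving a 250 TB training set out of a hyperscaler,
# compared against the ~$40-50k/month compute spend mentioned above.
# The per-GB egress rates are assumed ballpark figures, not quotes.
DATASET_TB = 250
GB_PER_TB = 1000  # decimal terabytes keep the estimate round

for rate_per_gb in (0.05, 0.09):  # assumed low/high internet egress rates
    cost = DATASET_TB * GB_PER_TB * rate_per_gb
    print(f"${rate_per_gb:.2f}/GB -> ~${cost:,.0f} per full copy of the dataset")

# ${0.05}/GB -> ~$12,500 per full copy; ${0.09}/GB -> ~$22,500 per full copy
```

Even a single full transfer per month eats a large share of whatever is saved on GPU hours, which is the commenter's point.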
edgoode, almost 2 years ago

Here are two demos of provisioning and reserving GPUs through our platform:

Provisioning: https://www.youtube.com/watch?v=7WyKPMS80Pk

Reservations: https://www.youtube.com/watch?v=Ab5GmfMYWKA
doctorpangloss, almost 2 years ago

- SSH access isn't super useful. If I have to author a bootstrapping script for my system, it's too much friction.
- The people who thrive at this use orchestration, like Slurm or Kubernetes. So the nodes I buy should join my orchestration control plane automatically.
- People who don't use orchestration, or don't own their orchestration, will not run big jobs or be repeat customers. It doesn't make sense to use nonstandard orchestration. I understand that it is something people do, but it's dumb.
- So basically I would pay for a ClusterAutoscaler across clouds. I would even pay a 5% fee for it automatically choosing the cheapest of the fungible nodes. I am basically describing Karpenter for multiple clouds. Then at least the whole offering makes sense from a sophisticated person's POV: your Karpenter clone can see e.g. a Ray CRD and size the nodes, giving me a firm hourly rate or even an upfront price to approve.
- I wouldn't pay that fee to use your control plane; I don't want to use a startup's control plane or scheduler.
- I'm not sure why the emphasis on GPU availability and so on. Either AWS/GCE/AKS grants you quota or it doesn't. Your thing ought to delegate and automate the quota requests; maybe you even have an account manager at every major cloud to bundle it all.
- As you have probably noticed, the off-brand clouds play lots of games with their supposed inventory. They don't have any expertise running applications or doing networking; they are ex crypto miners. I understand that they offer a headline price that is attractive, but for an LLM training job they "vast"ly overpromise their "core" offering.
- If you really want to save people money on GPUs, buy a bunch of servers, rack them, and sell a lower hourly rate.
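As a rough illustration of the "Karpenter for multiple clouds" idea in the comment above, here is a toy node-selection sketch. The provider catalog, prices, and availability flags are invented for the example; a real autoscaler would also handle quotas, spot interruptions, and node registration.

```python
# Toy sketch of cross-cloud "cheapest fungible node" selection, in the spirit
# of a Karpenter-like autoscaler spanning providers. The catalog is invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Offer:
    provider: str
    gpu: str
    gpus_per_node: int
    hourly_usd: float
    available: bool

CATALOG = [  # invented example inventory
    Offer("cloud-a", "H100", 8, 32.00, True),
    Offer("cloud-b", "H100", 8, 27.50, False),
    Offer("cloud-c", "A100", 8, 14.40, True),
]

def quote_nodes(gpu: str, gpus_needed: int) -> tuple[list[Offer], float]:
    """Cover the GPU request with the cheapest currently-available offer."""
    candidates = sorted(
        (o for o in CATALOG if o.gpu == gpu and o.available),
        key=lambda o: o.hourly_usd / o.gpus_per_node,
    )
    if not candidates:
        raise RuntimeError("no capacity available; keep polling or reserve")
    best = candidates[0]
    chosen, remaining = [], gpus_needed
    # A production autoscaler would respect per-provider quotas and mix offers;
    # here we simply take as many nodes of the cheapest offer as needed.
    while remaining > 0:
        chosen.append(best)
        remaining -= best.gpus_per_node
    return chosen, sum(o.hourly_usd for o in chosen)

nodes, hourly = quote_nodes("H100", 16)
print(f"{len(nodes)} node(s) on {nodes[0].provider} for ${hourly:.2f}/hour")
```

The upfront hourly quote is the "firm hourly rate to approve" the commenter asks for; scaling an existing cluster would additionally require the nodes to bootstrap and join the user's own control plane.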
lucasfcosta, almost 2 years ago

Congrats on the launch!

My co-founder and I always joke that there are only two hair-on-fire problems in 2023 and they can be summarised in 6 letters: GPU & PMF.

Really love what you're building.
mike_d, almost 2 years ago

Be super careful inserting yourself as a reseller of GPUs (ShadeCloud).

You'll quickly find that your platform's primary use is to turn stolen credit cards into cryptominers.
Takennickname, almost 2 years ago

Surprisingly little engagement with this post. I'm not in the market myself, but can people who use GPUs and didn't find this offering attractive explain why?
marcopicentini, almost 2 years ago

It's like Cloud66 but with the GPU in the headline, isn't it? What's the difference from Cloud66?
71a54xd, almost 2 years ago
Any plans to add providers like TensorDock or Vast?