
Show HN: Deepserve.ai – Heroku for Deep Learning

7 points by jeffrwells over 4 years ago

4 comments

jeffrwells over 4 years ago
Hi HN,

I'm Jeff Wells. I am launching https://www.deepserve.ai, a platform to deploy and host machine learning models.

Deepserve makes it extremely easy to take a trained model and deploy it with a single CLI command: `deepserve deploy`

We host the model, deploy it, and give you an API endpoint you can call to make predictions from your applications. We manage the devops and dependencies and ensure your model is always running. We'll scale up as much as your application needs, and scale down during off-peak hours to save you money over renting your own servers.

To make it easier to use, we have client SDKs that make calling your model a single line of code. We store all of the production inference data so you can grow your training sets with real examples.
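The thread doesn't show what the Deepserve SDK or endpoint actually looks like, so as a rough illustration only, here is a minimal sketch of calling a hosted prediction endpoint over plain HTTP. The URL, header, and payload shape are assumptions for the example, not Deepserve's documented API.

```python
# Minimal sketch of calling a hosted model endpoint over HTTP.
# The endpoint URL, auth header, and payload shape are illustrative
# assumptions, not Deepserve's actual API.
import requests

API_URL = "https://api.example-model-host.com/v1/models/my-model/predict"  # hypothetical
API_KEY = "your-api-key-here"  # hypothetical credential


def predict(features):
    """Send one inference request and return the parsed prediction."""
    resp = requests.post(
        API_URL,
        json={"inputs": features},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(predict({"text": "example input"}))
```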
Comment #24485840 not loaded
kparaju over 4 years ago
All three of the big cloud providers have a solution for ML model deployment and support more libraries than just Fast.AI. What are some of the reasons one would use Deepserve.ai vs. the other cloud providers?

This is really neat though. The fact that this is so easy to use will be a big appeal to a lot of data scientists who don't want to write production code or deal with lots of bootstrap configuration. There are also a lot of benefits to abstracting the deployment, as you can seamlessly add features like logging, or even make performance improvements by tweaking a few env vars, and everyone gets them by default! Thanks for sharing.
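To make the abstraction point concrete, here is a purely illustrative sketch (not Deepserve's SDK) of a thin shared client wrapper: because every caller goes through it, features like logging and env-var-driven tuning can be added once and picked up by everyone without changing call sites.

```python
# Illustrative only: a thin client wrapper (not Deepserve's SDK) showing how
# an abstraction layer can add logging and env-var-driven tuning by default.
import logging
import os

import requests

log = logging.getLogger("model_client")


class ModelClient:
    def __init__(self, url):
        self.url = url
        # A behaviour tweak every caller picks up without code changes.
        self.timeout = float(os.getenv("MODEL_CLIENT_TIMEOUT", "30"))

    def predict(self, payload):
        log.info("prediction request to %s", self.url)  # logging for free
        resp = requests.post(self.url, json=payload, timeout=self.timeout)
        resp.raise_for_status()
        return resp.json()
```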
Comment #24488874 not loaded
p1esk over 4 years ago
Hi Jeff, I'm not your target customer, but I could become one if I ever launch my own startup. So I'm curious about:

What's your background? How did you come up with this idea? How did you get started? How did you get funding? How do you market this product (other than posting on HN)? How much do you work on it (what's your day like)? When do you plan to become profitable? Are you worried about competition (e.g. Amazon offering an easy way to deploy a model)?
Comment #24488908 not loaded
panabee over 4 years ago
Which GPUs do you use? Is the pricing ($1 per 1000 requests) independent of inference time and bandwidth? For instance, some of our models finish within 2 seconds while other models take ~60 seconds, depending on the input. We have been searching for something like this for a long time, but all the other options were lacking in one way or another.
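For a rough sense of why flat per-request pricing matters when inference times vary this much, here is a back-of-the-envelope comparison. The $1 per 1000 requests figure is quoted from the comment above; the GPU rental rate and the one-request-at-a-time utilization are assumptions for illustration.

```python
# Back-of-the-envelope cost comparison. Only the $1 per 1000 requests
# figure comes from the thread; the rental rate and serial utilization
# are assumptions for illustration.
PRICE_PER_REQUEST = 1.0 / 1000      # $1 per 1000 requests (quoted above)
GPU_RENTAL_PER_HOUR = 1.50          # assumed hourly rate for a rented GPU

for seconds_per_request in (2, 60):
    requests_per_hour = 3600 / seconds_per_request  # assumes one request at a time
    hosted_cost_per_hour = requests_per_hour * PRICE_PER_REQUEST
    print(f"{seconds_per_request:>2}s/request: "
          f"hosted ~${hosted_cost_per_hour:.2f}/h vs rental ${GPU_RENTAL_PER_HOUR:.2f}/h")
```

Under these assumptions a fast (2 s) model costs more per hour hosted than a rented GPU at full load, while a slow (60 s) model is far cheaper, which is presumably why the question of whether pricing depends on inference time matters.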
Comment #24495270 not loaded