
Launch HN: Slai (YC W22) – Build ML models quickly and deploy them as apps

130 points by Mernit about 3 years ago
Hi HN, we're Eli and Luke from Slai (https://www.slai.io/hn/62203ae9ee716300083c879b). Slai is a fast ML prototyping platform designed for software engineers. We make it easy to develop and train ML models, then deploy them as production-ready applications with a single link.

ML applications are increasingly built by software engineers rather than data scientists, but getting ML into a product is still a pain. You have to set up local environments, manage servers, build CI/CD pipelines, and self-host open-source tools. Many engineers just want to leverage ML for their products without doing any of that. Slai takes care of all of it, so you can focus on your own work.

Slai is opinionated: we are specifically for software developers who want to build models into products. We cover the entire ML lifecycle, all the way from initial exploration and prototyping to deploying your model as a REST API. Our sandboxes contain all the code, datasets, dependencies, and application logic needed for your model to run.

We needed this product ourselves. A year ago, Luke was working as a robotics engineer on a computationally intensive problem on a robot arm (force vector estimation). He started writing an algorithm, but realized a neural network could solve the problem faster and more accurately. Many people had solved this before, so it wasn't difficult to find an example neural net and get the model trained. You'd think that would be the hard part—but actually the hard part was getting the model available via a REST API. It didn't seem sensible to write a Flask app and spin up an EC2 instance just to serve up this little ML microservice. The whole thing was unnecessarily cumbersome.

After researching various MLOps tools, we started to notice a pattern—most are designed for data scientists doing experimentation, rather than software engineers who want to solve a specific problem using ML. We set out to build an ML tool that is designed for developers and organized around SWE best practices. That means leaving notebooks entirely behind, even though they're still the preferred form factor for data exploration and analysis. We've made the bet that a normal IDE with some "Jupyter-lite" functionality (e.g. splitting code into cells that can be run independently) is a fair trade-off for software engineers who want easy and fast product development.

Our browser-based IDE uses a project structure with five components: (1) a training section, for model training scripts; (2) a handler, for pre- and post-processing logic for the model and the API schema; (3) a test file, for writing unit tests; (4) dependencies, which are interactively installed Python libraries; and (5) datasets used for model training. By modularizing the project in this way, we ensure that ML apps are functional end-to-end (if we didn't do this, you can imagine a scenario where a data scientist hands off a model to a software engineer for deployment, who is then forced to work out how to create an API around the model and how to parse a funky ML tensor output into a JSON field). Models can be trained on CPUs or GPUs, and deployed to our fully managed backend for invocation via a REST API.

Each browser-based IDE instance ("sandbox") contains all the source code, libraries, and data needed for an ML application. When a user lands on a sandbox, we remotely spin up a Docker container and execute all runtime actions in the remote environment. When a model is deployed, we ship that container onto our inference cluster, where it's available to call via a REST API.

Customers have so far used Slai to categorize bills and invoices for a fintech app; recognize gestures from MYO armband movement data; detect anomalies in electrocardiograms; and recommend content in a news feed based on previous content a user has liked/saved.

If you'd like to try it, here are three projects you can play with:

Convert any image into stylized art - https://www.slai.io/hn/62203ae9ee716300083c879b

Predict Peyton Manning's Wikipedia page views - https://www.slai.io/hn/6215708345d19a0008be3f25

Predict how happy people are likely to be in a given country - https://www.slai.io/hn/621e9bb3eda93f00081875fc

We don't have great documentation yet, but here's what to do: (1) Click "Train" to train the model. (2) Click the test tube icon to try out the model - this is where you enter sentences for GPT-2 to complete, or images to transform, etc. (3) Click "Test model" to run unit tests. (4) Click "Package" to, er, package the model. (5) Deploy, by clicking the rocket ship icon and selecting your packaged model. "Deploy" means everything in the sandbox gets turned into a REST endpoint for users to consume in their own apps. You can do the first three steps without signing up; there's a signup dialog before step 4.

We make money by charging subscriptions to our tool. We also charge per compute hour for model training and inference, but (currently) that's just the wholesale cloud cost—we don't make any margin there.

Our intention with Slai is to let people build small, useful applications with ML. Do you have any ideas for an ML-powered microservice? We'd love to hear about apps you'd like to create. You can create models from scratch or use pretrained models, so you can be really creative. Thoughts, comments, feedback welcome!
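To give a concrete feel for that last step, here is a minimal sketch of what calling a deployed sandbox over REST could look like from Python. The endpoint URL, auth header, and JSON schema are hypothetical placeholders for illustration, not Slai's documented API.

```python
# Hypothetical example of invoking a deployed Slai model over REST.
# The endpoint URL, API key header, and JSON payload below are placeholders,
# not Slai's documented interface.
import requests

ENDPOINT = "https://api.slai.io/v1/models/sentence-completer/invoke"  # placeholder
API_KEY = "sk_live_..."  # placeholder credential

payload = {"text": "The robot arm estimated the force vector and"}

resp = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()

# The handler component is what turns raw model output into JSON like this.
print(resp.json())
```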

24 comments

lysecret about 3 years ago
Congrats on the launch. I'm quite impressed.

Here are my unordered thoughts.

So it seems a lot like an improved Colab with a deployment stage. Which sounds good to me, though it will be much more expensive than Colab.

I like the pitch of SWEs doing ML instead of pitching towards data scientists. As a data scientist turned SWE I still miss the Jupyter-like cell-based execution. (You said it exists but I couldn't find it.)

In general I'm quite sceptical when it comes to online IDEs. However, for text- and image-based models it might be enough (since you don't need too much code).

There might be a valid niche between Colab on the one side and building it yourself with the AWS CLI on the other.

I wonder though: in your target market it really isn't such a big deal to spin up a REST API. There are no Lambdas with GPUs, though (but that should only be a matter of time). Or use something like AWS Batch for remote training. It will come down to: is it more convenient to code in your IDE while you handle Lambda, Batch, Docker, and CD, or to code in my own IDE and handle that stuff myself?

Wish you all the best!
omarhaneef about 3 years ago
Firstly, I can't believe you have enough instances to resist the HN hug of death, with so many people presumably running tests. So that is impressive.

Secondly, I ran the train -> test cycle and I didn't see any error metrics. Is the idea that if we were spinning up our own we would be outputting these ourselves? Or would we have trained up the model somewhere else and we would transfer it to SLAI to do a final test and then package it?
rish1_2 about 3 years ago
This is essentially HuggingFace models + AWS CDK deployed over Lambda. They are your biggest competition, but there is likely room for more. I think the key difference here is the training part, which can be done with SageMaker. If AWS makes it user-friendly, they will be a serious threat. Good luck!
5cotts about 3 years ago
This seems pretty cool! I deployed a model to a REST endpoint and am trying to test it out now using a Jupyter notebook running Python.

Two things that happened to me:

1) I wasn't able to install `slai` using pip and PyPI. I ended up downloading the source tarball from https://pypi.org/project/slai/#files and installing locally.

2) I am following the example for how to "Integrate" my model using Python under the "Metrics" tab. However, the call to `model = slai.model("foobarbaz")` is failing. It looks like the regex check for `MODEL_ROUTE_URI` from line 21 in `model.py` doesn't like my custom email address :(. For example, the following model endpoint isn't valid according to the regex: "s@slai.io/foo-bar-baz/initial" (my custom email is very similar to `s@slai.io`). I'll post the regex below.

`MODEL_ROUTE_URI = r"([A-Za-z0-9]+[\._]?[A-Za-z0-9]+[@]\w+[.]\w{2,3})/([a-zA-Z0-9\-\_]+)/?([a-zA-Z0-9\-\_]*)"`

Just wanted to let you know! Looking forward to experimenting with this more.
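For anyone who wants to reproduce the failure, here is a minimal, self-contained check against the regex quoted above; the two endpoint strings are made-up examples, not real model routes.

```python
# Reproduction of the MODEL_ROUTE_URI issue described above.
# The regex is copied verbatim from the comment; the routes are invented.
import re

MODEL_ROUTE_URI = r"([A-Za-z0-9]+[\._]?[A-Za-z0-9]+[@]\w+[.]\w{2,3})/([a-zA-Z0-9\-\_]+)/?([a-zA-Z0-9\-\_]*)"

routes = [
    "s@slai.io/foo-bar-baz/initial",        # one-character local part
    "someone@slai.io/foo-bar-baz/initial",  # two or more characters
]

for route in routes:
    match = re.match(MODEL_ROUTE_URI, route)
    print(route, "->", "ok" if match else "rejected")

# The email portion of the pattern, [A-Za-z0-9]+[\._]?[A-Za-z0-9]+, requires at
# least two characters before the "@", so a one-letter local part like "s@..."
# never matches.
```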
icyfox about 3 years ago
Congratulations on the launch, guys. The product need seems clear to me & is a pain point that I've felt most acutely in side projects I've worked on outside of our company's devoted CI infrastructure.

Are you planning any git or IDE integration? Most of the magic here seems to happen in the backend with easier training, scheduling, and inference. Could this be enabled locally so devs can iterate in an environment that's more comfortable to them?
kamikazeturtles about 3 years ago
Very interesting!

So when a user trains a model, you guys start up a Docker container with everything in it. You bind the container's ports to the host and add it to some key-value store that a reverse proxy references. Is that correct?

Sorry, I'm just really curious. It's a really interesting project. Do you guys have anything open source?
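For illustration, here is a minimal sketch of the routing scheme this comment hypothesizes: a reverse proxy consulting a key-value store of container addresses. It is the commenter's guess made concrete, not Slai's actual backend, and the model names, ports, and dict-backed "store" are invented.

```python
# Sketch of a reverse proxy that maps a model route to the host:port of its
# deployed container, as hypothesized in the comment above. Illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stand-in for a real key-value store (Redis, etcd, ...): route -> container address.
CONTAINER_REGISTRY = {
    "image-stylizer": "127.0.0.1:9001",
    "ecg-anomaly": "127.0.0.1:9002",
}

class ModelProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        model_id = self.path.strip("/").split("/")[0]
        target = CONTAINER_REGISTRY.get(model_id)
        if target is None:
            self.send_error(404, "unknown model")
            return
        # Forward the request to the container and relay its response.
        with urlopen(f"http://{target}{self.path}") as upstream:
            body = upstream.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ModelProxy).serve_forever()
```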
chrisweekly about 3 years ago
Awesome! This or something like it is going to bring ML to the (SWE) masses. Congrats, good luck, and thanks!
lysecret about 3 years ago
Heyo,

I looked a bit more at your service, since I am migrating one of our text classification models anyway right now. I decided against using it, but maybe my reasoning could still be helpful (and I see a lot of potential, so I want to help).

What I am using instead is a combination of AWS Batch and Colab. My reasoning:

# Local development

Yes, it is true that ML code can be quite well separated from the rest. But then there is the data, so the extract and load step. I know you have bindings to, for example, Postgres, but I wouldn't trust you with my DB.

Always moving files over could be done (we keep a backup anyway), but it would be more of a hassle. Also, even for the actual ML code it is nice to have it in a proper IDE with a good debugger etc. I prefer to write the ML code locally and then just package it and send it away to be trained.

Also, yes, there is a cost to set up the infrastructure, but I prefer to solve that with code generation/templates and libraries (to send your Docker image to Lambda for inference and to Batch/Colab for training). It is a cost that is paid once and then never again.

# Price

Your GPU instance costs 1 dollar an hour, which is about 3 times as much as a p2.xlarge spot instance (which I assume is the closest one). Colab, of course, is 10 bucks a month / free. This is ignoring AWS credits for now. (It would also be good to know which exact GPU you provide.)
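As a concrete version of the "package it locally and send it away to be trained" workflow described above, here is a minimal AWS Batch submission sketch using boto3; the job queue, job definition, command, and S3 paths are placeholder names, not real resources.

```python
# Sketch: submit a containerized training job to AWS Batch.
# The queue, job definition, and S3 URIs below are placeholders.
import boto3

batch = boto3.client("batch")

response = batch.submit_job(
    jobName="text-classifier-train",
    jobQueue="gpu-spot-queue",              # e.g. backed by p2.xlarge spot instances
    jobDefinition="train-text-classifier",  # job definition pointing at your training image
    containerOverrides={
        "command": ["python", "train.py", "--epochs", "10"],
        "environment": [
            {"name": "DATA_URI", "value": "s3://my-bucket/datasets/text.parquet"},
            {"name": "MODEL_OUT", "value": "s3://my-bucket/models/text-classifier/"},
        ],
    },
)
print("Submitted Batch job:", response["jobId"])
```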
thegginthesky about 3 years ago
Congrats on the launch!

Overall I like the idea, and I agree with you: either the tools are too focused on data scientists or there is a lot of DevOps involved to get things started.

I work in the field, so I have some questions:

- Are there any plans to connect the project to a git repo?

- Is there any option for me to pass trained binaries to your product? For example, I have a beast of a machine and can easily train things locally, but I'd like to host the inference with you guys.

- Do you intend to allow automated testing and linting?
luke-stanley about 3 years ago
This is cool. It took me a while to figure out that you want people to click the test button on the sidebar to try it out, not the "Test model" button's unit tests in the bottom right. Unit tests might benefit from a different kind of icon. I tried the "Interactive Mode" toggle button too, and that didn't do anything obvious.
crsn about 3 years ago
Your website pitches this product SO well. Kudos.
sandGorgon about 3 years ago
This is pretty cool! Especially the opinionated structuring part.

Now, SageMaker allows you to download your running code and Docker image (https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-data-export.html). It also allows you to simulate running locally - https://github.com/aws/sagemaker-tensorflow-training-toolkit

More than anything else, this is basically just a way to calm worries about lock-in. Google ML resisted this for a long time, but even they had to finally do it - https://cloud.google.com/automl-tables/docs/model-export

Are you planning something similar?
1_over_n about 3 years ago
Sorry, I haven't looked at this properly yet - keen to know if I can upload a custom pre-trained model built with any of the popular libraries (PyTorch, Keras, etc.) and just do the deployment as an app with Slai?
frozencell about 3 years ago
Superb! Can we implement a paper like this? https://github.com/nv-tlabs/editGAN_release
dayeye2006 about 3 years ago
How do you guys compare with SageMaker? It also lets you bring in custom containers for the training and (batch/real-time) inference phases.
timmit about 3 years ago
I had a similar idea in 2018, to turn AI models into API endpoints, but I did not do anything. :cry:
ayanb about 3 years ago
Cool product! Are you guys using WASM under the hood?
thecleaner about 3 years ago
How is your product different from SageMaker? Why can't I replicate the same functionality with SageMaker endpoints?
tullie about 3 years ago
Amazing. About time someone built a good replacement for SageMaker! Congrats on the launch.
gergely about 3 years ago
What is your plan for how I'll be able to automatically feed the model with data?
Oras about 3 years ago
Congratulations on the launch. How is Slai different from HuggingFace?
subrao1 about 3 years ago
We would like to talk to you. We are in San Jose, CA
dayeye2006 about 3 years ago
How do you guys compare with SageMaker?
subrao1 about 3 years ago
We would like to talk to you