
Show HN: Cardamom – Deploy ML to AWS Lambda with a Function Call

4 points, by nmhancoc, almost 2 years ago
Hi HN!

I wanted to share my machine learning compiler, which hosts ML models on AWS Lambda.

If you go to the linked page, there are instructions to run a script which generates a model using sklearn, feeds it to my endpoint, and then calls the created endpoint on Lambda. In case you're unable to run a script, I've also included a video on the page.

Unlike other ML hosting services I've seen, where models are spun up behind containers, this service compiles the model down to either a static C library or a WASM library, and then has a template Lambda function do the HTTP parsing, argument handling, etc.

Upsides of this approach:

- You can directly embed models in applications, callable via FFI
- You can run ML in weird places, like mobile apps, embedded devices, or the browser

Downsides:

- I have to implement the model inference for each algorithm
- As such, algorithm support is limited right now

My goal is to make the interface easy enough that anyone who can build a model can use it to deploy that model somewhere it can deliver value, rather than having to request an engineer's help.

I'm fine hosting models for now, but I'm also excited about building out custom integrations, like running the models in mobile apps, inside existing applications via FFI, or even on embedded devices.

Please let me know what you think!
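The core idea, compiling a trained model down to dependency-free inference code rather than serving it from a container, can be sketched in a few lines. This is not Cardamom's actual implementation, just an illustration of what "implement the model inference for each algorithm" means: extract the learned parameters from a sklearn model and re-implement `predict` as plain arithmetic, which could equally be emitted as a static C function.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small model the usual way.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# "Compiling" here means capturing the learned parameters...
coef, intercept = model.coef_, model.intercept_

def compiled_predict(x):
    # ...and re-implementing inference with no sklearn dependency:
    # multinomial logistic regression predicts the argmax of the
    # linear scores, so this is one matrix-vector product per call.
    scores = coef @ np.asarray(x) + intercept
    return int(np.argmax(scores))

# The hand-rolled inference agrees with sklearn's predict().
assert all(compiled_predict(x) == p for x, p in zip(X, model.predict(X)))
```

A function this small embeds naturally behind an FFI boundary or inside a Lambda handler, at the cost the post notes: every supported algorithm needs its own such reimplementation.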

No comments yet