Hi everyone! My name is Varun Mohan and I'm the CEO of Exafunction (we're hiring!). We're excited to share a bit about our platform, which we think will be the best way to run deep learning and GPU workloads in the cloud, starting with inference. We've already proven out our product by managing inference for some of the world's biggest GPU users, namely autonomous vehicle companies. You can read our Series A blog post here, where we briefly describe the tech: https://exafunction.com/blog/series-a-announcement

I'm happy to answer any questions and speak to any of the technical challenges we've had to tackle to make your remote GPU code feel effortlessly local.
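To make the "remote code feels local" idea concrete: the general pattern behind systems like this is transparent proxying, where a call is serialized at the call site, shipped to a worker, executed there, and the result is sent back, so the caller's code reads like an ordinary local function call. Below is a minimal, purely hypothetical sketch of that pattern. The names (`remote`, `FakeWorker`, `infer`) are illustrative inventions, not Exafunction's actual API, and the in-process `FakeWorker` stands in for a real remote GPU node.

```python
import pickle


class FakeWorker:
    """Stands in for a remote GPU worker: receives serialized calls,
    executes the registered function, and returns serialized results."""

    def __init__(self):
        self._registry = {}

    def register(self, name, fn):
        self._registry[name] = fn

    def run(self, name, payload):
        # In a real system this would happen on a remote machine.
        args, kwargs = pickle.loads(payload)
        result = self._registry[name](*args, **kwargs)
        return pickle.dumps(result)


def remote(worker):
    """Decorator: calls to the wrapped function are serialized and routed
    through `worker`, so the call site looks like ordinary local code."""

    def wrap(fn):
        worker.register(fn.__name__, fn)

        def proxy(*args, **kwargs):
            payload = pickle.dumps((args, kwargs))
            return pickle.loads(worker.run(fn.__name__, payload))

        return proxy

    return wrap


worker = FakeWorker()


@remote(worker)
def infer(batch):
    # Placeholder for a GPU-bound model forward pass.
    return [x * 2 for x in batch]


print(infer([1, 2, 3]))  # [2, 4, 6]
```

A production version of this pattern would replace `FakeWorker` with an RPC transport and a fleet scheduler, and would handle GPU memory placement, batching, and failure recovery, which is where most of the hard engineering lives.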
This is cool, but the overhead to learn more is pretty high. I have to "contact sales" to either get started or get a demo, there are no technical details about how it works (I assume it's basically GPU/deep learning lambda functions?), and there are no case studies or performance numbers showing how much of a cost improvement I can expect.