Hi everyone, I'm Vasek (<a href="https://x.com/mlejva" rel="nofollow">https://x.com/mlejva</a>), the CEO of the company behind this - <a href="https://e2b.dev" rel="nofollow">https://e2b.dev</a>. The company is called E2B. We build an open-source (<a href="https://github.com/e2b-dev">https://github.com/e2b-dev</a>) devtool that makes it easy to run untrusted AI-generated code in our secure sandboxes. You can think of us as a coding runtime for LLMs.
You can self-host us on GCP (<a href="https://github.com/e2b-dev/infra/blob/main/self-host.md">https://github.com/e2b-dev/infra/blob/main/self-host.md</a>), and we're working on AWS, then Azure, and then any Linux machine.<p>This repo is one of our open-source projects that we're releasing to show developers what they can build with E2B. We took our sandboxes, which are powered by AWS's Firecracker, and gave them a Linux GUI. At the same time, we made it easy to control this cloud computer with our Desktop SDK (<a href="https://github.com/e2b-dev/desktop">https://github.com/e2b-dev/desktop</a>). Essentially, we built a virtual desktop computer for AI and gave LLMs control of it.
Here's a demo we showed at an event - <a href="https://x.com/tereza_tizkova/status/1878834392891838556" rel="nofollow">https://x.com/tereza_tizkova/status/1878834392891838556</a><p>We think computer use is still highly experimental, but it feels similar to AI codegen in early 2023. You see the sparks, but it's not quite there yet. However, we wanted to research whether open-source LLMs could at least get some results. Here we're using Llama 3.2, Llama 3.3, and OS-Atlas (a fine-tuned Qwen model).<p>If you have any questions, happy to answer them!<p>We're also hiring! If you're a fullstack engineer, distributed systems engineer, AI engineer, product designer, or GTM person and based in SF, send a hello to vasek @ e2b.dev!