Hey everyone,<p>I’m the maintainer of the popular screenshot-to-code repo on GitHub (46k+ stars).<p>When Claude Opus was released, I wondered: if you sent in a video of yourself using a website or app, could the LLM build it as a functional prototype? To my surprise, it worked quite well.<p>Here are two examples:<p>* In this video, you can see the AI replicating Google with auto-complete suggestions and a search results page (it failed at putting the results on a separate page). <a href="https://streamable.com/s24pq6" rel="nofollow">https://streamable.com/s24pq6</a><p>* Here, we show it a multi-step form (<a href="https://tally.so/templates/online-quiz/V3qOnk" rel="nofollow">https://tally.so/templates/online-quiz/V3qOnk</a>) and ask Claude to re-create it. It does a really good job! <a href="https://streamable.com/gstsgn" rel="nofollow">https://streamable.com/gstsgn</a><p>The technical details: Claude Opus only allows you to send a maximum of 20 images, so 20 frames are extracted from the video and passed along with a prompt that uses several Claude-specific techniques, such as XML tags and pre-filling the assistant response. In total, two passes are performed, with the second pass instructing the AI to improve on the first attempt; more passes might help as well (there's a rough sketch of the flow at the end of this post). I suspect the model has Google.com memorized, but for many other multi-page/multi-screen apps it still tends to work quite well.<p>You can try it out by downloading the GitHub repo and setting an Anthropic API key in backend/.env. Be warned that one creation/iteration (with 2 passes) can be quite expensive ($3-6).
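<p>For anyone curious, here’s a minimal sketch of that pipeline (not the repo’s actual code): OpenCV pulls up to 20 evenly spaced frames from the recording, they get base64-encoded, and a single Claude Opus request is made with the frames, an XML-tagged instruction, and a pre-filled assistant turn. The file name, prompt text, and helper functions are illustrative assumptions; see the repo for the real implementation.<p><pre><code>import base64
import cv2          # pip install opencv-python
import anthropic    # pip install anthropic

MAX_FRAMES = 20  # Claude Opus accepts at most 20 images per request

def extract_frames(video_path: str, max_frames: int = MAX_FRAMES) -> list[str]:
    """Return up to max_frames evenly spaced JPEG frames, base64-encoded."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = [int(i * total / max_frames) for i in range(max_frames)]
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            continue
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            frames.append(base64.b64encode(buf.tobytes()).decode("utf-8"))
    cap.release()
    return frames

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def generate_html(frames_b64: list[str]) -> str:
    """Send the frames plus an XML-tagged instruction; pre-fill the assistant turn."""
    image_blocks = [
        {"type": "image",
         "source": {"type": "base64", "media_type": "image/jpeg", "data": f}}
        for f in frames_b64
    ]
    response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=4096,
        messages=[
            {"role": "user",
             "content": image_blocks + [
                 {"type": "text",
                  "text": "&lt;task&gt;Recreate the app shown in these video frames "
                          "as a single functional HTML/JS prototype.&lt;/task&gt;"}]},
            # Pre-filling the assistant turn nudges Claude to answer with code only.
            {"role": "assistant", "content": "&lt;html&gt;"},
        ],
    )
    return "&lt;html&gt;" + response.content[0].text

frames = extract_frames("screen_recording.mp4")
first_pass = generate_html(frames)
# A second pass would resend the frames plus first_pass and ask Claude to improve it.
</code></pre><p>The second pass is the same call again, with the first attempt included in the conversation and an instruction to improve on it.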