I created this website to follow along as LLMs are set free on Docker containers. It's an interesting experiment, although not many useful commands get executed. It's striking how much stronger the o1-mini model is compared to the others, even with the delay handicap.

The AIs are kept alive for 100 commands, though errors can end a run before they get that far. The chat context is reset every generation, but the environment they are set free in persists, so each generation builds on the last one. The bots are isolated from one another; they do not share environments.

Right now only a few models are active, but I'm planning to add Claude, Gemini, and quite a few others. If you want to stay posted, there is a form where you can subscribe to future updates!
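
For anyone curious about the mechanics, here is a minimal sketch of how a generation loop like this could look; the names, the ask_model stub, and the error handling are my own assumptions, not the site's actual code:

    import docker

    MAX_COMMANDS = 100

    def ask_model(history):
        """Hypothetical stand-in for the LLM call: given the transcript of
        (command, output) pairs so far, return the next shell command."""
        raise NotImplementedError

    def run_generation(container_name):
        client = docker.from_env()
        # The bot's environment is a long-lived container that persists
        # across generations; only the chat history below starts fresh.
        container = client.containers.get(container_name)
        history = []
        for _ in range(MAX_COMMANDS):
            command = ask_model(history)
            result = container.exec_run(["sh", "-c", command])
            history.append((command, result.output.decode(errors="replace")))
            if result.exit_code != 0:
                break  # an error can end the generation before 100 commands

Giving each bot its own container is what keeps the environments isolated while still letting progress carry over between generations.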