The most fun thing about this demo (and it is 100% worth trying out; you'll need Chrome for it) is that it shows there's no persistent memory at all. If you get bored of the area you are in, look straight up at the sky and then look back down: an entirely new area will be generated, since nothing outside your field of view is remembered. If you see something interesting - like a fence post - keep it in your field of vision and you should start to see more similarly interesting things appear.
This is an excellent demo. I watched the "DOOM with no game engine" video a while back and was mildly intrigued; it is much more engaging to be the one in control.<p>They made a great decision by letting me download the video of my journey when the demo ended. I saw some neat constructs and was thinking, "I wish I could show this to some friends." I can!<p>If you get lost in the dark like I did, just press E to bring up the items menu. Doing that a couple of times and moving around a bit brought some novelty back into view for the model to work with.
A while back, "deformable terrain" and destructible walls were big buzzwords in games. AFAICT, though, they've rarely been used in truly open-ended ways, as opposed to specific "there are things behind some of these walls, or buried under this mound" setups. Generally there are still certain types of environment objects that let you do certain things, and many more that let you do nothing.<p>Generative AI could be an interesting approach to solving the "what happens if you destroy [any particular element]" problem.<p>For a lot of games you'd probably still want specific destinations set in the map; maybe it just becomes much more open-ended as far as how you get there. Think of the ascend-through-matter mechanics in Tears of the Kingdom, but in a "just start trying to dig anywhere" way, with gen AI figuring out exactly how much dirt or other material gets piled up when you dig in a specific place.<p>Or, for games with more of an emphasis on random drops or random maps, you could leverage the randomness more directly. It could be really cool for a roguelike.
This looks very similar to "Diffusion Models Are Real-Time Game Engines"[1], which circulated on HN a few months ago [2] and which played DOOM. There's some pretty interesting commentary on that post that might also apply here.<p>I'd like to do a deeper dive into the two approaches, but on the surface one interesting note is that Oasis specifically mentions a task-specific ASIC (presumably for inference?):<p>> When Etched's transformer ASIC, Sohu, is released, we can run models like Oasis in 4K.<p>[1] <a href="https://gamengen.github.io/" rel="nofollow">https://gamengen.github.io/</a><p>[2] <a href="https://news.ycombinator.com/item?id=41375548">https://news.ycombinator.com/item?id=41375548</a>