Hey folks, I’ve been working on using ControlNet to take in a video game level (input as a depth image) and output a beautiful illustration of that level. Play with it here: dimensionhopper.com or read the blog post about what it took to get it to work. It's been a super fun project.
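For anyone curious what the "level as depth image" input might look like in practice, here's a minimal sketch of turning a tile grid into a grayscale conditioning image. This is a hypothetical encoding (solid tiles bright/near, empty space dark/far); the actual scheme used on dimensionhopper.com may differ.

```python
import numpy as np
from PIL import Image

def level_to_depth(tiles, tile_px=8):
    """Render a tile grid (1 = solid platform, 0 = empty) into a grayscale
    depth image: solid tiles read as near (bright), empty space as far (dark).
    The 255/40 values are made-up knobs for illustration."""
    depth = np.where(tiles == 1, 255, 40).astype(np.uint8)
    # Scale each tile up to a tile_px x tile_px block of pixels.
    depth = np.kron(depth, np.ones((tile_px, tile_px), dtype=np.uint8))
    return Image.fromarray(depth, mode="L")

tiles = np.zeros((8, 16), dtype=np.uint8)
tiles[6, :] = 1        # ground
tiles[3, 5:9] = 1      # floating platform
img = level_to_depth(tiles)
img.save("level_depth.png")  # feed this as the ControlNet conditioning image
```

An image like this would then go to a depth-conditioned ControlNet pipeline (e.g. Diffusers' `StableDiffusionControlNetPipeline` with a depth ControlNet) alongside a theme prompt.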
I wonder how far off we are from something like the battle school game from the book Ender's Game. That is, an immersive video game that uses player actions, choices, exploration, etc. to generate not only new content, but entirely new game rules on the fly. It feels like we're getting closer and closer to Ender's holographic terminal with VR interfaces + AI content.
While I have folks' attention, I want to try training a model to generate monster/creature walk animations. Does anyone know of a dataset of walk-cycle sprite sheets that I could massage and label to see if I can make that work?
I feel like Jump 'n Bump probably has a special place in the hearts of people who had access to the internet at a particular time. The internet was available, but online multiplayer gaming was still out of reach for many, and in that gap there was an amazing niche of fairly polished indie local-multiplayer games. Imagine being told while playing it back then what would be possible a few decades later.<p>(see also <a href="https://github.com/midzer/jumpnbump">https://github.com/midzer/jumpnbump</a>)<p>Now someone needs to do this for other games of that time, place, and genre: Tremor 3, C-Dogs, etc.
Amazing advances this year. Remember the guy who created the 2D platformer that's based on time, what was it called again? He spent around $100k+ just for the art, which I'm pretty sure was a huge expenditure for him; with this software he could have done it virtually for free, without much artistic talent at all.
I was going to comment that the contrast between the beautiful illustrations and the red blood that violently explodes when you kill an opponent is pretty funny. But then I looked up the original Jump 'n Bump and it's just as gory, if not more! Good ol' 90s games.
Would love to see a write-up on your Hugging Face Diffusers experience: setting that up, what your dev cycle & stack look like, and whether you're hosting that server on a GPU cloud instance or something else. Those kinds of details are very interesting.
Does the 2D data like platforms and hitboxes still match the input entered by the human? If yes, I wouldn't say this is using AI for level editing; it's using AI for level artwork generation. Impressive nonetheless, just different.<p>HN's submission title ("Show HN: Stable Diffusion powered level editor for a 2D game") made me think of the former. The article title ("2D Platformer using Stable Diffusion for live level art creation") was more accurate to me.
I've recently tried using InvokeAI to apply a specific style as a texture mod for the original Max Payne, along with RTX Remix. Instead of making the textures "modern", I was attempting to mix them into a noir rendition, similar to Sin City but less cartoonish. Unfortunately, it was really hard to get InvokeAI to stay within the UV boundaries; data was always leaking across them, and the result didn't look good when rendered.
To get even less "structured" backgrounds, you could try replacing the support backgrounds and the far backgrounds with a light bit of depth noise.
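Something like this, maybe: a small sketch that perturbs only the "far" pixels of a grayscale depth map before conditioning, so the background signal is less rigid. The threshold and sigma values are made-up knobs, not anything from the post.

```python
import numpy as np

def add_background_depth_noise(depth, far_threshold=60, sigma=12, seed=0):
    """Add light Gaussian noise to the 'far' (dark) regions of a uint8
    depth map, leaving foreground platforms untouched."""
    rng = np.random.default_rng(seed)
    out = depth.astype(np.int16)            # widen to avoid uint8 wraparound
    far = out <= far_threshold              # mask of background pixels
    noise = rng.normal(0.0, sigma, size=out.shape)
    out[far] += noise[far].astype(np.int16)
    return np.clip(out, 0, 255).astype(np.uint8)

depth = np.full((64, 64), 40, dtype=np.uint8)   # a flat far background
noisy = add_background_depth_noise(depth)
```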
I'm also curious if anyone has made a level that worked particularly well/poorly or has a great custom theme (that maybe I should add to the dropdown) :)
This is good for procedurally generated 2D worlds. Think Hollow Knight, but expansive across infinite environments. Just randomly generate the control image and have the LLM generate the theme. Combine that with LLM-generated lore and the possibilities are unlimited.<p>We have the technology to do this right now.
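A toy sketch of the "randomly generate the control image" half: a 1D random walk over terrain height produces a tile grid that could be rendered to a depth image and paired with an LLM-supplied theme string. Everything here (the generator, the prompt wording) is illustrative, not from the post.

```python
import numpy as np

def random_level_control(width=32, height=16, seed=42):
    """Generate a random platformer tile grid (1 = solid, 0 = empty) via a
    random walk over ground height; a stand-in for any procedural generator."""
    rng = np.random.default_rng(seed)
    tiles = np.zeros((height, width), dtype=np.uint8)
    ground = height - 3
    for x in range(width):
        # Step the terrain up/down by at most one tile, keeping it in bounds.
        ground = int(np.clip(ground + rng.integers(-1, 2), 2, height - 2))
        tiles[ground:, x] = 1               # solid from ground level down
    return tiles

theme = "bioluminescent cavern"             # imagine this coming from an LLM
prompt = f"{theme}, hand-painted 2D platformer environment"
tiles = random_level_control()
```

Render `tiles` to a depth image, feed it plus `prompt` to the diffusion pipeline, and each run gives you a new area in a new style.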