I did the same thing, but with gradient descent. You can create a soft version of the Game of Life that is differentiable. Here is my messy Colab notebook: [1]

[1] https://colab.research.google.com/drive/12CO3Y0JgCd3DVnQeNSB3J8WaZGAVo0Ln?usp=sharing

Edit: only 1 step though, not 4 as in the OP. I couldn't get my differentiable version to converge more than 1 step into the past.
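For reference, here's roughly what a "soft" Life step can look like (a sketch, not the notebook's exact code; the relaxation used there may differ):

    import torch
    import torch.nn.functional as F

    # 3x3 neighbor-count kernel: all ones, zero in the center.
    KERNEL = torch.tensor([[1., 1., 1.],
                           [1., 0., 1.],
                           [1., 1., 1.]]).view(1, 1, 3, 3)

    def soft_life_step(grid, sharpness=10.0):
        # grid: (1, 1, H, W) tensor with values in [0, 1].
        # With x = neighbors + 0.5 * state, the hard rule says "alive next"
        # exactly when x is in {2.5, 3, 3.5}, and "dead" gives x <= 2 or
        # x >= 4, so a smooth indicator of the interval (2.25, 3.75)
        # relaxes the update while keeping it differentiable.
        n = F.conv2d(grid, KERNEL, padding=1)
        x = n + 0.5 * grid
        return torch.sigmoid(sharpness * (x - 2.25)) * torch.sigmoid(sharpness * (3.75 - x))

With a step like this you can optimise a relaxed input grid directly, but each extra unrolled step compounds the relaxation error, which is consistent with convergence getting hard beyond one step back.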
> Running ~1000 iterations for a 483px wide Mona Lisa on the google colab GPU runtime only takes around 40 seconds! Compared to the CPU version which takes several hours to do the same for a smaller image

Isn't this excruciatingly slow? I remember my old 486 running Life full-screen at full resolution in realtime (probably 25fps at 800x600?). How has it come to this after 20 years? How can a 20-year-old CPU outperform a modern GPU by an order of magnitude? Is it all the fault of lots of useless intermediary language layers?
As a next step: how simple can you make the initial state? Can you have something that occupies much less space, but then grows into something that resembles the target image?
This comment may be too meta but this post just made me appreciate Hacker News so much! Classic creative hacking guided by nothing but pure curiosity. Love it!
A similar post can be found here (2020, implemented with backsearch):

https://news.ycombinator.com/item?id=22552006

https://kevingal.com/blog/mona-lisa-gol.html
Gotta be used for some capture-the-flag or the like, where after running it a clue to the next step is revealed.

I also wonder how effective this would be as a compression algorithm (just for the fun of it). Are the black-and-white spots too noisily distributed to make them compressible, or do they compress better than the resulting image after running GoL?

Also interesting how little data my brain needs to understand it's a picture of the David statue.
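A quick way to poke at the compressibility question: pack each 0/1 grid into bits and compare zlib-compressed sizes. A sketch with random stand-ins (in practice you'd use the found ancestor state and the dithered target it evolves into):

    import zlib
    import numpy as np

    def packed_size(grid):
        # zlib-compressed size of a 0/1 grid, packed 8 cells per byte.
        return len(zlib.compress(np.packbits(grid.astype(np.uint8)).tobytes(), 9))

    # Stand-ins only: replace with the actual ancestor and evolved grids.
    rng = np.random.default_rng(0)
    start = rng.integers(0, 2, (483, 483), dtype=np.uint8)
    evolved = rng.integers(0, 2, (483, 483), dtype=np.uint8)

    print("start:  ", packed_size(start), "bytes")
    print("evolved:", packed_size(evolved), "bytes")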
Kaggle recently hosted a very relevant competition, "Conway's Reverse Game of Life" [0]. A write-up of the 1st place might be an interesting read [1].

[0] https://www.kaggle.com/c/conways-reverse-game-of-life-2020/overview

[1] https://www.kaggle.com/c/conways-reverse-game-of-life-2020/discussion/200980
Similar post from mid-2020, using PyTorch instead of JAX, and using "continuous" (gradient-based) hill-climbing: http://hardmath123.github.io/conways-gradient.html

And the HN discussion from the time: https://news.ycombinator.com/item?id=23095190
So my understanding is that the Game of Life is undecidable in general (for questions about unbounded evolution), so you can't just write down a solvable system of equations that tells you which initial state will produce a Mona Lisa. On a finite grid, finding a one-step predecessor is at least decidable, but it's still a hard combinatorial search.
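For a tiny board you can even brute-force it, which at least shows one-step inversion is a finite search (illustrative only; it's exponential in the number of cells):

    import numpy as np
    from itertools import product

    def life_step(grid):
        # One hard Game of Life step, treating everything outside as dead.
        p = np.pad(grid, 1)
        n = sum(p[1 + i:p.shape[0] - 1 + i, 1 + j:p.shape[1] - 1 + j]
                for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
        return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

    def predecessors(target):
        # Enumerate all 2^(H*W) candidate grids: only feasible for tiny
        # boards, but it demonstrates the finite-grid case is decidable.
        h, w = target.shape
        for bits in product((0, 1), repeat=h * w):
            cand = np.array(bits, dtype=np.uint8).reshape(h, w)
            if np.array_equal(life_step(cand), target):
                yield cand

    # A vertical blinker has predecessors (e.g. the horizontal blinker);
    # a Garden of Eden pattern would yield none.
    target = np.array([[0, 1, 0],
                       [0, 1, 0],
                       [0, 1, 0]], dtype=np.uint8)
    print(sum(1 for _ in predecessors(target)), "predecessors found")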
Even after reading the article I don't really understand what he's doing to make this happen, especially after stating that most states are Garden of Eden states!
Can anyone ELI5?
My understanding is that Conway's Game of Life can be computed by convolving a 3x3 kernel of all ones (except a central 0) over every pixel, then looking at the result of that convolution to decide whether each pixel is dead or alive, e.g. https://stackoverflow.com/questions/46196346/why-does-my-game-of-life-simulation-slow-down-to-a-crawl-within-seconds-matplot

If that's the case, would a nice solution be to model it as a recurrent neural network, e.g. in PyTorch or JAX, with a single convolutional layer with a single fixed kernel that you don't backprop to, and then try to minimise the loss with respect to the target image by backpropagating into the input image?

It then becomes highly analogous to Google's DeepDream, but instead of optimising the image for an output class, you optimise it to match an output image.
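Concretely, the structure I have in mind would be something like this (a sketch: the hard alive/dead threshold has to be relaxed, e.g. with sigmoids, or gradients vanish almost everywhere; all shapes and hyperparameters are illustrative):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Frozen single-kernel convolution: the Life neighbor count.
    conv = nn.Conv2d(1, 1, 3, padding=1, bias=False)
    conv.weight.data = torch.tensor([[1., 1., 1.],
                                     [1., 0., 1.],
                                     [1., 1., 1.]]).view(1, 1, 3, 3)
    conv.weight.requires_grad_(False)  # never backprop into the rule

    def soft_step(grid, k=10.0):
        # Smooth relaxation of "alive iff (alive and 2-3 neighbors)
        # or (dead and exactly 3 neighbors)".
        x = conv(grid) + 0.5 * grid
        return torch.sigmoid(k * (x - 2.25)) * torch.sigmoid(k * (3.75 - x))

    target = (torch.rand(1, 1, 64, 64) > 0.5).float()  # stand-in target
    logits = torch.zeros_like(target, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=0.1)

    for _ in range(1000):
        grid = torch.sigmoid(logits)     # relaxed input image in [0, 1]
        for _ in range(4):               # unroll the "RNN" for 4 Life steps
            grid = soft_step(grid)
        loss = F.mse_loss(grid, target)  # optimise the input, DeepDream-style
        opt.zero_grad()
        loss.backward()
        opt.step()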
For a similar challenge: https://en.wikipedia.org/wiki/Proof_game
Did you consider something like the following to speed up the calculation? This method can go forward _really_ fast. Not sure if it can also go backward...

https://pzemtsov.github.io/2015/04/24/game-of-life-hash-tables-and-hash-codes.html
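The core idea there is to store only the live cells in a hash table instead of a dense grid; the article itself works through optimized Java, but roughly in Python it looks like this:

    from collections import Counter

    def life_step(live):
        # `live` is a set of (x, y) coordinates of live cells.
        # Count, via a hash table, how many live neighbors each cell has.
        counts = Counter((x + i, y + j)
                         for (x, y) in live
                         for i in (-1, 0, 1) for j in (-1, 0, 1)
                         if (i, j) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}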
Some years ago I did a similar thing using a genetic algorithm. I was researching its use in generative art.

Here's a video of it in action: https://youtu.be/xgAigVfpIYc
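A bare-bones sketch of that kind of GA search, not the original code (fitness = fraction of cells matching the target after one step forward; all parameters illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def life_step(grid):
        # One hard Game of Life step with a dead border.
        p = np.pad(grid, 1)
        n = sum(p[1 + i:p.shape[0] - 1 + i, 1 + j:p.shape[1] - 1 + j]
                for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
        return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

    def fitness(cand, target):
        return (life_step(cand) == target).mean()

    def ga_search(target, pop_size=200, generations=500, mut_rate=0.01):
        h, w = target.shape
        pop = rng.integers(0, 2, (pop_size, h, w), dtype=np.uint8)
        for _ in range(generations):
            scores = np.array([fitness(c, target) for c in pop])
            elite = pop[np.argsort(scores)[-pop_size // 4:]]  # keep top 25%
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = elite[rng.integers(len(elite), size=2)]
                cut = rng.integers(1, h)                # row-wise crossover
                children.append(np.vstack([a[:cut], b[cut:]]))
            pop = np.concatenate([elite, np.array(children)])
            pop ^= (rng.random(pop.shape) < mut_rate)   # mutation: bit flips
        return pop[np.argmax([fitness(c, target) for c in pop])]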
I suspect slightly better (or faster) results would be achieved if, instead of comparing against a specific dithering pattern nominated by the author, they compared downscaled candidate patterns against the greyscale target image.
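Something like this for the loss (a sketch, assuming block-mean downscaling by a factor k):

    import numpy as np

    def downscale_loss(candidate, target_grey, k=4):
        # candidate:   (H, W) array of 0/1 cells, H and W divisible by k.
        # target_grey: (H // k, W // k) array of intensities in [0, 1].
        # MSE between block-averaged candidate and the greyscale target.
        h, w = candidate.shape
        blocks = candidate.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
        return float(((blocks - target_grey) ** 2).mean())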
This could be useful with least-significant-bit steganography: embed the start state in an image, with maybe a QR code as the end state, or something else able to withstand the noisy output.
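The LSB part is cheap, e.g. (a sketch, assuming an 8-bit greyscale cover image and a 0/1 grid of the same shape):

    import numpy as np

    def embed_lsb(cover, grid):
        # Hide a 0/1 Life grid in the least significant bits of the image.
        return (cover & 0xFE) | grid.astype(cover.dtype)

    def extract_lsb(stego):
        # Recover the hidden grid; run Game of Life on it to reveal the message.
        return (stego & 1).astype(np.uint8)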