My understanding is that a Game of Life step can be computed by convolving the grid with a 3x3 kernel of all ones (with a 0 at the centre) to count each cell's live neighbours, and then applying the birth/survival rules to the result of that convolution, e.g. <a href="https://stackoverflow.com/questions/46196346/why-does-my-game-of-life-simulation-slow-down-to-a-crawl-within-seconds-matplot" rel="nofollow">https://stackoverflow.com/questions/46196346/why-does-my-gam...</a><p>If that's the case, would a nice solution be to model it as a recurrent neural network, e.g. in pytorch or jax, with a single convolutional layer and a single fixed kernel that you don't backprop to, and then minimise a loss against the target image by backpropagating to the input image? The rules themselves are hard thresholds, so they'd need a smooth relaxation for any gradient to flow back through them.<p>It then becomes highly analogous to Google's deepdream, but instead of optimising the image for an output class, you optimise it towards an output image.
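The neighbour-count-then-rules step can be sketched in a few lines of numpy; `life_step` is a made-up name, and summing shifted copies of the grid is equivalent to the 3x3 convolution described above (with `np.roll` giving periodic, wrap-around boundaries):

```python
import numpy as np

def life_step(grid):
    """One Game of Life step on a 0/1 numpy array.

    Summing the 8 shifted copies of the grid is equivalent to
    convolving with a 3x3 kernel of ones whose centre is 0.
    """
    nbrs = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(np.uint8)
```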
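For the backprop-to-the-input idea, the hard rules need a differentiable surrogate. A minimal PyTorch sketch of one way to do that, where `soft_life_step`, the `sharpness` temperature, and the sigmoid-window relaxation are all my own assumptions, not anything from the linked thread:

```python
import torch
import torch.nn.functional as F

# 3x3 neighbour-count kernel: all ones except the centre.
KERNEL = torch.ones(1, 1, 3, 3)
KERNEL[0, 0, 1, 1] = 0.0

def soft_life_step(grid, sharpness=10.0):
    """One differentiable ('soft') Game of Life step.

    `grid` is an (N, 1, H, W) tensor with values in [0, 1]; hard 0/1
    states approximately recover the usual rules.  The alive-next
    condition (3 neighbours, or 2 neighbours and currently alive) is
    relaxed with sigmoid windows so gradients can flow to the input.
    `sharpness` is a made-up temperature controlling the thresholds.
    """
    n = F.conv2d(grid, KERNEL, padding=1)   # neighbour counts (zero-padded edges)
    near3 = torch.sigmoid(sharpness * (n - 2.5)) * torch.sigmoid(sharpness * (3.5 - n))
    near2 = torch.sigmoid(sharpness * (n - 1.5)) * torch.sigmoid(sharpness * (2.5 - n))
    # Birth term + survival term; can slightly exceed 1, clamp if needed.
    return near3 + grid * near2

# Optimise an input state so that one step lands on a target pattern.
target = torch.zeros(1, 1, 8, 8)
target[0, 0, 3, 2:5] = 1.0                  # horizontal blinker as the target

x = torch.rand(1, 1, 8, 8, requires_grad=True)   # the state we search for
opt = torch.optim.Adam([x], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    state = torch.sigmoid(x)                # keep the candidate state in (0, 1)
    loss = F.mse_loss(soft_life_step(state), target)
    loss.backward()                         # gradient w.r.t. the INPUT, not weights
    opt.step()
```

One caveat: after optimising you'd want to binarise the result and check it really does step to the target, since some patterns (Gardens of Eden) have no predecessor at all, and the relaxed loss can't tell you that.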