I was playing with a web simulator called Evolution, where you draw an imaginary creature (bones, joints, muscles) and then set it the task of learning something like walking, using a combination of a neural network and genetic fitness selection.<p>One of the parameters you can tune is the length of the simulation for each generation before the individuals of the population are scored for fitness and culled as appropriate.<p>One particularly successful variant learned to (sort of) walk, and then, right before the end of the simulation timer, it would fall forward. Because the simulation measured fitness as distance from the starting point, that final lunge meant it was always judged to have traveled the farthest and therefore always survived to the next generation, even though it wasn't the best walker. A toy sketch of the scoring quirk is below.
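<p>To make that concrete, here's a minimal toy sketch of the scoring issue (the creature "physics", step counts, and gait numbers are all invented for illustration, not taken from the actual simulator): if the fitness function only looks at the final distance when the timer runs out, a "fall forward at the buzzer" strategy can outscore a genuinely better walker, while a less gameable objective like median progress ranks them the other way.

    SIM_STEPS = 100  # length of one generation's simulation (the tunable parameter)

    def simulate(strategy):
        """Return the trajectory of x-positions for a toy 'creature'."""
        x = 0.0
        positions = []
        for step in range(SIM_STEPS):
            if strategy == "steady_walker":
                x += 0.100                   # consistent forward progress
            elif strategy == "faller":
                x += 0.095                   # slightly worse gait...
                if step == SIM_STEPS - 1:
                    x += 1.0                 # ...but it falls forward at the very end
            positions.append(x)
        return positions

    def fitness_final_distance(positions):
        """The proxy objective: distance from the start when time runs out."""
        return positions[-1]

    def fitness_sustained(positions):
        """A less gameable objective: median position over the whole run."""
        ordered = sorted(positions)
        return ordered[len(ordered) // 2]

    for name in ("steady_walker", "faller"):
        traj = simulate(name)
        print(f"{name:14s} "
              f"final-distance fitness={fitness_final_distance(traj):.2f}  "
              f"sustained fitness={fitness_sustained(traj):.2f}")

Running this, the faller wins on final distance (10.50 vs 10.00) but loses on sustained progress (4.85 vs 5.10), which is exactly the kind of objective mismatch that lets the "fall at the end" variant keep surviving selection.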
This kinda reminds me of my favorite sci-fi book I read in 2018: Permutation City by Greg Egan. The book explores simulated consciousness and the tensions that crop up when intelligences that think and feel and want to live are computed on shared computing resources, in competition with other intensive tasks with financial heft like climate disaster forecasting.<p>One plot the book explores involves some lower-class intelligences that work their way into the noise of the simulations of wealthier intelligences. Their experiences are computed simultaneously but imperceptibly to the hosts, appearing as nothing more than noise in the simulated floors, walls, fountains, clouds and other environmental props that the hosts observe and interact with. They live lives as meaningful and filled with human emotion as the others, but they're computed in the space between.<p>I know it's a bit of a leap to go from an AI learning to slip satellite imagery into the noise of street maps to simulated consciousness freeloading in the noise of another, but I think the themes are consistent. One man's trash is another man's treasure. The book really opened my mind to the thought that there could be so much more thriving in the things we dismiss as noise in our fleshy bodies, and I think that's thrilling. Imagine what it would take to look right past the street map produced by that AI and see only a detailed satellite image, with no other interpretation even occurring to you. It would all feel entirely normal, since you'd know no other perspective.
A Google spreadsheet collecting such examples was recently linked:<p><a href="https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml" rel="nofollow">https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOa...</a><p>HN discussion: <a href="https://news.ycombinator.com/item?id=18415031" rel="nofollow">https://news.ycombinator.com/item?id=18415031</a>
So basically this is AI abiding by Goodhart's law (<a href="https://en.wikipedia.org/wiki/Goodhart%27s_law" rel="nofollow">https://en.wikipedia.org/wiki/Goodhart%27s_law</a>). I wonder how much of this goes undetected in other applications because the objective was poorly defined.