Okay, TL;DR:<p>"Causal Entropic Forcing" is something like an AI's utility function, where the agent tries to maximize its future possibilities. Since that goal is nearly meaningless on its own (every reachable future is "possible"), what you actually want to maximize is how easy it is to <i>get</i> to those futures - their entropic adjacency, hence the name: causal entropic forcing.<p>However, CEF requires that the agent can actually <i>predict</i> possible future states of the system, which raises some serious issues. The original paper sidesteps this by giving the agent access to a perfect simulator, but perfect simulators aren't available in real-world situations.<p>This post discusses how to (possibly) use recurrent neural networks to make such predictions, how to do so efficiently, and how to account for the network's confidence in its predictions.<p>It's pretty cool!
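<p>To make that concrete, here's a toy sketch (mine, not from the paper or the post - all names are made up): approximate the causal path entropy of each candidate action by sampling Monte Carlo rollouts from a simulator and measuring how diverse the resulting future states are, then pick the action with the most diverse futures.

```python
import math
import random
from collections import Counter

def path_entropy(samples):
    """Shannon entropy (nats) of sampled terminal states - a crude
    stand-in for the causal path entropy in the CEF paper."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def cef_action(state, actions, simulate, horizon=5, rollouts=200):
    """Pick the action whose rollouts reach the most diverse set of
    future states - i.e. the one that keeps the most options open."""
    best, best_h = None, -1.0
    for a in actions:
        samples = []
        for _ in range(rollouts):
            s = simulate(state, a)
            for _ in range(horizon - 1):
                s = simulate(s, random.choice(actions))
            samples.append(s)
        h = path_entropy(samples)
        if h > best_h:
            best, best_h = a, h
    return best

# Toy environment: a walk on 0..10 whose walls clip movement.
def step(s, a):
    return max(0, min(10, s + a))

random.seed(0)
# Near the left wall, moving right keeps more futures reachable,
# so CEF should favor +1 here.
print(cef_action(1, [-1, +1], step))
```

Note the weak point the post is about: `simulate` here is a perfect model handed to the agent. Replacing it with a learned predictor (an RNN) is exactly where the prediction-quality and confidence questions come in.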