Do you want paperclips? Because this is how you get paperclips!<p>Eliminate all agents, all sources of change, all complexity - anything that could introduce unpredictability, and it suddenly becomes far easier to predict the future, no?
So instead of next-token prediction it's next-event prediction. At some point this just loops back around and we're back to teaching models to predict the next token in the sequence.
From the abstract<p>> A simple trading rule turns this calibration edge into $127 of hypothetical profit versus $92 for o1 (p = 0.037).<p>I'm lazy: is this hypothetical profit just shooting fish in a barrel, or is it a real edge?
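For anyone wondering what a "simple trading rule" on a calibration edge even looks like: here's a minimal sketch of the usual shape, buy the side of a binary contract whenever the model's probability beats the market price by some margin. Everything here (function names, the `edge` threshold, the toy data) is my own illustrative assumption, not the paper's actual rule or data.

```python
# Hedged sketch of a threshold trading rule on binary contracts.
# All names and numbers are illustrative assumptions, not the paper's method.

def trade_pnl(events, stake=1.0, edge=0.05):
    """events: list of (model_prob, market_price, outcome), outcome in {0, 1}.
    Buy YES at market_price when model_prob exceeds it by `edge`;
    buy NO at (1 - market_price) when model_prob falls below it by `edge`."""
    pnl = 0.0
    for p, q, y in events:
        if p > q + edge:    # model thinks YES is underpriced
            pnl += stake * (y - q)              # pay q, receive 1 if event occurs
        elif p < q - edge:  # model thinks NO is underpriced
            pnl += stake * ((1 - y) - (1 - q))  # pay 1-q, receive 1 if it doesn't
    return pnl

# Toy example: model is right twice, sits out once.
sample = [(0.8, 0.6, 1), (0.2, 0.4, 0), (0.5, 0.5, 1)]
print(trade_pnl(sample))  # → 0.8
```

The "fish in a barrel" question then boils down to: does the backtest account for transaction fees, bid-ask spread, and the fact that thinly traded markets move when you trade them? A rule like this can look profitable on paper and evaporate in practice.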