I was playing around with having this model plot orbital trajectories, and it was seriously impressive. Other top-tier models struggled to get a functional simulation working; Gemini 2.5 got there in three or four turns in Cursor. It does feel like a meaningful step up in mathematical reasoning and math-dense coding. (A minimal sketch of the kind of task I mean is at the end of this comment.)

On the other hand, if you try to play chess with any of these reasoning models (including Gemini 2.5), it basically doesn't work at all. They keep forgetting where the pieces are. Even with the reasoning level and sequential thinking maxed out, they consistently make impossible moves and silently mutate the board position.

In a recent test, Gemini 2.5 spent something like 1,700 thinking tokens concluding it was in checkmate... but it wasn't. It's going to be very hard to trust these models to do new science, or to operate in domains humans can't verify, while this kind of behavior continues.
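For reference, the flavor of task I'm describing is something like a basic two-body propagator. This is just a minimal sketch I'm writing from memory, not the actual code from my session; the LEO starting state and step count are arbitrary:

    # Minimal two-body orbit propagator (RK4). Units: km, s.
    import numpy as np
    import matplotlib.pyplot as plt

    MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

    def accel(r):
        # Point-mass gravity: a = -mu * r / |r|^3
        return -MU * r / np.linalg.norm(r) ** 3

    def rk4_step(r, v, dt):
        # Classic RK4 on the coupled state (position, velocity).
        k1r, k1v = v, accel(r)
        k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
        k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
        k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
        r_new = r + dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
        v_new = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        return r_new, v_new

    # ~500 km circular orbit as the starting state.
    r = np.array([6878.0, 0.0, 0.0])
    v = np.array([0.0, np.sqrt(MU / 6878.0), 0.0])

    xs, ys = [], []
    dt = 10.0
    for _ in range(600):  # ~100 minutes, roughly one orbit
        r, v = rk4_step(r, v, dt)
        xs.append(r[0])
        ys.append(r[1])

    plt.plot(xs, ys)
    plt.axis("equal")
    plt.show()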
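And on the chess side: one easy way to catch the illegal moves is to keep the ground-truth board outside the model and validate every move it emits. A minimal sketch, assuming the python-chess library; the hardcoded Scholar's-mate line is just a placeholder for moves a model would produce:

    import chess

    def validate_model_move(board: chess.Board, san: str) -> bool:
        """Push the move and return True only if it's legal here."""
        try:
            move = board.parse_san(san)  # raises on illegal/ambiguous SAN
        except ValueError:
            return False
        board.push(move)
        return True

    board = chess.Board()
    for san in ["e4", "e5", "Qh5", "Nc6", "Bc4", "Nf6", "Qxf7#"]:
        assert validate_model_move(board, san), f"illegal move: {san}"

    # Same idea for "I'm in checkmate" claims like the one above:
    print(board.is_checkmate())  # True (Scholar's mate)

The point is that the library, not the model, is the source of truth: any move string that doesn't parse against the current position gets rejected outright, which is exactly what these models keep tripping over.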