This article is fantastic. It's a really great discussion of the recent trend away from "train larger models" toward "inference scaling", where models like o1 get better results by churning for longer and burning more tokens on a problem.

Lots of other insightful commentary in there as well. Best thing I've read about AI in ages.