It's "just" a minor version update, but in my own testing it seems <i>much</i> stronger than the original V3: basically on par with R1 on the usual tricks I throw at LLMs, without needing <think> tokens.<p>I'm sure they're RL-training an R1-[minor bump] on top of this model, or perhaps even an R2; it'll be extremely strong when it comes out. For now I've switched most of my usage over to this new V3, since it matches R1 for my use cases without the wait for thinking tokens.