I had a blast writing this blog post and learned a tremendous amount. I suspect we will soon see a blizzard of papers where the researcher/author isn't just "assisted" by really smart AI models, but where the human author becomes more of a research assistant/facilitator. That is, the model itself dictates the core direction the research should take, with feedback and input from the human researcher to keep things on track and focused.

The human becomes more of a "token dispenser" who also facilitates cooperation between AI models from different labs (in this case, Claude 3.5 Sonnet and O1-Pro, which I had working together by the end).

If anyone reading this is an expert, I'd love to hear your take on whether these ideas have real merit. I suspect they do, since O1-Pro certainly thought so, and I'd guess it would be skeptical of ideas it knew were generated by its arch-rival, Claude!