"AI" didn't "crack" anything here. An LLM generated text notionally similar to a hypothesis this researcher was interested in but hadn't published. You can read Dr. Penades in his own words on BioRxiv, and if you're interested in reading the prompt or the output generated by co-scientist, both are included in the results and SI: <a href="https://www.biorxiv.org/content/10.1101/2025.02.19.639094v1" rel="nofollow">https://www.biorxiv.org/content/10.1101/2025.02.19.639094v1</a><p>What actually happened here looks more like rubber-ducking. If you look at the prompt (Supplementary Information 1), the authors provided the LLM with a carefully posed question and all the context it needed to connect the dots and generate the hypothesis. The output (Supplementary Information 2) even states outright which information in the prompt led it to the conclusion:<p>"Many of the hypotheses you listed in your prompt point precisely to this direction. These include, but are not limited to, the adaptable tail-docking hypothesis, proximal tail recognition, universal docking, modular tail adaptation, tail-tunneling complex, promiscuous tail hypothesis, and many more. They collectively underscore the importance of investigating capsid-tail interactions and provide a variety of testable predictions. In addition, our own preliminary data indicate that cf-PICI capsids can indeed interact with tails from multiple phage types, providing further impetus for this research direction."