I've found that where LLMs can be useful in this context is free association. Because they don't really "know" about things, they regularly grasp at straws or misconstrue intended meaning. This, along with the sheer volume of language (let's not call it knowledge) they've absorbed, means the LLMs occasionally bring in a new element that can be useful.
A group of PhD students at Stanford recently wanted to take AI/ML research ideas generated by LLMs like this and have teams of engineers execute on them at a hackathon. We were getting things prepared at AGI House SF to host the hackathon with them when we learned that the study *did not pass ethical review*.

I think automating science is an important research direction nonetheless.
This strikes me as similar to Cargo Cult Science.

https://calteches.library.caltech.edu/51/2/CargoCult.htm

https://metarationality.com/upgrade-your-cargo-cult
In some fields of research, the amount of literature out there is stupendous, with little hope of a human reading, much less understanding, the whole literature.
It's becoming a major problem in some fields, and I think approaches that can combine knowledge algorithmically are needed, perhaps LLMs.
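To make that concrete, here's a minimal sketch of one such approach: embed paper abstracts and cluster them to surface connections across a literature too large for one person to read. The library choices (sentence-transformers, scikit-learn), the model name, and the toy abstracts are all illustrative assumptions, not a prescription.

    # Sketch: embed abstracts and cluster them to group related work.
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    # Toy corpus; in practice this would be thousands of abstracts.
    abstracts = [
        "A transformer-based model for protein structure prediction.",
        "Graph neural networks improve molecular property prediction.",
        "A survey of reinforcement learning for robotic manipulation.",
        "Attention mechanisms applied to de novo drug design.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
    embeddings = model.encode(abstracts)

    # Group related abstracts; cluster summaries could then be drafted
    # by an LLM and checked by a human.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
    for label, abstract in zip(kmeans.labels_, abstracts):
        print(label, abstract)

The clustering step is the boring part; the interesting question is whether an LLM summarizing each cluster adds real synthesis or just plausible-sounding glue.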
Cool idea. Never gonna work. LLMs are still generative models that spit out training data, incapable of highly abstract creative tasks like research.

I still remember all the GPT-2 based startup idea generators that spat out pseudo-feasible startups.