There seems to be a forest/trees thing going on where people assume that search will use LLMs to extract data in answer to user queries. It’s not clear to me that LLMs are better at this job than Google’s very finely tuned algorithms.

What LLMs do excel at is giving feedback to iteratively improve queries.

Ask an LLM “how do I start a thread in Python” and you might get a good Python 2 answer, a good Python 3 answer, a good answer for a utility module, or something totally made up.

Ask an LLM “what information should I provide to get the best answer for how to start a thread in Python” and you’ll get advice to include the Python version number, the desired threading module, whether you need to set thread priority, etc.

IMO there’s a natural place for LLMs as complementary to traditional search algorithms that is not getting enough attention.
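
For what it’s worth, here’s a minimal sketch of the kind of Python 3 answer that first question is hoping for, using the standard-library threading module (which, notably, has no thread-priority support, one of the details the second prompt would surface):

    # Python 3, standard-library threading module
    import threading

    def worker(name):
        print(f"hello from {name}")

    # Create and start the thread, then wait for it to finish.
    t = threading.Thread(target=worker, args=("worker-1",))
    t.start()
    t.join()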