I’m a scientist, and I use LLMs fairly frequently. The first and primary use is coding. Copilot is quite helpful and automates a bunch of the tedious steps in data wrangling and complex plot generation.<p>I’ve also been using perplexity.ai lately. I’ve been branching out into a new area of biology that I’m not terribly familiar with, and it’s been quite helpful in reviewing the literature. I can ask it questions like “what’s the role of gene X in process Y?” Perplexity cites its sources when providing answers, and that’s a huge benefit over vanilla ChatGPT. I essentially never use ChatGPT for factual queries like that, because I don’t trust it and I don’t have an easy way to check its answers. Perplexity feels much more like reading Wikipedia: I don’t entirely trust the base text, but I can always go directly to the source.<p>That said, Perplexity’s answers lack the coherence and specificity you get from a good human-authored review of a subject. It tends to be too broad in its answers, and will often string together multiple sources that are only tangentially related. Like many LLMs, it’s also quite overconfident, and doesn’t like to say “no” or “I don’t know” as an answer.