I am utterly repelled by LLMs. I don’t know why otherwise thoughtful people use them, and this piece doesn’t explain the attraction either, except that what strikes me as creepy and pointless apparently doesn’t strike everyone that way.

I notice little evidence that he tests the information he gets from Claude. From my own testing, which I repeat every so often, I find I cannot rely on anything I get from LLMs. Not anything. Have you tried AI summaries of documents or meetings you know well? Were you happy with the results? Personally, I have yet to see a summary that was good enough.

Also, a lot of the example use cases he offers sound like someone who is not very confident in his own thinking, yet strangely super-confident in whatever an LLM says (a $2000/hr consultant? Really?).

Claude cannot perform an inquiry. No LLM can. These tools have neither inquiring minds nor learning minds. He says hallucinations have been reduced, but how can he know that unless he cross-checks everything he doesn’t already know?

I find LLMs exhausting and intellectually infantilizing. From this piece I cannot rule out that there is something very nice about Claude. But I also can’t rule out that there is a certain kind of addictive or co-dependent personality that falls for LLMs primarily for unhealthy reasons.