A lot of what these takes miss is that the unreliability of these models is not a hurdle for a tremendous number of applications. If there is a human in the loop who can now write a couple of words in a fuzzy human language, then check, select, or edit the resulting response, and thus do their job twice as fast, or god forbid 10x as fast, that is absolutely a game changer.

My main criticisms of LLMs are:

- The way they were rolled out was counterproductive. Unleashing a chatbot that pretends it knows everything, without any background or guardrails, is directly responsible for the untethered hype that dominates mainstream discourse. In the literature and among practitioners, everybody is well aware that these things don't "think".

- For the first time, it feels like a significant part of my value as a programmer will be fully owned by a corporation and trickled back to me for $99.95 a month. It's already the case with Copilot. I can't imagine going back to a world where I work without GPT-3 and Copilot, which gives me no choice but to fully embrace my corporate overlords. I fully feel what farmers feel wrt their tractors.

The best I can do for now is figure out what the real use cases are for me, how to leverage GPT-3, and start looking heavily into open models, so that I can help out with whatever Unix <> BSD situation we end up with.

None of this has anything to do with the end of human culture, education, discourse, or the end of quality in software. If software quality could be any lower, under capitalism, it would be. It's not as if I can't already get really shitty code that pretends to do something for $5/h on Upwork.