Or created anything of value (more value than they would have produced without using ChatGPT)? Or Stable Diffusion, etc.?

If so, what sorts of things?

We've seen videos claiming it will change things and make money, but haven't yet seen any concrete examples.

If it's mostly going to be used (directly) to create spammy content, will it end up as more of an interactive Wikipedia, an education tool rather than a production tool (beyond performing routine tasks in domains we know well enough to edit its work)?
At the bare minimum, and that framing plays down the magic, LLMs like GPT-3 have possibly solved the human-machine interface problem in practice. (The 'possibly' hedge is there to acknowledge the fog of hype and anecdotal perception.)

Once you have a tool that effectively permits machine classification of human language (text -> [ LLM ] -> (generated) text), you have a powerful component in a pipeline alongside components that map voice to text, (generated) text to command-and-control, etc. (see the sketch at the end of this comment).

Two unknowns are present.

One, it is not clear whether fine-tuning will hit a bottleneck to widespread usage (whether due to technical limits or the expertise required), so the possibility of a Cambrian explosion of LLM-supported apps, tools, gadgets, etc. is an open question. A related and significant matter is the ability to actually diagnose and debug these systems once they are up and running. We all (kinda) know how to deal with burning servers, but what about an LLM that has gone nuts and is misbehaving?

Two, it is not clear whether the extant infrastructure and resources of top-level service providers (such as OpenAI/Microsoft, Google, etc.) can be offered on a "we'll take your precious data and you can use our expensive systems" basis, à la surveillance capitalism. What will it cost to train? What will it cost to triage a faulty system? What does a 'system reboot' look like with 'trained but corrupted' components? Unknowns, but the high-probability answer is 'expensive'.
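Here's a minimal Python sketch of the pipeline shape I mean. To be clear, speech_to_text() and llm_complete() are hypothetical stand-ins for whatever speech and LLM APIs you actually wire in, not real library calls; the point is only the shape of the plumbing:

    import json

    def speech_to_text(audio: bytes) -> str:
        # Hypothetical stand-in for any speech recognition service.
        raise NotImplementedError

    def llm_complete(prompt: str) -> str:
        # Hypothetical stand-in for a call to GPT-3 or a similar LLM.
        raise NotImplementedError

    def text_to_command(utterance: str) -> dict:
        # Ask the LLM to map free-form language onto a small, fixed
        # command schema, then parse its reply into a structured dict.
        prompt = (
            "Map the user's request to one of: lights_on, lights_off, set_temp.\n"
            'Reply only with JSON like {"command": ..., "args": {...}}.\n'
            "Request: " + utterance
        )
        return json.loads(llm_complete(prompt))

    # Usage: command = text_to_command(speech_to_text(mic_audio))

The LLM does the one messy part, classifying arbitrary human language into a fixed vocabulary; everything around it is ordinary plumbing, which is why it slots so naturally into these pipelines.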