ML was born in two master branches: one is image manipulation (with video manipulation following), the other is textual search and generation, toward the Holy Grail of semantic search.<p>The first started with simple non-ML image manipulation and video analysis (like spotting baggage left unmoved for a certain amount of time in a hall, trespassing alerts for gates, and so on) and reached the level of live video analysis for autonomous driving. The second dates back a very long time, maybe to Conrad Gessner's Bibliotheca Universalis (~1545), with a simple consideration: a book is good for developing and sharing a specific topic, a newspaper for knowing "at a glance" the most relevant facts of yesterday, and so on, but we still need something to elicit specific bits of information out of "the library" without a human having to read everything manually. Search engines do work, but they have limits. LLMs are the failed promise of being able to juice information (into a model) and then extract it, well distilled, on a user's prompt. That's the promise; the reality is that pattern matching/prediction can't do much more, for the same reason we have problems with images: there is no intelligence.<p>For an LLM, if a known scientist (as per tags in some part of the model's ingested information) says (joking in a forum) that eating a small rock a day is good for your health, the LLM will suggest that practice, simply because it has no concept of a joke. Similarly, having no knowledge of humans, a hand with ten fingers is perfectly sound.<p>That's the essential bubble: PR people and people without knowledge have seen Stable Diffusion producing an astronaut riding a horse, have asked ChatGPT some questions, and have said "WOW! OK, not perfect, but it's just a matter of time." And the answer is no, it will NOT be, at least with the current tech.
There are some uses, like automatic translation: imperfect, but good enough to be arranged so that 1 human translator can do the job of 10 before. Some low-importance ID checks could be done with electronic IDs + face recognition, so a single human guard can operate 10 gates alone in an airport, intervening only where face recognition fails. Essentially a FEW low-skill jobs might be automated; the rest is just classic automation, like banks closing offices simply because people use internet banking and pay by digital means, so there is almost no need to pick up and deposit cash anymore, no reason to go to the bank anymore. The potential so far can't grow much more, so the bubble bursts.<p>Meanwhile, big tech wants to keep the bubble up, because LLM training is not something a single human can do at home, unlike running a homeserver for email, a VoIP phone system, file sharing, ... Yes, it's doable in a community, like search with YaCy, maps with OpenStreetMap, etc., but the need for data and patient manual tagging is simply too cumbersome for a real community-born model to emerge that matches or surpasses one built by Big Tech. Since IT knowledge has very lately, and to a very limited extent, started to spread just enough to endanger the big tech model... they need something users can't do at home on a desktop. And that's one part of the fight.<p>Another is the push toward no-ownership for the 99%, the better to lock in/enslave. So far the cloud+mobile model has created lock-in, but users can still get their data out and host things themselves; if they no longer operate computers at all, just "smart devices", well, the option to download and self-host is next to none. Hence the push for autonomous taxis instead of personal cars, connected dishwashers that send 7+ GB/day home, and so on. This does not technically work, so despite the immense amounts of money and the struggles of the biggest players, people are starting to smell a rat and their mood drops.