Artificial Intelligence is the name of the research field that this stuff comes out of. It was coined in 1956: <a href="https://en.wikipedia.org/wiki/Dartmouth_workshop" rel="nofollow">https://en.wikipedia.org/wiki/Dartmouth_workshop</a><p>Researchers have been using "AI" as a term to describe their work for nearly 70 years at this point.<p>I don't see why we should throw away nearly seven decades of nomenclature just because "LLMs aren't actually intelligent".<p>It's a perfectly cromulent term.
Stop saying AI?<p>>But they are not intelligence and anyone who spends time with them quickly realizes they are just a very superpowered suggestion engine<p>How about: stop moving goalposts. These models are obviously capable of acts of intelligence. If you told someone 10 years ago about the things these models can do, they would tell you the model is intelligent. The model does well on human exams that we use to measure intelligence.<p>I get it, AI is a hype term, but to pretend that there is no intelligence is silly, and to pretend that you can redefine intelligence is hubris.
To me, this is like asking if a hotdog is technically a sandwich. Who cares? It's semantics. This has the same vibe as, "Well, technically, Linux is just a kernel, not an operating system." No one cares, nerds. There's no point in trying to argue about this. People have adopted AI into their vernacular. It's not going to change at this point.
Bikeshedding about terminology isn't worth anybody's time. You can be skeptical of technology without grandstanding about it for social clout.
I find it useful to keep in mind that LLMs are essentially "just" random generators based on the probability distribution of some language (or, more accurately, a sample of that distribution in the form of a finite amount of text: a dataset).<p>You don't expect a random generator to spit out facts; you expect random nonsense. But it can be very good at replicating the fitted probability distribution in its output, i.e. generating convincingly coherent language.
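The "fitted probability distribution" idea can be sketched with a toy word-bigram Markov chain — a vastly simpler model than a transformer, but the same sample-from-a-learned-distribution principle. (The tiny corpus and function names here are made up for illustration.)

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Count word -> next-word transitions in a corpus.

    This is the 'fitting' step: the counts are a crude estimate of
    the probability distribution over what word follows what.
    """
    words = text.split()
    model = defaultdict(lambda: defaultdict(int))
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def sample_next(model, word, rng):
    """Sample the next word in proportion to observed frequency."""
    successors = model[word]
    choices = list(successors)
    weights = list(successors.values())
    return rng.choices(choices, weights=weights, k=1)[0]

def generate(model, start, length, seed=0):
    """Generate text by repeatedly sampling from the fitted distribution."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        if out[-1] not in model:  # dead end: word never seen mid-corpus
            break
        out.append(sample_next(model, out[-1], rng))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = build_bigram_model(corpus)
print(generate(model, "the", 6))
```

Every word it emits follows a word it was "trained" on, so the output looks locally coherent, yet nothing in the machinery knows or checks whether any of it is true — which is the point of the "random generator, not fact generator" framing.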
> But they are not intelligence and [...]<p>Yes, and that is why the "artificial" qualifier is used. Artificial sugar is not sugar. Artificial flowers aren't flowers. That's the entire point of the term.<p>AI as a field deals with teaching computers to <i>approximate</i> human intelligence. The term has been used since the 1950s, and isn't going to change because random people are throwing online tantrums.
Artificial Intelligence has nothing to do with the capabilities, and everything to do with the approach. If I hand-crafted an algorithm that could reliably spot stop signs in pictures by applying a mountain of heuristics that I came up with, AI would not be an appropriate label. If I instead made a program that could detect stop signs after being fed thousands of pictures of stop signs, then I'm comfortable calling that machine learning, or AI, because the approach was based on modelling intelligence, i.e., the ability to acquire and apply knowledge and skills.
I somewhat agree here, but not entirely. I think that in technical conversations it's much better to say LLM, diffusion model, transformer, etc. But it's all Greek to the layman anyway. The use of AI as a marketing term is what I think is causing OP frustration: it's being marketed as a panacea when it's far from it, but when is it ever? Social media was marketed as a life-changing good for society too.<p>Any conversation about AGI does make me cringe. What an absolutely silly and moonshot thing to be having discussions about.
I fully agree and also try to correct myself and use terms like "LLM" and "generative image models" as I really do feel the term "AI" is misleading at best... That said, I can't help but feel the cat is firmly out of the bag. Of course people have been calling lots of things AI for a long time, so this is nothing particularly new. I think we're just going to have to live with it.