The real question, to my mind, is "is the current generation of AI yet another dead end?", because whether the tech can improve upon its flaws will determine whether it's worth a business investing in.<p>We've gone through AI winters before, where the actual techniques and hardware simply hit a terminal point of usefulness beyond which they were unlikely to grow.<p>If hallucinating bad information is to be regularly expected / intrinsic to the tech, then it's basically Clippy 2.0 and a dead end.<p>On the other hand, if we can expect lower power costs and higher trust in the output (i.e. needing less human intervention), then it makes sense to start finding places where it can fit into the business and grow over time.<p>I'm personally in the camp that it's a fun toy with limited applicability for most businesses, and unlikely to grow beyond that. I'd love to be proven wrong, though.
Our agency has been asked to explore AI projects for a half dozen clients over the past 18 months. None of them have actually rolled out to real users. We keep finding the same things: the “AI”-backed tool is worse than the people it’s supposed to replace, and too costly to implement and maintain at any real scale. Mix in concerns about PHI (we primarily work with healthcare-related businesses) and it all amounts to the same story: that’s cool, but…
You just have to be a bit more specific. I have had success with AI writing code, but I have to drive it. It’s a massive time saver, though.<p>I have also had success with classification tasks: look at this email, then look at this list of topics, and pick which topic the email relates to, or “other” if there’s no obvious choice.<p>But you can’t say “hey AI, do this person’s job for me”.
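Concretely, the classification pattern is just a constrained prompt plus a guard on the output. A minimal sketch, assuming the OpenAI Python SDK; the topic list, prompt wording, and model name are made up for illustration:<p><pre><code>from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOPICS = ["billing", "scheduling", "technical support", "sales inquiry"]

def classify_email(body: str) -> str:
    """Pick one topic from a fixed list, or 'other'."""
    prompt = (
        "Classify the email below into exactly one of these topics: "
        + ", ".join(TOPICS)
        + ". If none is an obvious fit, answer 'other'. "
        "Reply with the topic only.\n\nEmail:\n" + body
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the label output stable
    )
    answer = resp.choices[0].message.content.strip().lower()
    # Guard against off-list answers: anything unexpected becomes "other".
    return answer if answer in TOPICS else "other"
</code></pre>The guard at the end is the important part: constraining the model to a fixed label set (and defaulting to “other”) is what makes this reliable enough to use, unlike open-ended “do the job” prompts.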
We are just a month away from releasing an internal tool for our sales dept that combines an LLM and old-fashioned statistical models to generate draft sales speeches. (We sell to businesses in a specific field, and our sales speeches are usually highly data-driven and customised per client, placing emphasis on how much additional revenue the client would get out of the purchase according to our analysis.)<p>It seems to work pretty well for this scenario. Usage is company-internal (draft generation only) and the sales reps are augmented by the AI, not replaced by it.
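To make that division of labor concrete, a hypothetical sketch of the flow (the function names, data fields, prompt wording, and model are all invented; the real tool’s internals aren’t described here): the statistical model produces the client-specific revenue figure, and the LLM only turns that figure into prose.<p><pre><code>from openai import OpenAI

llm = OpenAI()

def estimate_revenue_uplift(client_data: dict) -> float:
    # Stand-in for the "old fashioned statistical model", e.g. a
    # fitted regression over historical client outcomes.
    return client_data["annual_revenue"] * client_data["fitted_uplift_rate"]

def draft_sales_speech(client_data: dict) -> str:
    uplift = estimate_revenue_uplift(client_data)
    prompt = (
        f"Draft a short B2B sales pitch for {client_data['name']}. "
        f"Our analysis projects roughly ${uplift:,.0f} in additional "
        "annual revenue from the purchase. Emphasize that figure and "
        "do not invent any other numbers."
    )
    resp = llm.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
</code></pre>The point of the split is that the numbers come from the trusted model, not the LLM, which is what keeps the drafts data-driven rather than hallucinated.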
AI doesn't suck any more than a socket wrench sucks in the hand of a bad mechanic. But ask a world-class mechanic if they'd ever give up their socket wrench.<p>Some businesses just aren't competent at taking advantage of new things, and competition is fierce. Just like the dot-com era, everyone wants a piece of this new pie, even less competent folks; most will fail, but many will get rich. Plenty of businesses are already making gobs of income with just some public model off civitai and a well-crafted prompt, wrapped in a slick subscription website.<p>Personally I find AI a godsend and have duct-taped dozens of HTTP POST API calls with huge custom prompts to anthropic/openai/groq/etc all over my Mac hotkeys and phone voice assistants (via Tasker tasks). Anything an LLM can do to make my life a little easier, I turn into an HTTP request and tie into some automation.
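The “duct tape” is tiny, for what it’s worth. A sketch of one such wrapper against Anthropic’s public Messages API (the model name, prompt, and function name are illustrative, not from any particular setup):<p><pre><code>import os
import requests

def ask_claude(prompt: str) -> str:
    resp = requests.post(
        "https://api.anthropic.com/v1/messages",
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json={
            "model": "claude-3-haiku-20240307",  # illustrative
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    # The Messages API returns a list of content blocks; take the text.
    return resp.json()["content"][0]["text"]
</code></pre>Bind something like that to a hotkey or a Tasker task and any prompt becomes a one-keystroke tool.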
Actual article: <a href="https://www.axios.com/2024/03/27/ai-chatbot-letdown-hype-reality" rel="nofollow">https://www.axios.com/2024/03/27/ai-chatbot-letdown-hype-rea...</a>
No, businesses are discovering that the uses it is being sold to them for are inappropriate.<p>It may not be factually accurate. It may say disturbing things. It is not a reliable company representative. This does not mean AI sucks; it means you are trying to use it for things it is not appropriate for.<p>On the other hand, it can be a massive time saver for staff who know what they're doing and can interpret its output. It's a tool that can boost output, not a replacement for people.
With LLMs at least, it feels like it's best to just treat "AI" as another type of user interface. If your business/idea can use an interface like this, cool, it might be a nicer way to interact. But it's not there yet for replacing most employees, and honestly feels like it's being used as a scapegoat for a crappy economy or monetary-policy-driven layoffs.
If you watch Sam Altman talking on the Lex Fridman podcast, he clearly thinks GPT-4 is kinda cool and that GPT-5 will be better. But he really dropped the hype and sounded more like this article.
A friend of mine’s mother came down with a rare form of dementia, sadly at a young age for that to happen. Talking with her was a strange experience: in most ways she was very lucid, but sometimes she would just veer off into stories that were obviously delusional. Conversing with the LLMs rather reminds me of that sometimes. But she was not able to work because of this disability.
So the headline is pretty clearly right: we were coming out of the circa-2016 deep learning bubble and expectations were being adjusted; gen AI came in and the hype roared back up; now we're peaking, the same problems from before are still there even if the demos are cooler, and expectations need to be tempered again.<p>But instead of that, the article starts with<p><pre><code> The tech's drawbacks are hard to overlook. Large language models like ChatGPT are prone to hallucinating and spreading misinformation. Both chatbots and AI image makers have been accused of plagiarizing writers and artists. And overall, the hardware that generative AI uses needs enormous amounts of energy, gutting the environment.
</code></pre>
None of those (hallucination, maybe) are relevant; if it's good at automating misinformation, surely it can do useful work as well. This is more just a list of random criticisms.<p>Then<p><pre><code> Perhaps most of all, according to Gary Marcus...
</code></pre>
No point in continuing to read.