> Unstructured document parsing

If I had to invest in any one area of LLM usage, it would be this. There is *so* much unstructured data in the world, and converting things like legal contracts or chatlogs into structured, queryable data is absurdly powerful. Nobody wants to talk about this usage for LLMs because they're too busy making TikToks about how GPT-4 actually has a soul or whatever, but this will be the lasting legacy of LLMs after the hype around generative AI dies out.

> A decent engineer will likely be able to write a slack-like application, definitely good enough to cancel the 500k/year contract, in a couple of months.

And this is why generative AI is massively overhyped: the people hyping it don't understand the true value of the products they allegedly replace. It's very similar to the crypto/blockchain hype, where people who understood nothing about banking or logistics insisted that blockchain would solve all the problems there. If you think a corp is paying Slack $500k/year because it's hard to write a piece of software that can send messages between people in an organization, you're completely off base. (IRC exists, can do this, and is free, by the way.)
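To make the "structured, queryable data" point concrete, here is a minimal sketch of LLM-based extraction, assuming the OpenAI Python SDK; the contract snippet, schema, and field names are made up for illustration and are not from the comment above.

```python
# Minimal sketch: pull structured fields out of an unstructured contract by
# asking an LLM to reply in JSON. Assumes the OpenAI Python SDK
# (`pip install openai`) and an OPENAI_API_KEY in the environment; the schema
# and example text are illustrative placeholders only.
import json
from openai import OpenAI

client = OpenAI()

CONTRACT_TEXT = """
This Services Agreement is entered into on January 5, 2024 between
Acme Corp ("Provider") and Globex LLC ("Customer"). The term is 24 months
and the total fee is $120,000, payable quarterly.
"""

SCHEMA_HINT = {
    "parties": ["string"],
    "effective_date": "YYYY-MM-DD",
    "term_months": "integer",
    "total_fee_usd": "number",
    "payment_schedule": "string",
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},  # request a JSON-only reply
    messages=[
        {
            "role": "system",
            "content": "Extract contract metadata. Reply with JSON matching "
                       f"this shape: {json.dumps(SCHEMA_HINT)}",
        },
        {"role": "user", "content": CONTRACT_TEXT},
    ],
)

record = json.loads(response.choices[0].message.content)
print(record)  # a structured row, ready to load into a database
```

Once rows like this land in a database, the contracts or chatlogs become queryable with ordinary SQL, which is the leverage the comment is describing.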
Some of these seem reasonable, but I disagree with this:

> A decent engineer will likely be able to write a slack-like application, definitely good enough to cancel the 500k/year contract, in a couple of months.

A decent engineer can already crank out a working Slack prototype within a couple of months, and there are mature Slack alternatives today. There's a reason companies are paying $500k/year, and I doubt it's the code: maybe it's the enterprise support, the external integrations, or even just the name recognition.

The point about companies getting leaner may be true (it seems like this has already been happening over the past couple of years regardless of AI, and companies used to be lean in the 2010s).
Has anyone had any success with code generation? I feel like ChatGPT usually completely fails to write even a small function correctly unless it's a very trivial or well-known problem. I usually have to go back and forth for a good long while explaining all the different bugs to it, and even then it often doesn't succeed (but often claims it's fixed the bugs). The types of things it gets wrong make it a bit hard to believe it could improve enough to really boost dev productivity this year.
> A decent engineer will likely be able to write a slack-like application, definitely good enough to cancel the 500k/year contract, in a couple of months.

People are rightfully calling out this bit. It still wouldn't make sense for a Slack customer to make their own version of Slack in-house, but it does lower the bar for a lot of Slack competitors to get to feature parity much faster.
> I predict non-smartphone AI devices will fail. The AI device of the future is likely an iPhone or android phone with a dedicated GPU chip for AI.

I go back and forth on this. While I see this being the case for data-collection wearables like Humane or Tab, it makes sense to have a personal AI computer like Bedrock [0], tinybox [1], or a Mac Studio for running background tasks on personal data. If you're running agents that do more than chat, you need something that can handle doing inference for extended periods of time without worrying about heat or battery life. You likely also want something capable of doing fine-tune-level training on your personal inputs. A lot of the more interesting use cases involve data you probably don't want to expose to a cloud provider (a rough sketch of what that kind of local inference looks like follows the links below). That said, Apple is probably going to crush it here as well eventually, but maybe there's room for a challenger to develop as this niche opens up.

[0]: https://www.bedrock.computer/gal

[1]: https://tinygrad.org
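As context for "inference on personal data without a cloud provider", here is a rough sketch assuming llama-cpp-python and a locally downloaded GGUF model; the model path, file name, and prompt are placeholders, not recommendations.

```python
# Rough sketch of local inference over personal data, assuming llama-cpp-python
# (`pip install llama-cpp-python`) and a GGUF model already on disk.
# The model path, notes file, and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/local-model.gguf",  # placeholder path to a local model
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to a local GPU if one is available
)

# Data that never leaves the machine (placeholder file).
personal_notes = open("notes/2024-01-journal.txt").read()

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Summarize the user's notes into action items."},
        {"role": "user", "content": personal_notes},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Running a loop like this for hours is exactly the sustained-inference workload the comment argues a phone's thermals and battery would struggle with.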
> I personally regularly use the “voice” version of chatGPT to brainstorm with it while I walk my dog. We sped past the Turing test so fast that no one even beat an eyelash about it

I don't think the fact that the author has pseudo-conversations with ChatGPT, using voice as the interface, means we've passed the Turing test.

They don't seem to be actively interrogating ChatGPT to determine whether it's a human or not, something I'd expect would still be quite easy to do. And, as I understand it, the Turing test could be administered over text anyway.