> For example, one actual major human tragedy caused by a generative AI model might suffice to push me over the edge.

The problem isn't that the first such tragedy might push you over the edge; it's that the second, third, etc. will arrive so fast you'll never see the end of that fall.

Also, there's no "belittling" in calling these things stochastic parrots. There's no obvious upper limit to the vocabulary of an artificial parrot; there's just an upper limit on its understanding of its own speech, much less on its understanding of the impact of its utterances on those it's (likely uncomprehendingly) chatting with. The problem isn't the parrot; the problem is humans who think the parrot isn't a parrot.