There's also a portion of "doomerism" which I think is thinly disguised bragging for status and investor money.<p>Ex: "The stuff I'm involved with is <i>soooo</i> powerful that I hope you remembered to invest with us as we navigate this delicate moment of imminent godhood, and you should definitely like-rate-subscribe to my prophetic warnings."
I can sorta see the point. AI probably won't kill me.<p>But it's more than likely going to either limit my earning capacity or directly put me out of a job.<p>I know the instant reply is "oh, but other jobs will pop up to replace yours". Yeah, they might, but it's unlikely that I'll be qualified to do them.<p>The historical cases of this kind of thing don't look too rosy. When a technology comes in to "revolutionise" an industry, lots of people get left behind. They are forgotten because they aren't rich enough to be written about. "They found other jobs"? No mate, they died in penury.<p>When was the last time you heard about the weavers' guild deposing a mayor they didn't like? Not after the powered loom, right? If you look at the massive churches in the UK (<a href="http://www.norfolkchurches.co.uk/worstead/worstead.htm" rel="nofollow">http://www.norfolkchurches.co.uk/worstead/worstead.htm</a>), the ones out in bumblefuck nowhere are there because of weaving (more or less). They don't get built in Georgian times, do they?<p>AI is going to displace at least 10-20% of white-collar work permanently. That sort of disruption tends to make shit hard to govern, or make dark satanic mills.
"even a robot, who is super-good at everything, would still benefit from trading with a bunch of super-dumb humans, who suck at everything."<p>Yeah, just like humans trade with monkeys, and even with ants.
That's quite an elaborate way to say "I thought a little bit about an awful lot of things".<p>TL;DR: Because ChatGPT is not intelligent enough to be a threat and we don't yet know exactly what intelligence is, AI will never become a threat.
He even mixes up data volume with intelligence at some point and lacks the imagination to see how software could kill humans.<p>My opinion:
Doomers think someone could eventually figure out how to build an AGI that could improve its own abilities, which would lead to a singularity, and that could lead to bad outcomes for humans.
Nobody with basic knowledge of AI thinks ChatGPT, even in version 7, will start killing people. But that's no proof that something more capable can't be built.
This is bullshit because the tech was already old news long before the layperson's attention and money were directed to it in recent years. It's still about as crappy as it was a decade ago, and the real untapped potential is the money it can make before the next AI winter.<p>Nobody is losing their job. The real concern should be that we're basically resorting to approximating proper computation because Moore's law doesn't go on forever, yet we depend so much on tech to grow the economy. This is a sign of things to come. AI is to technology what building new coal plants is to energy. This has happened before, and the fears aren't unfounded, just misplaced.
He’s constructing a straw man just to knock it down for entertainment. I’d rather have read a thoughtful analysis of AI and a genuine engagement with the counterarguments.