Nowhere do they define "AGI". I guarantee that is a big reason why the predictions have so much variance.

For many people, what GPT-4 does would have qualified as AGI -- right up until GPT-4 actually came out, at which point everyone seemed to decide that AGI meant ASI.

I am guessing that for many people answering this poll it means "a full emulation of a person". Or maybe it has to be "alive".

What irritates me so much is this lack of definition, or moving of goalposts, combined with the stupid assumption that AI can't be general purpose or useful unless it has all of those animal/human characteristics.

GPT-4 is very general. Make it, say, 10-20 times faster, open up the image modality to the public, and you will be able to do most human tasks with it. You don't need to invent a lot of other stuff to be general purpose.

You DO need to invent a lot of other stuff to become some kind of humanlike digital god. But that is not the least bit necessary to accomplish most human work output.
It's depressing to go over all this ultra-shallow chit-chat that has short-circuited any intelligent discussion about the role and trajectory of information technology (let alone any more serious problem or opportunity of the current times).

People talk about AI (and AGI) as if it's some xenomorph lurking somewhere in silicon, waiting for its inevitable escape from its human prison.

AI will not bootstrap itself into some emergent property just because somebody spends gazillions of dollars and watts to estimate petazillions of parameters.

Further progress is *not* going to come unless some very human brain and intelligence opens up completely new algorithmic vistas.

The future of AI is literally tied to the future development of *human* mental (mathematical) models of information, knowledge, and its digital representation.

If it is not intuitively obvious, the history of mathematical thought is crushing evidence that it follows its own dynamic, over timescales that span centuries.
I am of the opinion that this AI/ChatGPT, or at least its current form, is just another VC money-shuffling business with no long-term real-world consequences. Like VR and crypto, it is a great technical experiment, but those never reached any kind of sufficient adoption. The iPhone and Bitcoin came out at basically the same time; look at how many people have iPhones versus how many even know about Bitcoin. And yet crypto and VR have been VC darlings all these years.

Generally, I find all the use cases for this AI incredibly depressing; they don't give us any hope of making the world a better place. I can totally imagine this AI replacing some jobs, but who really benefits from that? I certainly do not, the government probably doesn't -- only the big corporations. We would have more unemployment and lower tax collections, and overall we would end up worse off than before. So why are we even investing in such a tech? Who would you even be selling stuff to if all of us are unemployed?
AGI by 2040, huh. I bet it won't happen by 2100, and I'm young enough that I can look back on this in 2040 and see how wrong/correct I was. Future me, don't forget!
How cool would it be if we could build stuff that directly builds the society we want? Call it utopia or whatever, who cares.

I wish there were other people in the game (none that I know personally, at least; lots just act like accountants lol) who want to use their money to build and enable that kind of world at large, like we are playing a city builder IRL. Making money is whatever, but making a world is just _chef's kiss_.

I want to use the technology available today in ways people don't even have to think about, but that lead to good outcomes.

AI seems to be one of those vectors, and I hope it works out. I'm personally all in, even with just LLM applications.
Where's the poll option for "it continues to be overhyped junk spewing misinformation, then arguing with the user (Bard) or acquiescing when pressed, regardless of the correctness of the rebuttal (ChatGPT)"?

You can surely get an indistinguishable imitation of human text by including things like Reddit comments in the LLM training data. Correctness is a hurdle I am not convinced will be surpassed.
The numbers there seem odd (or did before the page broke, possibly due to the HN 'hug of death').

A 90% chance of near-term "human level" AI, but only a 75% chance that someone tries to influence the US election with "AI-driven misinformation", which for most definitions of "AI" and "misinformation" is already happening?
"Forecaster" here means any rando who signs up to their service and answers a question.<p>Sometimes a large polling number does not equal a more accurate answer.
> Metaculus predicts a 75% likelihood of an attempt to influence the 2024 US presidential election with AI-driven misinformation.

Hasn't that already happened? Wasn't DeSantis caught running deepfaked voices in an ad? Or am I misremembering something?