I am quite puzzled by this. I am just a junior software engineer, so forgive me for not understanding the full scope of AI or software engineering as a whole. But lately, with all the hype about Q* and a few AI experts claiming that programmers will be out of a job soon, I find it remarkable how much variance there is in people's opinions on the capabilities of these models.

A lot of the models are open source now. I have watched software engineering YouTubers like ThePrimeagen argue that the current state of the models is not good enough (I have tried them myself and they are terrible).

On the other hand, there are people like Ilya or David Shapiro (on YouTube) who claim that AGI is a couple of months away and will automate most of software engineering.

This is not a thread on whether software engineering is going to be rendered obsolete. I just don't understand why experts cannot come to a concrete conclusion sooner, given the amount of money being pumped into this industry right now and the number of open-source models we have.
Also, is there a reason why people so strongly believe that an LLM would lead to AGI? The people building Magic.dev seem to believe they can build an AI coworker.

Are there any videos on why and how these researchers so strongly believe that AI systems will outperform humans and achieve AGI?

Building an LLM consists of defining its "architecture" (an enormous mathematical function that defines the model's shape) and then using a lot of trial and error to find the "parameters" (constants that we plug into the function, like 'm' and 'b' in y=mx+b) that are most likely to produce text resembling the training data. Copilot can already do this, and it's okay at best. The Magic.dev folks seem to be using your repo as a training set to make better predictions on the output based on the context window. So whether the systems are actually "improving" is kind of my question.
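To make the y=mx+b analogy concrete, here is a minimal, purely illustrative sketch (my own toy example, not anything from Copilot or Magic.dev) of what "finding parameters by trial and error" means: fit m and b to a handful of data points with gradient descent. LLM training is the same basic idea, just with billions of parameters and text instead of five numbers:

    # Toy illustration: fit the parameters m and b of y = m*x + b to data
    # by gradient descent -- the same basic idea, at a vastly smaller scale,
    # as fitting an LLM's billions of parameters to training text.
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]     # training data roughly following y = 2x + 1
    ys = [1.1, 2.9, 5.2, 7.0, 9.1]

    m, b = 0.0, 0.0                    # start with arbitrary parameter guesses
    learning_rate = 0.01

    for step in range(5000):
        # gradients of the mean squared error with respect to m and b
        grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        # nudge the parameters in the direction that reduces the error
        m -= learning_rate * grad_m
        b -= learning_rate * grad_b

    print(f"learned m={m:.2f}, b={b:.2f}")   # ends up near m=2, b=1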
AGI is already here; what you don't have enough of yet is ASI (Artificial Super Intelligence) and a confirmed self-conscious AI.

Almost every model is already something of an AGI engine; some can even be fine-tuned (by adjusting their weights) for tasks other than the ones they were originally trained on.

Many models right now could provide most of the requirements for full autonomy, not that it would be a good idea to give them autonomy. But if you feel like "hey, let's let this thing plan ahead, loop, plan again" (and you go and assemble these components ad hoc around the prompt UI), you'll have something more or less capable of, let's say, 40-60% of autonomous behavior.
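A minimal sketch of what that ad-hoc "plan, act, re-plan" assembly could look like (purely hypothetical: call_llm and run_tool below are placeholders for whatever model API and tools you wire up yourself, not any particular product's interface):

    # Minimal sketch of an ad-hoc "plan, act, re-plan" loop around an LLM.
    # call_llm() and run_tool() are hypothetical placeholders.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model API of choice")

    def run_tool(action: str) -> str:
        raise NotImplementedError("plug in code execution, search, etc.")

    def autonomous_loop(goal: str, max_steps: int = 10) -> list:
        history = []
        for _ in range(max_steps):
            # ask the model to plan the next step given the goal and results so far
            plan = call_llm(f"Goal: {goal}\nHistory: {history}\nWhat is the next step?")
            if "DONE" in plan:               # the model decides it has finished
                break
            result = run_tool(plan)          # act on the plan
            history.append((plan, result))   # feed the outcome back in and re-plan
        return history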
The statement "AGI in X years/months" has been around since I started programming 55 years ago. Though back then it was just "AI".<p>Search on "AI winter" to get a feel for how far back this overhype followed by disillusion cycle goes.
As you get older, you realize there are people of all ages who believe in all sorts of things. That flat-earthers do exist (even ones with PhDs). That there are people over 40 with no high school diploma. Hell, Bill Gates once said, "Two years from now, spam will be solved" *back in 2004*.

Everyone likes to share their opinion, but that doesn't mean they know jack about what they're talking about. Now can you see why people have such a wide range of opinions on AGI?
> people disagreeing about predictions of the future

> people not even agreeing on a definition of AGI

Yeah, it sure is baffling that there is no consensus yet.