It’s pretty bizarre how SV decision makers have shifted from talking about ML to LLMs to straight-up Artificial Intelligence.<p>I think one of the reasons is that LLMs are very good at the thing execs do every day for a living and, because of that, use to assess each other’s mental capacity: producing coherent-sounding speech (1).<p>Now that an LLM can do what any VP has dedicated their entire life to mastering, they assume the system will be able to handle any task they delegate, including programming, system design, project management, etc. Since the people doing those jobs are paid less than they are, surely the work must simply be easier.<p>By this intuition, LLMs have now become intelligent and capable of handling anything at all, and we simply must find a way to integrate them into all of our products. And so, they’re now AI.<p>(1) That speech doesn’t necessarily have to carry a lot of meaning; its main purpose is establishing the speaker’s competence.
It starts with the sentence<p><i>"You can see the future first in San Francisco"</i><p>and while the following paragraphs make a good, upbeat, positive point, reading it out of context I cannot help but think of homeless people in the streets and a pretty dark, dystopian future.<p>It's interesting how a place can be such radically different things at the same time.
The tone of Aschenbrenner's paper is odd. It's somewhere between his wet dreams and his nightmares. In fairness he seems to be aware of that, but it's like someone grinning reflexively while they tell you bad news.<p>He's convinced it's going to be terrible, it's going to be important, and he's going to be part of it.<p>One thing is for sure: in 5 years it will be much clearer.
Here is a summary of Aschenbrenner's paper:<p><a href="https://forum.effectivealtruism.org/posts/zmRTWsYZ4ifQKrX26/summary-of-situational-awareness-the-decade-ahead" rel="nofollow">https://forum.effectivealtruism.org/posts/zmRTWsYZ4ifQKrX26/...</a>
I love how the author of this article makes no attempt to explain, even briefly, why he thinks the author of the paper is naive, or even to describe what the paper is about and give an abstract.
I've rarely seen a blog post touch on so many subjects without actually saying anything. Is there an argument, point, story, poem, news item, analysis, or something else being made here?
""every time a person tries to artificially make something similar using AI, people feel some interest and decide to fund research into it, but often a so called AI winter comes — and there are two AI ‘winters already (1974–1980 and 1987–2000)""<p>In hindsight, the winters don't look as long as they seemed living through them.<p>IF we want to say the last winter ended in 2000, that was 24 years ago now.<p>It does seem like right now, if there isn't any breaking news for a couple weeks, people think a new Winter is starting.
My question after reading the paper is: if he believes with conviction everything he wrote, why did he recently become an investor instead of either working in policy or working on the tech itself?<p>If the tech is going to transform humanity into a new economic paradigm and give the country controlling it a decisive advantage, making it the sole superpower, being an investor is either futile or meaningless. Even doing nothing and just enjoying the ride seems more rational.
There's a Sabine Hossenfelder YouTube video just out that is more interesting than the linked article: "Is the Intelligence-Explosion Near? A Reality Check." <a href="https://youtu.be/xm1B3Y3ypoE" rel="nofollow">https://youtu.be/xm1B3Y3ypoE</a>
So if we have supposedly already passed the point where LLMs are at the level of high schoolers, how come almost no tasks or jobs that high schoolers are capable of doing have been replaced by LLMs?<p>Nor will more advanced work be replaced by 2027.
End of story.
There's an awful lot of complexity hidden behind the word 'unhobbling' here, for example on page 33. Not to say we can't overcome those challenges, but they're by no means tweaks at the margins.
We must thank the author for the content-free article, so each of us can freely ramble on tangents, as if we were looking at an abstract art piece.<p>My beef is with the Dwarkesh Patel podcast. While he has some very good interviews (Carl Shulman, or even Zuckerberg), he seems to have a lot of rambling conversations with very young employees of AI startups (OpenAI). I don't get value from those because they don't really say anything other than patting each other on the back about how awesome they are for having been hired by their companies. I think he should focus on people with actual contributions to the science who have meaningful things to say.
I love how he's got a 'smartest person' tick on his y-axis for an "AI researcher" [in the paper]. )) These people are narcissists of the first order. Can we graph that?