科技回声 (Tech Echo)

A tech news platform built with Next.js, offering global tech news and discussion.


Ask HN: AI is smarter than the average person but feels like nothing's changed?

5 points by geepytee, 7 months ago
Primarily asking because I'd like to hear different perspectives; perhaps I am missing something.

Arguably, since OpenAI released the o1 models, LLMs are now 'smarter' than the average human as measured by IQ (I'm going by this study [0], which puts o1 at an IQ of 120).

What I am trying to wrap my head around is why this has not changed our entire world much. Sure, if you live on Twitter, a lot of people made a big deal about it. But in my day-to-day, especially when offline, nothing seems to have changed. In fact, I don't think most people are even aware that a computer is now smarter and cheaper than they are, and that it's widely available via API.

Am I exaggerating things here? It almost feels like the world has not caught up to the latest technology. Does this happen with every new technology? Is this period basically a huge opportunity for early adopters? Perhaps we are missing ways to connect the o1 brain to the real world so it can have real-world applications?

For context, I am deep in LLM stuff daily as part of my work. I am keenly aware of the improvements that have been made in coding, for example; I just don't believe they are on the same magnitude as 'AI is now smarter than the average human'.

The other side of this argument is that the LLMs are not that good and only test high because the questions are part of the training data, and that they in fact cannot adapt and learn on the spot the way humans can (which I believe is the point of the ARC Prize [1]). Another counter-argument might be that it's just too early.

Would love to hear what you have to say. Tell me how I'm wrong, or tell me how you think AI has already materially changed our world in a big way.

[0] - https://trackingai.org/IQ

[1] - https://arcprize.org/

6 comments

mikequinlan, 7 months ago
1. IQ is a bad measurement of intelligence.

2. IQ is a quotient. What age did they say that o1 was?

3. Using a Mensa test (Mensa Norway, apparently) is a bad way to determine IQ.

https://test.mensa.no/home/test/en

"This online test gives an indication of general cognitive abilities, represented by an IQ score of between 85 and 145, where 100 is the population average. This test is not a substitute for professional intelligence tests, such as those administered by Mensa and licensed psychologists.

This test consists of 35 puzzles in the form of visual patterns that must be solved within a 25-minute time limit. Participation requires neither specialised knowledge nor mathematical skills. The puzzles, which get progressively difficult, are weighted equally, so you get a point for each correct answer. You do not get bonus points for finishing the test early, so try to manage your time optimally. Also, you are not penalised for answering incorrectly, so make a guess whenever you are unsure."
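For context on the "quotient" point: historically, IQ was literally a ratio of mental age to chronological age (modern tests have since replaced this with deviation scoring against a normed population, which is why the question of o1's "age" is rhetorical):

```latex
\mathrm{IQ} = \frac{\text{mental age}}{\text{chronological age}} \times 100
```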
rsynnott, 7 months ago
> Arguably, since OpenAI released the o1 models, LLMs are now 'smarter' than the average human when measured by IQ

Arguably, pigs can fly. It's not a _good_ argument, but it's an argument, I suppose.

No, of course LLMs aren't smarter than the average person, don't be silly.

(Twist: the OP was written by a self-aggrandising robot.)
JohnFen, 7 months ago
AI is clearly not smarter than the majority of people, let alone the average person.

The IQ-test result is, in my opinion, not significant for a couple of important reasons. First, it's an open question how much IQ correlates with intelligence (particularly since we still don't have a solid definition of what "intelligence" is); second, LLMs certainly ingested many IQ tests, along with the answers, into their training data.
ungreased0675, 7 months ago
Current models aren't accurate or reliable, and they break in unpredictable ways. It's therefore hard to rely on them for serious tasks.

I also wouldn't classify GPT-4 as intelligent, the same way I wouldn't classify Google Search as intelligent. It's a software tool.
fuzzfactor, 7 months ago
I would say there is some chance the world is not impressed because so many people thought the computers they already had were more advanced than this when they bought them :\

A lot of the time, they wouldn't have bought them otherwise!
zifpanachr23, 7 months ago
They aren't embodied; they don't experience the passage of time, the physical world, or emotion the way a human does.

I'm also just not sure how useful more powerful analytical tools will ultimately be. Most of the issues in the world are either physical issues involving resource constraints, which the ephemeral nature of AI is ill suited to handle, or social issues, where the lack of wisdom and authentic human experience also makes AI ill suited to the task.

The primary conceit of the current emphasis on digital technology as a means to solve our problems is the idea that if we were just smart enough, thought hard enough, or had an ultra-intelligent assistant, intelligence would improve things and make the world a better place. That may help up to a certain point, but it doesn't seem obvious to me that it will continue to have positive returns when taken to its logical extreme. There's also the issue of meaning, goals, and teleology, which I think complicates a lot of the stories told by the more philosophically minded proponents of AI. In other words, suppose AI provided us a means to realize some kind of post-scarcity society: what's the point? What next? We are all familiar with the Rat Park experiments. Even in the most fantastically successful version of the future, where AI massively surpasses the expectations of even the most dedicated and convinced proponents, that doesn't help us deal with the more fundamental issues. The industrial revolution probably already took us about as far down the path toward "post-scarcity" as human psychology will allow. And the internet probably pushed us over the edge, to the extent that it may engender more serious reactionary impulses, and diminishing returns in the economic sense, as it weakens us socially, emotionally, and spiritually.

It doesn't seem obvious to me that intelligence (in the IQ-test sense, or the LLM-regurgitating-decent-responses-to-a-multiple-choice-test sense) is necessarily always going to be socially, culturally, or evolutionarily adaptive. There is probably some point at which higher analytical intelligence begins to have diminishing returns, and some point after that where it even becomes maladaptive for most people in most contexts.

There's also the issue of trust. AI isn't trusted. The tech community broadly, and especially the "AI community" portion of it, is particularly not trusted. The tools aren't really that exciting to people outside the tech social sphere. And that isn't even to mention the potential negative effects on social trust between real people (did they really write that and mean it, or did an AI do it for them? Can I trust that these people on HackerNews, whom I've just spent 20 minutes writing a heartfelt take on a serious issue for, are even real and not AI bots?).

So besides all my high-minded speculation above about the nature and value of the specific kind of intelligence AI vendors claim their products possess, you've also just got a plain old-fashioned product problem. People aren't really all that excited about the product and don't see how it could realistically solve the issues they face in their day-to-day lives. That seems like a very reasonable take to me, and probably the most obvious reason why AI hasn't (and maybe won't) radically change the way we live our lives.

I'm mostly involved in systems engineering and infrastructure work, and consulting on those topics, and I'm kind of in the same position as the rest of the population outside of Big Silicon Valley Web Dev Tech land: I've yet to find a killer application for the technology. I'll subscribe for a month to Anthropic or OpenAI when there is a big new release to give them a shot, but I've yet to renew over the past year or so, because I just don't find them very useful for my work.

TL;DR: I think AI developers are way out over their skis, living in a bubble, and optimizing for the wrong things. I don't really believe in "AI doom", but I think a lot of the pro-safety people were really just trying to address this more boring and less dramatic social and product problem in the only way they know how (by developing strange new religious eschatology and other related hobbies of people who have been hanging out in these deeply antisocial groups for so long). I'm sympathetic to their efforts (when they are legitimate and not marketing). But I'm not sure the prognostications of doom are all that useful at this point; most people just get bad vibes from AI, don't like it, don't see the point, and don't really need much convincing in that respect. The products, and the marketing that surrounds them, are pretty on the nose about being anti-social and anti-human, and ordinary people have done a good job picking up on those vibes.