Ask HN: How do you define the singularity?

22 points by servytor about 3 years ago
I define it as when machines understand people.

32 comments

ifdefdebug about 3 years ago
For me: when General Purpose Intelligence (GI) systems can build better GI systems.

Note that it's not proven that this is even possible: the only GI systems we know about (organic brains) have so far failed to do so, despite trying hard.

And I don't believe those organic brains are anywhere near succeeding, because I seriously doubt that the current approach (programs running on transistors) is capable of producing such a thing.

PaulHoule about 3 years ago
I still think of that Vernor Vinge novel *Across Realtime* where people just disappear. Like a murder mystery, it has already happened at the beginning of the book, and people at various technology levels are one-way time traveling into the future trying to figure out what happened, but... they never do. It's a good read.

I also think of this differential equation:

    dx/dt = x*x

which is like the usual exponential growth equation except that x is squared, so the rate of growth accelerates as the quantity increases. Solutions look like

    x = 1/(t0 - t)

namely, it grows pretty slowly for a long time but then goes bam, all the way to infinity, at the specific time t0.

Anyhow, I don't believe in that God at the end of time that Aella and Teilhard de Chardin believe in, because I think intelligence itself is limited in the way Gödel, Turing and many others point to.

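(For reference, separating variables makes that finite-time blow-up explicit; t0 below is the blow-up time fixed by the initial condition x(0) = x0 > 0.)

```latex
\frac{dx}{dt} = x^2
\;\Longrightarrow\;
\int x^{-2}\,dx = \int dt
\;\Longrightarrow\;
-\frac{1}{x} = t - t_0
\;\Longrightarrow\;
x(t) = \frac{1}{t_0 - t},
\qquad t_0 = \frac{1}{x_0}.
```

So x(t) stays modest for most of the interval [0, t0) and then diverges as t approaches t0, unlike exponential growth, which is finite at every finite time.
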
tablespoon about 3 years ago
Honestly? I define it as a sci-fi fantasy concept that's not worth taking seriously.

rocqua about 3 years ago
I think it's when software gets 'better' at writing AI software than humans are, and when AI software is capable of writing AI software that is better than itself.

The point being that the quality of AI software has some sort of 'exponential' growth (maybe not numerically, but conceptually, due to compounding effects), and that during this growth it crosses the point where it is better at 'general intelligence' than humans.

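(A toy sketch of that compounding loop; every number here is made up purely for illustration, nothing is calibrated to anything real.)

```python
# Toy model of recursive self-improvement, a discrete analog of
# dx/dt = k*x^2: each generation's gain is proportional to the square
# of current capability, so progress compounds and eventually crosses
# any fixed baseline.
HUMAN_BASELINE = 1.0  # capability of human AI developers (arbitrary units)
k = 0.1               # improvement efficiency per generation (made up)

capability = 0.1      # starting AI capability, well below the baseline
for generation in range(1, 1000):
    capability += k * capability ** 2  # the AI improves the next AI
    if capability > HUMAN_BASELINE:
        print(f"crosses the human baseline at generation {generation}")
        break
```

This is the discrete cousin of the x² equation quoted above: almost nothing happens for most of the run, then the crossing and the blow-up arrive in quick succession.
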
marvin about 3 years ago
When generally intelligent machines have been improving the intelligence and capabilities of intelligent machines for a while, presumably better and faster than humans could.

I'm open to the idea that this might not make technology or society completely incomprehensible, or incomprehensible over a short period of time.

I'd expect the world to become increasingly surprising and confusing as we approach the time when machine intelligence is developed, but a successful, safe advent of machine intelligence might conserve enough of the fabric of society that humans can understand it. I'd expect dramatic and explosive developments to be more associated with borderline or unsafe (heaven forbid) AI that ends up doing its own thing.

This is still all in the realm of philosophy, so it would be very surprising if we were able to predict the details.

CiPHPerCoder about 3 years ago
I define it as religion for nerds who think they're too smart/clever for religion.

To expand on that, see other critical comments, e.g. https://news.ycombinator.com/item?id=30830690

eatbitseveryday about 3 years ago
Perhaps you should provide more context?

https://en.wikipedia.org/wiki/Singularity?wprov=sfti1

Singularity to me is within a black hole, an astrophysics concept.

cjg about 3 years ago
It's predicated on accelerating technological progress, which can perhaps be seen to be true by looking back at how quickly society has moved through different technological regimes. Once change becomes too fast, it's impossible to predict what is about to happen. It's called "the singularity" by loose analogy to black holes, where extreme curvature means that light cannot escape: with fast enough technological progress, the immediate future is highly uncertain.

The most commonly expected way this might happen is via AGI, perhaps with self-improving AI, where a sudden increase in AI abilities might drive extremely rapid technological change.

cfcosta about 3 years ago
Well, it's a spectrum to me. Glasses are already humans and machines merging, and I feel that people in the 1500s would think the singularity had already happened to us, given how smartphones are available everywhere.

If I had to choose an event, though, I think it would be the first time human intelligence is enhanced by AI in some way (imagine offloading numerical computations from your mind). When that happens, we will have lots of questions to answer, like: what happens when rich people are not only richer, but also fundamentally smarter and more efficient?

mrfusion about 3 years ago
What if the technological problems we haven't solved yet are exponentially hard, and even a rapidly self-improving AI would still barely make a dent in them?

Then boom, no singularity.

goodluckchuck about 3 years ago
If humanity in 4,000 BC had lost 1,000 years of technology, then things might very well be the same today.

If humanity in 1900 AD had lost 100 years of technology, it would have been very noticeable, but would not have threatened extinction.

If humanity in 2022 lost 50 years of technology, billions would die.

To me, the singularity is when humanity so relies on the present advance of technology that any stalling would collapse the entire species.

Snowpiercer (2013) involves this sort of thing.

carapace about 3 years ago
> I define it as when machines understand people.

I'd define it as when *people* understand people.

- - - -

Going back to the source, to von Neumann, "the singularity" was literally a mathematical singularity in the curves of accelerating technology.

Vinge thought about self-improving artificial intelligence:

> Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

> Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.

https://mindstalk.net/vinge/vinge-sing.html

He was a little precipitous in his estimates of the time frame, unless you count the GAIs we've already created. It seems clear to me that we *have already* built self-improving artificial entities, to wit: Google, Facebook, etc. are GAIs. I think they are cyborgs, with whole humans as neurons. I suspect they even have self-awareness "carried" on the substratum of their human components' self-awareness.

- - - -

But to me the real singularity isn't going to take place in cyberspace; the real singularity happens when people start learning how to operate their own minds more effectively. This is happening already too, but "the future is here, it's just not evenly distributed". Without going on and on about it, there is a global current of cybernetic psychology (happening largely outside the view of academic psychology) that is changing and transforming humanity, or, more precisely, allowing humanity to transform itself.

simne about 3 years ago
Very good question! Thank you for the opportunity to think deeply :)

As for a definition, I like the definition from science: the singularity will have happened when the group of people we call the world scientific community mostly agrees that it happened. That is, epochs are named only post factum, not before, not even while things happen.

But what exactly will cause the singularity is a totally different question, and by answering it I take on some probability of becoming a prophet.

So, there are a few theoretical things that could happen (ordered just as I remember them):

1. Humans create a general AI with an IQ slightly better than the median human's.

2. People invent computer parts with neuro-interfaces, or some other tech/bio/genetic improvements, usable as a commodity and approachable for a large part of humanity (at least 5%), which make those people's intellects much more powerful than current people's.

A variation of this is some form of biotech being, with human parts and machine parts, much more powerful than current people, with the ability to live in environments that are deadly for humans, like the vacuum of space.

3. People create computers powerful enough to support virtual reality indistinguishable from reality, and invent a method to transfer human natural intelligence to those computers, so people gain the possibility of living inside VR forever and of easily transferring their personality, for example, to another planet or even another galaxy.

4. Humanity creates some very powerful, cheap, and clean source of energy, so energy consumption per capita grows at least a few times, and we just continue our current slow progress toward a Type 1 space civilization: https://en.wikipedia.org/wiki/Kardashev_scale

There are lots of other opportunities for a singularity, but I think these are the most probable.

FrozenVoid about 3 years ago
Technological evolution today is very slow, with products coming after years of research, or months at best. The singularity is when this evolution accelerates to a "near-instant" level, i.e. ideas are actualized and developed to the state of the art instantly. That would be my definition: it doesn't require sci-fi AI overlords deciding what's optimal for you, just general technological capability that reduces development time to zero.

ArtWomb about 3 years ago
At some point, the material correlates we use for compute have to match what the human brain uses for consciousness. Probably those correlates are quantum mechanical in essence. And every quantum computing experiment gets us closer to that harmonious unity. But I don't believe it's a "given". We still use YBCO-type superconducting qubits and not lab-grown neural tissue itself ;)

Randolf_Scott about 3 years ago
Could be when humans understand God, but that will never happen on our own. That's why God sent His Son, so humans could understand everything they need to about their creator.

Same with machines: a human will eventually need to go into the machine world and let the machines destroy him, so they can analyze and take upon themselves the human coding; thus machines will be able to understand humans.

jasfi about 3 years ago
I'm working on an AGI, early stages though. Check out https://lxagi.com to sign up for the MVP.

An AGI isn't the singularity, though, unless it were fully developed and extremely well designed and implemented. It would need to be able to improve itself. That's the really far end of the spectrum.

saalweachter about 3 years ago
The day programmers aren't needed anymore.

karmakaze about 3 years ago
I define it as when machines start doing things and we don't know why. It may already have happened with ML models, unless we can get them to explain themselves so that we can understand.

Extrapolate this to self-driving, defense systems, recommendation systems, content moderation, etc. The basis is that we gave up executive control.

Flankk about 3 years ago
I'm bearish on AI. The human brain does far more than just learn things. AI still has no ability to reason or think creatively, so we're not even in the ballpark. The space has been worked on for 50 years; I'm not sure why people think the field will now start to progress rapidly.

jordanpg about 3 years ago
It's an inflection point.

When AGI blinks into existence, what happens after that moment might be so instantaneously transcendent that we don't even have the vocabulary to discuss it.

It's mostly useful for describing the time before AGI and the time after AGI.

What will actually occur is impossible to know.

pohl about 3 years ago
It's that thing where people who are otherwise mathematically literate somehow convince themselves that exponential changes adumbrate some future transcendence rather than, say, the breaking point of systems.

spansoa about 3 years ago
The singularity is when we've exhausted all the low-hanging fruit of technology and have started to work on the real hard problems of society. It's essentially our last invention.

jerf about 3 years ago
I still think the original definition is the most useful, and it also gives the greatest intuition as to *why* it is called the "singularity" and not some other name.

"It is a point where our old models must be discarded and a new reality rules, a point that will loom vaster and vaster over human affairs until the notion becomes a commonplace." - https://frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html

What I like about this definition is that its philosophical utility is precisely that it is a name for the place where our expectations about the future break down completely, and this is a useful concept to have a name for.

It is true that the most commonly explored particular manifestation of this is an AI technological breakthrough, but using the definition contextualizes that as just one possibility in a set of possibilities, including *some "Singularities" that have already happened*.

In particular, try to imagine explaining the modern world to a Sumerian peasant farmer in X,000 BC. Good luck with that. You probably don't understand the world they live in very well either, but they can't even conceptualize our world. There's *at least* one Singularity in human history between that farmer and us, and it's not hard to draw out at least a few more.

However, it can be challenging to point out "where" or "when" they occurred.

From this we can draw out a few more lessons. Singularities are relative to the observer. You generally can't "pass through" one, because as you approach it, your ability to prognosticate the future grows through it, and so, relative *to you*, in time, it recedes. Even if the hard AI takeoff is going to happen, we are even now learning more about what an AI world looks like and how it works, so even in that scenario, our ability to understand it improves as we approach it. Hard, species-wide singularities that give no warning and no opportunity to significantly adapt in advance have generally not happened yet, even though, like I said, between that Sumerian farmer and us there has to have been at least one.

Hard AI takeoff is one. But another one that is back in the news is nuclear war. That would also constitute a singularity in its own way; we could fall back on our understanding of previous civilizations at lower tech levels, but it's still very difficult to predict what would happen in the aftermath of such a thing. There are a few other possibilities. But what one might call a "hard" singularity, one that completely blindsides the entire species, hasn't generally happened, and there's still good reason to suspect they're not likely.

As this post demonstrates, I find this formulation far more interesting for thinking and saying things than the vague "future tech might be weird" ideas that tend to float about. You need the concept of anchoring the singularity to particular points of view to be able to say much about them, I think. Trying to treat it as an objective event is difficult, precisely because they recede from the observer.

This formulation also disposes of the "Rapture of the Nerds" scenario. There is no reason to believe that the singularity somehow must be accompanied by "transcending the physical world", no reason to believe that such an outcome would even be good for humans in specific or humans in general, and no reason that singularities must be limited to technological progress. That's just one particular part of the space (and IMHO a very small one, when it comes to hypothetically transcending the physical world) that happens to have some sci-fi stories written about it, but it is far from the most likely possibility.

Ftuuky about 3 years ago
When a single computing device has more processing power than all humankind put together.

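(Rough scale, for what it's worth: brain-compute estimates are highly uncertain, but around 10^16 operations per second per brain is a commonly cited ballpark, and there are roughly 8 billion of us.)

```latex
\underbrace{8 \times 10^{9}}_{\text{brains}}
\times
\underbrace{10^{16}\ \text{ops/s}}_{\text{per brain (rough estimate)}}
\approx 10^{26}\ \text{ops/s}
```

Against roughly 10^18 FLOPS for today's largest supercomputers, that puts this threshold, on these shaky numbers, about eight orders of magnitude away.
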
dgeiser13 about 3 years ago
Unlimited energy. Until then it's rearranging deck chairs on the Titanic.

yehosef about 3 years ago
I define it as when the simulation realizes it is a simulation.

Apreche about 3 years ago
The same way it's defined in "The Last Question".

SpodGaju about 3 years ago
The singularity is a human trying to reach the horizon.

rainworld about 3 years ago
The ultimate act of hubris.

krapp about 3 years ago
The rapture for nerds.

disambiguation about 3 years ago
When machines can beat us at all "games".