
Ask HN: Will We Get Close to a General AI in 2017?

31 points by Mister_Y, over 8 years ago

17 comments

Quarrelsome, over 8 years ago
No. Stop reading reddit.com/r/futurology or that _awful_ article by waitbutwhy. Sure, it's a possibility, but we're still taking baby steps and building tiny tools: pastiches of intelligence as opposed to genuine intelligence or consciousness.

People who ask questions such as this often don't consider that it remains eminently possible that AGI is an impossibility for us to build. Also remember that anything an AI can do in the future, a human plus an AI can probably do better. Right now, at least, they're just tools we use, and they will remain so for the foreseeable future.
simonh, over 8 years ago
We don't even have a general outline of a theoretical approach to designing a general-purpose intelligence, let alone implementing one. Until we do, any speculation about a time horizon for implementation is a pure guess. How are those guesses working out so far?

1960s: Herbert Simon predicts "Machines will be capable, within 20 years, of doing any work a man can do."

1993: Vernor Vinge predicts super-intelligent AIs "within 30 years".

2011: Ray Kurzweil predicts the singularity (enabled by super-intelligent AIs) will occur by 2045, 34 years after the prediction was made.

So the distance into the future before we achieve strong AI, and hence the singularity, is, according to its most optimistic proponents, receding by more than one year per year.

I am not in any way denying the achievability of strong AI. I do believe it will happen. I just don't think we currently have any idea how or when. If pushed, I'd say probably more than another 100 years from now, but I don't know how much more.
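As a quick sanity check on that "receding by more than one year per year" claim, the three predictions above can be compared directly (taking Simon's prediction as 1965, a common dating for it; the other dates are as given in the comment):

```python
# (year prediction was made, predicted arrival year of strong AI)
predictions = [
    (1965, 1965 + 20),  # Herbert Simon: "within 20 years"
    (1993, 1993 + 30),  # Vernor Vinge: "within 30 years"
    (2011, 2045),       # Ray Kurzweil: singularity by 2045
]

for (made0, target0), (made1, target1) in zip(predictions, predictions[1:]):
    # How fast did the predicted arrival year move per elapsed real year?
    rate = (target1 - target0) / (made1 - made0)
    print(f"{made0} -> {made1}: target moved {target1 - target0} years "
          f"over {made1 - made0} elapsed years (rate {rate:.2f}/yr)")
```

Between each pair of predictions the target year recedes faster than real time passes (roughly 1.36 and 1.22 years per year here), which is exactly the trend the comment describes.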
pps43, over 8 years ago
The key point is self-learning: the ability of an AI to build an AI that's better, if only a little.

This is different from, say, AlphaGo playing against itself to train its neural network. We want AI 1.0 to *write* AI 2.0, not just tweak some coefficients in 1.0.

At the moment, all automatically generated code is *less* complex than the source code of the code generator itself. There can be more of it in terms of lines of code, but it's usually pretty repetitive.
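A toy sketch of that last point (hypothetical code, not drawn from any real generator): the generator below emits more lines than it contains, yet every emitted function is a trivial variation of a single template, so the output never exceeds the generator's own complexity.

```python
def generate_accessors(fields):
    """Emit one getter function per field name."""
    lines = []
    for f in fields:
        lines.append(f"def get_{f}(obj):")
        lines.append(f"    return obj['{f}']")
        lines.append("")
    return "\n".join(lines)

generated = generate_accessors(["name", "age", "email", "city"])
print(generated)
# The output is longer than the generator, but each function is the same
# two-line template stamped out again with a different name.
```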
hacker_9, over 8 years ago
If you read the research, there is lots of incremental progress being made, mainly with pixels: classifying them into objects, matching object locations to text, attempting to predict future pixel values, etc. But this stuff is very 'surface level', not even close to the way our brains effortlessly interpret light: classify objects, detect depth, account for lighting, complete objects we can't see, invoke the feeling of the material we are looking at, invoke past memories, detect threats, and so on, every single millisecond.

This doesn't even begin to get into the core of AGI, which is the 'thinking' component. Given this amazing mass of data, how do we then make the machine work towards its goals? Is this just a neural network? Is it a billion neural networks? Too many variables to tell.

And even then, if every action it takes is a reaction to the environment, does it not have free will? Do we have free will? Is 'consciousness' somehow the key to free will?

But anyway, if you listen to Musk or Hawking, doomsday AI is just around the corner.
shpx, over 8 years ago
If anyone who thinks yes wants to bet $1000, I'll do 1:10 odds.

https://longbets.org
ragebol, over 8 years ago
No. This [0] is 4-5 years old, and I don't think much progress has been made in getting a computer to classify that image as 'funny' and explain why. And if/when it could, I doubt we'd call it intelligent. And this is just computer vision, to say nothing of other branches of AI.

[0] http://karpathy.github.io/2012/10/22/state-of-computer-vision/
edgarvm, over 8 years ago
Better automation != General AI
tener, over 8 years ago
Closer, but not close!
onion2k, over 8 years ago
No.
AnimalMuppet, over 8 years ago
My own personal pet theory (guaranteed right or your money back): we won't have AGI until we have something that can dream.

Will we get close in 2017? No. Not if my pet theory is right, and not if it's wrong.
spiderfarmer, over 8 years ago
Define "General AI". An AI that can decide by itself which model it should use to make sense of any given dataset?
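For what it's worth, a crude version of that definition can be sketched in a few lines: a routine that tries several candidate models on held-out data and picks whichever explains the dataset best. Everything here is illustrative (the function names and the two candidate models are invented for the example), not a claim about how a real system would do it.

```python
def fit_mean(xs, ys):
    """Candidate 1: a constant model that predicts the mean of ys."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Candidate 2: ordinary least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs) or 1.0
    a = num / den
    return lambda x: a * x + (my - a * mx)

def select_model(xs, ys):
    """Fit each candidate on 2/3 of the data, score on the held-out 1/3,
    and return whichever candidate has the lowest validation error."""
    train = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i % 3]
    val = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i % 3 == 0]
    tx, ty = zip(*train)
    best_name, best_err, best_model = None, float("inf"), None
    for name, fit in [("mean", fit_mean), ("linear", fit_linear)]:
        model = fit(tx, ty)
        err = sum((model(x) - y) ** 2 for x, y in val)
        if err < best_err:
            best_name, best_err, best_model = name, err, model
    return best_name, best_model

name, model = select_model(list(range(10)), [2 * x + 1 for x in range(10)])
print(name)  # a perfectly linear dataset should select the linear model
```

Of course, the hard part of the question is that a general intelligence would also have to invent the candidate models, not just rank a fixed menu of them.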
Buttons840, over 8 years ago
Will Google release an AI that can play StarCraft at the same level as humans in 2017?

General AI will have to wait until after that.
richardboegli, over 8 years ago
No
skilesare, over 8 years ago
Yes. Depends on what you mean by close though.
ArkyBeagle, over 8 years ago
No. That which can be done is no longer considered AI.
rl3, over 8 years ago
No, I don't think so. We'll inch closer, but I doubt we're anywhere near AGI on the path of software and algorithms running on traditional networked computing architectures.

That isn't to say the resources don't exist to create AGI. It's possible they were available a long time ago. If you were to ask some omnipotent future superintelligence for a way humans could have bootstrapped AGI in the year 2005 using the available technology of the day, it could probably come up with an answer. Maybe even further back than that, or maybe even present day wouldn't suffice—who knows.

Trying to emulate biological architectures on silicon can be grossly inefficient, and may actually be harder from a design perspective. It is the attempt to formalize and adapt something created by an optimization process that spanned millions of years, a process that had zero regard for how easy its creation would be to understand or otherwise reverse engineer.

At the same time, algorithms vastly more efficient than the human brain's remain a possibility. They need not include the large amounts of evolutionary baggage that humans carry.

Approaching AGI as a raw optimization problem may yield better results. However, not formally specifying or understanding the underlying mechanisms is a massive safety issue in the long run.

By the same token, ditching silicon entirely may be a vastly quicker path. Throwing ethics out the window and experimenting with large quantities of lab-grown neural tissue might be one way; creating a synthetic biological computing substrate, another. It's not hard to imagine something like copying human neural tissue's design, but using materials capable of latencies an order of magnitude lower, or significantly higher degrees of interconnectivity.

Looking at the problem strictly in terms of space, it's funny to think that we're unable to recreate the functionality of some tissue contained within less than one cubic foot—even though we have seemingly endless *acres* of computing power to do it with—and that's excluding the brains of the thousands of scientists and engineers working on AI. Even if you stacked up *just* the microprocessors in question, they would occupy a volume far, far greater than a single human brain—each containing billions of transistors, and each operating at latencies far lower than the brain's. Despite all this, the human brain requires far less energy.

The reason we don't have AGI yet is that it simply takes a lot of time and effort to invent, regardless of whether it's ultimately possible with today's technology. Of course, as other commenters have suggested, it may be unwise to rule out the possibility that the human brain has seemingly magical quantum properties that render its recreation (on silicon, at least) an impossibility.
jdimov11, over 8 years ago
The term AGI suffers from a greatly exacerbated version of the same problem the term AI suffers from. The problem, mind you, has NOTHING to do with science or technology; it is purely a naming problem.

The term "Artificial Intelligence" is a contradiction: intelligence can NOT be artificial. Intelligence is the ability of a being to get what it wants. It is always organic, as it originates in desire.

Just stop calling it "Artificial Intelligence" and enjoy the wonderful progress we are making towards getting our machines to help us achieve what we want.

(To be clear, I'm not saying stop calling it "artificial". I'm saying stop calling it "intelligence", because it is not, and never will be. Using the word "intelligence" in the context of machine automation sets entirely unreasonable expectations and inhibits progress.)