
The dark ages of AI: A panel discussion at AAAI-84 (1985)

57 points | by 1e | over 5 years ago

8 comments

Animats · over 5 years ago
"To sketch a worst case scenario, suppose that five years from now (from 1985) the strategic computing initiative collapses miserably as autonomous vehicles fail to roll. The fifth generation turns out not to go anywhere, and the Japanese government immediately gets out of computing. Every startup company fails. Texas Instruments and Schlumberger and all other companies lose interest."

All of which happened. That was the "AI Winter".

The "Fifth Generation" was an initiative by the Ministry of International Trade and Industry in Japan to develop a new generation of computers intended to run Prolog. Yes, really. [1]

The "Strategic Computing Initiative" was a DARPA-funded push on AI in the 1980s. DARPA pulled the plug in 1987. [2]

I got an MSCS from Stanford in 1985. Many of the AI faculty from that period were in deep denial about this. I could see that expert systems were way overrated. I'd done previous work with automatic theorem proving, and was painfully aware of how brittle inference systems are.

Each round of AI has been like that: a good idea, claims that strong AI is just around the corner, the good idea hits its limit, and the field gets stuck. I've seen four cycles of this in my lifetime.

At least this time around, machine learning has substantial commercial applications and generates more than enough revenue to fund itself. It's a broadly useful technology. Expert systems were a niche. There's enough money and enough hardware now that if someone has the next good idea, it will be implementable. But strong AI from improvements to machine learning? Probably not.

[1] https://en.wikipedia.org/wiki/Fifth_generation_computer

[2] https://en.wikipedia.org/wiki/Strategic_Computing_Initiative
ProfHewitt · over 5 years ago
Because of strategic challenges, Reusable Scalable Intelligent Systems will be developed by 2025 with the following characteristics:

• Interactively acquire information from video, Web pages, hologlasses (electronic glasses with holographic-like overlays), online databases, sensors, articles, human speech and gestures, etc.

• Real-time integration of massive, pervasively inconsistent information.

• Self-informative in the sense of knowing its own goals, plans, history, and the provenance of its information, and having relevant information about its own strengths and weaknesses.

• Close human interaction using hologlasses for secure mobile communication.

• No closed-form algorithmic solution is possible to implement the above capabilities.

• Reusable, so that advances in one area can readily be used elsewhere without having to start over from scratch.

• Scalable in all important dimensions, meaning that there are no hard barriers to continual improvement in the above areas, i.e., system performance continually and significantly improves.

A large project (analogous to Manhattan and Apollo) is required to meet strategic challenges for Intelligent Systems and S5G (Secure 5G).

See the following for an outline of the technology involved:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3428114
sysbin · over 5 years ago
I find it odd that people get most excited when thoughts of AI are aimed toward education in the classroom, as the research hinted. I've always thought the most exciting thing about AI would be building robots that can cover the work needed, so that all human beings get to focus on what's meaningful to them. The interest hinted at in the article makes me question where people's priorities are when it comes to the next generation getting to live life.
yters · over 5 years ago
What if the human mind is not computable? Why does no one test this hypothesis, instead of throwing billions of dollars and our brightest minds at an unsubstantiated one? Why are we so unscientific about testing assumptions when it comes to AI? It is not difficult; I've thought of tests myself. But the closest I've seen in the academic literature is Penrose's microtubules and silly hypercomputation. Nothing with empirical tests. I blame materialistic bias, since if materialism is true then the human mind must be a computation. But materialism does not need to be true in order to have empirical tests of whether the mind is computable.
gautamcgoel · over 5 years ago
One of the panelists (B. Chandrasekaran at Ohio State University) was my dad's PhD advisor (my dad is Ashok Goel at Georgia Tech). Pretty cool to come across his name on HN!
netwanderer3 · over 5 years ago
AI is going to be huge, no doubt. But in my opinion, there will likely be some costly mistakes before humans can reap its full benefits. We have seen a lot of AI development, yet in reality it hasn't brought us as many meaningful changes as we had expected; in general, our daily lives remain pretty much the same as before. Our civilization has never experienced significant AI impacts at a large scale, so mistakes may be hard to avoid, and they will serve as lessons for later generations not to repeat the same errors.

I have noticed that human emotions and intelligence seem to be at odds with each other; sometimes they are even a trade-off, where the increase of one leads to the decrease of the other. If we look around, humans today have the most advanced technologies in history, but are our lives really better than those of people in the past? Materially, certainly yes, because goods are produced directly by those technologies; but mentally and emotionally, they are arguably worse.

AI and technology keep getting better every day, yet humans have to work more, with longer hours and higher stress. We all thought the machines were supposed to help us, but it's actually the other way around: we work tirelessly, day and night, to make those machines better and more advanced, while in return our lives have not seen many meaningful improvements, and have arguably gotten worse in some areas. Individually, our personal abilities have limits and evolve very slowly, but the power of AI machines is potentially unlimited and growing even faster than Moore's law. We seem to be collectively working to make machines much better than us while we remain relatively the same as individuals. Are technologies actually enslaving us?

We keep buying things that don't really serve us. We have a lot of stuff now, but it doesn't mean much: if something breaks, we just get another one; it's just another item, and it will be shipped here tomorrow. We didn't have as much in the past, but every little thing carried much greater value. Even the simplest thing could fascinate us and bring us joy.

We humans already operate by rules and algorithms dictated by machines. We still don't know how our brains function organically (memory, consciousness, etc.), but in the quest to make AI human-like, we have created artificial neural networks to simulate the brain. The danger is that, even though we still don't know how the real brain functions, we have turned around and claimed that the human brain works in a similar way, under the same principles as an artificial neural network. We are enforcing AI's rules on ourselves.

This is a dangerous assumption to make, simply because AI does not have emotions. Once we begin to operate strictly under rules and principles dictated by AI, we will soon lose the attributes and characteristics that made us human. Our emotional spectrum may become increasingly narrow.

TV shows and movies are an example, since they are a form of storytelling with the biggest influence on us at the emotional level. It's no coincidence that "Seinfeld" and "Friends" are still two of the best TV shows today, and many of the movies considered the best were also made a while ago. Despite the most advanced technology, why can't we seem to tell stories today that bring out the same level of emotional response and intensity as before? They all seem to lack the genuineness and inspiration that previous generations once had.

Is it because AI does not understand human emotions, so its algorithms cannot accurately factor them in? One could say that today humans are the ones who write those algorithms, so maybe we can add components to account for that. But, as in the example above, if we don't even understand how our brains work, how can we make the machine accurately reflect us? In the future, machines are supposed to learn and write all their code by themselves, without human intervention. What would happen then? Would we still retain the ability to even understand that code? Is it possible that humans may slowly evolve into machines? In trying to make machines become like us, we may instead become like machines.
codingslave · over 5 years ago
The "dark ages of AI" is a meme, like the Dunning-Kruger effect.
account73466 · over 5 years ago
This time it's different.™

More seriously, we are almost able to:

i) generate a good book from a short intro;

ii) generate a meaningful video from a few photos and a basic text scenario.

That puts us closer to generating movies on demand (say, by 2025), and then good luck to the people claiming that the current progress in AI is a bubble.