
Ask HN: What is the current state of "logical" AI?

32 points, by mtlb, over 1 year ago
The kind of AI that gets the public attention right now lacks a quality that can be described as "formal correctness", "actual reasoning", "rigorous thinking", "mathematical ability", "logic", "explainability", etc.

This is the quality that should be studied and developed in the symbolic AI approach. However, the actual symbolic AI work I know of seems to fall into one of two buckets:

1. "Let's solve a mathematical problem (e.g. winning at chess) and say that the solution is AI" (because humans can play chess, and now computers can too!)

2. "Let's make something like Prolog but with a different solver algorithm / knowledge representation." Products like Cyc and Wolfram seem to work essentially in this manner, although with lots of custom coding for specific cases to make them practical. There's lots of work on separate aspects of this as well, like temporal and other modal logics.

I see the first bucket as just applied maths, not really AI. The second bucket is actually aimed at general reasoning, but the approaches and achievements in it are somewhat uninspiring, maybe because I don't know many of them.

So my broad question is: what is happening in such "logical AI" research/development in general? Are there any buckets I missed in the description above, or is my description wrong to begin with? Are there any approaches that seem promising, and if so, how and why?

I would be grateful for suggestions of books/blogs/other resources on the topic as well.

13 comments

nicklecompte, over 1 year ago

AI is not even close to having true logical reasoning; that's probably decades away. The issue is that cognitive scientists are clueless. Scientists have a good model for *associative* reasoning, which is the basis of modern neural networks, but we don't have a clue how *abstract* reasoning actually works. All birds and mammals have advanced abstract reasoning and are far more intelligent than GPT-4:

- Birds and mammals are inherently able to count in almost any context because they understand what numbers actually mean; GPT-4 can only be trained to count in *certain* contexts. GPT-4 would be like a pigeon that could count apples but not oranges, yet biological pigeons can count anything they can see, touch, or hear. There's a profound gap in true quantitative reasoning, even if GPT-4 can fake this reasoning on specific human math problems.

- Relatedly, birds and mammals are far faster at general pattern recognition than GPT-4, unless it has been trained to recognize that specific pattern.

- Birds and mammals can spontaneously form highly complex plans; GPT-4 struggles with even the simplest plans, unless it has been trained to execute that specific plan.

The "trained to do that specific thing" is what makes GPT-4 so much dumber than warm-blooded vertebrates. When we test the intelligence of an animal in a lab, we make sure to test it on a problem *it has never seen before*. If you test AI like you test an animal, AI looks incredibly stupid - because it is!

There was a devastating paper back in 2019 [1] showing that Google's BERT model - which at the time was world-class at "logical reasoning" - was entirely cheating on its benchmarks. And another paper from this year [2] argues that LLMs' "emergent" abilities are a statistical artifact; AI researchers are just sloppy with stats. It is amazing how much bad science and wishful thinking has been accepted by the AI community.

[1] https://arxiv.org/abs/1907.07355

[2] https://arxiv.org/abs/2304.15004
cevi, over 1 year ago

This may only be tangentially related, but you might be interested in the recent research on Qualitative Constraint Satisfaction Problems - a good introduction to the topic is Manuel Bodirsky's habilitation thesis [1].

The purpose of the subject is, roughly speaking, to exhaustively characterize all types of reliable reasoning which can be carried out efficiently - some people say they are searching for "a logic for P". The techniques used are a mix of ideas from model theory, universal algebra, Ramsey theory, and computer science. Given the ridiculously ambitious scope of the project, I think the rate of progress (especially in the past few years) is astounding.

[1] https://arxiv.org/pdf/1201.0856.pdf
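For a concrete feel for what a qualitative CSP is, here is a toy sketch (my own illustration, not code from the thesis) of the simplest such algebra, the point algebra: constraints between time points are subsets of {<, =, >}, and for this particular algebra enforcing path consistency is enough to decide satisfiability.

```python
# Toy qualitative CSP over the point algebra. Relations between time
# points are subsets of {'<', '=', '>'}; an empty relation means the
# network is inconsistent.

COMP = {  # composition: if a R1 b and b R2 c, what can hold between a and c?
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'},
}

def compose(r1, r2):
    out = set()
    for a in r1:
        for b in r2:
            out |= COMP[(a, b)]
    return out

def path_consistent(n, constraints):
    """constraints maps an ordered pair (i, j) to a subset of {'<','=','>'}."""
    full = {'<', '=', '>'}
    conv = {'<': '>', '>': '<', '=': '='}  # converse relation
    R = {(i, j): full.copy() for i in range(n) for j in range(n) if i != j}
    for (i, j), rel in constraints.items():
        R[(i, j)] = set(rel)
        R[(j, i)] = {conv[a] for a in rel}
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    if len({i, j, k}) < 3:
                        continue
                    tightened = R[(i, k)] & compose(R[(i, j)], R[(j, k)])
                    if tightened != R[(i, k)]:
                        if not tightened:
                            return False  # empty relation: inconsistent
                        R[(i, k)] = tightened
                        changed = True
    return True

# x < y, y < z, z < x is a cycle, hence unsatisfiable:
print(path_consistent(3, {(0, 1): {'<'}, (1, 2): {'<'}, (2, 0): {'<'}}))  # False
print(path_consistent(3, {(0, 1): {'<'}, (1, 2): {'<'}}))                 # True
```

The research program the thesis describes asks, in effect, for which constraint languages a procedure like this (or some other polynomial-time algorithm) is guaranteed to work.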
kakaz, over 1 year ago

Statistical models built on gigantic text databases do not bring us any closer to logical reasoning, even if they are called AI.
Nevermark, over 1 year ago

Something that would massively improve language models' ability to reason is whiteboarding: being trained to make, review, improve, and add to notes, while maintaining a consistent goal.

I am unaware of anyone who can reason to any serious depth without a paper, computational, or actual version of a whiteboard.

This doesn't seem like a particularly challenging thing to add to current shallow (but now quite wide) reasoning models.

Imagine how fast you could think if you had a mentally stable whiteboard that you could perceive as clearly as you can see, and update as fast as you can think the changes.

Our brains have probably been tragically speed-limited by our slow vocal & finger speeds for some time.

That will take AIs to a wide AND deep reasoning level far beyond us very quickly.

Now add mental file cabinets, and an AI could trivially keep track of many goals and its progress on them. Again, not likely to be a huge challenge to add.

Now, given all that long-term reasoning ability, let the AI manage instances of itself working across all the problems, with speed adjusted for priority & opportunity.

Finally, have the model record every difficult problem it solved, so its fast, wide (non-whiteboard) abilities can be tuned, moving up level after level. Occasionally do a complete retraining on all data and problem-solution pairs. Again, straightforward scaling.

With every new dimension they scale, they quickly surpass us & keep improving.

At this point, IMHO, anyone pessimistic about AI has expectations far behind the exponential curve we are in. Our minds constantly try to linearize our experiences. This is the worst time in history to be doing that.
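A minimal sketch of the whiteboard loop being described (entirely hypothetical: llm() stands in for any text-completion API and is not a real library call):

```python
# Hypothetical "whiteboard" reasoning loop. llm() is a placeholder for
# whatever chat/completion API you have available.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a model of your choice here")

def whiteboard_solve(goal: str, max_rounds: int = 10) -> str:
    board = ""  # persistent notes the model can re-read and revise
    for _ in range(max_rounds):
        board = llm(
            f"Goal: {goal}\n"
            f"Current whiteboard:\n{board}\n"
            "Review the notes above, fix any mistakes, and extend them "
            "one step toward the goal. Return the full updated whiteboard."
        )
        if "FINAL ANSWER:" in board:  # model signals it is done
            break
    return board
```

The point of the comment is that the model's state lives outside its context-window-as-working-memory: the board is made, reviewed, improved, and added to across rounds, rather than generated in one shallow pass.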
hcarlens, over 1 year ago

An interesting approach I came across at NeurIPS a few weeks ago is called "ML with Requirements" [1]: https://arxiv.org/abs/2304.03674

My basic understanding is that it combines "standard" supervised learning techniques (neural nets + SGD) with a set of logical requirements (e.g., in the case of annotating autonomous-driving data, things like "a traffic light cannot be red and green at the same time"). The logical requirements not only make the solution more practically useful, but can also help it learn the "right" solution with less labelled data.

[1] I don't know if they had a NeurIPS paper about this; I was talking to the authors about the NeurIPS competition they were running related to this approach: https://sites.google.com/view/road-r/home
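One common way to fold such a requirement into training (my own sketch of the general idea, not the paper's exact formulation) is to add a differentiable penalty for violating the constraint alongside the usual supervised loss:

```python
import torch
import torch.nn.functional as F

def requirement_penalty(probs: torch.Tensor) -> torch.Tensor:
    # Hypothetical requirement: a light cannot be red and green at once.
    # probs[:, 0] = P(red), probs[:, 1] = P(green); penalize the
    # probability mass the model assigns to the forbidden conjunction.
    return (probs[:, 0] * probs[:, 1]).mean()

def loss_with_requirements(logits, targets, lam: float = 1.0):
    probs = torch.sigmoid(logits)  # independent per-label scores
    bce = F.binary_cross_entropy(probs, targets.float())
    return bce + lam * requirement_penalty(probs)
```

The constraint term shapes the hypothesis space even for unlabelled dimensions of the output, which is one intuition for why it can reduce the amount of labelled data needed.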
PaulHoule, over 1 year ago

See https://cacm.acm.org/magazines/2023/6/273222-the-silent-revolution-of-sat/fulltext and also modern production-rule engines like https://drools.org/

Oddly, back when "expert system shells" were cool, people thought 10,000 rules were difficult to handle; now 1,000,000 might not be a problem at all. Back then the RETE algorithm was still under development, and people were using linear search rather than hash tables to do their lookups.

Also https://github.com/Z3Prover/z3

Note "the semantic web" is both an advance and a retreat, in that OWL is a subset of first-order logic which is really decidable and sorta-kinda fast. It can do a lot, but people aren't really happy with what it can do.
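As a taste of what these solvers make trivial, here is a minimal sketch using Z3's Python bindings (installable as the z3-solver package):

```python
from z3 import Ints, Solver, sat

# Ask Z3 whether x + y == 10 and x - y == 2 are jointly satisfiable
# over the integers, and recover a witness if so.
x, y = Ints("x y")
s = Solver()
s.add(x + y == 10, x - y == 2)
if s.check() == sat:
    print(s.model())  # e.g. [x = 6, y = 4]
```

The same interface scales from toy arithmetic to the industrial verification and scheduling problems the CACM article surveys.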
wisnesky, over 1 year ago

At https://www.categoricaldata.net we claim that symbolic AI is also generative, when e.g. used in data warehousing. Instead of, say, new images, the generativity gives you new primary and foreign keys, new ontologies, contradiction detection, etc.
mirekrusin, over 1 year ago

One of the approaches is currently on the front page: https://news.ycombinator.com/item?id=38767815
jschveibinz, over 1 year ago

"Formal reasoning" or "logic," as you suggest, is a model for finding "truth" from static inputs and simple operations. However, if the inputs are random variables (i.e., they have an associated distribution), then so (likely) are the outputs, and "truth" is still a random variable. The world we live in is better modeled by the latter than the former, and as such the "decision tree" approach of AI seems like a more reasonable model for finding "truth" than a strictly mathematical approach.
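A toy illustration of the point (my own sketch, not from the comment): feed a deterministic logical rule random inputs and the conclusion becomes a random variable with a probability attached, rather than a fixed truth value.

```python
import random

def noisy_and(p_a: float, p_b: float, trials: int = 100_000) -> float:
    # A and B are Bernoulli random variables rather than fixed booleans;
    # estimate P(A and B) by Monte Carlo sampling.
    hits = sum(
        (random.random() < p_a) and (random.random() < p_b)
        for _ in range(trials)
    )
    return hits / trials

print(noisy_and(0.9, 0.8))  # ~0.72: the conjunction's "truth" is probabilistic
```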
jncfhnb, over 1 year ago

Chess is best solved by fuzzy fake logic, or whatever you want to call it.

Formal correctness is drastically different from "actual reasoning".
wruza, over 1 year ago

Is there an "I" part in logic at all? We ourselves aren't logical. We happened to invent/discover logic as a way to interact more closely with the world, and we learned to simulate a weak, leaky logic-machine runtime in our minds. Later, someone smart offloaded it to electronics (built on that exact principle, by the way, which is one of those "hidden right before your eyes" nuances). Custom coding is probably the correct answer.
heckraiser, over 1 year ago

Zephyr is pretty good. A real pragmatist, that one.
bkovacic, over 1 year ago

Gemini Ultra should show good progress, according to Google: it's supposed to perform better than 85% of participants in computer-science competitions, which requires a lot of logical reasoning. Let's see once it goes live, but it sounds promising.