
A first lesson in meta-rationality

159 points by alboaie almost 4 years ago

22 comments

tlb, almost 4 years ago
An interesting category of problems is like Bongard problems in that you have to deduce the rule from examples, but the examples are presented one at a time at random long intervals, so you have to work from memory. Most real-world learning is like this.

When working from memory, it's normal for your memory to have already parsed the previous situation into features. As some of the later examples in the blog illustrate, it's easy to fall into parsing examples into the wrong set of features, which is how you'll remember them.

While I could solve all the problems in the article, I doubt I could solve any but the simplest if I was shown one image per day over twelve days and not allowed to write anything down.

Perhaps the lesson is that when you're trying to deduce a rule (say, for the conditions under which your software crashes) you can increase your rule-discovering power greatly by taking notes and being able to look at several examples side by side.
mckirk, almost 4 years ago
Am I the only one who gets driven kind of crazy by these kinds of problems?

I'm not completely sure what it is, but I'm guessing it's the frustration of having to find a needle in a haystack of essentially infinite size: depending on how complicated you want to see the problem, there's an infinitude of potential 'solutions', and you never really know which level of complexity the author had in mind.

I love logic puzzles, where the system is constrained and you have to work within it, but these find-the-rule problems really aren't my thing so far. Maybe I'd need to develop a higher frustration tolerance for them, heh.
chubot, almost 4 years ago
*From the Church-Turing Thesis, we know there's nothing special going on! We know humans can't do anything more than a computer can.*

I see people making such claims about human cognition all the time, and I have no idea how it follows. (Note the author is paraphrasing "people" here.)

The Church-Turing Thesis says nothing about human cognition.

It is perfectly plausible that a human can do things a computer can't. (Scott Aaronson has a paper, "Why Philosophers Should Care About Computational Complexity," which sheds some light on why that might be, but it's far from the only possible reason.)

The burden of proof is on people who claim that human cognition can be simulated by computer, not the other way around. To me, it seems far more likely that it can't.

Human cognition can obviously be simulated by "the laws of physics," since brains are material, but it seems very likely that computers are less powerful than that.

That's my refutation of the (silly, IMO) "simulation argument." I'd argue it's simply not possible to simulate another universe. You can simulate something like SimCity or whatever, but not a real universe. The people who make that argument always seem to leave out the possibility that it's physically impossible.

In fact, I would actually take the simulation argument ("we are almost certainly living in a simulation") as proof by contradiction that simulation is impossible.
pontifier, almost 4 years ago
I have recently come up with a model that has been useful to me for thinking about thinking.

It involves the realization that different brain regions must communicate, but also contain their own representations of reality.

Partial thought precursors echo back and forth between these regions, with each region amplifying or dampening the parts of the idea that it recognizes as valid.

When multiple brain regions begin to agree on its validity to a high level, the aha moment occurs.

This model has some characteristics of waveform collapse and of discrete task-specific neural networks. When multiple task-specific networks arrive at a consensus that a model matches experience, the proto-idea forms. This proto-idea can then be evaluated and inspected. New scenarios are reflected off this new idea, to see if it continues to make sense.

Converting an idea into words makes it useful to others and allows sharing of ideas. This process requires refinement by echoing back and forth with the proto-idea until the words match its shape.

In order for these words to be understood effectively, they need to make sense to the brains that are receiving them. That means the words chosen need to activate multiple brain regions that the listener may use to evaluate this new idea and have the aha moment themselves.

This process is easier when the two brains have many shared experiences to draw on, or communication is bi-directional to allow message refinement.
wydfre, almost 4 years ago
If anyone is interested in this, they should look into Alfred Korzybski and general semantics. He coined the phrase "the map is not the territory," in case you want an idea of who he is.

IIRC, at one point in his book "Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics" he says something along the lines that the mistakes most people make are in categorization: "Some things look the same but they are different, and some things look different but they are the same." It's a very interesting book, and I loved how non-Aristotelian logic was used in Null-A by A.E. van Vogt, which introduced it to me.
hliyan, almost 4 years ago
I've always considered "pattern recognition for the purpose of prediction" a core function of the human brain that precedes language, rationality, or any form of logic. So it feels counter-intuitive to me to label this meta-rationality, or even to associate it with rationality. I subscribe somewhat to the idea that much of our rational decision making is *ex post facto*, i.e. a verbal narration that comes after the decision making to explain the decision, the actual decision making being an opaque process that takes place inside our brain's neural network. Confession: I lost the author about halfway through the article.
abss, almost 4 years ago
Another good introduction: https://drossbucket.com/2017/09/30/metarationality-a-messy-introduction/
cousin_it, almost 4 years ago
A site with lots of Bongard problems: https://www.foundalis.com/res/bps/bpidx.htm
brudgers, almost 4 years ago
I am reminded that the simplest regex for the words "apex, ibex, index" is 'apex|ibex|index'.

A commonality of all the boxes on the right is being on the right.

A commonality of all the boxes on the left is being on the left.

There is no offside in golf. The rules of the game only apply when we are playing the game.

Here, Wittgenstein might have said Bongard problems are another language game, and the confusion arises from using words in a peculiar way... the game is pretending there is a problem in a Bongard problem.
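The regex point can be checked directly: an enumeration matches exactly the given examples and nothing else, while any generalizing pattern commits to a rule that may admit unintended members. A minimal sketch (the generalized pattern is my own illustrative choice, not from the comment):

```python
import re

# The commenter's "simplest regex": a rule that just enumerates the
# examples is always available.
enumerated = re.compile(r"^(apex|ibex|index)$")

# A generalizing rule ("ends in 'ex'") covers the same words but also
# admits new members the enumeration excludes.
generalized = re.compile(r"^\w+ex$")

words = ["apex", "ibex", "index"]
assert all(enumerated.match(w) for w in words)
assert all(generalized.match(w) for w in words)

assert not enumerated.match("vortex")  # enumeration rejects new cases
assert generalized.match("vortex")     # generalization accepts them
```

Both rules are consistent with the training words; they only disagree once a new example arrives, which is the whole difficulty of rule induction from finite examples.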
lorepieri, almost 4 years ago
These problems remind me of https://github.com/fchollet/ARC

"ARC can be seen as a general artificial intelligence benchmark, as a program synthesis benchmark, or as a psychometric intelligence test. It is targeted at both humans and artificially intelligent systems that aim at emulating a human-like form of general fluid intelligence."
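For context, each ARC task in that repository is a JSON file with "train" and "test" lists of input/output grid pairs, where a grid is a list of rows of integers 0-9; the solver must infer the transformation from the train pairs. A minimal sketch of reading one (the grids below are invented for illustration, not an actual ARC task):

```python
import json

# Illustrative task in the ARC JSON shape: "train" and "test" lists of
# {"input": grid, "output": grid} pairs; grids are rows of ints 0-9.
task_json = """
{
  "train": [
    {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
    {"input": [[2, 2], [0, 0]], "output": [[0, 0], [2, 2]]}
  ],
  "test": [
    {"input": [[3, 0], [0, 3]]}
  ]
}
"""

task = json.loads(task_json)
for pair in task["train"]:
    print(pair["input"], "->", pair["output"])
```

Like a Bongard problem, the handful of train pairs underdetermines the rule; the benchmark's difficulty is choosing the transformation a human designer intended.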
timboy03, almost 4 years ago
One tricky thing about Bongard problems is that for any given problem there are likely many different rules that could distinguish the six positive examples from the six negative examples.

For example, maybe a problem that is "really" about circles vs. triangles also happens to have more black pixels in the left images than in the right images.

A key skill in solving these problems is not just to find a compact and discriminating description, but to find such a description that is *also* one that a human Bongard problem designer would be likely to think was a cool and elegant puzzle that needs an "aha" moment to recognize. If you find such a description, then you're very likely to be right.

I suspect that that last part (recognizing when you have found a solution that is pleasing enough to be the answer) is likely to be the biggest challenge for ML-based approaches to Bongard problems.
OisinMoran, almost 4 years ago
This was a good read and I really enjoyed it (I'm another person who was turned onto Bongard problems by Hofstadter), but two parts weren't particularly strong.

The first was the dismissal of intuition in a way that seemed pretty straw-man-like to me: "Mostly, 'intuition' just means 'mental activity we don't have a good explanation for,' or maybe 'mental activity we don't have conscious access to.' It is a useless concept, because we don't have good explanation for much if any mental activity, nor conscious access to much of it. By these definitions, nearly everything is 'intuition,' so it's not a meaningful category."

I think the author could have spent longer trying to come up with a better definition of what someone would mean by intuition in relation to these problems, instead of just setting up a poor one and immediately tearing it down. Intuition here would be contrasted against the deliberate procedural thinking of "let's list out qualities of these shapes" and would be something like seeing the solution straight away, but it can also be combined with procedural thinking, with intuition originating possible useful avenues and the deliberate part working through them. The contrast is that you could easily write down one set of steps to be replicated by others (the deliberate part: "I counted the sides on all shapes") but less so the other (intuition: "I thought x", "x jumped out").

The second is that the example they use for mushiness really isn't mushy. There is a perfectly concrete solution that doesn't involve any mushiness: the convex hull of one set is triangular while the other's is circular. The only mushiness involved is that saying "triangles vs. circles" feels like enough of an answer to us to not need to specify any more. We think that we can continue with just this answer and correctly identify any future instances, so it seems mushy, but you can probably think of examples that would confound the mushy solution yet be fine under the more concrete convex hull one.
yamrzou, almost 4 years ago
Very interesting. This reminds me of *The Abstraction and Reasoning Corpus* [1] by François Chollet, accompanying his paper *On the Measure of Intelligence* [2]...

Edit: Found a recent article mentioning both and discussing a NeurIPS paper on using Bongard problems to test AI systems [3].

[1] https://github.com/fchollet/ARC

[2] https://arxiv.org/abs/1911.01547

[3] https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/how-do-you-test-the-iq-of-ai
yewenjie, almost 4 years ago
David Chapman has multiple sites approaching the same problem. However, all the sites (or 'books') are incomplete.

Every time I come across something by him, I just want to read a complete book on the topic of metarationality from cover to cover.
sonkol, almost 4 years ago
I know this is a little off topic, but I think meta-rationality is more about organizing other people's and machines' intelligence to achieve your goal, even when you are not highly intelligent yourself.
mikhailfranco, almost 4 years ago
For the third I got:

*triangle never in circle - circle never in triangle*

compared to the given answer:

*triangle bigger than circle - circle bigger than triangle*

My solution is more general (worse), because it ignores size in non-containment arrangements, but also slightly more specific (better), because it constrains the single containment example in each set.

Neither rule says anything about overlapping cases, but there are no overlapping examples in the given sets. So there is an underlying constraint of *no overlaps*, but it applies to both sides, so it is not a distinguishing factor.
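Both candidate rules can be written as predicates over a toy panel encoding, which makes the ambiguity concrete: the two rules agree on the given examples but would diverge on others. The encoding and sizes below are invented for illustration, not taken from the article:

```python
# Toy encoding of one panel: each shape records its kind, a size, and
# the kind of shape containing it (None if not contained). Illustrative.
left_panel = [
    {"kind": "triangle", "size": 5, "inside": None},
    {"kind": "circle",   "size": 2, "inside": "triangle"},
]

def no_triangle_inside_circle(panel):
    """The commenter's rule for the left side."""
    return not any(s["kind"] == "triangle" and s["inside"] == "circle"
                   for s in panel)

def triangle_bigger_than_circle(panel):
    """The article's rule for the left side."""
    biggest_triangle = max(s["size"] for s in panel if s["kind"] == "triangle")
    biggest_circle = max(s["size"] for s in panel if s["kind"] == "circle")
    return biggest_triangle > biggest_circle

# Both rules accept the same example panel; only a panel with, say, a
# small triangle beside a large circle would tell them apart.
assert no_triangle_inside_circle(left_panel)
assert triangle_bigger_than_circle(left_panel)
```

A panel like a size-2 triangle next to a size-5 circle (no containment) satisfies the first rule but fails the second, which is exactly the general/specific trade-off the comment describes.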
raghuveerdotnet, almost 4 years ago
I still don't understand how this is any different from the notion of problem solving. And I really didn't understand the human-AI equivalence: is it just the substrate-independent nature of Turing-completeness? If so, I think we still lack the epistemic toolkit to say anything conclusive about establishing an equivalence, or for that matter any form of comparative relationship, especially given that AGI is still not a well-defined problem, let alone one with a well-understood solution space. No?
tomcatfish, almost 4 years ago
> However, by "system" I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow.

Is this tongue-in-cheek, or is it a very strange thing to say? That's definitely not a statement that should be made without justification. I can guess what the author meant, but I don't really want to guess at the basics of their argument.
alboaie, almost 4 years ago
As people and societies, we evolve towards metarationality. Reality is too complex to be handled within a single theory. We have to develop metarationality (wisdom) and pragmatically make jumps between contradictory theories. Any other approach is doomed, because the complexity will always be too much for any "single rationality" approach.
zeroonetwothree, almost 4 years ago
I don't see how Bongard problems are human-complete if the author can't even solve half of them. Does that mean he doesn't have human intelligence?

I think a better candidate for human-complete is "knowing what other humans are thinking," a.k.a. "theory of mind."
hamilyon2, almost 4 years ago
The author links to the Church-Turing thesis, which people widely assume licenses claims about things unrelated to anything the thesis is actually about.

It is about computable functions and abstract machines.
slx26, almost 4 years ago
We should keep one morning of school a week, no matter our age, to get together with other humans and discuss stuff like this. Individual reading and HN comments are fine, but we are missing the best part of learning, which is actually verbalizing and battle-testing those vague ideas. It would really help in cases like this... Have no friends? Make a club!