If anything in life can be reasoned about from first principles, whether it’s physics, computer science, economics, or mathematics, would an AI able to reason with 100% accuracy be capable of understanding our world in all its detail and deriving ideas and outcomes from it, thus leading to AGI?
Reasoning is heuristic in nature, so unless an AI has a complete understanding of every possible situation, it cannot be relied on for accurate reasoning. The world is too complex to permit perfectly reliable reasoning, and you can't Monte Carlo simulate your way to the single authoritative truth.

Plus, LLMs are so inherently stupid that I don't think we have to worry about "AGI" for another 10-20 years. All anyone wants is their glorified Markov chain anyway.
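To make the Monte Carlo point concrete, here's a minimal Python sketch (the pi-estimation task and the estimate_pi name are just illustrative stand-ins): the sampling error shrinks only as 1/sqrt(N), so more simulation buys you tighter estimates, never certainty.

    import math
    import random

    def estimate_pi(n_samples):
        # Count samples falling inside the unit quarter-circle.
        hits = sum(
            1 for _ in range(n_samples)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4.0 * hits / n_samples

    for n in (100, 10_000, 1_000_000):
        est = estimate_pi(n)
        print(f"N={n:>9,}: estimate={est:.5f}  |error|={abs(est - math.pi):.5f}")

Ten thousand times more samples only buys about two more digits, which is the point: sampling converges statistically, not authoritatively.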
> *If anything in life can be reasoned about from first principles*

Big if.

> *mathematics*

Specifically on the topic of mathematics, we *know* that there are statements which are true but cannot be proven (Gödel's first incompleteness theorem).

> *would an AI able to reason with 100% accuracy be capable of understanding our world in all its detail and deriving ideas and outcomes from it*

Assuming the big if is true, as if we were writing a science fiction novel, I guess *maybe*, but why would we expect it to be fast?
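For reference, the standard formal statement of that fact is below; the notation T and G_T is mine, not from the thread.

    % Gödel's first incompleteness theorem (standard statement).
    \text{If } T \text{ is a consistent, effectively axiomatizable theory}
    \text{ extending elementary arithmetic, then there is a sentence } G_T
    \text{ such that } \mathbb{N} \models G_T \text{ yet } T \nvdash G_T.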
Reaction: How many quetta-ronna-yotta-zetta-exa-watts* of power were you figuring that this AGI might draw, to understand the world, at scale, from a "just solve the quantum equations" basis?

* https://en.wikipedia.org/wiki/Metric_prefix
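Taking the stacked prefixes literally as multiplying together (my assumption; the comment stacks them for rhetorical effect), a quick sketch of the scale:

    # SI prefix exponents, per the Wikipedia page linked above.
    prefixes = {"quetta": 30, "ronna": 27, "yotta": 24, "zetta": 21, "exa": 18}
    exponent = sum(prefixes.values())  # 30 + 27 + 24 + 21 + 18 = 120
    print(f"quetta-ronna-yotta-zetta-exa-watt = 10^{exponent} W")
    # For scale: humanity averages on the order of 10^13 W of power use,
    # and the Sun's entire output is roughly 4e26 W.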
Possibly, but we don't yet understand the mechanics of reasoning. What more fundamental components are sequenced to produce the activity of reasoning?

For example, I personally think reasoning is downstream of at least generation and discrimination.
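As a toy illustration of that decomposition (the function names and the number-guessing task are hypothetical, not a claim about how reasoning actually works): a generator proposes candidates, a discriminator issues verdicts, and "reasoning" emerges from iterating the two to prune the hypothesis space.

    import random

    def generate(candidates):
        # Generation: propose a hypothesis from the current candidate set.
        return random.choice(candidates)

    def discriminate(candidate, target):
        # Discrimination: judge the proposal; here, compare to the target.
        if candidate < target:
            return "low"
        if candidate > target:
            return "high"
        return "ok"

    def reason(target, lo=0, hi=100):
        # "Reasoning" as iterated generate-then-discriminate: each verdict
        # prunes the hypothesis space until only the answer survives.
        candidates = list(range(lo, hi + 1))
        while True:
            guess = generate(candidates)
            verdict = discriminate(guess, target)
            if verdict == "ok":
                return guess
            if verdict == "low":
                candidates = [c for c in candidates if c > guess]
            else:
                candidates = [c for c in candidates if c < guess]

    print(reason(target=37))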