Wait, the whole argument hinges on assuming P != NP, correct? Which isn’t something that has been proven... so by following the author’s analogy, I think I’m left with the conclusion that there is definitely a possibility that there are people exponentially smarter than others.<p>The other assumption was based on the determinism of the machine. As far as I understand, the brain is not a deterministic computer. We don’t really understand how our brains work at all, but they definitely don’t work in any way, shape, or form like how we understand a computer to work, thus leaving even more room for an interpretation pointing to the opposite conclusion.<p>Lastly, what about all the evidence of people who actually did accomplish exponentially more work than others? We have the benefit of hindsight to check that real quick and, yup, I’d say 100% there are people who have done it. Elon, Jobs, Gates, etc...<p>However, I’d agree with the author if they argued that we can’t predict who will be exponentially smarter. To do that, we would have to simulate the future or have an algorithm that can tell us, which obviously presents some contradictions.<p>I think we all just sort of have to wait and see.
I think a better model is not that some are exponentially smarter than others, but that some folks are operating close to the problem manifold and the rest aren't.<p>AKA, some people are solving classes of problems in approximately the most efficient way (or relatively more efficient way) and most people are computing solutions in exponentially inferior ways.
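To make that concrete with a toy example (mine, not the essay's or the parent's): the same problem solved roughly as efficiently as possible versus in an exponentially inferior way.

    # Naive recursion recomputes the same subproblems: roughly phi^n calls.
    def fib_slow(n):
        return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

    # Reusing prior work solves the same problem in linear time.
    def fib_fast(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib_slow(32), fib_fast(32))  # same answer, wildly different cost

Both compute the same thing; one is "close to the problem manifold" and the other is exponentially off it.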
To put it lightly, it's a real stretch to make any kind of analogy between the way the human brain works and the way that semiconductor-based processors work. The author's reference to Stephen Wolfram's idea of universality (developed in a prior essay) seems unfortunate:
>“The key unifying idea that has allowed me to formulate the Principle of Computational Equivalence is a simple but immensely powerful one: that all processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations.”<p>To borrow a framework from a recent HN thread about a paper called "How to recognize AI snake oil" [1], there's an "incomplete and crude but useful breakdown" you can apply to AI problems: genuine and useful progress in perception, imperfect but improving work in automating judgment, and fundamentally dubious attempts to predict social outcomes.<p>Let's think about where the idea of a theory of mind based on computational complexity to determine "smartness" lands -- it's certainly not stimulus detection, and it's certainly not automating judgment, but it is about predicting or modeling social outcomes. I would say that this application of Wolfram's idea is fundamentally dubious. Because of this, it's hard for me to say that the premise, argument, or conclusion of this essay is anything but fundamentally dubious.<p>To at least leave a useful suggestion: this essay is missing an adequate definition and exploration of what "smart" is, why it's a facet of human nature and history, and what issues the concept causes. To the author, I'd recommend starting by building a better foundation there before jumping to conclusions that are hard to take seriously.<p>[1] <a href="https://news.ycombinator.com/item?id=21577156" rel="nofollow">https://news.ycombinator.com/item?id=21577156</a>
The PhD example late in this essay is one that I have lived. I am a failed PhD student who worked for four years on my research with no first-author publications to show for it. The professors from my old school always express surprise and confusion that I didn't succeed, since I apparently have the right combination of smarts. But, to me, my failure comes down more to bad luck and unfortunate circumstances than to any innate ability.<p>I would love to hear others' experiences with PhDs in the context of the essay's example.
Related to PG's essay[1] on genius. AMA!<p>[1] <a href="http://paulgraham.com/genius.html" rel="nofollow">http://paulgraham.com/genius.html</a>
I guess it's for the same reason we don’t have giants walking around: these traits depend on many random, independent contributions whose interactions ultimately lead to a Gaussian distribution of the characteristic (height, intelligence, etc.) via the central limit theorem.
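A minimal simulation of that intuition (my sketch, assuming the trait is just a sum of many small independent effects): the sum concentrates into a thin-tailed Gaussian, so extreme outliers essentially never show up.

    import random

    # Model a trait as the sum of many small, independent contributions.
    def trait(k=500):
        return sum(random.uniform(-1, 1) for _ in range(k))

    samples = [trait() for _ in range(10_000)]
    mean = sum(samples) / len(samples)
    sd = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5

    # Even the most extreme sample sits only a few standard deviations out.
    print(max(abs(x - mean) / sd for x in samples))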
1. Author writes and posts article on HN.<p>2. Commenters criticize article.<p>3. Author responds to criticism with "why didn't you write this yourself?" over and over.<p>Wow...
If knowledge and skill are uniformly hierarchically decomposable, then everyone can do exponentially more work over time. Technology's "increasing returns".<p>But there may be thresholds, such as the working memory needed, where the skill or knowledge cannot be decomposed further due to interconnections.<p>It seems likely there exist some potential skills or knowledge that require more working memory than any human has, has had, or could ever have.
Intelligence is hard to define. I believe that some people are "exponentially" better at very specialized tasks, for instance solving logical puzzles or internalizing rhythm in music, but it isn't necessarily noticeable, and it doesn't translate into great accomplishments.
Even if the conclusion turns out to be true, I find the argument unconvincing. There are large differences in intelligence between some species, so it's plausible that the same could be true within a species.
One doesn’t have to be exponentially smarter than others in any given moment – by compounding smart ideas and decisions, one can get exponentially better outcomes over time.
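A back-of-the-envelope illustration of the compounding point (the numbers are made up, just to show the shape): a modest per-decision edge multiplies into a large gap.

    # A 5% edge per decision, compounded over 100 decisions.
    edge_per_decision = 1.05
    decisions = 100
    print(edge_per_decision ** decisions)  # ~131x the flat baseline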