"The point we will be making here is that logically, neither trial and error nor "chance" and serendipity can be behind the gains in technology and empirical science attributed to them. By definition chance cannot lead to long term gains (it would no longer be chance); trial and error cannot be unconditionally effective: errors cause planes to crash, buildings to collapse, and knowledge to regress."<p>This is an extremely flawed initial assumption. There is no requirement for chance to be centered around zero. Consider rolling a dice: sometimes you'll get more than the mean, sometimes less, but you'll never roll a negative number. You can certainly win on chance in the long run, that's the foundation of casinos and insurance companies. It's hard to imagine a scenario where trial and error can possibly lead to knowledge regressing.<p>Consider randomly digging holes in the ground: after enough holes you will eventually strike gold, and you will never lose physical gold in the process. However, you may lose significant time, wealth, and effort that could have been better converted to gold. The optimal way to strike gold is not to dig more, shallower holes, but to learn enough geology to understand where gold is likely to be found and concentrate your prospecting there.<p>No experiment could ever possibly hurt scientific knowledge. People tinkering will certainly make occasional discoveries. In a brand new field with a lot of low hanging fruit, these discoveries will be numerous and the cost will be low. But in a developed field where people have a good idea where the remaining discoveries are likely to be found and the effort to conduct such experiments is substantial, targeted approaches become optimal. Reducing the unit cost of experiments is always nice, but is not generally feasible. This strategy of "convexity" is a very poor substitute in the real world for understanding.
Is anyone surprised by the notion that ratcheting is critical for consolidating gains made from chance events? I thought that was a trivial observation (maybe my biologist bias is showing here).
“By definition chance cannot lead to long term gains (it would no longer be chance)“<p>If this was modified to “chance alone” then it might be correct. The way it’s worded now makes it sound like chance cannot contribute to long term gains, which is clearly false. Evolution depends on chance (generation of diversity) followed by a selection process and clearly that works pretty well.
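A toy Python sketch of that "chance plus selection" point (the fitness function and mutation size are arbitrary): every individual step is pure chance, but keeping only the improvements turns those random steps into a sustained long-term gain.<p><pre><code>
import random

random.seed(1)

def fitness(x):
    # arbitrary toy objective: higher is better, peak at x = 3
    return -(x - 3.0) ** 2

x = 0.0
for generation in range(10_000):
    candidate = x + random.gauss(0, 0.1)  # chance: blind random variation
    if fitness(candidate) > fitness(x):   # selection: keep only improvements
        x = candidate                     # the ratchet consolidates the gain

print(f"final x = {x:.3f}, fitness = {fitness(x):.6f}")
</code></pre>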
The basic point of this article seems valid to me.<p>The point the author is trying to make is that the structure of the payoff function matters a lot. Specifically, you need it to be convex for a trial-and-error (or random walk) process to become very rewarding.<p>For example, think about fuzzing C programs, which has proven very productive in terms of software security. But why is it so productive? Essentially because a bug in a C program can have quite significant implications (e.g. remote code execution), so its payoff function is extremely convex. Without that property, fuzzing just wouldn't be as rewarding. (This also explains why fuzz testing is used less for programs written in memory-safe languages.)<p>The author believes this idea of "convexity" can explain a broad range of phenomena in the human world. I'm not so sure about its applicability, though.
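To make the payoff-shape argument concrete, here is a small Python sketch (the uniform distribution and the cubic payoff are arbitrary choices). By Jensen's inequality, with a convex payoff the average payoff across many random trials exceeds the payoff of the average trial, which is why cheap, high-variance probing can be worth it.<p><pre><code>
import random

random.seed(0)

def convex_payoff(x):
    return x ** 3   # convex: rare extreme trials pay off disproportionately

# "trial quality" drawn at random -- most trials are mediocre
samples = [random.random() for _ in range(100_000)]

mean_trial = sum(samples) / len(samples)
mean_payoff = sum(convex_payoff(x) for x in samples) / len(samples)

print(f"payoff of the average trial : {convex_payoff(mean_trial):.3f}")
print(f"average payoff over trials  : {mean_payoff:.3f}")
</code></pre>
The gap between the two numbers (roughly 0.125 vs 0.25 here) is the convexity premium the article is talking about; with a linear payoff the two would coincide and random trial-and-error would gain nothing from dispersion.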
Isn't he basically saying the economics of research works up to a point, but there is an inflection point after which it's broken?<p>And he's saying you can't just keep the model as is; you need to make certain adjustments to the incentive structures.<p>It's becoming blindingly obvious that this is the case in psychology at least, with the replication crisis and the "publish or perish" mentality. We can see these things playing out.<p>Does the economics of the scientific machine need to be revisited and tweaked? I'd say there is a good conversation to be had about that. I can already see a little evidence of a minor self-correction, but given that economics drives absolutely everything, I'd say it's likely there are changes that would produce better results than what the current system is producing. Though it's not easy to compute ahead of time whether the changes themselves would have unintended consequences.<p>He probably needs to spend more time trying to explain things to 5-year-olds to offset his "I am so smrt" persona.
I think an important way to understand the optionality/convexity is to imagine science without it. Here is a paper about that: <a href="https://nickbostrom.com/papers/vulnerable.pdf" rel="nofollow">https://nickbostrom.com/papers/vulnerable.pdf</a> - the idea is: what if there are inventions that could destroy our civilisation? One example of such an invention would be a bomb as powerful as an atomic bomb, but made from materials and technologies readily available to anyone. Surely there would be terrorists/mafias/suicidal individuals who would build and use them.
From the title I assumed that the author was going to make a different point: that understanding a problem does not mean that said problem can be represented by a convex function, i.e. it still may have many local optima that a problem solver may get stuck in, rather than a single global optimum.<p>So, I expected some more general point about theoretical understanding of a thing being distinct from the actual computation of that thing, and that theoretical understanding does not necessarily lead to optimal outcomes.<p>I wish he’d made that point instead.
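For what it's worth, that point is easy to demonstrate. Here is a minimal Python sketch (the objective function, step size, and starting points are arbitrary) of plain gradient descent on a non-convex function, where the answer you get depends entirely on which basin you start in.<p><pre><code>
def f(x):
    # arbitrary non-convex objective with two basins of attraction
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

for start in (-0.8, 0.8):
    x = descend(start)
    print(f"start {start:+.1f} -> x = {x:+.3f}, f(x) = {f(x):+.3f}")
</code></pre>
The two starting points settle into different minima with visibly different objective values (a shallow one near x ≈ 1, a deeper one near x ≈ -1), so even full knowledge of the objective does not by itself deliver the optimal outcome.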
I'm not certain where the author wants to go with this. Comparing research with an airplane flight does not seem correct to me.<p>Also<p>> By definition chance cannot lead to long term gains (it would no longer be chance)<p>Heh. The whole universe might be made by "chance". It is, in fact, quite possible that the total energy of the universe is 0 and our existence is a fluctuation.<p>Define long term gains. Since infinity is out of our possible reach, it is possible (though unlikely) to make long term gains just by chance. Especially if you have a large audience: some of them will get lucky.
><i>A "1/N" strategy is almost always best with convex strategies (the dispersion property): following point (1) and reducing the costs per attempt, compensate by multiplying the number of trials and allocating 1/N of the potential investment across N investments, and make N as large as possible. This allows us to minimize the probability of missing rather than maximize profits should one have a win, as the latter teleological strategy lowers the probability of a win. A large exposure to a single trial has lower expected return than a portfolio of small trials.</i><p>Isn't that YC Combinator in a nutshell?
I read the article in full but still have no idea what he is trying to say. Perhaps someone can explain what he is talking about? Because as far as I'm concerned, it's postmodernism meets statistics.
The writing style of this article strikes a tone that seems overly eager to place an eloquent vocabulary on display.<p><pre><code> Hey guys! Look at all the
big, big words I can use!
Don’t I sound smart???
</code></pre>
Seriously. It’s like they wrote and proofread the original draft, then performed a search-and-replace for any polysyllabic synonym they could opportunistically inject.<p>Why do they need to <i>sound</i> smart? Is it really because they know they’ve got nothing to say? Is this an SAT reading comprehension test?<p>You could sum up the sentiment with an analogy borrowed from investing: “<i>defensive programming is no replacement for accomplished programming skill</i>”.<p>Big words, small mind.