A test of chess puzzles can reliably predict a player's Elo rating and which kinds of game elements they struggle with. My late dad did work on this in the 1980s, assessing machine and human chess performance, which culminated in the Bratko-Kopec Test[0]; it eventually became part of a standard suite for benchmarking new chess programs. He also ran the test on hundreds of human players to check its calibration.<p>He created several subsequent tests and wrote a book about it [1]. I made a version of a few of the tests for iPhone, if you're so inclined [2].<p>0: <a href="https://www.sciencedirect.com/science/article/abs/pii/B9780080268989500097" rel="nofollow noreferrer">https://www.sciencedirect.com/science/article/abs/pii/B97800...</a>
1: <a href="https://amzn.to/3PVOne9" rel="nofollow noreferrer">https://amzn.to/3PVOne9</a>
2: <a href="https://apps.apple.com/us/app/test-your-chess/id362448420" rel="nofollow noreferrer">https://apps.apple.com/us/app/test-your-chess/id362448420</a>
> "What stops you, I think, is a combination of not really believing you’ll get it and not really caring. Is that too harsh – or is it somewhere close to the truth?"<p>This reminds me of the curse of working with really good senior engineers. They already know the answers, they've already solved the puzzles. It can be very easy to just defer to them all the time.<p>If you are a senior engineer who really understands a system, you need to be conscious of this effect if you ever want someone else to start learning your system.
Interesting observations; I would add my own:<p>- During a long game, a chess grandmaster's physiology is comparable to that of a marathon runner mid-race. Deep thinking for several hours places a huge load on the body. All the logic and critical thinking in the world won't save you if you aren't fit and your brain isn't working correctly.<p>- Real life is not about solving puzzles. Real life is a rigged game where the rules are not enforced. Instead of finding problems to solve, you need to find opportunities (and loopholes) and exploit them!<p>- The game is rigged, and opportunities close fast. What worked a couple of years ago probably does not work anymore.
The article is fine, inspirational, interesting, and all that, but one quibble: reporting ratios is potentially misleading. If grandmasters spend 4 minutes falsifying for every minute ideating, and amateurs spend 0.5 minutes falsifying per minute ideating, that's striking. But what if amateurs spend 30 minutes coming up with a move vs. 1 for masters? The grandmaster could be faster at ideation by a larger factor than he is at falsification. That also makes sense in a "just so" way, because maybe falsification is brute force with a large depth of search, while ideation is more like a table lookup: just see where your pieces can move.<p>I thought maybe I could find some primary sources, but the [1] notation is just footnotes.
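To make the quibble concrete, here is a tiny sketch with made-up, purely illustrative timings (the article gives only the ratios, not absolute minutes): the amateur's falsify-to-ideate ratio is lower, yet their absolute falsification time is higher, so the ratio alone can't say who is "better" at either phase.

```python
# Hypothetical per-move timings in minutes; the numbers are
# invented to illustrate the point, not taken from the article.
gm = {"ideate": 1.0, "falsify": 4.0}         # ratio 4:1
amateur = {"ideate": 30.0, "falsify": 15.0}  # ratio 1:2

for name, t in [("grandmaster", gm), ("amateur", amateur)]:
    ratio = t["falsify"] / t["ideate"]
    total = t["ideate"] + t["falsify"]
    print(f"{name}: falsify/ideate ratio {ratio:.1f}, total {total:.0f} min")

# Despite the lower ratio, the amateur here spends 15 minutes
# falsifying vs the grandmaster's 4 - the ratio hides the fact
# that the grandmaster is faster at *both* phases.
```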
The examples of Dropbox and the iPod being criticized on tech sites, then going on to become very successful, are practically part of the mythology at this point - but it's _always_ those two examples.<p>Are there legitimately multiple good examples of "criticized on HN pre-launch, yet became surprisingly successful"? I'm curious whether the lesson to learn from Dropbox and the iPod is more "believe in a product, despite the criticism" or "sometimes, even accurate predictions are wrong".
I really enjoyed this article. I would recommend others check out "Advice That Actually Worked For Me" by the same author. This same topic is mentioned in #6.
<a href="https://nabeelqu.substack.com/p/advice" rel="nofollow noreferrer">https://nabeelqu.substack.com/p/advice</a>
So: good (chess) players spend more time mentally countering their proposed moves before moving.<p>For developers or managers on HN, one takeaway would be that it's best to start one's career in testing, or to respect the resumes of those who started in QA. Since there are hundreds of ways things can break, it's a harder problem to show how something will fail, or to prove it won't; and building a mental library of fault models helps in vetting designs and implementations.<p>Or we could teach fault models directly, instead of accumulating them by experience. See, e.g., Robert Binder, "Testing Object-Oriented Systems" (and ignore the model-driven-development gloss from later editors).<p>But the most important note is the aside: the author avoids chess because it's addictive. Should we ask ourselves: how can that be? Should it change how I think about my own work?
> It’s not as simple as “founders are optimists, scientists are skeptics”, though.<p>It's not much deeper, either. There's a skeptical yet positive mindset somewhere in between. I once built a complex toolkit of runtime query optimizations that was a hodgepodge of kludgy things, and it worked out well in practice. Someone asked me how I ever came up with the whole thing. I said I just started and made one improvement after another. They said it was so fraught with obstacles (listing them off) that they'd have been daunted to even start. I told them I wished we'd talked sooner: they had just given me the roadmap I didn't have while I was figuring out the pieces. Note that he was a high-level chess player (unlike no-rank me), but perhaps not as much of an optimist or risk-taker.<p>If I had to describe a good founder mentality, I'd say it's experience navigating uncharted territory and finding success, whether on small or large scales. That kind of practice makes them good at sizing up risks and rewards. Related to this might be being a kinesthetic learner type: learning by interacting.
THIS concept, looking for all the ways that a solution won't work (i.e., could fail), is the key to the ideation stage of a business startup.<p>Thank you for posting this.
Hah, what a great article.<p>I play chess (poorly for the time spent on it) and I'm also a reasonably successful founder of a couple of software companies. My struggle with chess is that I want to act intuitively, something that has served me well all my life in other avenues. But the board doesn't lie, and if you don't think thoroughly you will get punished.<p>I have the capacity for it: I can think thoroughly through puzzles and perform much better there than in my over-the-board play. But I struggle so much with the discipline to falsify my moves during regular games that I've mostly given up on trying to improve, despite really loving the game; it just grates on me. I know I could be better, but I lack the discipline, and I guess I just don't want to exercise that discipline in a game.<p>Anyways, great article.
Where is the linked post getting the 4:1 vs 1:2 time-spent-on-falsification ratios that it's claiming? It's like the heart of the entire argument, but it's not sourced.<p>Edit: Ah, okay, it's probably in the book being discussed where he says they recorded thought process while playing ( <a href="https://www.amazon.com/Think-Like-Super-GM-Michael-Adams/dp/1784831670" rel="nofollow noreferrer">https://www.amazon.com/Think-Like-Super-GM-Michael-Adams/dp/...</a> ).
Apparently I try too hard to falsify the falsification. I became convinced that the h4/g3 pawns could be used as a trap while I march the b-pawn: Bd5 Bxh4 b5 Bxg3 Qxf7 Qxf7 Bxf7 Kxf7 b6! and the pawn can't be stopped.<p>Except it doesn't work; I needed to falsify the falsification of the falsification four moves down the line to see why :)
I am quite confused. The article starts by saying good chess players are more careful and spend more time falsifying ideas, but then the author gives startup examples that are the opposite (not so careful with falsifying; just jump into the water with conviction and figure things out on the way). The startup game is more like poker. It is very different from chess. Somehow the author drew the wrong conclusions. Very confusing.