Lasting and solid foundations are made by experiencing.<p>Doing with your hands. Seeing in reality. Getting burned with the
soldering iron. Smelling the flux. Hearing the signals and seeing them
on the oscilloscope. We need presence, feeling, the ownership of
knowledge as personal experience, not vicarious hand-me-down accounts
or diagrams.<p>As a kid I wired up NAND gates and transistors. When it came to logic
it felt like there was something tangible I could reach out and touch
through tactile imagination. Building a computer from chips - wire-wrapping hundreds of connections between a 68000, RAM and EEPROM - took a whole summer. After that I could see a data-bus and an address-bus. I know what they feel and smell like. I got good at
patching dataflow DSP because 20 years earlier I spent hours in the
studio patching analogue synths.<p>Descartes' Error is a book by Antonio Damasio [1] that talks about the weakness of purely rationalist epistemology. The foundation is laid long before we are even aware of knowing and learning. That book influenced me to understand cognitive activity as embodied.<p>This is why we need to let kids fix bikes, fall off skateboards and
climb trees. It's why giving them tablets and Chromebooks instead of
things that get their hands dirty is no good.<p>[1] <a href="https://www.goodreads.com/work/quotes/100151-descartes-error-emotion-reason-and-the-human-brain" rel="nofollow">https://www.goodreads.com/work/quotes/100151-descartes-error...</a>
> A so-called "senior" developer started screaming at the compiler, then at the IDE, then at the operating system, then at his colleagues. He was frustrated.<p>This is one of the worst traps to fall into. I call it out whenever I can to people who fall into this: It's never the compiler, it's never the CPU, and if you're an application developer, it's never the OS. And if it is, you can only get to that conclusion by assuming it still isn't until, Sherlock Holmes style, you are left with no choice. Never let it be your working hypothesis; always try to find out how those things working correctly matches your observations instead.<p>Working on very low-level code, I <i>do</i> run into actual compiler and CPU bugs, and just two weeks ago or so I deeply regretted assuming something to be a CPU bug in an obscure part of it towards the end of a lengthy bug investigation, after the gathered data clearly suggested it was the CPU misbehaving. It still wasn't: I missed a crucial half-sentence in the spec.
I can't find the exact quote, but I believe Richard Feynman said at one point something about how creating theories is easy, the hard part is making sure your new theory matches every single other theory out there.
The analogy is fun. If you believe something false, everything you build upon it is also questionable (though not necessarily false - it might be true for other reasons).
I think the house of cards is also potentially diamond-shaped, where a lot of systems are built on top of other systems the organisation no longer understands, which are held up by a single engineer who has been there long enough and holds enough IP to know where to look when trouble strikes.<p>Basically, if a pulled card collapses a few levels you're maybe OK, but hit the widest part of the diamond and you'd better hope that engineer is around. Take the engineer away and you have a reeaally precarious, potentially very expensive house of cards. I have a hunch this is not uncommon, especially as the upper levels of cards fill with 'automation' and abstraction.
Very relevant: <a href="https://en.wikipedia.org/wiki/Principle_of_explosion" rel="nofollow">https://en.wikipedia.org/wiki/Principle_of_explosion</a>
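The principle of explosion ("from a contradiction, anything follows") is short enough to state formally; here is a sketch in Lean (theorem and hypothesis names are my own):

```lean
-- Principle of explosion (ex falso quodlibet): given both P and ¬P,
-- any proposition Q whatsoever can be derived.
theorem explosion (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```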
OP applies a wrong model in their analysis of models. They say the JS code is the reality and you can have a perfect or an imperfect model of that reality.<p>I'd say the reality in this context is this triad: "input -> output". The input, the arrow, and the output. Thus, the code is not part of the reality, but just <i>the model</i> of that arrow. The code is an (inherently imperfect) description of how a real input is to be transformed into a real output, an artificial text in an artificial language that allows a human programmer to make any progress whatsoever.<p>What follows is that OP proposes that a model of the model (i.e. their understanding of their JS code) can be imperfect, or it had better be perfect. In the latter case OP calls it "a solid foundation", which introduces an extra mental category for no benefit. This can be said more simply: for the reality, directly use the model that fits in your head. If it doesn't fit, make more room for it by "reading the docs on type coercion and truthy values". Or drop it into the nearest trashcan and search for a smaller model that would fit in comfortably.<p>But do not fall into the local minimum where you develop an imperfect model of an imperfect model of the reality and call it a day.
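A few lines of JavaScript show why that "read the docs on type coercion and truthy values" step shrinks the model. This is a sketch of well-known coercion behavior, not code from the article:

```javascript
// An empty array is truthy...
console.assert(Boolean([]) === true);

// ...yet loosely equal to false: == coerces [] -> "" -> 0 and false -> 0.
console.assert(([] == false) === true);

// + with a string operand concatenates; - always coerces to numbers.
console.assert(1 + "2" === "12");
console.assert(1 - "2" === -1);

// The smaller model that fits in your head: be explicit.
// Use === for comparison and Boolean()/Number() for conversion.
console.assert(Number("") === 0);
```

Once you internalize the handful of coercion rules, surprises like `[] == false` stop being surprises, which is exactly the "make more room for it" move described above.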
Arguably, if this 'structure' is so flimsy, it is not knowledge. This house of cards is simply faux knowledge, built on assumptions and misunderstandings. Once you truly understand something, to the degree that you do, it is solid.<p>If we talk about programming, which stems from math and logic... it is as solid as it gets, perhaps a diamond castle. Unfortunately I've come to the realization that software companies don't really appreciate this fact, and invest no effort in making logically correct systems. Therefore engineers spend a lot of their time navigating this flimsy structure, which often falls apart because the engineers who came before had to make assumptions to finish things quickly because of deadlines and all that.
I think a useful generalization of this is Mental Models [1]. As with all models, they might not be perfect, but some are useful.<p>Also, for the purpose of this article, I think it's OK to have simplified or imperfect mental models of things, until we need more details. For example, we might think about hardware in an abstracted high-level way, until we need to deal with low-level programming, high performance, weird hardware bugs, etc. Being aware of Mental Models helps you to find your blind spots and work on them as necessary.<p>[1] <a href="https://fs.blog/mental-models/" rel="nofollow">https://fs.blog/mental-models/</a>
A practice I’ve found very fruitful in my programming journey is deliberate practice, even when (especially when) I’m knee deep in another problem. For me this looks like an explicit “learning” directory structure where I write short examples for myself, generally to explore an unfamiliar package/module, or to work up a simple but self-contained piece of code. I keep this separate from any other active project, and I often find myself referring to my own examples to refresh my memory, or using them as starter code for similar situations in the future.
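As an illustration only (the file name and contents are hypothetical), one such "learning" note might be a tiny self-contained script pinning down how an API behaves, e.g. `Array.prototype.reduce`:

```javascript
// learning/array-reduce.js (hypothetical note in a "learning" directory)
// Question I wanted answered: how does reduce behave with and
// without an initial accumulator value?

const nums = [1, 2, 3, 4];

// With an initial value, the accumulator starts there.
const sum = nums.reduce((acc, n) => acc + n, 0);
console.assert(sum === 10);

// Without one, the first element becomes the initial accumulator
// and iteration starts from the second element.
const sum2 = nums.reduce((acc, n) => acc + n);
console.assert(sum2 === 10);

// Gotcha worth recording for future me: reduce on an empty array
// with no initial value throws a TypeError.
let threw = false;
try {
  [].reduce((acc, n) => acc + n);
} catch (e) {
  threw = e instanceof TypeError;
}
console.assert(threw === true);
```

The point of keeping such notes is less the code itself than the recorded answer to a question you once actually had, which is what makes them good starter material later.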
> A so-called "senior" developer started screaming<p>I am amused by the “senior” in quotes. I think people don’t all agree on what it’s supposed to mean anymore, and as a title ornament it’s probably way past time to retire it.<p>I hope it is hyperbole, but I’m also slightly baffled that the focus of the example is the problematic mental model and not the screaming behavior.<p>Are people screaming in your offices?