The problem with this article, though it presents model checking and state space exploration well, is that it doesn't present weak memory models[1], which are the reality on all multiprocessor/multicore systems these days. Thus it perpetuates misconceptions about concurrent programming. This is a poor example of model checking because it assumes sequential consistency, and it actually confuses people by reinforcing incorrect assumptions.<p>[1] With weak memory hardware, it is effectively impossible to write correct programs without using at least one of atomic read-modify-write operations or barriers, because both compilers and hardware can reorder both reads and writes, to a maddening degree.<p>(I previously posted this comment as a reply to a comment that got flagged.)
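To make the reordering hazard concrete, here is a minimal C++ sketch of the classic message-passing idiom (my own illustration, not taken from the article). Without the release/acquire pair, both the compiler and the hardware are free to move the payload write past the flag write (or hoist the payload read above the flag read), so the consumer could see the flag set while reading a stale payload:

    // Minimal sketch (mine, not the article's): publishing data via a flag.
    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                 // plain data, published via the flag
    std::atomic<bool> ready{false};  // the flag itself must be atomic

    void producer() {
        payload = 42;                                  // (1) write the data
        ready.store(true, std::memory_order_release);  // (2) publish: (1) may not sink below this
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) // (3) later reads may not hoist above this
            ;                                          // spin until published
        assert(payload == 42);                         // holds only because of release/acquire
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }

With a plain bool flag (or relaxed ordering) this program is broken on weak memory hardware even though every sequentially consistent interleaving looks fine, which is exactly the class of bug a checker that assumes sequential consistency cannot find.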
These kinds of analyses are all nice and fun, but the issue is that real computers do not quite work like this. One problem is that an optimizer may change the order in which statements are executed, and then all guarantees go out of the window. Another is that when writing to main memory, the hardware may also reorder the writes. So the whole process is filled with gotchas at every level. What one should remember is that if multiple threads access the same memory location and at least one of those accesses is a write, it is a data race and always wrong. The fix is to use the appropriate protection, e.g. a mutex or an atomic variable.
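As a small sketch of that advice (my example, with hypothetical names): a counter incremented from two threads, once as an atomic and once as a plain int guarded by a mutex. A bare ++counter on an unprotected int from both threads would be exactly the always-wrong case above:

    // Two correct ways to share a counter between threads.
    #include <atomic>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::atomic<int> atomic_counter{0};  // option 1: atomic variable
    int guarded_counter = 0;             // option 2: plain int + mutex
    std::mutex counter_mutex;

    void work() {
        for (int i = 0; i < 100000; ++i) {
            atomic_counter.fetch_add(1, std::memory_order_relaxed); // atomic read-modify-write
            std::lock_guard<std::mutex> lock(counter_mutex);        // mutual exclusion
            ++guarded_counter;
        }
    }

    int main() {
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        // Both print 200000; an unprotected counter could print almost anything.
        std::cout << atomic_counter << " " << guarded_counter << "\n";
    }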
> If you’re implementing without formally verifying your solution through model checking, you only think you’re implementing it correctly.<p>This is a comforting idea, but enumerating every possible state of a real-world (integrated/distributed) software system is a fool's task.<p>Spend less time on formal verification (fancy word for perfectionism) and more time on risk analysis and cybernetics. Instead of praying that nothing ever goes wrong, plan for faults and design systems that integrate human and machine to respond rapidly and effectively.<p>"Correctable" is a much more practical target than "always correct".
A bit more visual guide:<p>Visualizing Concurrency in Go (2016)<p><a href="https://divan.dev/posts/go_concurrency_visualize/" rel="nofollow">https://divan.dev/posts/go_concurrency_visualize/</a>
Therein lies SIMD's advantage.<p>The instruction pointer is synchronized across all lanes, giving you far fewer states to reason about.<p>Then GPUs mess that up by letting blocks/thread groups run independently, but GPUs now have highly efficient barrier instructions that line everyone back up.<p>It turns out that SIMD's innate assurance of instruction synchronization at the SIMD lane level is why warp-based / wavefront coding is so efficient, though: none of those barriers are necessary anymore.
Coincidentally, I am currently going through <i>Distributed Algorithms: An Intuitive Approach</i> (second edition) by Wan Fokkink. It seems really good and is a catalog of classic algorithms for both Shared-Memory and Message-Passing architectures. It is a rather slim book with explanations and precise mathematical notation. Pseudo-code for a subset of the algorithms is given in the appendix.<p>The 1st edition divides the algorithms under two broad sections, viz. "Message Passing" (e.g. the chapter Mutual Exclusion) and "Shared Memory" (e.g. the chapter Mutual Exclusion II), while the 2nd edition removes these section headings and simply groups the algorithms under functional categories (e.g. both of the above under one chapter, Mutual Exclusion).<p>Book website - <a href="https://mitpress.mit.edu/9780262037662/distributed-algorithms/" rel="nofollow">https://mitpress.mit.edu/9780262037662/distributed-algorithm...</a>
The initial comments here surprise me. I’ve never seen such poor-quality comments on such a good article on Hacker News. The article is top notch, highly recommend. It explains very clearly, from first principles, what the (a?) mechanism behind model checking is.<p>So is the ability to compute the entire state space and prove that liveness and/or safety properties hold the single main basis of a model checker’s effectiveness? Or are there other fundamental capabilities as well?
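My mental model of that core mechanism, as a toy C++ sketch (mine, not code from the article): enumerate every reachable state of a tiny two-thread program where each thread does a non-atomic tmp = x; x = tmp + 1, then check a safety property ("x ends up as 2") in every terminal state:

    // Toy state-space exploration (my own sketch, not the article's code).
    #include <iostream>
    #include <set>
    #include <tuple>

    struct State {
        int x = 0;            // shared variable
        int tmp[2] = {0, 0};  // per-thread temporary
        int pc[2] = {0, 0};   // per-thread program counter: 0 = read, 1 = write, 2 = done
        bool operator<(const State& o) const {
            return std::tie(x, tmp[0], tmp[1], pc[0], pc[1]) <
                   std::tie(o.x, o.tmp[0], o.tmp[1], o.pc[0], o.pc[1]);
        }
    };

    // Depth-first search over all interleavings, deduplicating visited states.
    void explore(State s, std::set<State>& seen, std::set<int>& finals) {
        if (!seen.insert(s).second) return;       // state already explored
        if (s.pc[0] == 2 && s.pc[1] == 2) {       // terminal state: record final x
            finals.insert(s.x);
            return;
        }
        for (int t = 0; t < 2; ++t) {             // try stepping each thread
            if (s.pc[t] == 2) continue;
            State n = s;
            if (n.pc[t] == 0) n.tmp[t] = n.x;     // step 1: read x into tmp
            else              n.x = n.tmp[t] + 1; // step 2: write tmp + 1 back
            ++n.pc[t];
            explore(n, seen, finals);
        }
    }

    int main() {
        std::set<State> seen;
        std::set<int> finals;
        explore(State{}, seen, finals);
        std::cout << "reachable states: " << seen.size() << "\n";
        for (int v : finals)                      // the safety check itself
            std::cout << "possible final x = " << v
                      << (v == 2 ? "" : "  <-- property violated") << "\n";
    }

This finds the lost-update interleaving (final x = 1) purely by exhaustive enumeration, which is what I took the article's mechanism to be; I'm asking whether that exhaustiveness is the whole story or whether there is more to it.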
OK, great, formal verification gives some guarantees, and that sounds nice and all. It's just that all the academic math nerds who are fluent in the checking tools are busy getting paid much more than average software developers, and I have a suspicion that they'll get less done per year in terms of features and refactorings of the code that immediately makes the monies flow.<p>I find the BEAM/OTP process model relatively simple to use, and it moves some of the friction from logic bugs to a throughput problem.