I love that sqlite article. It seems like "everyone" is certain that sqlite can only be used for up to a single query per second, and that anything more means spinning up a triple-sharded postgres or a Hadoop cluster because it 'needs to scale'.<p>I love being able to show that study: if you properly architect your sqlite system and are willing to purchase hardware, you can go a long, long way, much further than almost all companies ever go, with your data access code needing nothing more than the equivalent of System.Data.Sqlite
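To give a flavor of what "properly architect your sqlite system" means in practice, here is a minimal sketch using Python's stdlib sqlite3 as a stand-in for System.Data.Sqlite (the pragmas are standard SQLite settings; the schema is made up for illustration):

```python
import os
import sqlite3
import tempfile

# A file-backed database (WAL mode does not apply to :memory: databases).
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path)

# WAL mode lets many readers proceed while a single writer commits --
# the usual first step when tuning SQLite for server-style workloads.
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=NORMAL")  # trade a little durability for throughput

conn.execute(
    "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)"
)
# Batch inserts inside one transaction instead of committing per row.
conn.executemany(
    "INSERT INTO events (payload) VALUES (?)",
    [(f"event-{i}",) for i in range(10_000)],
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 10000
```

The point is just that the "architecture" is a handful of pragmas and transaction discipline, not a distributed system.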
I read this and thought, "oh, the author is calling out formal verification as overhyped? Hillel Wayne (<a href="https://hillelwayne.com/" rel="nofollow">https://hillelwayne.com/</a>) is going to be angry! Wait, who wrote this..."
I think the best takeaway from this is that the software industry makes lots of claims about development processes, but very little actual research is done to validate those processes. It's mostly based on opinion.
It's really hard to point at studies to evaluate these types of hyped development paradigms. Some thoughts, as someone who loves static typing and microservices:<p>My favorite thing about static typing is that it makes code more self-documenting. The reason I love Go specifically is that if you have 10 people write the same thing in Go, it's all going to come out relatively similar and use mostly built-in packages. Any API requests are going to be self-documenting, because you have to write a struct to decode them into. Any function has clear inputs and outputs; I can hover a parameter in my IDE and know exactly what's going on. You can't just throw errors away; you are always aware of them, and any functions you write should bubble them up.<p>Typescript addresses this somewhat, but basically offsets that complexity with more configuration files. I like Typescript in use, but I can't stand the fact that Javascript requires configuration files, transpilers, a million dependencies. Same for Python and mypy.<p>Yes, I could just look at class members in a dynamic language, but there's nothing that formally verifies the shape of data. It's much more annoying to piece apart. I don't use static analyzers, but my guess is that languages like Go and Rust are the most compatible with them. Of any modern language, Go programs are the closest thing to a declarative solution to a software problem, IMO. As we continue experimenting with GPT-generated programs, I think we're going to see much more success with opinionated languages that have fewer features and more consistency in how finished programs look.<p>Microservices are also great at making large applications more maintainable, but they add additional devops complexity. It's harder to keep track of what's running where, and it requires some sort of centralized logging for requests and runtime.
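The "decode into a struct" point above can be sketched like this; since the thread spans several languages, here is a Python analogue using dataclasses (the payload and field names are made up, and the real guarantee comes from running a checker like mypy or pyright over code that uses the result):

```python
import json
from dataclasses import dataclass

# Declaring the shape up front is what makes the request self-documenting:
# every field we expect from the API is named and typed in one place.
@dataclass
class User:
    id: int
    name: str
    active: bool

def decode_user(raw: str) -> User:
    data = json.loads(raw)
    # Constructing the dataclass pins down the expected shape; a type
    # checker can then verify every downstream use of the fields.
    return User(id=int(data["id"]), name=str(data["name"]), active=bool(data["active"]))

u = decode_user('{"id": 1, "name": "Ada", "active": true}')
print(u.name)  # Ada
```

In Go the equivalent is `json.Unmarshal` into a declared struct; the dataclass is just the closest stdlib Python analogue.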
We software engineers are still more like alchemists than chemists.<p>That list reminds me of [1], which rants about this state of affairs, and [2], which puts many beliefs to the test.<p>[1] <a href="https://youtu.be/WELBnE33dpY" rel="nofollow">https://youtu.be/WELBnE33dpY</a><p>[2] <a href="https://www.oreilly.com/library/view/making-software/9780596808310/" rel="nofollow">https://www.oreilly.com/library/view/making-software/9780596...</a>
Hillel (the editor of this list) is one of the people in this industry who is going to make a tremendous difference to the world. His ability to make formal verification understandable, and therefore useful in practice, is unparalleled.
"Scalability! but at what COST?" Is a very good example on how frustrating it can be.<p>We are throwing a lot of resources against a problem because we are not able to educate people good enough to understand basic performance optimizations.<p>You are a Data Scientist/anyone else and you don't understand your tooling? You are doing your job wrong.
I wish it were possible to have better studies on this. I believe static typing has huge benefits as software scales. I also believe the type system of TypeScript is stronger in practice than Java's or C#'s (despite theoretical weaknesses). It has the right tradeoffs (e.g. structural equivalence, being able to type strings, being able to check that all cases are handled, etc.)<p>It would be nice to have proper studies, but it's difficult to control the other variables ...
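The "all cases are handled" check has a rough analogue outside TypeScript too; here is a minimal Python sketch of the same idea (the `Shape` type and function names are made up, and the exhaustiveness guarantee comes from running mypy or pyright, not from the interpreter):

```python
from typing import Literal, NoReturn

# A closed set of cases, analogous to a TypeScript string-literal union.
Shape = Literal["circle", "square"]

def _unreachable(x: NoReturn) -> NoReturn:
    # A type checker reports an error at the call site below if a new
    # Shape variant is added without a matching branch.
    raise AssertionError(f"unhandled case: {x!r}")

def area(shape: Shape, size: float) -> float:
    if shape == "circle":
        return 3.14159 * size * size
    elif shape == "square":
        return size * size
    else:
        return _unreachable(shape)

print(area("square", 3.0))  # 9.0
```

Add `"triangle"` to `Shape` without a branch and the checker flags `area`, which is exactly the kind of practical win the comment is describing.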
I was really surprised that Docker and Kubernetes weren't among the items on here. While I use both, they could each use a cold shower to make sure they actually provide value.
Cold showers are awesome! Get outta here!<p><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5025014/" rel="nofollow">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5025014/</a>
A Cold Shower for (early) testing of software, maybe:<p>There used to be an often-cited paper by Boehm about the cost of catching bugs early vs. late in production, usually mentioned by advocates of testing early, where the quoted conclusion was something like "studies show it's 10 times more costly to catch bugs late in production" or something like that. This is a very well known study, I'm likely misquoting it (the irony!), and readers here are probably familiar with it or its related mantra of early testing.<p>I haven't read the paper itself (I should!), but later someone claimed that a) Boehm doesn't state what people quoting him say he said, b) the relevant studies had serious methodological problems that call into question the conclusions he did draw, and c) there are plenty of examples where fixing bugs late in production wasn't particularly costly.<p>edit: I'm not arguing testing isn't necessary, in case that upset someone reading this post. I'm not really arguing anything, except that the study by Boehm that most people quote was called into question (and was probably misquoted to begin with). This doesn't prove/disprove anything, except maybe hinting at a possible Cold Shower. It does show that we as a field have a serious problem in software engineering with backing up claims with well designed studies and strong evidence, but this shouldn't come as a surprise to anyone reading this.
Regarding the bare metal issue, there are many more caveats, see for example: <a href="https://jan.rychter.com/enblog/cloud-server-cpu-performance-comparison-2019-12-12" rel="nofollow">https://jan.rychter.com/enblog/cloud-server-cpu-performance-...</a>
This gets at the heart of one of my big gripes about how we talk about engineering and technology.<p>Often a fancy new thing is introduced with a very long list of pros: "fast, scalable, flexible, safe". Rarely is a list of cons included: "brittle, tough learning curve, complicated, new failure modes".<p>This practice always strikes me as odd because the first law of engineering is "everything is a trade-off". So, if I am going to do my job as an engineer, I really need to understand both the "pros" and "cons". I need to understand what trade-off I'm making to get the "pros". And only then can I reason about whether the cost is justified.
>Researchers had programmers fix bugs in a codebase, either where all of the identifiers were abbreviated, or where all of the identifiers were full words. They found no difference in time taken or quality of debugging.<p>I would not have expected that. Still, I prefer to use full(er) identifiers. I don't like to guess how things were abbreviated, especially when consistency isn't guaranteed. If I were using a different language and IDE, this might matter less.
The big data one is outstanding.<p>If you don't have more data than can fit on a reasonably large hard drive, you do not have big data and you are likely able to process it faster and cheaper on one system.<p>Today that threshold would be around 10TiB.
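To make that threshold concrete, here is a back-of-envelope calculation in Python (the drive throughput is an assumed round figure for a single NVMe drive, not a benchmark):

```python
# How long does one full sequential scan of 10 TiB take on one machine?
TIB = 2**40

data_bytes = 10 * TIB
nvme_throughput = 3 * 10**9  # ~3 GB/s sequential read, assumed

scan_seconds = data_bytes / nvme_throughput
scan_minutes = round(scan_seconds / 60)
print(scan_minutes)  # ~61 minutes for one full pass
```

An hour per full scan of the entire dataset, with zero cluster coordination overhead, is why "one big box" wins so often below this threshold.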
> Agile Methods: The Good, the Hype and the Ugly (Video) (<a href="https://www.youtube.com/watch?v=ffkIQrq-m34" rel="nofollow">https://www.youtube.com/watch?v=ffkIQrq-m34</a>)<p>Thoughts on this one? I found the presentation to be somewhat mixed.<p>I found the initial comb through of the agile principles to be needlessly pedantic ("'Simplicity... is essential' isn't a principle, it's an assertion!"); anyone reading in good faith can extract the principle that's intended in each bullet of that manifesto.<p>The critique of user stories (~35 mins in) was more interesting; it's something we've been bumping up against recently. I think the agile response would be "if your features interact, you need a user story covering the interaction", i.e. you need to write user stories for the cross-product of your features, if they are not orthogonal.<p>I'm not really convinced that this is a fatal blow for user stories, and indeed in the telephony example it is pretty easy to see that you need a clarifying user story to say how the call group and DND features interact. But it does suggest that other approaches for specifying complex interactions might be better.<p>Maybe it would be simpler to show a chart of the relative priorities or abstract interactions? E.g. thinking about Slack's notorious "Should we send a notification" flowchart (<a href="https://slack.engineering/reducing-slacks-memory-footprint-4480fec7e8eb" rel="nofollow">https://slack.engineering/reducing-slacks-memory-footprint-4...</a>), I think it's impossible (or at least unreasonably verbose) to describe this using solely user stories. I do wonder if that means it's impossible for users to understand how this set of features interact though?<p>Regarding the purported opposition in agile to creating artifacts like design docs, it's possible that I'm missing some conversation/context from the development of Agile, but I've never heard agile folks like Fowler, Martin, etc. 
argue against doing technical design; they just argue against doing too much of it too early (i.e. against waterfall-style design docs and for lean-manufacturing-style just-in-time design), and that battle seems largely to have been won, considering what standard best practices were when the Agile manifesto was written vs. now.
I think functional reactive programming belongs on this list.<p>Rxjs, etc.<p>Angular uses typescript and rxjs excessively and, while I used to like typescript, the combo has made me reconsider.<p>Rxjs seems like an overcomplex way to do common tasks. Has FRP caught on anywhere else? Is there a usage that doesn't suck?
> Static vs Dynamic Typing<p>All research is inconclusive? Sure. I wonder what kinds of type systems were studied. I'd guess Java and similar languages are accounted for, and yet I wouldn't put any faith in them.
ML, Swift, Haskell... now that’s something else.
I can confirm the issues with formal methods.<p>I was working on a new type of locking mechanism and thought I would be smart by modelling it in spin [<a href="http://spinroot.com" rel="nofollow">http://spinroot.com</a>], which has been used for these kinds of things before.<p>I ended up with a model that was proven in spin but still failed in real code.<p>Granted, that's anecdata with a sample size of 1, but it was still a valuable experience for me.
The title doesn't really relate to the content very well; the practice of taking cold showers has some scientific backing ([1] & [2]), and is also slightly hyped. After taking cold showers and getting some (minor) benefits for some years, the term "cold shower" started to carry a positive association in my mind.<p>This article isn't about showers, nor about positive results, making the title quite confusing :)<p>[1] <a href="https://www.medicalnewstoday.com/articles/325725" rel="nofollow">https://www.medicalnewstoday.com/articles/325725</a>
[2] <a href="https://www.wimhofmethod.com/science" rel="nofollow">https://www.wimhofmethod.com/science</a>