I know this is a boring response, but I feel like there’s a formalism here for considering why an RCT would be “obvious.”<p>Let’s say you’re going to use some causal model, like a regression adjustment technique. You could, for example, assign people to the treatment group (receives parachutes) and the control group (no parachutes), then observe who lives and dies, along with a bunch of potential confounders like altitude, age, fitness, whatever.<p>Fit a logistic regression to predict the outcome (survival) from the treatment (parachute), controlling for the other characteristics. Then read off some effect size and statistical significance.<p>Or better yet, and here’s the important part, you could make it a Bayesian logistic regression by placing prior distributions on the regression coefficients and sampling draws from the posterior distribution of coefficients given the data set and your priors.<p>So what is the prior on the coefficient for the treatment term (parachutes)? Well, probably pretty damn high. Definitely some strongly informative prior; take your pick of historical data, effectiveness rates of physical safety equipment, whatever.<p>From this prior, and making some neutral assumptions via the priors on the other weights, you could figure out the effective sample size a data set would need to disconfirm your prior (e.g. to produce a posterior whose mode on the parachute coefficient sits far away from your strong prior). Sort of like a power analysis, but assuming a fake data set that shows nothing but failed parachutes. How much of that silly data would you need, given your prior?<p>What this would tell you is that you’d need some insane, physically ludicrous amount of data flying in the face of an obvious prior, so what would be the point of running the study?
You’re just going to confirm your prior.<p>So the real question is: how often is this a realistic description of other situations where you want to study a treatment?<p>That’s the thing the author kind of wants to be snarky about, right?<p>But really, it’s pretty fair to say you don’t usually have such a strong prior that the study would be futile, even in cases where you sort of do feel like the conclusion is obvious (e.g. taking Tylenol leads to less pain, college kids prefer drinking to homework). A claim passing some gut test of “obvious” is different from really betting on a prior so one-sided that a study is futile.<p>To me this suggests most of the sort of “duh” RCTs carried out are pretty much fine. Whether a given study is worth it or informative comes down to other priorities: cost, licensing or certification requirements, whether it’s of value to specialists who care about splitting hairs on accurate effect size measurement, etc.
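As a toy version of that disconfirmation arithmetic (swapping the full Bayesian logistic regression for a conjugate Beta-Binomial model of the with-parachute survival rate; all the prior numbers here are invented for illustration):

```python
# Toy version of the "how much contrary data to overturn the prior" idea.
# Instead of a Bayesian logistic regression, model the with-parachute
# survival rate directly with a conjugate Beta prior: each "parachute
# failed" observation shifts the posterior mean, which for a Beta(a, b)
# prior and n straight deaths is a / (a + b + n).

def deaths_needed_to_disconfirm(prior_alpha, prior_beta, threshold):
    """Smallest number of consecutive 'parachute failed' observations that
    drags the posterior mean survival rate down to `threshold` or below."""
    n = 0
    while prior_alpha / (prior_alpha + prior_beta + n) > threshold:
        n += 1
    return n

# A strong prior that parachutes work ~99.9% of the time, worth roughly
# 10,000 pseudo-observations (made-up numbers):
print(deaths_needed_to_disconfirm(9990, 10, 0.5))   # prints 9980
```

Roughly ten thousand consecutive fatal jumps before the posterior would even call it a coin flip, which is the “why bother running the study” point in miniature. A much weaker prior, say Beta(99, 1), gets overturned by about a hundred contrary observations, and that gap is exactly the difference between a futile study and an informative one.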