Obviously human bodies are not machines; they're far more complex and less understood than computer hardware. But I can't help thinking of the analogy to running web servers.<p>Medicine today seems a lot like running a web app and never knowing whether anything is broken unless users complain. It sort of works, but with good monitoring and alerting in place you can catch issues earlier or prevent them altogether. And the stakes with your body are much higher than a web app going down: catching something like cancer a few months earlier can literally be the difference between life and death.<p>From what I've read in threads like these, almost everyone in the medical field is strongly opposed to moving in this direction, even seemingly on principle, and I really can't understand why. I get that certain tests can be invasive in their own right, so the cost/benefit has to be weighed case by case. But this argument also has an analogy in operations: you get false positives on your web servers too, and sometimes people get woken up in the middle of the night for no reason. Yet we work to fix noisy alerts one by one, and things generally improve over time. Why is this not possible in medicine?