FWIW, local demographics have a non-negligible effect on a hospital's specialties. The nearest hospital to a retirement community will get more practice with heart attacks and strokes, and the one in the college town will be good at dealing with alcohol poisoning.
This is so frustrating for me personally. I recently had to choose between several hospitals without knowing anything about the historical patient outcomes at any of them.

A hundred years ago, a doctor named Ernest Codman suggested that hospitals be rated by an "End Result System" [1] in which patient outcomes were measured and compared. Yet only now do we have the "first comprehensive study comparing how well individual hospitals treated a variety of medical conditions", and we still don't know exactly which hospital is best and which is worst.

To me the best way to judge a hospital is blindingly obvious: for each patient at admission, estimate the chance that he or she will be alive after 3 years. For example, a 65-year-old woman who smokes, is overweight, and has stage 2 lung cancer might have a 25% chance of being alive in 3 years. Then compare the predicted outcomes with the actual outcomes. Say Hospital A and Hospital B each admit 100 patients with a 20% predicted chance of surviving 3 years. If after 3 years 30 of those patients from Hospital A are still alive but only 10 from Hospital B, that strongly indicates Hospital A is better, and we could then study Hospital A for ways to improve Hospital B.

What's so disheartening is that this process doesn't involve any special technology. It could have been implemented 100 years ago, albeit with less rigorous statistical methods. We spend a billion dollars evaluating a drug that has a marginal, one-off benefit for a few hundred thousand people, but we ignore a process that is cheap and offers long-lasting benefits to millions. It seems the medical establishment is too powerful in this case.

[1] https://en.wikipedia.org/wiki/Ernest_Amory_Codman
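To make the proposal concrete, here is a minimal sketch of the predicted-vs-actual comparison using the toy numbers above. It assumes some upstream risk model has already produced a 3-year survival probability for each patient (that model is the hard part, and is entirely hypothetical here), and the crude binomial test stands in for whatever stratified analysis a real study would use:

```python
# Minimal sketch of the proposed "predicted vs. actual" comparison.
# Assumes each admitted patient already has a risk-model estimate of
# 3-year survival; the risk model itself is hypothetical here.
from scipy.stats import binomtest

def evaluate_hospital(predicted_probs, survived_flags):
    """Compare observed 3-year survivors against the risk-adjusted expectation."""
    expected = sum(predicted_probs)   # expected number of survivors
    observed = sum(survived_flags)    # actual number of survivors
    n = len(predicted_probs)
    # O/E ratio > 1 means the hospital beat its case-mix-adjusted expectation.
    oe_ratio = observed / expected
    # Crude significance check, treating the mean predicted probability as a
    # single binomial rate; a real study would stratify by risk group.
    p_value = binomtest(observed, n, expected / n).pvalue
    return oe_ratio, p_value

# The toy numbers from the comment: 100 patients, each predicted 20% survival.
hospital_a = evaluate_hospital([0.20] * 100, [True] * 30 + [False] * 70)
hospital_b = evaluate_hospital([0.20] * 100, [True] * 10 + [False] * 90)
print(f"Hospital A: O/E = {hospital_a[0]:.2f}, p = {hospital_a[1]:.3f}")
print(f"Hospital B: O/E = {hospital_b[0]:.2f}, p = {hospital_b[1]:.3f}")
```

Hospital A comes out well above expectation (O/E = 1.5) and Hospital B well below (O/E = 0.5), which is exactly the signal you'd want to investigate.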
Outcomes vary widely by doctor/surgeon as well, but the medical profession has successfully resisted meaningful measurement of results for many years now.
> The study did not disclose which hospitals had which results. Under the terms of the agreement to receive the data, the researchers agreed to keep the identities of the hospitals confidential.

Well, that's not too useful for the patient... oh, I mean "consumer".
They link to a heart surgery compilation by the Society of Thoracic Surgeons (http://www.sts.org/quality-research-patient-safety/sts-public-reporting-online) which is fun to explore.

In metro LA, the top-tier hospitals (Cedars-Sinai, UCLA) have noticeably better outcomes than regular hospitals. The difference is something like 98.7% of patients surviving a valve replacement ("AVR") versus 97.9%. Flip it around and it's a 1.3% versus 2.1% chance of dying: definitely noticeable.

OTOH, UCSF (presumably top-tier; SF residents, correct me if I'm mistaken) has a 2.3% chance of dying. Maybe it's not a top-tier facility for AVR?

And further, a small midwestern hospital in Hays, KS (pop. 21,000) has a 3.6% chance of death. That's *huge* next to Cedars-Sinai at 1.3%!
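A quick back-of-the-envelope on those mortality figures (numbers transcribed from the STS pages as quoted above; treat them as illustrative, since the site updates):

```python
# Relative risk of dying in an AVR, using the rates quoted above.
avr_mortality = {
    "Cedars-Sinai": 0.013,            # top tier, metro LA
    "Regular LA hospitals": 0.021,
    "UCSF": 0.023,
    "Hays, KS": 0.036,
}
baseline = avr_mortality["Cedars-Sinai"]
for hospital, rate in avr_mortality.items():
    print(f"{hospital}: {rate:.1%} mortality, {rate / baseline:.1f}x the baseline")
```

Hays comes out at roughly 2.8x the Cedars-Sinai rate, which is why an absolute gap of 2.3 percentage points reads as huge.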
So, 4 options:

1. Make the good hospitals worse.
2. Make the bad hospitals better.
3. A combination of 1 and 2.
4. Ignore the data.

Most people will want to choose 2 but will inadvertently choose 3, when in reality we should have chosen 4.
Do you think the Secret Service knows, for example, which hospital is best for gunshot victims? For stroke? For heart attacks? Of course they do. And yet, when Hillary collapsed at the 9/11 Memorial, they took her not to a hospital but to her daughter's apartment. Seems odd. Very odd.