Similarly, the main calculator used in the US to estimate 10-year risk of a cardiovascular incident literally cannot compute scores for people under 40.[0] There are two consequences. The first is that if you are under 40, you will never encounter a physician who believes you are at risk of heart attack or stroke, even though over 100,000 Americans under 40 experience such an incident each year. The second is that even if your heart attack or stroke results from their negligence, they will never be liable, because that calculator is considered the standard of care in malpractice law!<p>Governing bodies write these guidelines, which act like programs, and your local doctor is the interpreter.[1] When was the last time you found a bug that could be attributed to the interpreter rather than the programmer?<p>[0] <a href="https://tools.acc.org/ascvd-risk-estimator-plus/#!/calculate/estimate/" rel="nofollow">https://tools.acc.org/ascvd-risk-estimator-plus/#!/calculate...</a><p>[1] It’s worth considering what medical schools, emergency rooms, and malpractice lawyers are analogous to in this metaphor.
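The "guideline as program" behavior can be sketched in a few lines. This is emphatically not the real Pooled Cohort Equations; the function name, coefficients, and risk-factor fields are invented for illustration, and only the hard age gate reflects the behavior described above:

```python
# Hypothetical sketch of a guideline risk calculator with a hard age gate.
# NOT the actual ASCVD model; names and numbers are illustrative only.

def ten_year_ascvd_risk(age: int, risk_factors: dict) -> float:
    # The underlying model was only validated for ages 40-79, so the tool
    # simply refuses to produce a score outside that input domain.
    if not 40 <= age <= 79:
        raise ValueError("10-year risk is only defined for ages 40-79")
    # ... model arithmetic would go here ...
    return 0.05  # placeholder score

# A physician "interpreting" this guideline never sees a risk number for
# a 38-year-old, no matter how alarming their risk factors are:
try:
    ten_year_ascvd_risk(38, {"smoker": True, "ldl": 220})
except ValueError as err:
    print(err)  # the patient falls outside the program's input domain
```

The point of the sketch is that the under-40 patient isn't scored as low-risk; they are never scored at all, which is a different and arguably worse failure mode.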
<p><pre><code> > The choice of a 5-year period seems to be because of data availability
</code></pre>
Also known as "looking for the keys under the lamp-post" <a href="https://en.wikipedia.org/wiki/Streetlight_effect" rel="nofollow">https://en.wikipedia.org/wiki/Streetlight_effect</a> (which links to <a href="https://en.wikipedia.org/wiki/McNamara_fallacy" rel="nofollow">https://en.wikipedia.org/wiki/McNamara_fallacy</a> which I hadn't heard of before, but which seems to fit very well here too).<p><pre><code> > An algorithmic absurdity: cancer improves survival
> [...]
> algorithmic absurdity, something that would
> seem obviously wrong to a person based on common sense.
</code></pre>
A useful term!<p>> optimize “quality-adjusted” life years<p><a href="https://repaer.earth/" rel="nofollow">https://repaer.earth/</a> was posted on HN recently as an extreme example of this hehe
I think I've worked in software/data long enough to be very, very suspicious of a one-size-fits-all algorithm like this. I would be very hesitant to entrust something like organ matching to a singular matching system.<p>There are so many ways to get it wrong: bad data, bad algorithm design or requirements, mistakes in implementation, people who understand the system well enough to game it, etc.<p>Human systems have biases, but at least the biases are diverse when there are many decision makers. If you put something important behind a single algorithm, you inadvertently lock in one fixed bias.
I think the generalized takeaway from this article, and the position held by the authors, is: "Overall, we are not necessarily against this shift to utilitarian logic, but we think it should only be adopted if it is the result of a democratic process, not just because it’s more convenient." and "Public input about specific systems, such as the one we’ve discussed, is not a replacement for broad societal consensus on the underlying moral frameworks.".<p>I wonder how exactly this would work. As the article identifies, health care in particular is continuously barraged with questions of how to allocate limited resources. I think the article is right to say that the public was probably in the dark about the specifics of this algorithm, and that the transition to utilitarian decision-making frameworks (i.e. algorithms) was probably <i>not</i> arrived at by a democratic process.<p>But I think if you had run a democratic process on the principle of using utilitarian logic in health care decision making, you would have ended up with consensus to go ahead. And then this returns us to this specific algorithmic failure. What is the scalable process for retaining democratic oversight of these algorithms? How far down do we push? Emergency rooms have triage procedures. Are these in scope? If so, what do the authors imagine the oversight and control process would look like?
It’s worth noting that the algorithm in question is <i>not</i> any kind of AI or ML as we might know it from the tech industry. Underneath, it is plain old statistical modelling.<p>The article doesn’t make this clear, and the name of the blog doesn’t help.
The Financial Times article discussed on HN:<p><a href="https://news.ycombinator.com/item?id=38202885">https://news.ycombinator.com/item?id=38202885</a> (22 comments)
What can go wrong when you let government agencies with no expertise develop and maintain AI models and algorithms, right?<p>And then we get articles saying that AIs are biased, racist, and don’t work as expected, and that AI in general as a technology has no future.<p>I can even predict what their solution will be lmao: pay atrocious sums of money to big consulting agencies with no expertise to develop it for them, and fail again.
So, if I as a 38-year-old had a mild liver impairment which could reduce my life expectancy to 60 (22 years from now) I should get priority over a 60-year-old with a debilitating, excruciating condition which will end his life in six months, merely because his life expectancy with the transplant may only be 70?<p>That’s an outrageous and obscene utility calculation to propose and it should be obviously so to just about anyone.
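The arithmetic behind that objection is easy to make explicit. A pure life-years-gained score (one common utilitarian ranking rule; whether it is the exact rule the article's algorithm uses is an assumption here) compares only the difference in expectancy, with the additional assumption that the 38-year-old would live to roughly 80 with a transplant:

```python
# Life-years-gained scoring: rank candidates purely by expected years added.
# All numbers come from the hypothetical in the comment above, except the
# assumed post-transplant expectancy of ~80 for the younger patient.

def life_years_gained(expectancy_without: float, expectancy_with: float) -> float:
    return expectancy_with - expectancy_without

# 38-year-old, mild impairment: dies at 60 without, ~80 with a transplant.
young = life_years_gained(60.0, 80.0)   # 20.0 years gained

# 60-year-old, debilitating condition: six months left without, 70 with.
old = life_years_gained(60.5, 70.0)     # 9.5 years gained

# The rule ranks on years gained alone, ignoring urgency and suffering,
# so the mildly impaired 38-year-old gets priority.
print(young, old)  # 20.0 9.5
```

The scoring rule sees a 20-versus-9.5 comparison and nothing else; urgency, suffering, and certainty of death never enter the calculation, which is exactly what makes the result feel obscene.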