As far as I can tell the algorithm was not flawed. At least not in the sense that it produced different outputs than intended. The algorithm was designed to favor older patients, and it did just that. The result was that older patients received more livers. As was the intention.<p>So it is not the algorithm that is flawed, it is the <i>policy</i> that is flawed. A policy that was flawlessly and faithfully executed by the algorithm, exactly as it was designed.<p>I'm not saying the policy was bad or unfair. I have no idea, to be honest. If you have one liver and two patients, it's always going to be a hard choice. But I don't think it is helpful to say the algorithm was misbehaving when it was not.<p>In fact, as mentioned in the article, the outcomes of the algorithm are regularly checked by humans. And when they found a genuine bug (misclassifying people with liver cancer), the algorithm was fixed. Isn't that more or less exactly what you want? <i>Humans</i> thinking about policy, then having a computer execute the policy, while humans regularly check its output to see if the algorithm aligns with the intention.
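To make the policy-vs-algorithm distinction concrete, here is a minimal sketch of that pattern: the policy lives in a scoring function, the algorithm just ranks by it. All names, weights, and the severity scale are invented for illustration; this is not the real transplant-benefit model.

```python
# Hypothetical sketch: a policy encoded as a deterministic scoring
# function, with the "algorithm" reduced to ranking by that score.
# Every weight below is made up for illustration.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    age: int
    illness_severity: float  # 0.0 (mild) .. 1.0 (critical), invented scale

def benefit_score(p: Patient) -> float:
    # Invented policy: weight severity heavily, plus a small age bonus.
    # A policy flaw (or a bug like the cancer misclassification the
    # article mentions) would live here, not in the ranking below.
    return 10.0 * p.illness_severity + 0.1 * p.age

def allocate(liver_count: int, waiting_list: list[Patient]) -> list[Patient]:
    # The algorithm itself: rank by the policy's score, take the top N.
    ranked = sorted(waiting_list, key=benefit_score, reverse=True)
    return ranked[:liver_count]

patients = [
    Patient("A", age=70, illness_severity=0.6),
    Patient("B", age=30, illness_severity=0.6),
]
chosen = allocate(1, patients)
# With equal severity, the age term decides: patient A (age 70) is chosen.
```

The point of separating `benefit_score` from `allocate` is exactly the audit loop described above: humans can inspect and adjust the scoring function while the ranking machinery stays trivially correct.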
Just finished reading this. I get why it's done, but it's really tragic and sad that an algorithm is deciding who lives or dies. Could you imagine working on something like that? I don't think I'd have the stomach.
I didn't get this from the article, even though it was written by the artificial intelligence editor: did the algorithm use some machine-learning technique, or is it a manually programmed calculator? Although in both cases it seems to have gone through a lot of the stuff that can go wrong with this sort of thing.
I believe (but am not 100% certain) that this is the calculator mentioned in the article:<p><a href="https://transplantbenefit.org/" rel="nofollow noreferrer">https://transplantbenefit.org/</a>