Overall this reflects a bias in the dataset the "algorithm" was trained on, i.e. in the decisions humans made (be it the doctors, the insurers, or the general context, since it's based on predicting future cost of care, etc.). This reminds me of another example: a recruitment "algorithm" at Amazon that was shut down for bias against women [0].

That this was found in the "algorithm" means a) that it was checked for biases, which is already somewhat good news, and b) that it can probably be fixed, or at the very least tested.

This is just my opinion, but I think that, generally speaking, this is good. Even though detecting and fixing these biases may not be straightforward, I like to think it's probably orders of magnitude simpler than fixing the same biases in humans.

[0] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
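
To give an idea of what "tested" can look like in practice, here's a minimal sketch of a post-hoc check: compare how often each group gets flagged for extra care, and how much actual need those flagged people have. Everything below (the group labels, the "need" measure, the scoring rule) is made up for illustration, not taken from the article.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.choice(["A", "B"], size=n)
    need = rng.gamma(shape=2.0, scale=1.0, size=n)  # stand-in for true health need

    # Hypothetical biased score: group B gets lower scores at the same need.
    scores = need - 0.3 * (group == "B") + rng.normal(0, 0.2, size=n)

    threshold = np.quantile(scores, 0.9)  # top 10% are referred for extra care

    for g in ["A", "B"]:
        mask = group == g
        selected = scores[mask] >= threshold
        print(
            f"group {g}: selection rate = {selected.mean():.3f}, "
            f"mean need among selected = {need[mask][selected].mean():.2f}"
        )

If the selection rates differ while the need among those selected also differs in the same direction, that's the kind of disparity the researchers flagged, and it's something you can put in a regression test once you know to look for it.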