The article is correct that machine learning doesn't remove bias. If the training data are biased, the output will be biased in the same way. But machine learning doesn't add bias, either: if the training data are unbiased, the output will be unbiased. In this sense, algorithms are fairer than humans.

So if someone wants to argue that a machine learning model is or is not biased, they should base that argument on how the model is trained. For example: suppose a bank wants to use a machine learning model to predict whom to lend to. Historically, human bank managers made those decisions, and they tended to be biased against people from the wrong side of the tracks. There are several possibilities:

* If the bank trains the model on the bank managers' decisions, and it uses ZIP code as a feature, then it will discriminate against people from the wrong side of the tracks just like the human bank managers did.

* If the bank trains the model on the bank managers' decisions, but the only features it uses are monthly income and existing debts, then it will probably be unbiased (although it's still conceivable that some bias leaks in through those features).

* If the bank runs a controlled experiment by approving loans for 100 people at random, and trains the model on which of those loans were paid back, then the model will be fair: it will accurately predict how likely people are to pay back loans, regardless of which side of the tracks they live on.

* If the bank trains the model on loans made by human bank managers, but trains it to predict loan repayment instead of loan approval, then the algorithm will actually _invert_ the bank managers' biases. If the managers never approved loans for people from the wrong side of the tracks unless they were an extraordinarily safe bet, then the algorithm will conclude "people from the wrong side of the tracks always pay back their loans!"

Arguments about machine learning bias should be based on these sorts of specific details, rather than on the assumption that "algorithms aren't biased" or "algorithms are biased".
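To make the first and third scenarios concrete, here is a toy sketch (entirely synthetic data and made-up numbers, all assumptions mine, using scikit-learn): the same classifier either reproduces the managers' bias or ignores ZIP code, depending only on which label it is trained to predict.

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  n = 10_000
  income = rng.normal(5.0, 1.5, n)      # monthly income (thousands)
  debt = rng.normal(1.0, 0.4, n)        # existing debts (thousands)
  zip_flag = rng.integers(0, 2, n)      # 1 = "wrong side of the tracks"

  # Ground truth: repayment depends only on income and debt, not ZIP code.
  repaid = (income - debt + rng.normal(0, 1.0, n)) > 3.5

  # Historical manager decisions: same financial criteria, plus a penalty
  # applied to applicants from the wrong side of the tracks.
  approved = (income - debt - 2.0 * zip_flag) > 3.0

  X = np.column_stack([income, debt, zip_flag])

  # Scenario 1: train on the managers' approvals, ZIP code included.
  m1 = LogisticRegression(max_iter=1000).fit(X, approved)
  # Scenario 3: train on actual repayment, ZIP code still included.
  m2 = LogisticRegression(max_iter=1000).fit(X, repaid)

  print(f"ZIP coefficient, trained on approvals: {m1.coef_[0][2]:.2f}")
  print(f"ZIP coefficient, trained on repayment: {m2.coef_[0][2]:.2f}")

The first coefficient comes out strongly negative (the model learns the managers' bias); the second comes out near zero (the bias doesn't survive when the label is the real outcome). Same algorithm, same features; the difference is entirely in what it was trained on.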