Here is the longread from the Dutch organisation that lifted the lid:

https://www.versbeton.nl/2023/03/computer-zegt-vrouw-hoe-een-rotterdams-algoritme-jonge-alleenstaande-moeders-discrimineerde/

Scroll to the bottom and you can calculate your own risk score!

The discussion focuses on the ethics, but what I gather from the article is that the risk score is also amateur hour from an AI point of view.

From the Dutch article (translated):

By mistake, we were given the data the algorithm was trained on in 2020, which allowed us to find out which patterns the model had learned.
We discovered several problems with the algorithm. For example, the model learned to make generalizations from a small number of people in the data, subjective variables (grooming) and proxy variables (language) were used, and the final selection was based on a poorly performing calculation method.
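To make that first point concrete, here is a minimal sketch of how a model can "learn" a pattern from a handful of people. Everything in it is an assumption for illustration: synthetic data, made-up feature names, and a generic gradient-boosted classifier, not the actual Rotterdam model or its features. A 20-person subgroup in which a few people happen to carry positive labels ends up with a far higher average risk score than everyone else.

```python
# Illustrative only: synthetic data, not the Rotterdam model or its real variables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Majority group: 5,000 people, ~5% positive labels, plus one unrelated feature.
n_major = 5000
X_major = np.column_stack([np.zeros(n_major),           # indicator: not in the small group
                           rng.normal(0, 1, n_major)])  # some unrelated numeric feature
y_major = rng.random(n_major) < 0.05

# Tiny subgroup: only 20 people, of whom 6 happen to be labeled positive.
n_small = 20
X_small = np.column_stack([np.ones(n_small),
                           rng.normal(0, 1, n_small)])
y_small = np.zeros(n_small, dtype=bool)
y_small[:6] = True

X = np.vstack([X_major, X_small])
y = np.concatenate([y_major, y_small])

model = GradientBoostingClassifier(random_state=0).fit(X, y)
scores = model.predict_proba(X)[:, 1]

print("mean risk score, majority group:", round(scores[:n_major].mean(), 3))
print("mean risk score, tiny subgroup: ", round(scores[n_major:].mean(), 3))
# The 20-person subgroup gets a much higher average score, generalized from just 6 cases.
```

Swap the group indicator for something like "speaks poor Dutch" and you also get the proxy-variable problem for free: the model never sees a protected attribute, but splits on something strongly correlated with it.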