> How can the minority applicants who have lower scores but high potential be distinguished from those who just have low scores?

> A machine-learning model would be fed historical admissions data, including candidates’ family background and academic achievement, and noncognitive skills such as grit and resilience, along with outcomes of past admission decisions. It would use these data to predict new applicants’ performance — as defined by each institution, such as college grade-point average or income 10 years after graduation. The model could figure out which characteristics best predict performance for various subgroups — for example, how salient SAT scores are for public-school Black students raised in the South by single mothers vs. private-school White kids from the Northeast. If we use only unadjusted test scores, all that context is lost.

I love the idea of using an algorithm to rate candidates [1] and of having a feedback loop to rate the performance of the algorithm [2], but I think Fryer has too much faith in machine learning. I suspect a machine may simply learn that nothing beats a middle-class kid who did well at high school.

[1] The book Noise by Daniel Kahneman et al. has a fascinating description of how algorithms are superior to human judgement: "Meehl reviewed twenty studies in which a clinical judgment was pitted against a mechanical prediction for such outcomes as academic success and psychiatric prognosis. He reached the strong conclusion that simple mechanical rules were generally superior to human judgment...You can surely imagine the response of clinical psychologists to Meehl’s finding that trivial formulas, consistently applied, outdo clinical judgment. The reaction combined shock, disbelief, and contempt for the shallow research that pretended to study the marvels of clinical intuition"

[2] This would fix the problem Paul Graham describes here:
https://twitter.com/paulg/status/1585890585621168128
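The subgroup idea in the quoted passage (that a score like the SAT may carry different predictive weight for different groups) can be sketched with a toy regression. Everything below is a hedged illustration: the feature names, the subgroups, and the weights are synthetic assumptions, not drawn from any real admissions data or from Fryer's actual model.

```python
# Toy sketch: fit a separate linear model per subgroup and compare how
# much predictive weight each one assigns to the same feature (here,
# a standardized "SAT" column). All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, sat_weight):
    # Columns: [SAT (standardized), resilience rating, family income (standardized)]
    X = rng.normal(size=(n, 3))
    # Outcome: a GPA-like score where SAT's true weight varies by subgroup
    gpa = 3.0 + sat_weight * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] \
          + rng.normal(scale=0.1, size=n)
    return X, gpa

def fit_weights(X, y):
    # Ordinary least squares with an intercept column
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [intercept, sat, resilience, income]

# Two synthetic subgroups in which SAT carries different signal
X_a, y_a = make_group(500, sat_weight=0.5)
X_b, y_b = make_group(500, sat_weight=0.1)

w_a = fit_weights(X_a, y_a)
w_b = fit_weights(X_b, y_b)
print(f"SAT coefficient, group A: {w_a[1]:.2f}")
print(f"SAT coefficient, group B: {w_b[1]:.2f}")
```

The per-group fits recover the different SAT weights, which is the "context" Fryer argues a single unadjusted cutoff throws away. It also hints at my worry above: whatever regularities the historical data contains, flattering or not, are exactly what the model will learn.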