So, I know the authors of this report personally. I sincerely have the highest respect for them; they do solid work, and if I see something by them I take it seriously, because I know I disagree with them at my peril.<p>However, I also feel like findings like this get distorted and manipulated, and used to refute strawman versions of arguments against the use of standardized testing. Some of the recent arguments being used to reinstate standardized testing as required seem irresponsible to me, even though I don't see required standardized testing as indefensible per se (my own opinions about all of this are complex and don't fall neatly into either "side", so take this as coming from someone who is frustrated with how things get oversimplified on both sides, not as advocacy for a particular position).<p>Here are some things to keep in mind:<p>1. This is all about <i>predicting freshman GPA</i>, which is itself questionable as a criterion. If your sole question is "how will this individual do in our college in their first year?", it's reasonable. But if you are asking "how well would this individual master the skills necessary to succeed in X, Y, or Z role?", it's entirely different. Freshman GPA predicts graduating GPA less well than you might think, and both are poor proxies for "real world" behavior. They're not unrelated to real-world behavior, but the relationship is weak enough that <i>this is precisely the point social justice advocates are making</i>.<p>A lot of people would say that neither the SAT <i>nor</i> freshman GPA is relevant, and that that's the whole point.<p>2. A correlation of 0.47 or 0.44 is something to pay attention to, but it's also far from perfect.
If you want to remind yourself how much noise is in a correlation of that size, look here:<p><a href="http://www.analytictech.com/mb313/correl4.jpg" rel="nofollow">http://www.analytictech.com/mb313/correl4.jpg</a><p>That's a <i>ton</i> of noise at the individual level, which is the level we care about when we're talking about admissions decisions.<p>The problem with a lot of this isn't the SAT itself; it's that an r of 0.45 inevitably gets turned into a blind metric that ignores all that real variation. Once the number is published, the school is under pressure to pass over applicants who have <i>all kinds of other evidence</i> of competency, because admitting them makes its numbers look bad.<p>Let me put it this way: if I were selling you a bathroom scale, and showed you that scatterplot of my scale's readings against actual weight, would you buy it? You shouldn't, but we're making consequential life decisions based on that level of noise.<p>The Dartmouth report conveniently ignored that noise, when that noise is about 75% of the problem.<p>3. Relatedly, things look vastly different when you start taking population-level trends like these and using them to make individual decisions around a threshold. It's very easy to say "a 0.46 correlation is pretty substantial, we should require SATs", but that 0.46 correlation is computed across the <i>entire range</i> of scores. It includes people whose SAT scores are very, very low; it's not computed only over the range where decisions tend to be made, and correlations shrink when you restrict to that range. And to use the test as a decision metric, you have to assume that the College Board's SES index is a perfect summary of what might be said for every applicant to your college. Maybe someone is upper class but their parents were both killed in a plane crash. Maybe they are middle class but come from an abusive family.
Maybe they are applying as a much older candidate, or a much younger one, and the meaning of the scores is really different.<p>In a sane world where we can have nice things, the admissions committee would look at this and make exceptions. This is the idea behind making test scores optional. But when you make them required, and schools are required to post them, there's pressure to take the highest scores regardless of all the other information.<p>4. About the other information: in Table 2, they show that HS GPA is actually less correlated with SES than SAT score is. Other papers with similarly large amounts of data have shown that HS GPA is actually slightly more correlated with freshman GPA than SAT score is. So is the SAT <i>necessary</i>? Probably not. We can argue about grade inflation and so on, but the numbers are the numbers, and if there's an alternative that is less correlated with SES and reflects a long sample of school behavior rather than a few hours on one day, why wouldn't you prefer it?<p>5. Note their Figure 1b. They spend a lot of time talking about this model, and in some ways it's the focus of their paper, evaluating it against the alternative. But related to my first point, the model in Figure 1b says nothing about the extent to which a college, or anyone else for that matter, might want to select on the process it represents. What does it matter if test score predicts, say, some other test score composite, when both are influenced by SES?<p>In widely used intelligence tests, there were questions about things like 19th-century European literature and classical music (I think they're still there, but I can't remember offhand; they might have been made optional now). On the one hand, yes, 19th-century European literature and classical music are great, and knowledge of them probably reflects some memory ability and so on, but are questions about them really how you want to evaluate someone's skillset?
Maybe it is, maybe it isn't, but I can tell you that if you're not knowledgeable about those subjects you would not get credit on those questions.<p>In many ways, for many people, the SAT and college freshman GPA are kind of similar. I don't feel that way myself, but this is the classic "book smarts" versus "real smarts" issue that has probably come up since the beginning of humanity. Showing that book smarts predict other book smarts just isn't important in some paradigms.<p>Again, I'm not anti-standardized-testing. I think it's useful. But I also think the way it's been used in the past, and continues to be used, is in fact broken. It's not even necessarily a problem with the <i>tests</i>; it's a problem with the way they get <i>used</i>. But to paraphrase a famous educational psychologist: if you have a thing that people tend to misuse, every single time, and there are good alternatives, maybe there is <i>something</i> about that thing that's a problem when you put it in the hands of people, and the thing either shouldn't be used, or there should be rules with teeth in place to prevent it from being misused.
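To make point 2 above concrete, here's a quick simulation. This is my own illustration, not anything from the report: the variable names, the r = 0.45 value, and the one-standard-deviation cutoff are all my assumptions. It draws a large bivariate-normal sample with correlation 0.45 between "test score" and "outcome" (both standardized), then asks how many applicants a full SD <i>below</i> the mean on the test nonetheless land above the median outcome.

```python
# Illustrative only: how much individual-level noise does r ~= 0.45 imply?
# "score" and "outcome" are standardized stand-ins for test score and
# freshman GPA; nothing here comes from the actual Dartmouth data.
import numpy as np

rng = np.random.default_rng(0)
r = 0.45
n = 200_000

score = rng.standard_normal(n)
# outcome = r*score + sqrt(1 - r^2)*noise has correlation r with score
outcome = r * score + np.sqrt(1 - r**2) * rng.standard_normal(n)

# sanity check: the empirical correlation should be close to 0.45
emp_r = np.corrcoef(score, outcome)[0, 1]

# variance in the outcome "explained" by the test: r^2 = 0.2025, about 20%
explained = r**2

# applicants a full SD below the mean on the test...
low = score < -1.0
# ...who nonetheless end up above the median outcome
beat_median = (outcome[low] > 0.0).mean()

print(f"empirical r                      : {emp_r:.3f}")
print(f"variance explained               : {explained:.0%}")
print(f"low scorers above median outcome : {beat_median:.0%}")
```

Run it and you'll see that a meaningful fraction of the "low scorers" outperform the median: exactly the individual-level variation that a hard score cutoff throws away.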