When I took my AI class, my professor would assign one paper a week and we had to turn in a "critique" of that paper. These papers were, of course, the seminal classics in the field, because what else would you want your intro class to be reading? I complained to the professor that it was a bit silly to critique the <i>seminal papers in the field</i>, and that if the goal was just to make sure we read them, we could turn in summaries (which our critiques typically started with anyhow). But no, we were supposed to learn how to critique papers.<p>In hindsight, I suppose I should have appealed to the fact that critiquing only the <i>seminal papers in the field</i> is a serious data set bias: trying to "learn to critique" on the <i>best papers ever written</i> in a field was less likely to produce a useful "critiquing" skill and more likely to produce some overfitted garbage skill. But, hey, I hadn't taken AI yet! I didn't know how to express that.<p>(It did produce a garbage skill, too. I tried writing "real" critiques using my brain, but after getting Cs and Ds on the first couple, I learned my lesson and mechanically spat out "needs more data", "should have studied more", and, as appropriate, "sample sizes were too small". Except for that last one, which went in regardless of the study. Bam. A series of easy As. Sigh. I liked college overall, but there were some places I could certainly quibble.)<p>Anyhow, <i>this</i> is the paper that should be assigned toward the end of the semester, with students asked to "critique" it. It's a much better member of the training data set for that sort of skill.