I'm on a selection committee for an organization at my university. The first part of our selection process involves having current members read incoming applications. Current members are broken into groups of O(10), and each group reads a set of applications, assigning each application a 1-10 score on each of five attributes.

Obviously, some reviewers will be more generous than others, so there is a need to normalize the feedback before making use of it (I look at it like cleaning up data from a bunch of uncalibrated sensors). I thought I would ask this here because I think it's an interesting problem that may have applications elsewhere, too. I have some of my own ideas, but I'll withhold them for now so as not to bias the discussion.
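For concreteness, here is a minimal sketch of one common baseline (not the withheld ideas above, which aren't stated): per-reviewer z-score normalization, where each rating is rescaled against that reviewer's own mean and standard deviation so a stingy reviewer's 7 and a generous reviewer's 9 can land in the same place. The reviewer names and scores are made up for illustration.

    from collections import defaultdict
    from statistics import mean, pstdev

    # (reviewer, applicant) -> raw 1-10 score for one attribute; hypothetical data.
    raw = {
        ("alice", "app1"): 9, ("alice", "app2"): 8, ("alice", "app3"): 10,
        ("bob",   "app1"): 5, ("bob",   "app2"): 3, ("bob",   "app3"): 6,
    }

    # Collect each reviewer's scores to compute their personal mean/stdev.
    by_reviewer = defaultdict(list)
    for (reviewer, _), score in raw.items():
        by_reviewer[reviewer].append(score)

    stats = {r: (mean(s), pstdev(s)) for r, s in by_reviewer.items()}

    # Z-score each rating against its reviewer's own distribution; a stdev
    # of 0 (a reviewer who gave identical scores everywhere) carries no
    # ranking information, so those ratings map to 0.
    normalized = {}
    for (reviewer, applicant), score in raw.items():
        mu, sigma = stats[reviewer]
        normalized[(reviewer, applicant)] = (score - mu) / sigma if sigma else 0.0

    # Average the normalized scores per applicant.
    totals = defaultdict(list)
    for (_, applicant), z in normalized.items():
        totals[applicant].append(z)

    for applicant, zs in sorted(totals.items()):
        print(applicant, round(mean(zs), 2))

One caveat with this baseline: it assumes each reviewer's pool of applications is comparable. Since the groups here read disjoint sets of applications, a reviewer's low mean might reflect a genuinely weak batch rather than stinginess, which is part of what makes the problem interesting.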