
Ask HN: A scheme for normalizing crowdsourced evaluations?

2 points by shaddi over 15 years ago
I'm on a selection committee for an organization at my university. The first part of our selection process involves having current members read incoming applications. Current members are broken into groups of O(10), and each group reads a set of applications, assigning 1-10 scores to each of five attributes.

Obviously, some reviewers will be more generous than others, so there is a need to normalize the feedback to make use of it (I look at it like cleaning up data from a bunch of uncalibrated sensors). I thought I would ask this here because I think it's an interesting problem that may have application elsewhere, too. I have some of my own ideas, but I'll withhold them for now so as to not bias any discussion.
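One common baseline for the problem described above (this is a hedged sketch, not a scheme proposed in the post) is per-reviewer z-score normalization: re-express each reviewer's scores in units of standard deviations from that reviewer's own mean, so a stingy 6 and a generous 9 can map to the same normalized value. The `normalize_scores` function and its tuple-based input format are illustrative assumptions, not anything from the original question.

```python
from collections import defaultdict

def normalize_scores(ratings):
    """Per-reviewer z-score normalization.

    ratings: list of (reviewer, applicant, raw_score) tuples.
    Returns a dict mapping (reviewer, applicant) to a normalized score:
    how many standard deviations the raw score sits above or below
    that reviewer's personal average.
    """
    # Group each reviewer's raw scores together.
    by_reviewer = defaultdict(list)
    for reviewer, applicant, score in ratings:
        by_reviewer[reviewer].append(score)

    # Compute each reviewer's mean and standard deviation.
    stats = {}
    for reviewer, scores in by_reviewer.items():
        mean = sum(scores) / len(scores)
        var = sum((s - mean) ** 2 for s in scores) / len(scores)
        # Fall back to 1.0 if a reviewer gave identical scores everywhere,
        # to avoid dividing by zero.
        stats[reviewer] = (mean, var ** 0.5 or 1.0)

    return {
        (reviewer, applicant): (score - stats[reviewer][0]) / stats[reviewer][1]
        for reviewer, applicant, score in ratings
    }
```

For example, a reviewer who scores everything 8-10 and one who scores everything 2-4 would, after normalization, rank applicants on a comparable scale. A known caveat: this assumes each reviewer sees a roughly comparable pool of applications; with small per-reviewer samples or uneven assignment, the per-reviewer mean itself is noisy.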

No comments yet.