科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声 (Tech Echo). All rights reserved.

Observing many researchers using the same data (2022)

16 points · by throwaway13337 · over 1 year ago

2 comments

actuallyalys · over 1 year ago
Many of the results had error bars overlapping zero or were fairly close (see figure 1). For results like this, it's not too surprising that other researchers would detect a faint positive or a faint negative instead. That doesn't really undermine the usefulness of the research, though. After all, it's not that uncommon for studies to end up with small effects. When reading a paper with that magnitude of result, it's good to remember that different researcher decisions could nudge that range into statistical insignificance.

I also noticed most teams produced multiple models, so it seems like part of the variation could come down to that. For example, most teams produced at least one model per survey question. It could be that basically all models based on question one showed a negative AME and all models based on question six produced a positive AME, resulting in the models disagreeing but the teams basically agreeing. Presumably their analysis to identify particular decisions explaining the variance between models would have picked that out if it were down to something that simple.
melagonster · over 1 year ago
Thanks to all the authors who spent time on this experiment. It takes courage to disclose something like this.