
Best Papers vs. Top Cited Papers in Computer Science

113 points by fbeeper over 10 years ago

10 comments

sqrt17 over 10 years ago

As an NLP researcher, I see lots of NLP papers on top of AAAI. I think the high citation count is due to

(i) people in NLP actually bothering to tell others why their approach is interesting,

(ii) other people being interested in the same / a similar kind of thing [avoiding the discipline-of-one problem that niche AI applications would have], and

(iii) NLP having a reasonably developed "canon" of what counts as must-cite papers. This canon is heavily biased towards US work, and towards people who write decent explanations of what they do, but at least it makes sure that people know about the big problems and failed (or not-quite-failed-as-badly-as-the-others) solutions.

What you see in other conferences is that the "Best paper" awards go to (i) more theoretical papers which still have issues to solve before people can use the approach (nothing wrong with those!), in (ii) subfields that are currently "hot". Whereas the most-cited papers are (i) more obviously about things that a dedicated person could apply in practice, and (ii) in a subfield that is obscure at the time but will become more popular in the following years.
jldugger over 10 years ago

Reviewing SIGMOD, it appears that a lot of the citations earned are less about innovative research and more about everyone using the software tools they published.

And a survey paper in the field of big data analysis (survey papers are citation bait, but won't be pulling in many grants or awards).
ep103 over 10 years ago

Is it possible that papers that get the awards help give scientists new ways of looking at problems, while the papers that are frequently cited are more likely to follow established viewpoints and back them with hard data I can use to justify later experiments?

What I mean is, if a paper makes me think "wow, I've never thought of this that way before, I wonder if I could try something like that with this..." I probably wouldn't cite it, right? It's not directly related. But I would probably give it an award for best paper because it helped me come up with a new approach to my own problem.

Disclaimer: I am not a scientist.
thomasahle over 10 years ago

You could also see this as 'citation counting' not being a very precise measure of paper quality/importance.
Bill_Dimm over 10 years ago

Nitpicking, but why are they claiming to provide MAP (mean average precision) scores when their description and equation indicate that they are computing average precision, not MAP? According to the definition of MAP [1] that they link to, MAP is computed across multiple queries, while average precision is computed for one [2]. Furthermore, they truncate their calculation to only consider the top 3 cited papers (i.e., they don't go all the way to 100% recall), so it's not even really the average precision.

[1] http://en.wikipedia.org/wiki/Information_retrieval#Mean_average_precision

[2] http://en.wikipedia.org/wiki/Information_retrieval#Average_precision
psuter over 10 years ago

Conference organizers are well aware that best paper awards are not perfect predictors of importance or popularity. Many top conferences have specifically introduced separate awards ("most influential", "test of time", etc.) granted, e.g., 10 years after publication.
raphman_ over 10 years ago

In 2009, Bartneck et al. did a scientometric analysis of papers presented at CHI, the most prestigious academic HCI conference [1]:

"The papers acknowledged by the best paper award committee were not cited more often than a random sample of papers from the same years."

[1] http://www.bartneck.de/publications/2009/scientometricAnalysisOfTheCHI/
j2kun over 10 years ago

I think it's far more interesting to just see what the top cited papers are every year (after the fact) than to compare with the best paper. Best paper awards are given for a lot of reasons that aren't consistent across conferences, or even across years of the same conference.
eksith over 10 years ago

A cursory browse shows an interesting pattern in the names of the researchers.

Edit: Perhaps I should clarify: it's entirely possible that exposure in the West has a large part to do with the media, who often don't wade too deeply into scientific matters.
zenciadam over 10 years ago

Citations are just a game, and they are thoroughly gamed.