It's interesting to note that the 5th highest ranked publication is arXiv. For those who aren't familiar with it, arxiv.org is an open-access repository of academic papers, mostly in the quantitative sciences. In my field (computer science) it is standard practice to deposit a copy of one's papers on arXiv before submitting them for publication, and arXiv is the place to find the latest research.<p>There is currently a lot of hand-wringing in academia about open-access publication. Everyone wants it, and it is technically trivial to switch a field over (machine learning has done so, for the most part), but it requires the leaders in the field to drive the change, and they are usually too invested in the status quo. What the high ranking of arXiv suggests to me is that while people pay lip service to the idea that the (mostly closed) journals are important and hold the definitive version of a publication, in reality no one gives a damn and just goes to arXiv when they want to read something.
So it looks like with this method, if a journal publishes more papers, it has more of a chance to boost its h5-index? That probably accounts for arXiv's high ranking, and for PLoS One beating out PLoS Biology.<p>One problem with impact factors is the way a few articles can account for the majority of citations. For instance, a bioinformatics method that is widely used can attract thousands of citations, boosting the journal's impact factor by several points. This method doesn't solve that problem; it expressly focuses on the top n articles and ignores the impact of the remainder. For instance, PLoS One's score of 100 means its top 100 articles each received at least 100 citations - it says nothing about the distribution of the rest.
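To make the arithmetic concrete, here is a minimal sketch (in Python, with made-up citation counts) of how an h-index is computed from a list of per-article citation counts; for Google's h5-index, the list would be restricted to articles published in the last five complete years:<p><pre><code>def h_index(citations):
    # The h-index is the largest h such that h articles
    # have at least h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical venue whose articles have these citation counts:
print(h_index([150, 120, 100, 90, 3, 2]))  # prints 4
</code></pre>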
It's nice to see that Google is adding features to Scholar. There's concern in the library community that it will go away, since it's not a revenue-producing service.<p>Incidentally, Microsoft Academic Search is pretty impressive so far. They've added many features, and they also have an API that is pretty easy to use, which Scholar doesn't offer.<p><a href="http://academic.research.microsoft.com/" rel="nofollow">http://academic.research.microsoft.com/</a>
There are definitely things that skew the index and might not necessarily reflect the quality of the journal. For example, the 20th-ranked journal by h5-index is Nucleic Acids Research (NAR). But when you look at the articles contributing to NAR's index, you see they are dominated by articles announcing or simply cataloging an important database. These get cited very extensively, because any time you use a database you need to cite it, but they aren't what I would call high-impact research articles. NAR just happens to be a journal with a special annual Database issue where bioinformaticians can drop an article describing their useful database.<p>EDIT: It would be fair to say that since a database is so widely cited, it is important. So maybe the index is more robust than I originally considered. But something still seems skewed here.
Rob J Hyndman has a very nice review of Google Scholar Metrics [1]. Here is his closing quote:<p><pre><code> In summary, the h5-index is simple to understand, hard to
manipulate, and provides a reasonable if crude measure of
the respect accorded to a journal by scholars within its
field.
While journal metrics are no guarantee of the quality of a
journal, if they are going to be used we should use the
best available, and Google’s h5-index is a big improvement
on the ISI impact factor.
</code></pre>
[1] <a href="http://robjhyndman.com/researchtips/google-scholar-metrics/" rel="nofollow">http://robjhyndman.com/researchtips/google-scholar-metrics/</a>
The only CS conference or journal I saw on the list was "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)". That's not what I'd consider the top CS venue.