One thing to keep in mind is that none of this, even when it comes to Elsevier, is black and white. Yes, Elsevier has been one of the staunchest opponents of the move to Open Access. And yet, simultaneously, Elsevier has made one of the most aggressive pushes into OA publishing: it is likely the largest OA publisher by article volume today. Yes, they have the most to lose from a big shift to OA, but they also have the most to gain. And OA publishing has proven that it can be very, very profitable, particularly for Elsevier.

The piece largely argues that Elsevier is a bad actor in the broader academic publishing space and therefore shouldn't be given the contract, and the power, to produce reports for the EU about the movement to open access. But even if you hate Elsevier, they really are in a position to have a lot of the data needed. Elsevier isn't just a publisher; it's a data company as well. They compile a database of all journals and citations (as do a few others, like the for-profit Clarivate and the non-profit Crossref). They have bought up a number of analytics companies (Mendeley, Plum Analytics), as the article notes. So like it or not (and I assume most people don't like it), they do hold a huge amount of the data needed for a project like the EU Open Science Monitor.

The conflict-of-interest argument is valid, but it feels overblown in this context. The author argues that Elsevier's CiteScore metric (a competitor to Clarivate's Journal Impact Factor) is biased toward Elsevier content. Except CiteScore isn't being used in the Open Science Monitor at all [1]; the Monitor uses only total citation counts drawn from a variety of sources. So the author is bolstering his point with a manufactured conflict of interest that the facts don't back up.

Plum Analytics is mentioned as a conflict, except the only Plum data being used is Twitter mentions, so I don't see what the perverse incentive is: nobody would be inclined to use Plum Analytics itself; they'd just be inclined to game Twitter.

The use of Mendeley readership stats I do find bullshit. That's a clear case of Elsevier pushing the use of its own product, and I think it should be removed as a metric entirely. Most "social" signals should probably be removed, including Twitter mentions, because IMO they're not good indicators of quality at all.

What we really need is a set of metrics that can quantify the real-world impact of scientific findings. For social science, that would be something like the effect on public policy and societal outcomes; for medicine, something like the number of people in the global population positively affected. These are really hard things to measure and quantify, but I do think there's a need for something more than citation-based impact metrics.

Disclaimer: I'm a family owner and director of Sage Publications, a private for-profit publisher that does a lot of both paywalled and open-access academic publishing.

[1] https://ec.europa.eu/info/sites/info/files/open_science_monitor_methodological_note_v2.pdf