While the principle is sound, I have a few issues with the explanatory text.

> By building a hub for research software, where we would categorize it and aggregate metrics about its use and reuse, we would be able to shine a spotlight on its developers,

What are these metrics? Download statistics? Number of forks? Number of stars? How do they help 'shine a spotlight'?

Organizations already have download statistics, though they are far from accurate. For example, I co-authored the structure visualization program VMD. It included several third-party components, for example the STRIDE program to assign secondary structures and the SURF program to compute molecular surfaces. How would the original authors know about those uses?

(In actuality, we told them we used their software, and the SURF developer's PI once asked us for download statistics.)

> if you’re a department head and a visit to our hub confirms that one of your researchers is in fact a leading expert for novel sequence alignment software, while you know her other “actual research” papers are not getting traction, perhaps you will allow her to focus on software.

The hub proposal offers nothing better for this use case than the current system. People who use a successful sequence alignment program end up publishing the results, and those papers cite the software used. If the software is indeed one of the best in class, the department head can already review citation statistics. What does the hub add?

Suppose, as is often the case, that one of the researchers is a contributor to a large and successful project. How does the department head evaluate whether the researcher's contribution is significant to the overall project?

As the post says, this is a rabbit hole. But it is one that has to be solved, and solved clearly enough for the department head to agree with the solution, in order to handle this use case. I'm not sure that it can be.

Personally, the best solution I know of is a curated list (like ASCL).

Perhaps as good would be something like PubPeer, to allow reviews of the software.

> Research software is often incredibly specific, and trying to Google for it is more often than not, an exercise in futility ... “sickle”

More often, the research software that people write is incredibly generic: "Call four different programs, parse their outputs, combine the results into a spreadsheet, and make some graphs." This might take a couple of weeks, and it doesn't result in a publishable paper or good opportunities for code reuse. (A sketch of what I mean is at the end of this comment.)

Yet this is surely more typical of what a 'research software engineer' does than developing new, cutting-edge software.

This leads to another possible use case. Suppose you want to read in a FITS file using Python. Which package should you use? A search of ASCL - http://ascl.net/code/search/FITS - has "WINGSPAN: A WINdows Gamma-ray SPectral Analysis program" as the first hit, and the much better fit "FTOOLS: A general package of software to manipulate FITS files" as the second.

Way down the list is 'PyFITS: Python FITS Module'. And then there's 'Astropy: Community Python library for astronomy', which has merged in "major packages such as PyFITS, PyWCS, vo, and asciitable".

The task, then, is: which metrics would help a user make the right decision?
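To make concrete what I mean by generic glue code, here is a sketch of the kind of script I have in mind. The tool names, the tab-separated output format, and the file names are all made up for illustration; only the overall shape (run tools, parse, combine, plot) is the point.

    # Hypothetical glue script: tool names, output format, and files
    # are placeholders, not real programs.
    import csv
    import subprocess

    import matplotlib.pyplot as plt

    # Run four (hypothetical) analysis programs and capture their output.
    tools = ["aligner", "caller", "annotator", "scorer"]
    results = {}
    for tool in tools:
        out = subprocess.run([tool, "--input", "sample.dat"],
                             capture_output=True, text=True, check=True)
        # Assume each tool prints "name<TAB>value" lines; parse into a dict.
        results[tool] = dict(line.split("\t")
                             for line in out.stdout.splitlines())

    # Combine everything into one CSV ("spreadsheet").
    keys = sorted({k for r in results.values() for k in r})
    with open("combined.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["metric"] + tools)
        for key in keys:
            writer.writerow([key] + [results[t].get(key, "") for t in tools])

    # Make a quick graph per tool (assuming the values are numeric).
    for tool in tools:
        plt.plot([float(v) for v in results[tool].values()], label=tool)
    plt.legend()
    plt.savefig("summary.png")

Nothing in a script like this is citable, discoverable, or even particularly reusable, yet it is where a great deal of research software effort actually goes.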
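As for the FITS question: the practical answer today is Astropy, which absorbed PyFITS as astropy.io.fits. A minimal sketch, assuming a local file named example.fits (the name is a placeholder):

    # Read a FITS file with astropy.io.fits (the module that absorbed PyFITS).
    from astropy.io import fits

    with fits.open("example.fits") as hdul:
        hdul.info()                        # list the HDUs in the file
        header = hdul[0].header            # primary header (keyword/value cards)
        data = hdul[0].data                # primary data array, or None if empty
        print(header.get("TELESCOP"),
              None if data is None else data.shape)

None of the ASCL hits above get a newcomer to that one import line, which is really what the closing question is about.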