There are two things humans are really bad at:

1) Thinking

2) Communicating

It's difficult to tell which one we're worse at because they depend on each other. Muddy thoughts can be clearly communicated, or clear thoughts can be poorly communicated, and the result is the same. In the usual case we have muddy thoughts that we communicate badly. Hilarity ensues.

In this case, the replication team didn't initially use precisely the same molecule as the original work because the original team just assumed the replication team would process the raw molecules the same way they had.

This is a very common failure mode in both thinking and communicating: we implicitly assume something, then proceed as if it were generally true (it isn't) or as if everyone knows it (they don't). That's not the only failure mode, but it's definitely a very popular one.

Good scientific communication involves over-communicating nit-picky details, and even then it can be stymied by the use of conventions that are less widely known than the authors assume.

I once got a call from someone working on an experiment similar to one I'd published, asking where a particular factor of two in an equation had come from: I had left implicit the limits on an integral that some people take over a full sphere and some take over a half-sphere (with a factor of two due to symmetry). He basically wanted to make sure I hadn't screwed up, which would have introduced an extra factor of two in my result and explained a difference with his. If I had been explicit about the limits of integration I'd used, it would have saved a phone call.

That was an easy and obvious case. When you're trying to express complex ideas that you yourself are frequently unsure of (that's the nature of research), things can get far, far worse, to the point where it's fairly amazing we can communicate our imperfect thoughts at all.
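For concreteness, here is a minimal sketch of that ambiguity (the actual equation isn't given above; the azimuthally symmetric integrand $f$ is just an assumed example). If $f$ is also symmetric about the equatorial plane, the solid-angle integral over the full sphere is exactly twice the integral over the upper hemisphere:

$$\int_0^{2\pi}\!\!\int_0^{\pi} f(\theta)\,\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi \;=\; 2\int_0^{2\pi}\!\!\int_0^{\pi/2} f(\theta)\,\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi \qquad \text{when } f(\pi-\theta)=f(\theta),$$

so an author who quietly writes the hemisphere form with the factor of 2 folded in, and a reader who expects the full-sphere form, will disagree by exactly that factor unless the limits are stated.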
It saddens me that people look at this and don't see a new experiment demonstrating that the initial paper was wrong. Instead they treat it like the old paper was right and the new experiment just added pointless clarification.

Sorry, but if someone following your procedure as written would fail to replicate your results, then your paper is flawed and should be withdrawn. Otherwise labs have a huge incentive to fudge things slightly so they get a little longer to explore the new information without competition.
This sounds like what would happen if I built a new javascript library, but instead of publishing the source code, I just wrote a blog post describing how to build it again in natural language. I've never been in a lab or witnessed one of these experiments, but wouldn't it be great if you could write up a set of instructions and feed them into a machine anywhere in the world?