This is ludicrous, but this kind of thing has been going on in science, and in physics in particular, for a long time. Let's call the phenomenon maths side-blinders.<p>One experiment finds the mass of the W boson to be 80370 +/- 19 MeV, another 80434 +/- 9 MeV. Clearly the two results are incompatible: their ranges don't overlap. Of course, these are statistical ranges. But even at 95% confidence, their difference (~64 MeV) is <i>several times</i> the combined uncertainty (~21 MeV, i.e. a roughly 3-sigma gap), so it's not just that they're a bit off. IOW, we can be essentially 100% (not 95%) sure that at least one, if not both, is incorrect.<p>Yet they are boldly reported with those uncertainty ranges, even though those ranges clearly cannot both be correct. And then ATLAS doubles down by "applying more statistical analysis" to narrow its uncertainty range!<p>There should be less work on "improved stats analysis" and more work on finding where the systematic error between the two experiments lies. I truly don't see the point of retreading the same data set to change the value and uncertainty range when clearly there is something wrong with the data, the science, the experiment, or all of the above.<p>PS: what I'd like to see is the labs saying something along the lines of: given that results A and B are incompatible, statistically there is a (say) 99.99% chance that one or both experiments have a hidden flaw, or that there is a major flaw in the Standard Model.
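<p>For what it's worth, here is a back-of-the-envelope sketch of that PS-style number, assuming Gaussian, uncorrelated errors and treating the quoted +/- as 1 sigma (the labels and layout are my illustration, not from the papers):

    from math import erf, hypot, sqrt

    # The two quoted W mass results, in MeV
    m1, s1 = 80370.0, 19.0   # first experiment
    m2, s2 = 80434.0, 9.0    # second experiment

    gap = abs(m2 - m1)        # ~64 MeV
    sigma = hypot(s1, s2)     # combined 1-sigma uncertainty, ~21 MeV
    tension = gap / sigma     # ~3.0 sigma

    # Two-sided p-value: chance of a gap this large if both are unbiased
    p = 1.0 - erf(tension / sqrt(2.0))

    print(f"gap = {gap:.0f} MeV, combined sigma = {sigma:.1f} MeV")
    print(f"tension = {tension:.1f} sigma, p ~ {p:.2%}")
    # -> roughly 3 sigma, p ~ 0.2%: i.e. ~99.8% that at least one
    #    measurement is biased (or the model is wrong), under these assumptions.

So under the most naive treatment you already get "~99.8% that something is flawed somewhere", which is exactly the kind of statement I'd like the labs to lead with.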