<i>To reduce the occurrence of future similar programming errors, the Johns Hopkins Biostatistics Center has instituted a new standard operating procedure for checking randomization assignment to be followed in all trial analyses. To ensure that the group assignment used in any of the trial analyses is correct, a verification process will be included at the beginning and end of each analysis program. This process is intended to confirm that the group assignment separately provided by the trial team matches the group assignment used in the analysis program. The matching confirmation is reviewed by a second biostatistician/analyst before its use in the results.</i><p>I don't know what software quality control is already in place at this organization, but this corrective measure seems, on its face, wholly inadequate to me: it only prevents a recurrence of <i>the exact same problem</i>, rather than the much broader <i>class of problems</i> caused by programming errors. Do they have a code review process in place?<p>This speaks to a larger issue: if you write software that manipulates data as part of producing a scientific paper, then the source code should be available for review as an attachment to that paper, and review of that code should be part of the peer review process at any reputable journal. Professional software engineers write bugs all the time that invalidate the correctness of their programs, never mind individuals whose primary job is research, not software.
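For what it's worth, the check the notice describes is only a few lines in most languages. Here's a minimal sketch in Python/pandas; the file and column names are hypothetical, since the notice doesn't say how it's implemented:<p><pre><code>import pandas as pd

# Load the dataset used by the analysis program and the assignment list
# provided separately by the trial team (file names are illustrative).
analysis = pd.read_csv("analysis_dataset.csv")
reference = pd.read_csv("trial_team_assignments.csv")

# Join on participant ID and compare the two group columns.
merged = analysis.merge(reference, on="participant_id",
                        suffixes=("_analysis", "_reference"))
mismatches = merged[merged["group_analysis"] != merged["group_reference"]]

# Also catch participants present in one file but not the other.
unmatched = set(analysis["participant_id"]).symmetric_difference(
    reference["participant_id"])

# Fail loudly rather than silently analyzing a swapped assignment.
assert mismatches.empty and not unmatched, (
    f"{len(mismatches)} mismatched assignments, "
    f"{len(unmatched)} participants missing from one file"
)</code></pre><p>Halting on an assert is deliberate: a mismatch here should stop the analysis outright, not emit a warning that's easy to scroll past.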
I previously made a big list of papers that were retracted due to software bugs. It was intended to go in a manuscript but I had to cut it out because the conference limited the number of references for the camera-ready version. If anyone is interested I can try to dig up the list again!
This isn't surprising, and I'm sure it has happened many times. If you get the result you expect, you are much less likely to check for a mistake. The authors deserve a lot of credit for owning up to it.<p>They did the analysis with Stata.
> Given the corrected finding of a paradoxical increase in acute care use in the intervention group<p>Now I’m curious why long-term intervention/support increased the number of acute cases. Maybe people were more likely to find themselves sick when given additional monitoring after leaving the hospital? Some sort of psychological connection, or just being overly careful?<p>Plenty of doctors will simply blame your past diagnosis for any broad new symptoms, without doing much critical thinking or investigating. I’ve seen this personally many times in the years following a colitis diagnosis. The symptoms are quite broad and easily mistaken for something else.<p>Anyone know if the new article is available yet?
Curious what the incentive is for an author to retract a study.<p>Shouldn't they just leave it be, continuing to accrue more publications and citations, or whatever the metrics are in research?
>Over the course of this reanalysis, we detected an error in imputing missing values for the SGRQ, whereby the worst possible score (100) was incorrectly imputed for missing values of participants who had died beyond the 6-month study period. The correct approach would have been to classify those values as missing because those participants had not died by the 6 months after discharge study end point.<p>The randomization-assignment error is possibly forgivable, but I think this second error should have been easier to catch and is much harder to forgive. A simple filter comparing the imputed scores against a vital-status variable in the dataset would have caught this mistake. I am doing a Masters in Biostatistics, and this kind of checking is taught to us early on; I hope there is more focus on it later to help avoid mistakes like this.
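To make that concrete, here's a minimal sketch of such a check in Python/pandas; the column names are hypothetical, since I don't know the actual dataset layout:<p><pre><code>import pandas as pd

df = pd.read_csv("trial_data.csv")

# Worst-score imputation (100) should only apply to participants who
# died within the 6-month (~180-day) study window; flag everything
# else for manual review.
suspicious = df[
    (df["sgrq_imputed"] == 100)
    & (df["days_to_death"].isna() | (df["days_to_death"] > 180))
]

if not suspicious.empty:
    print(f"{len(suspicious)} worst-score imputations conflict with vital status:")
    print(suspicious[["participant_id", "sgrq_imputed", "days_to_death"]])</code></pre><p>Anything flagged still needs a human look (a surviving participant can legitimately score 100), but it's exactly the kind of cheap cross-check that would have surfaced this bug.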