I wish university PR teams would stop peddling "cutting edge" science to the masses. People in the field know that fMRI is problematic, but the general public doesn't.
We need to stop confusing the public with PR pieces and silly TED talks. The public are not idiots; they just end up not trusting science, and rightly so, because they are exposed to it mainly through half-assed puff pieces that peddle half-baked science. The result is an incoherent and wrong narrative, and that breeds mistrust.
I'm friends with people in one of the labs that took part in this study, so I can give a perspective on how this was received in the field.<p>For the most part it wasn't a radical finding, but it was really good to quantify and see the effects. It also demonstrated the importance of including and documenting the parameters of the analysis pipeline.<p>Something else to keep in mind, especially when thinking about cognitive neuroscience (where scanning is an important tool), is that analyses are not done in isolation. Every experiment is motivated by behavioural results, neurobiological results, or both. The goal of fMRI is to gain insight into the activity of the brain, but it's also not very precise (given the scale of the brain).<p>Basically, the data is analyzed in the context of what has been shown before and how it matches current hypotheses. This doesn't mean such variation is acceptable (see the first part of my comment), but it does mean that it doesn't invalidate all fMRI results.
There have been large problems with fMRI studies for a long time, even leaving aside the potentially sketchy coupling between the BOLD signal and actual neural activity and the difficulty of accounting for movement artifacts.<p>This paper, published in 2016, suggests that commonly used statistical packages for analysis of fMRI data can produce false-positive rates of up to 70%: <a href="https://www.pnas.org/content/113/28/7900" rel="nofollow">https://www.pnas.org/content/113/28/7900</a><p>Even more fun, this poster presents the results of fMRI on a dead salmon given an open-ended mentalising task (a toy simulation of that multiple-comparisons effect follows the link):
<a href="https://www.psychology.mcmaster.ca/bennett/psy710/readings/BennettDeadSalmon.pdf" rel="nofollow">https://www.psychology.mcmaster.ca/bennett/psy710/readings/B...</a>
This is a big problem, and not just in neuroscience. Reproducible science is inherently hard, and there aren't a lot of great tools that make it easy. The key to solving it is having a way to track data lineage and a reproducible way to run the processing. I've been building a system that implements those ideas for the past several years, called Pachyderm [0]. We've helped many scientists across different fields run their pipelines in a reproducible way. If you're suffering from this problem, we'd love to chat with you.<p>[0] <a href="https://www.pachyderm.com/" rel="nofollow">https://www.pachyderm.com/</a>
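For a rough idea of what "tracking data lineage" can mean in practice, here is a generic sketch in Python (this is not Pachyderm's actual interface, and `analysis.py` is a hypothetical processing script): hash the inputs and the code together so every output can be traced back to exactly what produced it.

    # Generic lineage sketch (not Pachyderm's API): each output gets a record of
    # the exact hashes of the input data and the code that produced it.
    import hashlib
    import json
    import pathlib
    import subprocess

    def sha256(path: pathlib.Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def run_step(script: pathlib.Path, input_file: pathlib.Path, output_file: pathlib.Path) -> None:
        # Run the processing step itself (a hypothetical script taking input/output paths).
        subprocess.run(["python", str(script), str(input_file), str(output_file)], check=True)
        # Write a provenance record next to the output.
        record = {
            "input": {"path": str(input_file), "sha256": sha256(input_file)},
            "code": {"path": str(script), "sha256": sha256(script)},
            "output": {"path": str(output_file), "sha256": sha256(output_file)},
        }
        sidecar = output_file.with_name(output_file.name + ".provenance.json")
        sidecar.write_text(json.dumps(record, indent=2))

    # Usage (hypothetical files):
    # run_step(pathlib.Path("analysis.py"), pathlib.Path("scan.nii"), pathlib.Path("activation_map.nii"))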
I don't really see why standardized pipelines are the answer. That just makes things reproducible while ignoring the question of accuracy. Phrenology can be perfectly reproducible, but unless there's a real effect to measure it's just chasing ghosts. So few people actually understand what these calculations are doing. I don't think making things even more cook-by-numbers is really going to achieve anything other than empowering more people to not know what they're doing.
Unfortunately, this is big news, because it means brain imaging science suffers from the reproducibility crisis too.<p>It means that all the TED talks saying 'we know this because fMRI' should be assumed incorrect until proven otherwise with multiple double-blind studies (which, in the current era, is too expensive to actually do for all of these studies).
From the article's lede:<p><i>This finding highlights the potential consequences of a lack of standardized pipelines for processing complex data.</i><p>Couldn't it just as well be concluded that it 'highlights the potential consequences should standardized pipelines for processing complex data be introduced'?<p>If they had all been doing the same thing, aren't the odds that the results would have been just as fishy, but it would have been even harder to notice?
Seems to me the way you know something is really there is that it keeps showing up across a multitude of different, appropriate analysis methods.
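A toy sketch of that idea in Python (the analysis variants below are invented for illustration; they are not real fMRI pipelines): run the same data through several defensible choices and only trust an effect that survives all of them.

    # Toy "multiverse" check: a believable effect should survive several
    # reasonable analysis choices, not just one favoured pipeline.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sample = rng.normal(loc=0.4, scale=1.0, size=200)  # data with a genuine mean shift

    variants = {
        "raw": lambda x: x,
        "winsorized": lambda x: np.clip(x, *np.percentile(x, [5, 95])),
        "log-like": lambda x: np.sign(x) * np.log1p(np.abs(x)),
    }

    for name, transform in variants.items():
        result = stats.ttest_1samp(transform(sample), popmean=0.0)
        print(f"{name:11s} t={result.statistic:5.2f} p={result.pvalue:.4f}")
    # If the conclusion flips depending on the variant, treat that as a warning sign.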
It seems like building statistical consensus on analysis pipelines for given study designs could be very worthwhile. I'm still surprised there's no declarative statistical programming language for analyzing RCTs with code like "Maximize primary outcome, control for age, account for attrition." Of course, written this way it sounds extremely naive: well, <i>how</i> do you account for attrition? But people analyze data this way all the time, just with more verbose code. And that's the point of declarative programming: focus on what you want rather than how to get it.<p>(Also, not sure what the HN norms are, but to avoid self-plagiarism, note that this is cross-posted from my Twitter account.)
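A rough sketch of what such an interface might look like in Python (entirely hypothetical; `RCTAnalysis`, `control_for`, and `account_for_attrition` are invented names, not a real library):

    # Hypothetical declarative interface for RCT analysis; nothing here is a real
    # package, it just shows the "say what, not how" style. A backend would turn
    # the declared intent into an actual estimator.
    from dataclasses import dataclass, field

    @dataclass
    class RCTAnalysis:
        outcome: str
        treatment: str
        covariates: list[str] = field(default_factory=list)
        attrition: str | None = None

        def control_for(self, *covariates: str) -> "RCTAnalysis":
            self.covariates.extend(covariates)
            return self

        def account_for_attrition(self, method: str = "inverse-probability-weighting") -> "RCTAnalysis":
            # The declaration only names the intent; choosing the estimator is the backend's job.
            self.attrition = method
            return self

        def describe(self) -> str:
            return (f"Estimate effect of {self.treatment} on {self.outcome}, "
                    f"controlling for {self.covariates or 'nothing'}, "
                    f"attrition handled via {self.attrition or 'complete cases'}.")

    plan = (RCTAnalysis(outcome="primary_outcome", treatment="arm")
            .control_for("age")
            .account_for_attrition())
    print(plan.describe())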
Some of us like to treat numerical accuracy as an arcane and trivial detail, but there is no automated technique that always eliminates accuracy problems [0], and this article shows that there are real consequences in applied practice when our arithmetic isn't careful.<p>[0] <a href="https://people.eecs.berkeley.edu/~wkahan/Mindless.pdf" rel="nofollow">https://people.eecs.berkeley.edu/~wkahan/Mindless.pdf</a>
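As a small illustration in Python (a textbook example, not taken from Kahan's paper): summing the same numbers in a different order, or with compensated summation, gives visibly different answers in double precision.

    # Classic illustration of why arithmetic order and compensation matter:
    # adding 10,000 tiny terms to 1.0 one at a time loses them all to rounding.
    import math

    values = [1.0] + [1e-16] * 10_000   # true sum is 1.000000000001

    def naive_sum(xs):
        total = 0.0
        for x in xs:
            total += x
        return total

    def kahan_sum(xs):
        # Compensated summation: `comp` carries the rounding error forward.
        total, comp = 0.0, 0.0
        for x in xs:
            y = x - comp
            t = total + y
            comp = (t - total) - y
            total = t
        return total

    print("naive, given order :", naive_sum(values))                    # 1.0 exactly
    print("naive, small first :", naive_sum(sorted(values, key=abs)))   # ~1.000000000001
    print("kahan, given order :", kahan_sum(values))                    # ~1.000000000001
    print("math.fsum          :", math.fsum(values))                    # correctly rounded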