Current MR physicist / data scientist here. There seems to be a lot of misapprehension in this thread.<p>First, this work is about taking data in the sensor domain ("k-space") and reconstructing it into an image. Doing this with partial k-space data and hand-coded heuristics is a <i>completely standard</i> part of the MRI research agenda and has been for quite some time. See, for example, <a href="http://mriquestions.com/k-space-trajectories.html" rel="nofollow">http://mriquestions.com/k-space-trajectories.html</a>. (A toy partial-k-space reconstruction is sketched at the end of this comment.) Several of these techniques have already made it into routine clinical work, and this acquisition-side processing generally happens before the radiologist even sees the image; reliable acquisition is worked out between the radiographer and the scanner manufacturer's software.<p>There are also various claims here implying that learned reconstruction inherently carries a risk of hallucination with no recourse. Naturally, one should be careful about this, but it is addressable with careful cross-validation: hold out examples of abnormal anatomy for the test set. There are other ways to attack the problem too: training can be done partly or mostly on synthetic data, because we have reasonably good forward models of the physics. In that case, one could generate a wide variety of arbitrary synthetic anatomies during training, to further allay the fear of always hallucinating the "typical human brain" from any scan (a second sketch below shows what such synthetic training pairs might look like).<p>Slow acquisition and image artifacts in MRI are a fact of life for people in the field, and I believe there is huge scope for improvement with more intelligent reconstruction and acquisition. Ideally the reconstruction would feed dynamically back into the acquisition to gather more context as needed; the MR machine is, after all, one giant programmable physics experiment. This is already done in a limited way, but in what I've seen it relies on a lot of hand-coded heuristics. And what's the logical next step after hand-coded heuristics? Learned models, where you optimize objectively for the final result rather than hand-tuning against a few examples.<p>Final note: publicly releasing human data is a massive effort in data cleaning and careful anonymization, and the acquisition of each sample is extraordinarily expensive. So bravo to these guys for going to the effort.
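<p>To make the first point concrete, here is a minimal sketch of partial-Fourier acquisition with naive zero-filled reconstruction. This is not how any particular scanner does it: it assumes single-coil Cartesian sampling, the phantom and sampling mask are invented for illustration, and real reconstructions exploit the conjugate symmetry of k-space (e.g. homodyne or POCS methods) rather than just zero-filling.<p><pre><code>import numpy as np

# Toy "anatomy": a synthetic phantom standing in for a real scan.
n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
phantom = ((x**2 + y**2) < 0.8**2).astype(float)
phantom -= 0.5 * (((x - 0.2)**2 + y**2) < 0.1**2)

# Forward model: fully sampled Cartesian k-space is just the 2D FFT.
kspace = np.fft.fftshift(np.fft.fft2(phantom))

# Partial-Fourier acquisition: keep just over half of the phase-encode
# lines (everything up to a few lines past the centre), zero the rest.
mask = np.zeros((n, n))
mask[: n // 2 + 16, :] = 1
partial = kspace * mask

# Naive zero-filled reconstruction: inverse FFT of the partial data.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(partial)))

print("RMS error vs. ground truth:", np.sqrt(np.mean((recon - phantom) ** 2)))</code></pre><p>Even this crude version recovers a recognizable image from barely half the data, which is the intuition behind the whole acquisition-speedup agenda; the research question is how much further a learned reconstruction can push the undersampling before it degrades or misleads.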
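<p>And for the synthetic-training-data point, a sketch of what generating training pairs from a physics forward model might look like. Everything here is invented for illustration (the random-ellipse "anatomies", the sampling fraction, the noise level); a real pipeline would use a far richer anatomical and physics model (coil sensitivities, relaxation, motion, ...).<p><pre><code>import numpy as np

def random_phantom(n=128, rng=None):
    # One arbitrary synthetic "anatomy": a few random ellipses.
    if rng is None:
        rng = np.random.default_rng()
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    img = np.zeros((n, n))
    for _ in range(rng.integers(3, 8)):
        cx, cy = rng.uniform(-0.5, 0.5, size=2)
        rx, ry = rng.uniform(0.1, 0.4, size=2)
        img += rng.uniform(0.2, 1.0) * ((((x - cx) / rx) ** 2
                                         + ((y - cy) / ry) ** 2) < 1)
    return img

def training_pair(img, keep_fraction=0.4, rng=None):
    # Forward-model the physics (FFT), undersample random phase-encode
    # lines, add complex noise; return (network input, ground truth).
    if rng is None:
        rng = np.random.default_rng()
    k = np.fft.fftshift(np.fft.fft2(img))
    line_mask = rng.random(img.shape[0]) < keep_fraction
    k_under = k * line_mask[:, None]
    k_under = k_under + 0.01 * (rng.standard_normal(k.shape)
                                + 1j * rng.standard_normal(k.shape))
    return k_under, img

rng = np.random.default_rng(0)
pairs = [training_pair(random_phantom(rng=rng), rng=rng) for _ in range(16)]</code></pre><p>Because none of these "anatomies" is a typical human brain, a model trained on such data can't simply learn to paint one in, which is exactly the point: the prior it learns comes from the physics and the diversity of the training distribution, not from one canonical anatomy.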