A few months ago there were articles going around about how Samsung Galaxy phones were upscaling images of the Moon using AI [0]. Essentially, the model was artificially adding landmarks and details from its training set when the real image quality was too poor to make out any detail.

Needless to say, AI upscaling as described in this article would be a nightmare for radiologists. 90% of radiology is confirming the *absence* of disease when image quality is high, and *asking for complementary studies* when image quality is low. With AI-enhanced images that look "normal", how can the radiologist ever say "I can confirm there is no brain bleed" when the computer might be incorrectly adding "normal" details to compensate for poor image quality?

[0] - https://news.ycombinator.com/item?id=35136167
It is just weird that papers like this can be published. "Deep learning signal prediction effectively eliminated EMI signals, enabling clear imaging without shielding." Taken literally, this means they have found a way to remove random noise, which, if true, should be the truly revolutionary claim of the paper. If the "EMI" is not random, you can just filter it, so you don't need what they are doing; only non-random noise can be "predicted" at all, and they even use that word in the sentence. They are claiming that physically keeping noise out before it corrupts the signal (shielding) can be replaced with software "removal" of noise after it has already corrupted the signal. That is simply not possible without loss of information (i.e. resolution). The images they get from standard Fourier transform reconstruction are still pretty noisy, so on top of that they "enhance" the reconstruction by running it through a neural net. At that point they don't need the signal at all - just tell the network what you want to see. The fact that there are no validation scans using known phantoms is telling.
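To make concrete what "just filter it" means: if the interference picked up by the MR receive coil is correlated with what a few external reference antennas see, a plain least-squares fit already removes it, no learning required. A toy numpy sketch of that idea (the coil count, coupling gains, and signals are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096                                   # samples in one readout
t = np.arange(n) / 1e6                     # pretend 1 MHz sampling
mr_signal = np.sin(2 * np.pi * 50e3 * t)   # toy "NMR" signal

# EMI as seen by 3 external reference coils, coupled into the MR coil
# through unknown but fixed gains.
emi_refs = rng.standard_normal((n, 3))
true_coupling = np.array([0.8, -0.3, 0.5])
mr_measured = mr_signal + emi_refs @ true_coupling + 0.01 * rng.standard_normal(n)

# Least-squares estimate of the coupling (in practice you would fit this on
# noise-only acquisitions), then subtract the predicted EMI.
coupling_hat, *_ = np.linalg.lstsq(emi_refs, mr_measured, rcond=None)
mr_cleaned = mr_measured - emi_refs @ coupling_hat

print("residual error power:", np.mean((mr_cleaned - mr_signal) ** 2))
```

This only works because the interference is correlated with the reference channels; truly random, uncorrelated noise cannot be subtracted this way, which is the point above.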
> We conducted imaging on healthy volunteers, capturing brain, spine, abdomen, lung, musculoskeletal, and cardiac images. Deep learning signal prediction effectively eliminated EMI signals, enabling clear imaging without shielding.

So essentially, the neural net was trained on what a healthy MRI looks like and would, when exposed to abnormal structures, correct them away as EMI noise, leading to wrong diagnostics?

I won't be too dismissive of this approach, and deep learning probably has a strong role to play in improving medical imaging. But this paper is far, far from sufficient to prove it. At a minimum, it would require a mix of healthy and abnormal patients with particularities that don't exist in the training set, with each diagnosis reconfirmed later on a high-resolution machine. You need to actually prove the algorithm does not distort the data, because an MRI that hallucinates a healthy patient is much more dangerous than no MRI at all.
I can’t access the full paper, but from the abstract, is it accurate that they’re using ML techniques to synthesize higher-quality, higher-resolution imagery, and *that’s* the basis for their claim that it’s comparable to the output of a conventional MRI scan?

Do clinicians really prefer that the computer make normative guesses to “clean up” the scan, versus working with imagery that reflects the actual measurements and applying their own clinical judgment?
As a practicing radiologist, I think this is great. We can have AI enabled MRI scanners hallucinating images, read by AI interpreting systems hallucinating reports!
I'm a radiologist and very sceptical about low-field MRI + ML actually replacing normal high-field MRI for standard diagnostic purposes.

But in an emergency setting, or especially for MRI-guided interventions, these low-field MRIs can really play a significant role. Combining these low-field MRIs with rapid imaging techniques makes me really excited about what interventional techniques become possible.
A system like this could be applied as an augmentation to imagers like CT and ultrasound. Because of its super-resolution post-processing and lower raw resolution (2x2x8 mm), it might not be suitable for early cancer detection. But it looks *really* useful in a trauma center or for guiding surgery, etc. These same techniques could also be applied to CT scans; I could see a multi-sensor scanner that did both CT and NMRI using super low power, potentially even battery powered.

Regardless, this is super neat.

> We developed a highly simplified whole-body ultra-low-field (ULF) MRI scanner that operates on a standard wall power outlet without RF or magnetic shielding cages. This scanner uses a compact 0.05 Tesla permanent magnet and incorporates active sensing and deep learning to address electromagnetic interference (EMI) signals. We deployed EMI sensing coils positioned around the scanner and implemented a deep learning method to directly predict EMI-free nuclear magnetic resonance signals from acquired data. To enhance image quality and reduce scan time, we also developed a data-driven deep learning image formation method, which integrates image reconstruction and three-dimensional (3D) multiscale super-resolution and leverages the homogeneous human anatomy and image contrasts available in large-scale, high-field, high-resolution MRI data.
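As far as I can tell, the EMI half of that description amounts to: record what the external sensing coils pick up alongside each readout, train a network to predict the interference component that ends up in the MR receive coil, and subtract it. A rough PyTorch sketch of that idea; the architecture, coil count, and training data below are my own guesses, not the paper's actual model:

```python
# Hypothetical "deep learning EMI prediction": a small 1D CNN maps signals
# from K external EMI-sensing coils to the EMI component seen by the MR
# receive coil. It would be trained on noise-only acquisitions (no RF
# excitation), then used to subtract predicted EMI from real readouts.
import torch
import torch.nn as nn

K_COILS, N_SAMPLES = 4, 2048

model = nn.Sequential(
    nn.Conv1d(K_COILS, 32, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=9, padding=4),  # predicted EMI in MR coil
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fake training batch: reference-coil recordings and the EMI the MR coil
# actually recorded during noise-only acquisitions.
refs = torch.randn(16, K_COILS, N_SAMPLES)
emi_in_mr_coil = torch.randn(16, 1, N_SAMPLES)

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(refs), emi_in_mr_coil)
    loss.backward()
    opt.step()

# At scan time: subtract the predicted EMI from the corrupted readout.
corrupted_readout = torch.randn(1, 1, N_SAMPLES)
scan_refs = torch.randn(1, K_COILS, N_SAMPLES)
with torch.no_grad():
    cleaned = corrupted_readout - model(scan_refs)
```

The other half, the image formation (reconstruction plus 3D multiscale super-resolution learned from high-field data), is the part other comments are worried about, since that is where anatomy from the training set could leak into the output.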
The idea sounds great, but the examples they provide aren’t encouraging for the usefulness of the technique:

> The brain images showed various brain tissues whereas the spine images revealed intervertebral disks, spinal cord, and cerebrospinal fluid. Abdominal images displayed major structures like the liver, kidneys, and spleen. Lung images showed pulmonary vessels and parenchyma. Knee images identified knee structures such as cartilage and meniscus. Cardiac cine images depicted the left ventricle contraction and neck angiography revealed carotid arteries.

Maybe there’s more to it that I’m missing, but this sounds like the main accomplishment is being able to identify that different tissues are present. Actually getting diagnostic information out of imaging requires more detail, and I’m not sure how much this could provide.
This is remarkable. 1800W is like a fancy blender; amazing to be able to do a useful MRI at that power.

For anyone who is unaware, a standard MRI machine is about 1.5T (so 30x the magnetic field strength) and uses 25kW+. For special purposes you may see machines up to 7T; you can imagine how much power they need and how sensitive the equipment is.

Lowering the barriers to access to MRIs would have a massive impact on effective diagnosis for many conditions.
I have a hard time picturing the radiologist whose reputation and malpractice rely on catching small anomalies being comfortable using a machine predicated on inferring the image contents.
There are some non-ML-based approaches for ultra-low-field MRI that are starting to work: https://drive.google.com/file/d/1m7K1W--UOUecDPlm7KqFYzfkoewZtlRl/view . You can still add AI on top of course, but at least you get a better signal-to-noise ratio to start with!
It may miss some findings, because there could be special cases the model wasn’t trained on, where it would predict a wrong result. Maybe that’s acceptable in places where you might not otherwise get a chance to be diagnosed at all.
With a voxel size of 2x2x8 mm^3, this would do what X-rays/CTs do now, and a bit more (but likely not replace high-field MRIs? I'm not understanding how they rival high-field accuracy in silico, but that's how the paper's written).

In the acute setting, faster and more ergonomic imaging could be big. E.g., with a purpose-built brain device, if first responders had a machine that tells hemorrhagic from ischemic stroke, it would be easier to get within the tPA time window. If it included the neck, you could assess brain and spine trauma before transport (and plan immobilization accordingly).
I can't read the full article, but low-field MRI is potentially a big deal IMO, because a 0.05T magnet coil can be air- or water-cooled, whereas higher-field magnets (like 1.5T and 3T MRI magnets) have to use superconducting wire and thus must be cooled to liquid-helium temperatures (below roughly 10 K for the NbTi wire typically used) via a helium refrigeration cycle. I worked for a time at a company that made MRI calibration standards (among many other things).

A helium refrigeration cycle means:

- elaborate and expensive cryogenic engineering in the overall MRI design;
- lots of power for the refrigeration cycle itself;
- a pure-helium supply chain, which isn't readily available in many parts of the world, including areas of Europe, North America, etc.
This problem has, in a sense, already been solved by the CT scanners at your local airport (X-ray rather than MRI, but the point stands). Cost and performance are not the issues. For $25 your luggage gets a scan that automatically differentiates between organic materials in seconds. How easily could that be adapted for free annual screenings at the mall?

But that's not the objective, and so your research is doomed.

The medical equipment industry will not suffer fools who don't understand 'regulatory capture' and 'rent seeking'.

Those hospital machines are expensive and rare for reasons that have very little to do with cost or performance.
Wow, this seems like it could be a DIY project! I know people are complaining about the AI stuff, but look at the images *before* AI enhancement. They look pretty awesome already!
> Each protocol was designed to have a scan time of 8 minutes or less with an image resolution of approximately 2×2×8 mm³

Very cool, but is it clinically useful if one edge of your voxel is 8 mm? That's 32 mm³ per voxel, versus about 1 mm³ for a 1 mm isotropic high-field acquisition, so small lesions get heavily averaged with the surrounding tissue.
I think this could be useful as a starting point for diagnostics - a cheaper, lower-power device massively lowers the barrier to entry to getting <i>an</i> MRI scan, even if it's not fully reliable. If it does find something, that's evidence a higher-quality scan is worth the resources. In short, use the worse device to take a quick look, if it finds anything, then take a closer look. If it doesn't find anything, carry on with the normal procedure.
> applying machine learning to the output of a lower-power MRI device

So we get worse-SNR data from the device and then enhance it with compressed knowledge from millions of past MRI images? Isn’t that like shooting a movie with Grandpa’s 8mm camera and then enhancing and upscaling it like those folks on YouTube do with historical footage?
Awkward, I was expecting to be reading an article about Tesla moving into the medical industry.
Whose idea was it to name the unit of magnetic flux density after a car company?
This is worse than that time I ended up reading an article about a shallow river crossing
Low-power MRI could be a salvation for people who have some metal inside their body. Of course, imaging those parts might still be impossible, but maybe other parts can be imaged.
"The lower-power machine was much cheaper to manufacture and operate, more comfortable and less noisy for patients, and the final images after computational processing were as clear and detailed as those obtained by the high-power devices currently used in the clinical setting."
Medical imaging devices, and medical devices in general, are a racket. There are only a few companies, and they are legal and lobbying departments first and foremost. This isn't the first time radical and radically cheaper prototypes have been proposed, but the unsolved bit is actually convincing anyone to buy them.

A colleague had a device, and a veteran advised him to 10x the price.