This is not exactly new. I remember seeing models that did really well many years ago, and that likewise caught many tumors that humans had missed.<p>The problem is that they fail differently than humans do, in a way that leads humans not to trust the results.<p>It turns out that there are parts of the breast where tumors are easy to spot, and parts where they are hard. A human scans quickly over the easy areas and focuses on the hard ones. The result is that humans make careless errors in the easy areas but catch the hard tumors. Computers make no careless errors, but can't catch the hard ones. So when a human sees what the computer caught that the human did not, the mistake is easily dismissed. But when the human sees the ones the computer missed, it becomes, "It doesn't know how to do the real work."<p>Ideally the two would be used together for better results than either alone. But humans wind up resenting the computer...
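The argument above can be sketched numerically. This is a toy illustration with invented numbers (not from any study): the human misses an occasional easy case through carelessness, the model misses the hard cases, and OR-ing the two detections covers both failure modes.

```python
# Hypothetical cases: each tumor sits in an "easy" or "hard" region,
# with flags for whether each reader detected it.
cases = [
    {"region": "easy", "human": True,  "model": True},
    {"region": "easy", "human": False, "model": True},   # careless human miss
    {"region": "hard", "human": True,  "model": False},  # model misses the hard one
    {"region": "hard", "human": True,  "model": False},
]

def sensitivity(flags):
    """Fraction of tumors detected."""
    flags = list(flags)
    return sum(flags) / len(flags)

human_only = sensitivity(c["human"] for c in cases)
model_only = sensitivity(c["model"] for c in cases)
combined   = sensitivity(c["human"] or c["model"] for c in cases)

print(human_only, model_only, combined)  # 0.75 0.5 1.0
```

With these made-up numbers the combination catches everything either reader caught, which is exactly why complementary error profiles make the pairing attractive in principle, even if trust issues get in the way in practice.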
Anything related to AI coming out of IBM should be viewed with a huge dose of skepticism. They're honestly one of the worst offenders in overselling the capabilities of their products, bordering on outright fraud. There is certainly a lot of promise in applying recent computer vision algorithms to medical imaging data, but I wouldn't bet much on IBM being anything close to a leader in this space.
I was lucky to date a girl who was into math, and who was coding those "machine learning" algorithms for a radiology startup here in Shenzhen.<p>She had a lot of scepticism about what she did. One of the biggest showstoppers, she said, was the unpredictability of errors.<p>An algo can catch 99% of tumors, including tiny ones, but can randomly pass over very obvious ones which a human radiologist would spot with his eyes closed.<p>They had a demo day with radiologists, who threw tricky edge-case x-rays at the computer. The edge cases were all fine, but one radiologist pulled his own x-ray from his bag, with a 100% obvious, terminal-stage tumor, and to the company's embarrassment, the algo failed to detect it no matter how they twisted and scaled the x-ray. The guy then just walked out.
The reason I'm skeptical of this is that there is no actual comparison to human-level performance, i.e. they didn't have radiologists actually read the images to compare against the model. Notice that the title of the paper is "Predicting Breast Cancer by Applying Deep Learning to Linked Health Records and Mammograms"; it's only in the press release that they imply a comparison to radiologists was actually done.
The real problem here is whether society will allow a machine to diagnose them, and whether society is ready to accept that most diagnoses are made probabilistically.<p>To date we allow humans to be at a 70% error level without problems, but we ask machines to be 100% effective.<p>The very same thing happens with autopilot: the big numbers say it drives better than humans, but...
Old news from a major source of AI hype.<p>Here are some previous results:
<a href="https://med.stanford.edu/news/all-news/2018/11/ai-outperformed-radiologists-in-screening-x-rays-for-certain-diseases.html" rel="nofollow">https://med.stanford.edu/news/all-news/2018/11/ai-outperform...</a>
@moderators: would it make sense to change the link to the journal article rather than IBM's article? It's free access.<p><a href="https://pubs.rsna.org/doi/10.1148/radiol.2019182622" rel="nofollow">https://pubs.rsna.org/doi/10.1148/radiol.2019182622</a>
Doctors will be some of the first to be replaced by AI. My physicians already walk around with a computer, checking all the boxes for symptoms and seeing what it says. I wish I could find one with true intuition for medicine.