> Screening mammography aims to identify breast cancer at earlier stages of the disease, when treatment can be more successful. Despite the existence of screening programmes worldwide, the interpretation of mammograms is affected by high rates of false positives and false negatives. Here we present an artificial intelligence (AI) system that is capable of surpassing human experts in breast cancer prediction. [...]<p>> In an independent study of six radiologists, the AI system outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%. We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening.<p>So, there you have it: not AI "either/or" humans, but <i>both</i>, working in conjunction, as a <i>composition</i> of the best of both worlds.<p>At the very least, that's how civilization will massively and intimately introduce true assistant AI.<p>It's also somewhat counter-intuitive that the most specialized tasks should be the low-hanging fruit; i.e. that what is "difficult" for us, the culmination of years of training and experience for humans (e.g. how to read a medical scan), may be "easy" for the machine, thanks to its natural advantages (like speed and parallelism).<p>That space (where machine expertise is cheaper than human expertise) roughly maps to the immense value attributed to the rise of industrial-age narrow AI. Therein lies not a way to replace humans (we never did that in history; we merely destroyed <i>jobs</i> to create ever more) but a way to <i>augment</i> ourselves once more, to whole new levels of performance.<p>Anything more than this is AGI-level, science fiction so far, and there isn't even a shred of evidence that it's theoretically a sure thing, or possible in the first place. Which is not to say that AI safety research isn't <i>extremely important</i> even for the narrow kind (manipulation comes to mind), but we shouldn't go as far as to bet future economic growth on its existence. Like fusion or interstellar travel, we just don't know. Not yet, and not for the foreseeable future, because of scale.
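The two headline numbers in the abstract (the AUC-ROC comparison and the second-reader workload reduction) are easy to make concrete. Here's a minimal Python sketch with entirely made-up scores and an assumed 12% reader/AI disagreement rate chosen only to echo the paper's ~88% figure; none of the names or values (`auc_roc`, `cancer`, `normal`, `disagree_rate`) come from the paper's actual data or method:

```python
import random

def auc_roc(scores_pos, scores_neg):
    """AUC-ROC via the Mann-Whitney rank formulation: the probability
    that a randomly chosen positive case outscores a random negative
    (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

random.seed(0)  # reproducible toy data

# Hypothetical reader scores: cancer cases drawn from a slightly
# higher-scoring distribution than normal cases.
cancer = [random.gauss(1.2, 1.0) for _ in range(200)]
normal = [random.gauss(0.0, 1.0) for _ in range(200)]
print(f"toy AUC-ROC: {auc_roc(cancer, normal):.3f}")

# Toy double-reading simulation: the AI stands in as the second
# reader, and a human second reader only arbitrates disagreements.
# The 12% disagreement rate is an assumption for illustration.
n_cases = 1000
disagree_rate = 0.12
needs_human_second = sum(random.random() < disagree_rate
                         for _ in range(n_cases))
print(f"second-reader workload: {needs_human_second}/{n_cases} cases")
```

The rank formulation used here is mathematically equivalent to the area under the ROC curve, which is why a single pairwise-comparison loop suffices for a toy comparison between readers.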