
MIT AI tool can predict breast cancer up to 5 years early

127 points by codermobile almost 6 years ago

7 comments

aabaker99 almost 6 years ago
Take these results with a grain of salt. There's a large class imbalance in this dataset and ROC curves can be misleading in this case. The test set contains 269 positive examples and 8482 negative examples.

From [1]:

> Class imbalance can cause ROC curves to be poor visualizations of classifier performance. For instance, if only 5 out of 100 individuals have the disease, then we would expect the five positive cases to have scores close to the top of our list. If our classifier generates scores that rank these 5 cases as uniformly distributed in the top 15, the ROC graph will look good (Fig. 4a). However, if we had used a threshold such that the top 15 were predicted to be true, 10 of them would be FPs, which is not reflected in the ROC curve. This poor performance is reflected in the PR curve, however.

The authors seem to be aware of this in the supplement and also evaluate performance by a hazard ratio they define:

> We calculated the ratio of the observed cancer incidence in the top 10% of patients over the incidence in the middle 80% and referred to this metric as the top decile hazard ratio. We calculated the ratio of the observed cancer incidence in the bottom 10% of patients over the incidence in the middle 80% and referred to this metric as the bottom decile hazard ratio.

However, binning is a form of p-hacking [2]. And I'm still wondering why they don't just post the Precision-Recall curves.

[1] https://doi.org/10.1038/nmeth.3945

[2] https://doi.org/10.1080/09332480.2006.10722771

[Edit] to add link to [2]
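To make the quoted scenario concrete, here is a minimal sketch (not from the paper; it simulates the 5-positives-in-100 example from [1] with scikit-learn, and every number in it comes from that quote):

```python
# Sketch of the example quoted from [1]: 5 positives among 100 cases,
# ranked uniformly within the top 15. ROC AUC looks strong even though
# precision at the top-15 cutoff is poor.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

n = 100
scores = np.linspace(1.0, 0.0, n)       # rank 0 is the highest score
y_true = np.zeros(n, dtype=int)
y_true[[1, 4, 7, 10, 13]] = 1           # 5 positives spread through the top 15

print("ROC AUC:", roc_auc_score(y_true, scores))                      # ~0.95
print("Average precision:", average_precision_score(y_true, scores))  # ~0.40

# Thresholding at the top 15: 5 true positives, 10 false positives,
# i.e. precision 0.33 -- invisible in the ROC curve, obvious in PR.
top15 = scores >= np.sort(scores)[-15]
tp = int(y_true[top15].sum())
fp = int(top15.sum() - tp)
print(f"Top-15 cutoff: TP={tp}, FP={fp}, precision={tp / (tp + fp):.2f}")
```

The same ranking scores ~0.95 ROC AUC but only ~0.40 average precision, which is exactly the gap the PR curve would have exposed.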
wccrawford almost 6 years ago
Without information about false positives, this is basically just saying they wrote an algorithm that sometimes points out cancer early. But if it is only correct 1% of the time, nobody is going to listen to it. It'd do even less than the current "You really need to check for cancer!" statements that we already have.

Edit: From the paper:

> A deep learning (DL) mammography-based model identified women at high risk for breast cancer and placed 31% of all patients with future breast cancer in the top risk decile compared with only 18% by the Tyrer-Cuzick model (version 8).

So better than before, but still only detects 31%. If I'm reading correctly, it's 95% correct? I guess that means 5% false positives? That wouldn't be bad.
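As a back-of-the-envelope check on that reading, here is a sketch (not the paper's analysis: it borrows the test-set counts quoted upthread and treats the commenter's 31% sensitivity / 5% false-positive interpretation as given):

```python
# Rough positive-predictive-value arithmetic at the thread's numbers.
# Assumptions: 269 positives / 8482 negatives (test-set counts quoted
# upthread), 31% sensitivity and a 5% false-positive rate (the
# commenter's reading, not a figure the paper states in these terms).
n_pos, n_neg = 269, 8482
sensitivity = 0.31
false_positive_rate = 0.05

tp = sensitivity * n_pos            # ~83 future cancers flagged
fp = false_positive_rate * n_neg    # ~424 healthy patients flagged
ppv = tp / (tp + fp)
print(f"Flagged: {tp:.0f} true vs {fp:.0f} false positives")
print(f"Precision (PPV): {ppv:.1%}")  # ~16%: most flags would be wrong
```

Under those assumptions, roughly five out of six flagged patients would be false positives at this prevalence.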
melling almost 6 years ago
According to Craig Venter, early detection is what we need to eliminate cancer:

https://youtu.be/iUqgTYbkHP8?t=15m37s

The reason most people die from pancreatic cancer, for example, is because we almost always detect it at a late stage.
b_tterc_p almost 6 years ago
Addressing model bias by adjusting which data the model has access to is a bad idea. Tweaking the data so that the model output looks equitable is going to make your model worse across the board. You should train your model on what you have and then add explicit biases to the classifier for different groups. That way you have the best model and are clear on your biases.

If this model is equally accurate for black and white women, that means either that race is not a factor in predictability, that it is a factor but easily adaptable into a model, or that race is a factor and they've reduced their ability to diagnose one group in the name of equity.

The linked article suggests the accuracy gains are due to better risk models that use more than age. I'm not sure if that means it's tied into the image neural net. I would like to see the false positive rate too.
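A minimal sketch of the "train on everything, then apply explicit per-group adjustments" approach the comment describes (the wrapper class, group names, and threshold values are all hypothetical; any model exposing a scikit-learn-style predict_proba is assumed):

```python
# Hypothetical wrapper: one model trained on all available data, with
# explicit, auditable per-group decision thresholds applied afterwards,
# instead of reweighting or filtering the training data itself.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class GroupThresholdClassifier:
    model: object                                   # fitted, has predict_proba
    thresholds: dict = field(default_factory=dict)  # documented per-group cutoffs
    default_threshold: float = 0.5

    def predict(self, X, groups):
        scores = self.model.predict_proba(X)[:, 1]
        cutoffs = np.array([self.thresholds.get(g, self.default_threshold)
                            for g in groups])
        return (scores >= cutoffs).astype(int)


# Hypothetical usage, e.g. with thresholds tuned to equalize recall:
# clf = GroupThresholdClassifier(fitted_model,
#                                thresholds={"group_a": 0.40, "group_b": 0.55})
# preds = clf.predict(X_test, groups=patient_groups)
```

Keeping the thresholds outside the model makes the bias correction explicit and inspectable, rather than baked invisibly into the training set.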
jszymborski almost 6 years ago
Here's the paper in question:

https://pubs.rsna.org/doi/full/10.1148/radiol.2019182716
stakhanov almost 6 years ago
I wish they would call it something other than AI. Like "Diagnostics", or if there MUST be a buzzword in there, then call it "Predictive Diagnostics".

Once upon a time, a necessary precondition for calling something AI was that there should be at least the hope that it could one day generalize to pass the Turing test or something along those lines.

Medical diagnostics is one of the primary applications of pattern processing, and since it's pretty damned impressive as it is, it's a bit pointless to try to make it even more impressive by suggesting that you might one day enjoy a chat with your medical diagnostic tool over breakfast, exchanging views on how the Knicks' season is shaping up... (which both informed readers and the people writing this know pretty damned well is never going to happen, and was never intended to happen).
magwa101 almost 6 years ago
JFC this is great.