
AI has the worst superpower medical racism

16 points, by woliveirajr, almost 4 years ago

6 comments

enriquto, almost 4 years ago
> Racial identity is a social, legal, and political construct that consists of our own perceptions of our race, and how other people see us.

This seems a pretty US-centric conception. I for one wouldn't know how to assign a "racial identity" to myself, and most of my acquaintances would surely say the same thing. I doubt that this study holds much meaning outside the USA.

Now if it was about genetic ancestry it would make some sense (and not really be surprising). But if it is about "self-reported identity" I guess it's only finding the correlation of that with actual genetically transmitted features.
wheybags, almost 4 years ago
I can think of some reasons why this might be a problem, but the article doesn't seem to make a strong case? Without extrapolating a bit with my imagination, the only conclusion I can draw is "medical AI considers race in its diagnosis, and it is neither a good nor a bad thing". They seem to leave it as a given that it's bad, but tbh as a layman I could imagine arguments both ways [1]. An article from a professional like this one is where I would expect to find them, along with a summary that effectively "picks a side", with a justification for picking that side.

[1]: E.g. using race as a clue that some reading might indicate a disease common in that race could be good; on the other side, there might be some very expensive scan that could throw off the diagnosis because the training data is skewed towards old white men by the sheer price of doing the scan.
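The footnote's first argument is, at bottom, base-rate reasoning: the same test result carries different evidential weight in groups with different disease prevalence. A toy Bayes calculation (all numbers made up, purely illustrative):

```python
def posterior(prevalence: float,
              sensitivity: float = 0.9,
              specificity: float = 0.9) -> float:
    """P(disease | positive test), by Bayes' rule.

    prevalence   -- baseline disease rate in the patient's group
    sensitivity  -- P(positive | disease)
    specificity  -- P(negative | no disease)
    """
    p_positive = (sensitivity * prevalence
                  + (1 - specificity) * (1 - prevalence))
    return sensitivity * prevalence / p_positive

# Same test, same result, different group base rates:
print(posterior(0.01))  # low-prevalence group:  ~0.083
print(posterior(0.10))  # high-prevalence group:  0.5
```

So a model that conditions on group membership is not automatically wrong; whether that helps or harms depends on whether the prevalence difference is real and the data is representative.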
reviewedys, almost 4 years ago
I'm sympathetic to the questions being asked, but I think the author is too quick to assume the worst here.

The key assumption is shown in the diagram that "shows a hypothetical pneumonia detection model that, during optimization, has learned to recognize racial identity." The author asserts that this is obviously incorrect and dangerous.

Look at the HPF 50 results, and the fact that the location being imaged is nearly irrelevant: it's clear that image shape features are irrelevant to the results. It seems like the signal itself is picking up (is modified by) racial indicators. If that's true, it only makes sense that being good at detecting pneumonia is going to require correcting for that signal. That makes (this use of) AI anti-racist in a very important way.

This is nonetheless a fascinating result!
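For context, "HPF 50" refers to the paper's experiment feeding the model high-pass-filtered images, where smooth shape and brightness information is removed and only fine texture survives. A minimal numpy sketch (illustrative only, not the paper's code) of what such a filter discards and keeps:

```python
import numpy as np

def high_pass_filter(image: np.ndarray, cutoff: int) -> np.ndarray:
    """Zero out spatial frequencies within `cutoff` of the spectrum
    centre, keeping only fine, high-frequency detail."""
    f = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    # Keep only frequencies farther than `cutoff` from the centre.
    mask = (y - cy) ** 2 + (x - cx) ** 2 > cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# A smooth gradient (almost pure low frequency) is nearly wiped out...
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
print(np.abs(high_pass_filter(smooth, 20)).mean())

# ...while broadband noise-like texture largely survives.
noise = np.random.default_rng(0).normal(size=(64, 64))
print(np.abs(high_pass_filter(noise, 20)).std())
```

If race classification still works on such images, the commenter's point follows: the predictive signal lives in texture or acquisition characteristics, not in gross anatomy.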
desktopninja, almost 4 years ago
I sometimes think 'race' is nothing more than a fantastical vanity construct. Really it's tribalism. Furthermore, I find it hard as well to comprehend how it holds weight in the medical industry. Race is not real science. Race is entertainment science. AI is mostly entertainment science.
cfcf14, almost 4 years ago
This is bewildering and concerning. I absolutely applaud the authors for releasing their code and asking the public to have a look, as well as raising the alarm. Given the degree of due diligence they've performed, it seems too much to hope that there's just some really nightmarish data leakage occurring; but even if that were the case, the difficulty of identifying its origin or even determining the true characteristics of what the models are doing is extremely problematic regardless of whether this *particular* instance of bias can be explained.
bArray, almost 4 years ago
Disclaimer: worth noting that this paper is apparently a pre-print [1] and was submitted to The Lancet, which has not been particularly reliable in recent times (the flawed hydroxychloroquine study [2]; stamping out early investigation into the COVID lab-leak possibility [3]).

Regarding the blog and paper:

> AI can trivially learn to identify the self-reported racial identity of patients to an absurdly high degree of accuracy

If it is 'trivial' for a model to learn race, then what they are detecting is likely pretty obvious.

In their paper they use 'private and public datasets' [4]. This is probably exactly where the problem occurs. People of the same race tend to live in groups together; if you collect data in some area, there's a good chance you source data from the same ethnic pool. What it appears they are detecting is the difference between different machines and practices. To test this, they should collect their own medical data on the same machine. They also need to figure out how to remove the bias of the medical staff themselves.

Another assumption is that just because medical doctors cannot see differences in the anatomy of races, it doesn't mean there isn't one. One of the commentators on this blog indicated that dark skin may affect the absorption of X-rays, for example.

> AI does learn to do this when trained for clinical tasks

Just because AI models *can* detect race from medical data doesn't mean they *will*. They may in the early stages learn such simplistic relationships, but as accuracy increases, they will be forced to look for more complex classifiers/indicators.

Finally, this blog really skirts around the important thing: how does this affect patient care? Is it beneficial to actually have race-based medical plans? We know that some medicines affect different races differently, for example in pain and pain management [5].

I think the point to be made here isn't 'racism' but the *possibility* of bias that people training models should be aware of. Some of this bias will have a positive outcome, some will have a negative outcome.

[1] https://www.bibsonomy.org/bibtex/c3e2ede2f835d35c08c54a670951276b#citation_BibTeX
[2] https://www.nbcnews.com/health/health-news/lancet-retracts-large-study-hydroxychloroquine-n1225091
[3] https://www.washingtonexaminer.com/news/wuhan-lab-collaborator-peter-daszak-recused-from-lancets-covid-19-origins-investigation
[4] https://arxiv.org/pdf/2107.10356.pdf
[5] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3654683/
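The dataset-confounding argument above can be made concrete with a toy simulation (all numbers hypothetical, not from the paper): if acquisition site is correlated with the label, a model can score well by reading the scanner's fingerprint rather than anything biological.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_scan(site: int) -> np.ndarray:
    """Hypothetical scan: identical 'anatomy' everywhere, but one site's
    scanner calibration adds a small global intensity offset."""
    anatomy = rng.normal(0.5, 0.1, size=(32, 32))
    offset = 0.05 if site == 1 else 0.0
    return anatomy + offset

# Two hospitals with demographically skewed patient pools, so the label
# is (here: perfectly) confounded with acquisition site.
sites = rng.integers(0, 2, size=1000)
scans = np.stack([make_scan(s) for s in sites])
labels = sites.copy()

# A "classifier" that only looks at mean brightness, i.e. no anatomy at
# all, still separates the groups cleanly.
means = scans.mean(axis=(1, 2))
threshold = means.mean()
pred = (means > threshold).astype(int)
accuracy = (pred == labels).mean()
print(f"accuracy from scanner offset alone: {accuracy:.2f}")
```

This is exactly why collecting comparison data on a single machine, as suggested above, is the clean test: it removes the acquisition fingerprint as a shortcut.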