科技回声 (Tech Echo): a tech news platform built with Next.js, serving global tech news and discussion content.


AI recognition of patient race in medical imaging: a modelling study

76 points by aunterste almost 3 years ago

23 comments

tech-historian almost 3 years ago
The interpretation part hit home: "The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is itself not the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging."
MontyCarloHall almost 3 years ago
Not too surprising that physical differences across ethnicities are literally more than skin deep. It wouldn't be shocking that a model could identify one's ethnicity based on, for example, a microscope image of their hair; why should bone be any different?

I'm more surprised that the distinguishing features haven't been obvious to trained radiographers for decades. It would be cool to see a followup to this paper that identifies salient distinguishing features. Perhaps a GAN-like model could work: given the trained classifier network, train 1) a second network to generate images that, when fed to the classifier, maximize the classification score for a given ethnicity, and 2) a third network to discriminate real from fake X-ray images (to avoid generating noise that happens to minimize the classifier's loss function). I wonder whether the generator would yield images with exaggerated features specific to a given ethnicity, or realistic but uninterpretable images.
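The first half of the GAN-like proposal above (a network, or here just gradient ascent, that pushes an input toward maximizing the classifier's output for a target class) can be sketched minimally. This is a hypothetical illustration, not anything from the paper: a toy fixed linear scorer stands in for the trained deep network, and all names and parameters are invented.

```python
import numpy as np

# Toy stand-in for a trained classifier: a fixed linear score
# s(x) = w . x for the target class. A real model would be a deep
# network, but the gradient-ascent loop is the same idea.
rng = np.random.default_rng(0)
n_pixels = 64
w = rng.normal(size=n_pixels)  # "learned" weights for one class

def class_score(x):
    return float(w @ x)

def maximize_activation(steps=200, lr=0.1):
    """Gradient ascent on the *input* to maximize the class score.
    For this linear score the gradient is just w; for a deep net it
    would come from backprop through the frozen classifier."""
    x = np.zeros(n_pixels)
    for _ in range(steps):
        x += lr * w            # d(w @ x)/dx = w
        x = np.clip(x, -1, 1)  # keep "pixels" in a valid range
    return x

x_opt = maximize_activation()
print(class_score(np.zeros(n_pixels)), class_score(x_opt))
```

The second half of the proposal, a discriminator penalizing unrealistic images, is what would keep this from converging to adversarial noise; without it, activation maximization like the above tends to produce uninterpretable high-score inputs.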
uberwindung almost 3 years ago
> "In this modelling study, we defined race as a social, political, and legal construct that relates to the interaction between external perceptions (ie, "how do others see me?") and self-identification, and specifically make use of self-reported race of patients in all of our experiments."

Garbage research.
dang almost 3 years ago
The submitted title ("AI identifies race from xray, researchers don't know how") broke the site guidelines by editorializing. Submitters: please don't do that - it eventually causes your account to lose submission privileges.

From the guidelines (https://news.ycombinator.com/newsguidelines.html):

"Please use the original title, unless it is misleading or linkbait; don't editorialize."
Imnimo almost 3 years ago
The fact that the model seems to be able to make highly accurate predictions even on the images in Figure 2 (including HPF 50 and LPF 10) makes me skeptical. It feels much more probable that this is a sign of data leakage than that the underlying true signal is so strong that it persists even under these transformations.

https://arxiv.org/pdf/2011.06496.pdf

Compare the performance under high-pass and low-pass filters in this paper on CIFAR-10. Is it really the case that differentiating cats from airplanes is so much more fragile than predicting race from chest x-rays?
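For readers unfamiliar with the filtering setup under discussion, here is a rough sketch of FFT-based low-pass (LPF) and high-pass (HPF) filtering of an image. The cutoff value and the synthetic image are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def fft_filter(img, cutoff, mode="low"):
    """Keep only spatial frequencies below `cutoff` (low-pass) or
    above it (high-pass), measured in cycles across the image.
    A rough stand-in for the kind of LPF/HPF degradation discussed;
    the paper's exact filters may differ."""
    rows, cols = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(rows)) * rows
    fx = np.fft.fftshift(np.fft.fftfreq(cols)) * cols
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff if mode == "low" else radius > cutoff
    return np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real

# Synthetic "image": a smooth gradient (low frequency) plus a
# one-pixel checkerboard (high frequency).
yy, xx = np.mgrid[0:64, 0:64]
img = yy / 64 + 0.5 * ((xx + yy) % 2)

low = fft_filter(img, cutoff=8, mode="low")    # checkerboard removed
high = fft_filter(img, cutoff=8, mode="high")  # gradient removed

# The two masks are complementary, so the halves sum back to the image.
print(np.allclose(low + high, img))  # True
```

The surprising claim in the paper is that classification survives aggressive versions of the high-pass case, where the retained component looks like structureless noise to a human.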
jl6 almost 3 years ago
> Models trained on low-pass filtered images maintained high performance even for highly degraded images. More strikingly, models that were trained on high-pass filtered images maintained performance well beyond the point that the degraded images contained no recognisable structures; to the human coauthors and radiologists it was not clear that the image was an x-ray at all.

What voodoo have they unearthed?
tomp almost 3 years ago
If you're interested in "hard to describe features that can be learned with enough experience", look up chick sexing:

https://en.wikipedia.org/wiki/Chick_sexing#Vent_sexing
civilized almost 3 years ago
It would be nice to see more genuine, enthusiastic scientific curiosity to understand how the ML algorithms are doing this, rather than just abject terror and alarm.
tejohnso almost 3 years ago
"This issue creates an enormous risk for all model deployments in medical imaging: if an AI model relies on its ability to detect racial identity to make medical decisions, but in doing so produced race-specific errors, clinical radiologists would not be able to tell, thereby possibly leading to errors in health-care decision processes."

Why would a model rely on its ability to detect racial identity to make decisions?

What kind of errors are race-specific?
daniel-cussen almost 3 years ago
It could actually be the skin: it's designed to block rays, so it might also have a different x-ray opacity, and that could be judged from the whole picture, in particular where there are several layers of melanin, or transitions from melanin to very little, like on hands and feet. Eyelids too, if they're retracted. And at the perimeter, the profile, a different angle for the ray.

And the intention is for melanin to block x-rays too - block all rays, not just UV but deeper. Well, it has a spectrum, that cannot be denied. And if you're taking all the pixels in an image, there might be aggregate effects as I described. You get a few million pixels, let the AI use every part of the buffalo of the information in the picture, and you can get skin color through x-rays.

The question is what this says about Africans with light skin strictly because of albinism, i.e. lack of pigmentation, but otherwise totally African.
mensetmanusman almost 3 years ago
What does this mean in terms of race being a social construct/concept?
hellohowareu almost 3 years ago
Simply go to Google Images and search: "skeletal racial differences".

Subspecies are found across species - they arise from geographic dispersion and geographic isolation, which humans underwent for tens and hundreds of thousands of years.

Welcome to the sciences of anatomy, anthropology, and forensics.

Other differences:

- slow-twitch vs fast-twitch muscle

- teeth shape

- shapes and colors of various parts

- genetic susceptibility to and advantages against specific diseases

Just like Darwin's finches of the Galapagos, humans faced geographic dispersion resulting in genetic, dietary (e.g. hunter-gatherer vs farmer and malnutrition), and geographical (e.g. altitude) differences which over the course of millennia produce anatomical differences. We can see this effect across all biota: bacteria, plants, animals, and yes, humans.

Help keep politics out of science.
bb123 almost 3 years ago
One idea is that there is some difference in the x-rays themselves that could potentially be explained by racial disparities in access to (and quality of) healthcare. Maybe white people tend to visit hospitals with newer, better equipment or better trained radiographers and the model is picking up on differences in the exposures from that.
mathieubordere almost 3 years ago
I mean, if color of skin, shape of eyes, and other visible, "mechanical" characteristics can differ, it's not that big of a leap to observe that certain non-visible characteristics can differ too between humans.
samatman almost 3 years ago
Physiologies are created by genetics, and differences in ancestry are the basis for self-identified race.

Ordinary computer vision can also identify race fairly accurately; the high-pass filter result merely points out that ML classifiers don't work like human retinas.

It's astonishing how many epicycles HN comments are trying to introduce into a finding that anyone would have predicted. Research which confirms predictable things is valuable, of course, but no apple carts have been upset.
bitcurious almost 3 years ago
I would guess a causal chain through environmental factors, given how much archaeologists are able to tell about prehistoric humans' lives based on bone samples.

Bone density, micro-fractures, and deviations in shape. The Mongols famously had bowed legs from spending a majority of their waking lives on horseback.
oaktrout almost 3 years ago
I recall seeing a paper in the early 2010s with an algorithm that could discriminate between white and Asian patients based on head MRI images. I'm having trouble finding it now, but this finding is not too surprising to me.
ppqqrr almost 3 years ago
So there are material differences that support certain prejudices; big surprise: human societies have been (and still are) working very hard for thousands of years to craft those differences, isolating, separating, enslaving, oppressing, and exiling their scapegoat "others". The question is not whether the differences are real, but whether we can prevent AI from being used to perpetuate them. TBH, we don't stand a chance; we live in a society where most people cannot even wrap their heads around why it *shouldn't* perpetuate those differences.
kerblang almost 3 years ago
> Importantly, if used, such models would lead to more patients who are Black and female being *incorrectly* identified as healthy

I think this is the point a lot of people are missing; they think, "So what if 'black' correlates to unhealthy and the model notices? It's just seeing the truth!"

However, I'm still wondering how this incorrectness works; can anyone explain?

Edit: Clue: the AI is predicting *self-reported* race, and the authors indicated that self-reported race correlates poorly to *actual* genetic differences.
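One way such errors could arise, sketched as a toy simulation (invented numbers, not the paper's data or its mechanism): if a model has learned a score offset correlated with group membership, a single shared decision threshold produces more false "healthy" calls in the shifted group even when true disease rates are identical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Two groups with the SAME true disease rate; the only difference is
# a score offset the model has associated with group B, a stand-in
# for "using a group identity signal as a shortcut feature".
group = rng.integers(0, 2, n)                # 0 = A, 1 = B
disease = rng.random(n) < 0.3                # identical prevalence

score = disease * 2.0 + rng.normal(0, 1, n)  # disease signal + noise
score -= 0.8 * group                         # learned group-correlated shift

pred_sick = score > 1.0                      # one shared threshold

for g in (0, 1):
    sick = disease & (group == g)
    fnr = np.mean(~pred_sick[sick])          # missed diagnoses
    print(f"group {g}: false-negative rate = {fnr:.2f}")
```

Group 1's false-negative rate comes out markedly higher: equally sick patients in the shifted group land below the threshold more often, i.e. are "incorrectly identified as healthy". Whether this is the actual mechanism behind the quoted result is not something the simulation can show.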
ars almost 3 years ago
If this is true, I suspect a human could be trained the same way.

I read once that a radiologist can't always explain what they see in an image that leads them to one diagnosis or another; after seeing many of them, they say, they just know.

So I suspect the same could be done for race. This would be a super interesting thing to try with some college students: pay them to train for a few days on images and see how they do.
omgJustTest almost 3 years ago
Given the complexity of these datasets, and what is known about the quality of medical scanners, is it possible that lower-quality, higher-noise scanners serve underserved communities whose race distributions are heavily skewed?
HWR_14 almost 3 years ago
A lot of people are proposing simple reasons why this could be the case. They did so last year, too, when the study that inspired this one was published.

Maybe this needs to be updated from physicists: https://xkcd.com/793/
wittycardio almost 3 years ago
I don't trust medical journals or experimental AI research to be particularly scientific, so I'll just throw this into the meaningless bin for now.