AI Recognises Race in Medical Images

128 points by stuartbman, over 3 years ago

22 comments

throwaway894345, over 3 years ago
Please forgive me for asking a controversial question (particularly so early in the morning), but if there are all of these biological correlations with race, what does it mean that “race is a social construct”? Is the idea that black people have greater bone mineral density (per TFA) due to social or environmental causes (e.g., diet)? For what it’s worth, I’m a staunch egalitarian and I don’t see that changing either way.

EDIT: Really pleased with the largely constructive conversation in this thread. I was worried that this was going to be co-opted as an ideological flame thread. Thanks for the insightful answers and good-faith engagement. Keep up the good work!
hgial, over 3 years ago
It might be helpful for folks to look at the blog post written by one of the authors:

https://lukeoakdenrayner.wordpress.com/2021/08/02/ai-has-the-worst-superpower-medical-racism/

or the paper itself:

https://arxiv.org/pdf/2107.10356.pdf

I see a lot of "oh it's probably just picking up on x y z" when x, y, and z are things they explicitly checked for:

1) "It's probably just the names or other metadata" – they only gave it pixel data to train on. To control for things like metadata overlaid on the image (e.g., a name written on the image) they divided the images into 3x3 sections and trained classifiers on each section separately.

2) "It's probably some artifact of how the hospital marked up the images" – they used something like 7 different datasets from different hospitals and different modalities (X-Ray and CT).

If it is cheating somehow, it's not doing it in an obvious way that you can think of in a minute or two. Also note that they had more than just medical folks working on the paper; the author list includes plenty of computer scientists. It's unlikely they're making an elementary ML mistake here.
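To make the 3x3-section control concrete, here is a minimal sketch of that style of check on synthetic stand-in data (my own illustration, not the paper's code): cut each image into nine sections, train a separate classifier per section, and see whether a single section, say a corner with burned-in text, accounts for the signal.

```python
# Sketch of the 3x3-section control described above; not the paper's code.
# Images are assumed to be (H, W) grayscale arrays with one label per image.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def split_3x3(img):
    """Cut one image into 9 roughly equal sections, row-major order."""
    h, w = img.shape
    hs, ws = h // 3, w // 3
    return [img[r * hs:(r + 1) * hs, c * ws:(c + 1) * ws]
            for r in range(3) for c in range(3)]

# Toy stand-in data: 200 random 96x96 "images" with binary labels.
rng = np.random.default_rng(0)
images = rng.normal(size=(200, 96, 96))
labels = rng.integers(0, 2, size=200)

# Train and score one classifier per section. If only one section (e.g. a
# corner where text is burned in) performs well, that points to leakage
# rather than anatomy.
for section in range(9):
    X = np.stack([split_3x3(img)[section].ravel() for img in images])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"section {section}: held-out accuracy {clf.score(X_te, y_te):.2f}")
```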
umvi, over 3 years ago
I don't see why this is necessarily bad. An ML model is picking up on subtle anatomical or physiological differences between races. So what? That doesn't automatically mean the AI is racist or biased...
abrichr, over 3 years ago
Previous submission of the paper itself: https://news.ycombinator.com/item?id=28050699

We know that various features visible in medical images correlate with race, e.g. breast density, bone density, etc. Most likely the network is just learning a classifier on top of these features.

This is trivially verifiable but conspicuously absent from the paper.
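One way to read "trivially verifiable": pull out the known correlates (breast density, bone density, and so on) as tabular features, see how far a plain classifier gets on them alone, and compare that to the image model. A minimal sketch under those assumptions, with hypothetical feature names and synthetic data:

```python
# Sketch of the baseline check suggested above: can a simple model trained
# only on known race-correlated measurements match the image model?
# Feature names and data here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500
features = np.column_stack([
    rng.normal(loc=1.0, scale=0.1, size=n),  # stand-in for bone mineral density
    rng.normal(loc=0.4, scale=0.1, size=n),  # stand-in for breast density
])
labels = rng.integers(0, 2, size=n)          # stand-in self-reported race label

baseline = GradientBoostingClassifier()
scores = cross_val_score(baseline, features, labels, cv=5)
print(f"feature-only baseline accuracy: {scores.mean():.2f}")
# If this baseline lands far below the image model, the network is learning
# something beyond these hand-picked correlates; if it matches, the claim
# that the network "just" re-learns known features gains support.
```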
knicholes, over 3 years ago
I wonder if it has anything to do with the machines being used to take the images. Maybe some groups have access to one type of imaging machine while other groups have access to another type.
Havoc, over 3 years ago
Surprised that's possible, given the usual refrain that it's basically just melanin.
lostlogin, over 3 years ago
We were marvelling at a surface-shaded render made on a new Siemens MR scanner from a T1 MPRAGE on a very still and compliant patient. It basically looked like a black-and-white photo (though with the tools at hand we could cut the image in half and look at the brain). You could see the facial hair, and you could identify the patient if you knew them. Medical imaging is moving along at pace, and it would be interesting to see what could be inferred from a dataset of images of this quality.
andi999, over 3 years ago
AIs do not have magical abilities; I do not trust this result. AI can, though, easily pick up on technical artifacts. Something like a confounding factor: since they used different databases, maybe one dataset had a high proportion of people of one self-declared race and another dataset of the other, and each used a different intensity maximum or something similar.
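To make that kind of confounder concrete, here is a toy illustration of my own (not from the paper): two datasets exported with slightly different intensity maxima are separable by a linear classifier even though the pixel content is otherwise pure noise, so any label that correlates with dataset membership would come along for free.

```python
# Toy illustration of the dataset-intensity confounder described above.
# Two "hospitals" export otherwise identical noise images, but hospital B
# rescales to a slightly lower intensity maximum.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
hospital_a = rng.uniform(0.0, 1.00, size=(300, 64 * 64))  # max intensity 1.00
hospital_b = rng.uniform(0.0, 0.95, size=(300, 64 * 64))  # max intensity 0.95

X = np.vstack([hospital_a, hospital_b])
y = np.array([0] * 300 + [1] * 300)  # label that happens to track the dataset

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print(f"accuracy from the intensity artifact alone: {clf.score(X_te, y_te):.2f}")
# Nothing anatomical is present, yet the classifier separates the groups,
# which is why per-dataset normalisation checks matter.
```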
askesum, over 3 years ago
Greyhounds and Schäferhunde (German Shepherds) are separate races. The fastest Schäferhund would lose a race to the slowest Greyhound. Jamaicans seem to be faster than Swedes, but still, the fastest Swede is faster than almost every Jamaican. Swedes and Jamaicans are not separate races.
soundnote, over 3 years ago
This caused glorious meltdowns on Twitter. Some people just don't want to face reality.
stuartbman, over 3 years ago
I'm very aware that I'm an HN novice, but can I ask why my post title was edited? The new title is much less descriptive, and X-rays are different from medical images, after all.
lmilcin, over 3 years ago
And... they found it looks at the name of the patient on the border of the image, or something similar.

Like the time some team tried to evolve an FPGA circuit to solve some problem efficiently with a genetic algorithm, and it learned to use a bunch of the FPGA's transistors as an antenna to communicate with another part of the chip through interference. Unfortunately, it would not work on other FPGA chips, even ones from the same lot.
iandanforth, over 3 years ago
I asked the authors if they had compared the results with the participant's skin color. They had not. The hypothesis would be that melanin is interacting with X-rays and would explain how the system can classify "race" even at extremely degraded resolutions.
literallyaduck, over 3 years ago
"Yeah, we are going to need your chest x-rays to approve you for a loan."
shadowgovt, over 3 years ago
This is probably a great time to remind everyone that the reason the blood types are A, B, AB, and O (as opposed to, say, A, B, C, D or another nomenclature) is that when the first blood-type experiments were run, only people with A and B protein configurations were available for testing in the lab where the tests were executed.

I'd be *very* cautious drawing sweeping conclusions from research like this. The researchers bear a heavy burden to show that what they've found isn't just "recognizes race *in this training dataset*."
threshold, over 3 years ago
You want AI reviewing medical imaging to recognize race because the likelihood of certain diseases is higher for some races than others.
poulpy123, over 3 years ago
AFAIK there is no scientific definition of race, so I don't see what an algorithm could be recognising.
nxpnsv, over 3 years ago
I'm struggling to understand what it's good for. Couldn't you just look? Or better yet, ask?
JoeAltmaier, over 3 years ago
Never mind the images; who was deciding what 'race' the training data matched against? In this modern age of globalism, they must have searched hard to find anyone with any kind of historically-categorized DNA.

I'm guessing they just used folks' self-identification of race on some form. Which is largely a social construct.
motohagiography, over 3 years ago
Any clustering similarity scheme for biometric data would yield similarity categories that we may or may not name "races", though.

We could probably do the same with text analysis, where the emergent distinct flavours would create categories. A previous HN story that did specifically this (https://news.ycombinator.com/item?id=27568709) could just as easily have been called "tribes."

The bigger question is whether the categories provide heuristics with valuable predictive illumination, "valuable" being the key term to solve for.

Ethnicity information in medicine may be a fast heuristic for testing for things like melanoma and diabetes, but even the fact that this fast sorting rule might provide a time/steps shortcut or intuitive leap toward a diagnosis is likely more an artifact of the cost of testing and examination than the result of a physical/biological determinant.

I'd conjecture that a world with tricorders, where the cost of scanning for disease is equal and controlled, would likely yield results that were less ethnically correlated, and then edge cases that were exclusively ethnically correlated, e.g. over a very polarized distribution. There's also the question of whether the tricorder measures complete things, and who decides.

This is to say, there are differences and combinations that may aggregate into categories, but the meaning of the differences is dynamic, subjective, and a function of what level of abstraction you are looking at them from. E.g. at the level of a statement like "most foo people are bar," you've already cancelled out most of the information about your sample, so the coherence of something that low-information is going to be limited as well.

In this sense, the "social construct" description is a response to these noisy dynamics, and it's consistent to a point. In this view, race is only ever a determinant when we let it be, as the result of chosen and learned interpretations of these cognitive grouping dynamics. When the cost of errors is low, we can afford to unlearn these abstractions. Modernity and civilization imply the cost is low.

Taking that further, when the real cost of errors is high enough, you get a reinforcement effect on the bias, where the surviving population is made up mainly of people who exercised that fast heuristic (hence long-lived homogeneous populations), because the tolerant ones are evolutionarily selected out as a result of that high error cost.

I could even extend this further to define racists today as people who perceive a high cost to being wrong in their generalizations, which correlates well with being poor, but also very rich, just less so in the middle. Anti-racism becomes a kind of signal that shows you can afford to be wrong, and oddly, racism in this model is intended to signal you have a lot to lose. If you want to reduce racism, solve the security issues of people who perceive a high cost to being wrong about openness. If you want more racism, just antagonize people who perceive that they have a lot to lose. I'd wonder how well that generalizes.
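As a toy version of the first point above (my own sketch, with made-up features): any off-the-shelf clustering of biometric-style measurements hands back categories, and naming them is a separate, purely human step.

```python
# Toy version of "any clustering scheme yields categories": unsupervised
# k-means on made-up biometric-style features produces groups regardless of
# whether those groups mean anything. All values are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Columns: stand-ins for height (cm), a bone-density proxy, a skin-reflectance proxy.
people = rng.normal(loc=[170.0, 1.0, 0.5], scale=[10.0, 0.1, 0.15], size=(1000, 3))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(people)
print("cluster sizes:", np.bincount(kmeans.labels_))
# The algorithm happily returns four "categories"; whether to call them races,
# tribes, or nothing at all is a choice made after the fact.
```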
FourthProtocol, over 3 years ago
There's only one race. Ethnicity may vary.
desktopninja, over 3 years ago
Previous discussions: https://hn.algolia.com/?query=AI%20has%20the%20worst%20superpower%20medical%20racism&type=story&dateRange=all&sort=byDate&storyText=false&prefix&page=0

Personally, I think 'race' is nothing more than a fantastical vanity construct. Really, it's tribalism. Furthermore, I also find it hard to comprehend how it holds weight in the medical industry. Race is not real science. Race is entertainment science. AI is mostly entertainment science.