Covert Racism in LLMs

44 points by trueduke · about 1 year ago

6 comments

gandalfgeek · about 1 year ago
The cited paper seems to be really bending over backwards to find some trace of bias. Unwinnable game for LLMs.

E.g. cited work claims "LLMs assign significantly less prestigious jobs to speakers of African American English... compared to Standardized American English". You don't say! Formal/business language has higher association with prestigious jobs than informal/street/urban language. How is that even classified as "bias"?
Comment #39610445 not loaded
Comment #39610420 not loaded
Comment #39610421 not loaded
Comment #39610796 not loaded
Comment #39612035 not loaded
ortusdux · about 1 year ago
Reminds me of this study:

Race effects on eBay (2015)

"Abstract. We investigate the impact of seller race in a field experiment involving baseball card auctions on eBay. Photographs showed the cards held by either a dark-skinned/African-American hand or a light-skinned/Caucasian hand. Cards held by African-American sellers sold for approximately 20% ($0.90) less than cards held by Caucasian sellers, and the race effect was more pronounced in sales of minority player cards. Our evidence of race differentials is important because the on-line environment is well controlled (with the absence of confounding tester effects) and because the results show that race effects can persist in a thick real-world market such as eBay."

https://ianayres.yale.edu/sites/default/files/files/Race_effects_on_ebay.pdf
smusamashah · about 1 year ago
This means that for coding or any other specialised queries, there exists a specific style which, when the question is asked in it, will return the best answer for that query.

It means we cannot just ask a question in any form and expect the answer to be of the same quality. This is in a way obvious, because the text is generated based on tokens extracted from text, not the concepts.
Comment #39610369 not loaded
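As an illustration of the point above (not something from the thread): a minimal sketch that sends the same question in two registers to a chat model and prints both answers, so any difference in answer quality can be compared side by side. The model name, the two example prompts, and the use of the OpenAI Python client are assumptions for the sketch, not details given by the commenter.

    # Illustrative sketch: ask the same question in two styles and compare answers.
    # Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
    # model name and prompts are placeholders, not from the cited paper.
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        "Which sorting algorithm has the best average-case complexity for large inputs?",
        "yo whats the fastest way to sort a huge pile of stuff",
    ]

    for prompt in prompts:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model works here
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt)
        print(reply.choices[0].message.content)
        print("-" * 40)

Running this repeatedly with different phrasings of the same underlying question is one crude way to probe how much the answer depends on style rather than content.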
belorn · about 1 year ago
It would surprise me a lot if the same effect was not detected, with a significantly stronger effect, when the covert bias being studied is gender rather than race. I would also bet that linguistic features that signal wealth would produce a bias.
firejake308 · about 1 year ago
I think it's absurd to ask OpenAI to just "recall" their trillion-dollar cash cow, but there should absolutely be legislation limiting the use of LLMs (or really any black-box AI) in the criminal justice system.
Comment #39610608 not loaded
Comment #39610172 not loaded
Comment #39610313 not loaded
mewpmewp2 · about 1 year ago
Have they tried any other grammatically incorrect English?
Comment #39610181 not loaded
Comment #39610198 not loaded
Comment #39610314 not loaded