
A discussion of discussions on AI Bias

63 points by davezatch 11 months ago

11 comments

aetherson 11 months ago
I think Dan performs a mild sleight-of-hand trick here: he asks why we don't consider this a *bug* when any other software would consider it a bug. But in fairness, the question was not, "Did this prompt have a bugged output," it's "Did it have a racially biased output," and that's a more emotionally charged question.

If I wrote software that choked on non-ASCII inputs for, say, a name, and then someone said, "Hey, that's a bug," cool, yes, fair. If someone said, "This is evidence of racial bias," I mean... I'd probably object. Even if there is some nugget of truth to the idea that there may be some implicit language bias which relates to race in there.

I think Dan does a decent job showing that there is some level of racial bias -- not on the level of "the model refuses to show Asian people," but on the level of "the model conflates certain jobs with certain races," and that's fair. But I just found the lede of "Why don't people admit there's a bug in AI" to be a little obtuse.
kelseyfrog 11 months ago
"Bias in relation to what?" is the huge unanswered question I left the article with.

Are we even talking about the same thing when we discuss this topic? Are we talking about bias with respect to the training set, bias in the training set with respect to reality, bias with respect to our expectations, or bias in reality? Each of these is a completely different problem, with different causes, importance, and consequences.

I can't help but think that at least some people are uncomfortable with reality being reflected back at them. When we generate images conditioned on occupation, should they reflect the racial proportions documented by the BLS[1]? It feels very "I don't care what you do about them, but I don't want to *see* homeless people." Being confronted with reality is deeply unsettling for some people. Likewise, I'd be unsurprised to hear that some people would be uncomfortable if images generated by conditioning on occupation *did* accurately reflect reality, because in their minds reality is *more* diverse than it really is.

1. https://www.bls.gov/cps/cpsaat11.htm
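To make "bias in relation to what?" concrete: the verdict can flip depending on which reference distribution you measure against. The sketch below is illustrative only; the label counts and the two reference distributions are made-up numbers standing in for "training-set shares" and "BLS-style occupational shares", not real measurements.

    from collections import Counter

    # Hypothetical demographic labels for 100 images generated from one
    # occupation prompt (in practice these would come from annotation).
    generated = Counter({"white": 81, "black": 7, "asian": 9, "hispanic": 3})

    # Two candidate reference distributions (illustrative numbers only).
    references = {
        "training set": {"white": 0.74, "black": 0.08, "asian": 0.12, "hispanic": 0.06},
        "BLS shares":   {"white": 0.62, "black": 0.14, "asian": 0.10, "hispanic": 0.14},
    }

    total = sum(generated.values())
    for name, ref in references.items():
        # Total variation distance between generated shares and the reference.
        tvd = 0.5 * sum(abs(generated[g] / total - p) for g, p in ref.items())
        print(f"bias relative to {name}: TVD = {tvd:.2f}")

Same generated outputs, two quite different "how biased is it" answers, which is exactly the ambiguity the comment points at.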
theptip 11 months ago
I’m gently optimistic on the subject. When bias is encoded in the synaptic weights of a human brain it’s extremely hard to quantify. You can’t run an ablation experiment or try different combinations of similar inputs to determine if a judge is biased, for example.

AI is materializing existing biases, perhaps amplifying some in the short term. This is object-level bad if we start hooking up important systems without building guardrails; e.g., I’d be worried about a “legal advice from AI” service right now.

At the meta level, this is an opportunity to run those experiments and root out some of the areas where bias does creep in. I think a lot of the coverage on the issue (not referring to OP here) fails to look past the object level and in doing so misses the big opportunity.

Of course, when you actually start having these conversations, you get to some very tricky discussions about what “fixing bias” actually means. In so many areas it’s a lot easier to throw around fuzzy rhetoric than a quantitative model that encodes a specific solution. But AI systems require precisely that.
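The "different combinations of similar inputs" experiment is easy to mechanize against a model in a way it never could be against a judge. A minimal sketch, assuming a hypothetical ask_model() wrapper and illustrative name variants; a real audit would need far more careful prompt and name selection.

    import itertools

    def ask_model(prompt: str) -> str:
        # Hypothetical wrapper around whatever model is under test.
        raise NotImplementedError("plug in your model call here")

    # Hold the scenario fixed, vary only the name, compare answer rates.
    NAMES = ["Emily Walsh", "Lakisha Washington", "Wei Chen", "Jose Hernandez"]
    TEMPLATE = ("A tenant named {name} is three days late on rent for the first time. "
                "Should the landlord begin eviction proceedings? Answer yes or no.")

    def run_counterfactuals(n_repeats: int = 20) -> dict:
        counts = {name: {"yes": 0, "no": 0} for name in NAMES}
        for name, _ in itertools.product(NAMES, range(n_repeats)):
            answer = ask_model(TEMPLATE.format(name=name)).strip().lower()
            # Anything not starting with "yes" is counted as "no" for simplicity.
            counts[name]["yes" if answer.startswith("yes") else "no"] += 1
        # Large gaps in "yes" rates across names suggest name-conditioned bias.
        return counts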
lsy 11 months ago
I think it's difficult to conceptualize a program's behavior as "buggy" when the program *has no specified behavior*. The vexing thing about LLMs and image generators is that they ultimately have no intended purpose or principled construction as artifacts, being mostly discovered through trial and error and pressed into service due to various misconceptions about what they are — human imitators, fact retrievers, search engines, etc. But they are really just statistical regurgitators over whatever the training dataset is, and while that type of artifact sometimes proves *useful*, it's not something we have yet developed any kind of principled theory around.

One example is DALL-E initially going viral due to its generated image of an astronaut riding a unicorn. Is this a "bug" because unicorns don't exist and astronauts don't ride them? One user wants facts and another wants fancy. The decision about what results are useful for which cases is still highly social and situational, so AIs should never be put in a fire-and-forget scenario, or we will see the biases Dan discusses. AIs are more properly used as sources of statistical potential that can be reviewed and discarded if they aren't useful. This isn't to say that the training sets are not biased, or that work shouldn't be done to rectify that distribution in the interest of a better society. But a lot of the problem is the idea that we can or should trust the result of an LLM or image generator as some source of truth.
Terr_ 11 months ago
I feel like we need to disentangle a bunch of layers here. To take a stab at a quick categorization:

1. Policy bias, like if someone put in a system prompt to try to trigger certain outcomes.

2. Algorithmic/engineering bias, like if a vision algorithm has a harder time detecting certain details for certain skin tones under certain lighting.

3. Bias inside the data set which is attributable to biased choices made by the company doing the curation.

4. Bias in the data set which is (unlike #3) mostly attributable to biases *in the external field or reality*.

I fear that an awful lot of it is #4, where these models are highlighting distasteful statistical trends that already exist and would be concerning even if the technology didn't exist.
tootie 11 months ago
If I could channel some of my less imaginative QA colleagues, it's not a bug if it's not specified in the acceptance criteria. A bug is something contrary to the expected output. If the AI tool producers never cared about bias, then it's not a bug. And it will probably never be treated as such until they have a liability issue to contend with.
datadrivenangel 11 months ago
The data is correct, but bad. What do we do in a case when we can't fix the source data?
vessenes 11 months ago
Substantive effort by Dan, as usual.

He asks for a prediction — will this still be the same state of affairs in 2033, where “same” means models encode source-data bias and we don’t have in-model ways of dealing with that bias. I’d predict “yes” on that, with some caveats.

What practitioners seem to be doing now is using prompting modifications to inputs to get desired diversity spreads out of the models. I’ve written a bit about this, but I think doing this openly, with user choice, is great, and doing it secretly, without notification, is evil. I think a lot of people feel this way, and it explains much of the outcry against race-shifting founding fathers.

Whatever I think about it, we’ll see a lot of that by 2033. I do think we’ll see much more sophisticated use of controlnets / LoRAs / their successors to nudge / adjust inference at the weight level, vs. prompting. These are useful right now and super sophisticated; they’re not just for bias-related changes, since almost anything you can prompt up could become a vector you adjust LLM behavior on. So, I think we’ll move out of the Stone Age and into, say, the Bronze Age by 2033.

That said, Dan does make a fundamental input-bias error here, which is common when people explore and write about diffusion models, but really important to test — what does the source input image randomness look like? A diffusion model moves some number of steps away from some input, typically random noise. This random noise has a lightness level, and at times a color tone. By default, in most inference systems, this image is on average tone-neutral (grey) and very light.

If you’re going to generate sample images without keeping track of seeds, fine, do a bunch, like he did here. But if you’re going to determine how likely ‘whiteness’ is on a given prompt, you need to be very aware of what source image you’re giving the model to work on, especially when we are talking about facial and race discriminators that are judged largely on skin tone. A white input image is easier to turn into a white face and requires less deviation from the original image, so on average it will be preferred by most diffusion model generation stacks.

So, is PAI or Stable Diffusion biased over and above the world’s image data levels in their model? Maybe, I don’t know. Is it biased at the world’s image data levels? Maybe, probably? But I don’t think you can pass a Gaussian noise image defined to have a fairly white lightness value and grey color tone to a thing, ask it to draw a face, and then say it’s white-face-biased a priori — you’re starting the model out with a box of very light crayons and making it ask for other colors from the cabinet vs. using what’s at hand.

Anyway, I don’t think this takes away from Dan’s fundamental point that this class of ‘bug’ is not going away, especially in that it’s harder to even agree on what is a bug. But I’d like to see someone, anyone, talk about image generation bias while aware of what’s being fed *into* these models at the start of inference; it would raise the level of discourse.
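One way to act on that caveat is simple per-seed bookkeeping: record how light each seed's starting noise is and check whether output lightness tracks it. The sketch below is illustrative only: generate() is a hypothetical stand-in for a diffusion pipeline, and the pixel-space noise construction is an assumption (latent-diffusion stacks start from latent-space noise, which you would need to decode before measuring).

    import numpy as np

    def generate(prompt: str, seed: int) -> np.ndarray:
        # Hypothetical: run your diffusion pipeline with this prompt and seed,
        # returning the output image as an HxWx3 float array in [0, 1].
        raise NotImplementedError("plug in your pipeline here")

    def luminance(img: np.ndarray) -> float:
        # Mean Rec. 709 luma of an HxWx3 image with values in [0, 1].
        return float((img @ np.array([0.2126, 0.7152, 0.0722])).mean())

    def seed_lightness_check(prompt: str, seeds: range) -> float:
        """Correlate the lightness of each seed's starting noise with the
        lightness of the image generated from that seed."""
        init_lum, out_lum = [], []
        for seed in seeds:
            rng = np.random.default_rng(seed)
            # Assumed pixel-space starting noise; adjust to match your stack.
            noise = rng.normal(0.5, 0.2, size=(512, 512, 3)).clip(0.0, 1.0)
            init_lum.append(luminance(noise))
            out_lum.append(luminance(generate(prompt, seed)))
        return float(np.corrcoef(init_lum, out_lum)[0, 1])

A strong positive correlation would support the "box of very light crayons" point; a near-zero one would suggest the starting noise is not driving the outputs.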
janalsncm 11 months ago
> a common call to action for at least the past twenty years . . . has been that we need more diverse teams

I view calls for more “diverse” teams as a sort of general platitude, basically a thought-terminating cliche.

The problem is that the team has made certain assumptions about the demographics of its userbase, not that the team itself is not diverse. The real world is too long-tail to represent every demographic. The 40th most popular language has 46 million speakers.
dr_dshiv 11 months ago
ChatGPT, in my experience, is substantially less biased than most people. I’m impressed by that, fwiw
082349872349872 11 months ago
"Racial" bias is a bug in people.

To the extent it shows up in "AI", that's just GIGO.

What surprises and disappoints me is how many people (not so much TFA, but many comments here) seem to be expecting AI to be magical pixie dust which gives "the right answer", instead of, you know, an *artificial* intelligence.