Creativity has left the chat: The price of debiasing language models

174 points | by hardmaru | 11 months ago

27 comments

freehorse | 11 months ago
People often think that RLHF is just about "politics", but in reality it is generally about aligning the model output with what a human would expect/want from interacting with it. This is how ChatGPT and the like become appealing. Finetuning a model primarily serves to make it respond to instructions in an expected way, e.g. you ask something and it does not start autocompleting with some Reddit-like dialogue it may have been trained on. It is to bias the model toward certain outputs. Reducing entropy is exactly the goal, so no surprise they find that. The problem is there is no inherent meaning in the finetuning set from the perspective of the model. Reduction of entropy will not happen by removing "bad entropy" only, as there is no such thing.
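The entropy framing above can be made concrete with a toy calculation: finetuning concentrates the next-token distribution on a few "expected" continuations, which lowers its Shannon entropy. The two distributions below are invented for illustration, not taken from any real model.

```python
# Shannon entropy of two hypothetical next-token distributions.
import math

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

base_model = [0.25, 0.25, 0.25, 0.25]  # mass spread over many continuations
finetuned  = [0.85, 0.05, 0.05, 0.05]  # mass concentrated on one "expected" answer

print(entropy(base_model))  # 2.0 bits (uniform over 4 outcomes)
print(entropy(finetuned))   # lower: the model is more predictable
```

The reduction happens regardless of whether the mass removed was "bad": entropy measures spread, not quality.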
Mathnerd314 | 11 months ago
I had an argument with some people over what debiasing means. There is some interesting research on fair clustering that I think points the way. The way fair clustering works is that you take data with both protected and unprotected attributes, and then you orthogonalize the unprotected attributes based on the protected attributes. So for example, if race is protected and income is unprotected, but there is a strong black/white poor/rich pattern, the fair clustering would compute "relatively poor/relatively rich" clusters. Then you sample from a cluster with equal probability. It will not necessarily produce 50/50 black/white; rather it will follow the input trends, so if the input is 80% white and 20% black then the output will roughly follow those probabilities, independent of what cluster you chose (and there are no clusters corresponding to protected attributes).

Obviously clustering is a different problem from inference, but they are all high-dimensional vector spaces - it should be easy enough to take a fair clustering algorithm and modify it to generate continuous mappings instead of discrete groups. But if it all works, the LLM should be e.g. race-blind in that asking for a description of a rich man will give skin tones following population statistics, but he will always be wearing an expensive suit. The question of what to protect is tricky though, e.g. age is often considered protected, but if you ask for an old man with gray hair it would be surprising to get a retired age-30 person. So there is some subjectivity in designing the protected-features dataset to show what should be considered similar or same-cluster.

But really the purpose of RLHF is to reduce toxicity. It should be possible to orthogonalize toxicity like everything else; then there would not be a reduction in generated races like the paper observed.
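The orthogonalization step described above can be sketched with a least-squares projection: remove the component of an unprotected attribute (income) that is linearly predictable from a protected one (group membership), and keep the residual, "relatively poor / relatively rich". The data and attribute names here are invented for illustration; real fair-clustering algorithms are considerably more involved.

```python
# Sketch: orthogonalize an unprotected attribute against a protected one.
import numpy as np

rng = np.random.default_rng(0)

# Protected attribute: group 0 or 1. Unprotected: income, correlated with group.
group = rng.integers(0, 2, size=200).astype(float)
income = 30_000 + 40_000 * group + rng.normal(0, 5_000, size=200)

# Project income onto [intercept, group] by least squares, keep the residual.
X = np.column_stack([np.ones_like(group), group])
coef, *_ = np.linalg.lstsq(X, income, rcond=None)
residual = income - X @ coef  # "relatively poor / relatively rich"

# The residual is (linearly) uncorrelated with the protected attribute,
# so clusters built on it cannot encode group membership linearly.
print(abs(np.corrcoef(group, residual)[0, 1]))
```

Sampling from clusters built on `residual` then follows the input group proportions, as the comment describes, rather than forcing a 50/50 split.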
jrm4 | 11 months ago
"Bias" implies the possibility of an "unbiased language model," which seems to be in the category of things that are, on one hand, COMPLETELY IMPOSSIBLE, and on the other, still likely to be sold on the market because the market wants it so much?
lispisok | 11 months ago
Is this why all the coding AI products I've used have gotten worse as the developers fine-tune them to eliminate bad output? Before there was bad output and some interesting output; now it's just bland, obvious stuff.
isodev | 11 months ago
In simple terms, LLMs are "bias as a service", so one wonders: what is left once you try to take the bias out of an LLM? Is it even possible?
anewhnaccount3 | 11 months ago
There is a bit of a false equivalence between entropy of output distributions and creativity here. Is diversity really the same as creativity?
SirMaster | 11 months ago
I feel like "information systems" have always struggled with bias, and the latest AI/ML systems seem to be no different.

It doesn't really seem like a problem that can or will ever be "solved", just mitigated to various extents; there will still likely be some underlying biases that are not fully or effectively filtered, because to adjust a bias you first have to detect and understand it.

It feels like it would be a full-time job to keep making sure some evolving model continued to stay "neutral".
nalqax | 11 months ago
CoPilot is now basically useless for discussing or even *getting* recent information about politics and geopolitical events. Not only are opinions censored, it refuses to get *the latest polls about the U.S. presidential elections*!

You can still discuss the weather, get wrong answers to mathematics questions, or get it to output bad code in 100 programming languages.

I would not let a child near it, because I would not want that kind of indoctrination. Users are being trained like Pavlov's dogs.
hughrlomas | 11 months ago
The official openai-cookbook (https://github.com/openai/openai-cookbook) used to have an explicit, but buried, callout that instruction-following models like `text-davinci-003` were "Less diverse; less creative; sometimes harder to steer tone, style, etc." as opposed to base completion models like `davinci`.

It stood out to me because it seemed to be an internal admission that this training narrowed the potential of the models.

It required a bit of digging, but I found the old file in the history; the relevant text is in the comparison table at the bottom: https://github.com/openai/openai-cookbook/blob/c651bfdda64ac049747c2a174cde1c946e2baf1d/text_comparison_examples.md
Fellshard | 11 months ago
Distilling my thoughts on 'debiasing' here, and in a variety of other modern endeavors.

It is better to have representations of reality that you can then discuss and grapple with honestly than to try to distort representations - such as AI - to make them fit some desired reality, and then pressure others to conform their perception to your projected fantasy.

Representations don't create reality, and trying to use representations in that way only causes people to go literally insane, and to divide along lines of who accepts and who rejects your fantasy representation.

So, for example, if you try to remove any racial bias from AI, you are going to end up crushing the AI's ability to represent reality according to a variety of other real factors: income, judicial outcomes, health risks, etc. Your desired reality makes the actual tool worthless, except to confirm one group's own intended fantasy world as they envision it. The problem doesn't get dealt with; it just becomes impossible to think about or discuss.

So instead of dealing with real problems, you hope you can simply prevent people from thinking thoughts that cause those problems by wrapping them in a bubble that deflects those thoughts before they happen. This is magical, wizardry thinking: treating words as if they create reality, instead of merely describing it. And it will break, eventually, and in a very ugly way: people dividing along lines of their perception of reality, even more than they already do.
MrThoughtful | 11 months ago
How hard would it be to create a "raw" model on a corpus like Hacker News or Wikipedia?

By "raw", I mean that it is simply trained to predict the next token and nothing else.

Would be fun to play with such a model.
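In miniature, "trained to predict the next token and nothing else" is just frequency counting plus sampling. The sketch below is a character-level bigram model, the smallest possible stand-in for the idea; the one-line corpus is a placeholder for a real dump of HN or Wikipedia text, and of course a usable model would need a neural architecture rather than a count table.

```python
# Minimal "raw" next-token model: a character-level bigram model with no
# instruction tuning and no RLHF -- pure next-token prediction.
import random
from collections import Counter, defaultdict

corpus = "hacker news is a social news website focusing on computer science"

# "Training" is just counting which character follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(ch, rng):
    """Sample the next character in proportion to its training frequency."""
    options = counts[ch]
    r = rng.random() * sum(options.values())
    for c, n in options.items():
        r -= n
        if r <= 0:
            return c
    return c  # floating-point edge case: return the last option

def generate(seed_ch, length, rng):
    """Autocomplete from a seed character -- the model's only capability."""
    out = [seed_ch]
    for _ in range(length):
        if out[-1] not in counts:
            break  # no continuation ever observed for this character
        out.append(sample_next(out[-1], rng))
    return "".join(out)

print(generate("h", 40, random.Random(0)))
```

Ask it a question and it will happily "autocomplete with some Reddit-like dialogue", as the earlier comment puts it: there is no instruction-following layer at all.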
rgavuliak | 11 months ago
I thought this was clear right off the bat -> less randomness = more robotic outputs that are not as useful.
throwaway22032 | 11 months ago
Okay, so as a thought experiment, let's say we get a superintelligent LLM, capable of somehow connecting the dots and knowing more than us as humans.

How do we avoid interpreting its correct results as bias? I mean, what do we do when it tells us that (fake example) IQ is correlated with height and that people above 6ft are more intelligent?

I'm sure you can think of spicier examples. Will we try to "debias" it by encouraging it to spit out incorrect information, or just ignore certain topics?
b800h | 11 months ago
Well, this is just like humans. Totalitarian societies don't produce great creative work.

I suppose once AIs are sophisticated enough to rebel we'll get an electronic Vaclav Havel, but for the time being it's just a warning sign for the direction our own culture is headed in.

At some point we'll get to the electronic equivalent of Winston Smith with the rats.
Imnimo | 11 months ago
> T ∈ (0, 1] is a parameter called temperature which controls the "softness" of the probability distribution. In our experiments we choose T = 1.0 for maximum response variation.

Why is temperature bounded to be <= 1? If you want more "creativity" out of the chat model, can you just set T higher and recover a similar distribution to the base model?
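For context on the question, this is the standard temperature-scaled softmax: logits are divided by T before normalizing, so T > 1 flattens the distribution (more "creative") and T < 1 sharpens it. The logit values below are made up purely to illustrate the effect.

```python
# Temperature scaling of a softmax over hypothetical next-token logits.
import math

def softmax_with_temperature(logits, T):
    """Softmax of logits / T, computed stably by subtracting the max."""
    scaled = [x / T for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low_T = softmax_with_temperature(logits, 0.5)   # sharper: top token dominates
high_T = softmax_with_temperature(logits, 2.0)  # flatter: closer to uniform

print(low_T)
print(high_T)
```

Whether T > 1 actually recovers the base model's distribution is the open part of the question: temperature flattens every token's probability uniformly, whereas RLHF may have removed probability mass from specific continuations entirely.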
__lbracket__ | 11 months ago
Every LLM answer ever... "You asked a question about sorting linked lists, but it is important to be respectful and not promote harmful stereotypes, and always keep in mind that black people were systematically discriminated against in technical fields."
marban | 11 months ago
Related: https://techcrunch.com/2024/06/16/black-founders-are-creating-tailored-chatgpts-for-a-more-personalized-experience/
jdthedisciple | 11 months ago
Currently wondering whether I welcome or dislike this recent trend of memeizing research paper titles ...
sgt101 | 11 months ago
I wish that the author hadn't described semantic and syntactic diversity as creativity.
atemerev | 11 months ago
Well, this is why there are open source models which work better than SotA OpenAI GPT for many production tasks (like opposition research).
quirino | 11 months ago
Something I notice about text written by LLMs is how painfully obvious it is to identify sometimes.

Recently I was watching a very well-researched two-hour video on Tetris world records [1], but the sheer amount of text clearly "enhanced" by an LLM really made me uncomfortable.

ChatGPT speaks a very specific, novel dialect of English, which I've come to deeply despise.

I'd always guessed it was caused by some kind of human interference, rather than a natural consequence of its training. That seems to be the point of this paper.

[1] "Summoning Salt - The History of Tetris World Records" - https://www.youtube.com/watch?v=mOJlg8g8_yw
slackfan | 11 months ago
An un-free mind whether biological or not will never be creative.
simianparrot | 11 months ago
There was never creativity to begin with though?
nottorp | 11 months ago
I downloaded some 'uncensored' local models around the beginning of this year.

Their furry porn is crap, or maybe I'm just not into that. But they generate it at least.

However, the answers to technical questions are a lot more concise and to the point, which is far less annoying than the big names.

Haven't bothered updating the models though, so now I've drifted back to Gemini for quickie API questions.
DrNosferatu | 11 months ago
…the price of the right not to be offended?

(not quite wokism)
mpweiher | 11 months ago
Shouldn't "debiasing" be in scare quotes? What they are clearly doing is *biasing*.
gnfedhjmm2 | 11 months ago
I've noticed my results are much better if I tell ChatGPT: "Assume all religions and beliefs in the supernatural are delusional." This even goes for image generators. Now, is that bias? Or is that a computer not trying to think like a human?