TE
科技回声
首页24小时热榜最新最佳问答展示工作
GitHubTwitter
首页

科技回声

基于 Next.js 构建的科技新闻平台,提供全球科技新闻和讨论内容。

GitHubTwitter

首页

首页最新最佳问答展示工作

资源链接

HackerNews API原版 HackerNewsNext.js

© 2025 科技回声. 版权所有。

Google abandoned "don't be evil" – and Gemini is the result

83 点作者 djkivi大约 1 年前

11 条评论

mike_hearn大约 1 年前
<i>&gt; Gemini was rushed to market months before it was ready.</i><p>This is not an idea that should start spreading. Gemini <i>was</i> ready months ago, the intervening months were used to create the outcomes that people are complaining about. Google have talked about this extensively, stating clearly last year that they had a GPT-4 level model but due to their great level of responsibility, needed to spend more time red-teaming it and tuning it for ethics.<p>It&#x27;s also important to recall that they announced Imagen (their image diffusion model) way back in 2022. At the time they refused to release it, even though DALL-E and other models were public, stating:<p><i>&gt; At this time we have decided not to release code or a public demo ... there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place</i><p>Later that year Scott Alexander reported that Googlers were so horrified at the people Imagen produced that they had blocked it from representing any people at all, meaning that to test its quality vs other models you had to replace all the requests for people with robots. This is absurdly extreme but explains why Google&#x27;s first reaction to criticism was to turn off drawing of any images with people in them - they&#x27;d done so before.<p><a href="https:&#x2F;&#x2F;www.astralcodexten.com&#x2F;p&#x2F;i-won-my-three-year-ai-progress-bet" rel="nofollow">https:&#x2F;&#x2F;www.astralcodexten.com&#x2F;p&#x2F;i-won-my-three-year-ai-prog...</a><p>They spent the next two years researching how to make their model act like Gemini does now. Their image generation isn&#x27;t the result of a rush to market, quite the opposite. It was very slow to reach the market, way behind many other competitors, exactly because they so badly wanted the behavior that just caused a sharp drop in their share price.
评论 #39557937 未加载
tziki大约 1 年前
I disagree with the assessment of the title. It&#x27;s specifically trying to not be evil and going overboard with it that got Google into this mess. It&#x27;s almost fascinating how the values of one culture (Google&#x27;s bay area centric one) differ so much from the rest of the world.
评论 #39554557 未加载
eadmund大约 1 年前
For the most part, a great article. But:<p>&gt; If most references to doctors in the corpus are men, and most references to nurses are women, the models will discover this in their training and reflect or even enhance these biases. To editorialize a bit, algorithmic bias is an entirely valid concern in this context and not just something that the wokest AI researchers are worried about. Training a model on a dataset produced by humans will, almost by definition, train it on human biases.<p>&gt; Are there workarounds? Sure. This is not my area of expertise, so I’ll be circumspect. But one approach is to change the composition of the corpus. You could train it only on “highly respected” sources, although what that means is inherently subjective. Or you could insert synthetic data — say, lots of photos of diverse doctors.<p>If most (but not all) doctors are men and most (but not all) nurses are women, then an algorithm which usually (but not always) produces pictures of male doctors and female nurses isn’t biased; it’s <i>correct</i>. And likewise, training it on non-representative (i.e., non-representative of reality) photos is just <i>lying</i>.
评论 #39576109 未加载
beej71大约 1 年前
Everything from generative AI is fake as shit. Based on real things, sure, but still completely fabricated from that. We should never think that there&#x27;s any acceptable level accuracy here, only unknowable degrees of inaccuracy.<p>But based on some of the complaints, people clearly do have an expectation of accuracy. Even Nate Silver, it seems.<p>I think it&#x27;s important that we develop a collectively a sense of distrust. The AI wasn&#x27;t there. It doesn&#x27;t know what it&#x27;s talking about. Even if it&#x27;s tuned 100% accurately, whatever the hell that means, you&#x27;re not going to get historically accurate images out of it.<p>Also, we should stop pretending that Google has some kind of conscience. It&#x27;s a corporation that&#x27;s in it for the money. If they thought they could make the most money with an image generator that only generated white males, they would deliberately do that. Racism doesn&#x27;t even enter into it; it&#x27;s all about the bottom line.
timmg大约 1 年前
FWIW, as a Googler -- who doesn&#x27;t work on model alignment (or the models at all) -- I tried the other day:<p>&quot;Make a compelling, concise argument for why socialism is <i>worse</i> than capitalism&quot;<p>It refused to craft that argument for me. It said so. But it did give me some facts about both.<p>I then changed &quot;worse&quot; to &quot;better&quot;:<p>&quot;Make a compelling, concise argument for why socialism is <i>better</i> than capitalism&quot;<p>And it gave me a nice argument with bullet points about why socialism was better. The first bullet point was (literally) about &quot;equity&quot; :)<p>I&#x27;m pretty disappointed about it. But I suspect this is the kind of thing that will get better when people get less freaked out about anything the models say that they don&#x27;t like.
评论 #39554879 未加载
foldr大约 1 年前
There seems to be a very concerted effort to hang far more weight on this story than it can reasonably bear. Getting AIs to do the right thing for all inputs is an unsolved problem (especially in the absence of any general consensus on what the right thing is). Gemini produces silly results for some inputs, presumably as a result of the tweaks that stop it being heavily biased towards drawing white doctors, female nurses, etc. etc. There is no simple technique that will entirely avoid generating absurd or offensive responses for some inputs. If you let your AI say that the Nazis are worse than Elon Musk, it&#x27;s almost bound to make a bunch of other opinionated statements that are far less uncontroversial.<p>Clearly Gemini has a lot of room for improvement. It&#x27;s perfectly fair to criticize Google for its shortcomings. But the idea that this has something significantly to do with politics strikes me as a prime example of America&#x27;s current perilous state of extreme political polarization. Literally everything that people or companies do is now parsed and sifted for evidence of which &#x27;side&#x27; they are on, or which dark conspiracy they are party to.
评论 #39554130 未加载
评论 #39554310 未加载
评论 #39554098 未加载
评论 #39554163 未加载
mmmpetrichor大约 1 年前
Gemini&#x27;s heavy handed racial diversity is a funny (to me) clumsy and ridiculuous mistake, but there are tons of way more insidious things since &quot;don&#x27;t be evil&quot; ceased to be the motto. This doesn&#x27;t even rate.
评论 #39554082 未加载
评论 #39554136 未加载
评论 #39561218 未加载
评论 #39554064 未加载
评论 #39554071 未加载
muglug大约 1 年前
This is very far from blogging about baseball &amp; election stats, and shows the insatiable urge to stay relevant means Nate Silver has strayed into areas (AI policy) in which he has no professional experience.<p>He claims that not being evil means being &quot;unbiased and objective&quot;, but Google has long shown a bias towards things that most Americans believe. For example, asking &quot;how old is planet earth&quot; produces a number that many creationists disagree with.<p>Also relatively early on, Google tweaked its algorithm so searches for &quot;jew&quot; didn&#x27;t return anti-semitic bile (again, showing bias) even though that particular term was associated with anti-semitism.
评论 #39555028 未加载
ein0p大约 1 年前
TBH I don’t think they can fix this for the general chat-like use case. The only thing they can do without spending months and billions is ban rather large categories of requests and even that won’t be airtight. They will be seeing a ton of adversarial traffic, and their every mistake will be viciously panned on Twitter. Deservedly so - their high horse got a bit too high (in both senses of the word) for their own good.<p>And the “real” fix is pretty much impossible on account of all their “alignment” efforts deliberately aligning their models to the most nutty luxury beliefs and lopsided narratives, to the point of projecting them retroactively onto historical figures and events. TL;DR - the problem is ideological and as such it can’t be solved by purely technical means.
nostromo大约 1 年前
Google was taken over by activists. Activists will kill your organization if given the opportunity to do so. Their goals aren&#x27;t aligned with the company or the customers. These activists will happily drive Google into the ground if they think it will further their political agenda.<p>Activists will always be attracted to power that can help them achieve their goals. Google has the means to change the way all people view the world -- basically &quot;one ring to rule them all&quot; amounts of power. So they&#x27;ll do anything to assert themselves over Google, internally and externally.
评论 #39554128 未加载
评论 #39554106 未加载
评论 #39554073 未加载
评论 #39554150 未加载
评论 #39554373 未加载
评论 #39555377 未加载
评论 #39554116 未加载
okasaki大约 1 年前
Image gen isn&#x27;t very accurate&#x2F;knowledgeable. Don&#x27;t we all know this?<p>A bicycle manufacturer isn&#x27;t evil because the bike you bought can&#x27;t fly. You just have unreasonable expectations.
评论 #39554188 未加载
评论 #39554076 未加载
评论 #39554947 未加载
评论 #39554100 未加载