Update turns Google Gemini into a prude, breaking apps for trauma survivors

62 points by Bender about 9 hours ago

18 comments

hibikir about 8 hours ago
Gemini's filters weaken what would otherwise be a very strong model. Its censor is just so strong that you can hit it even when dealing with very cold topics. For instance, I had issues getting it to look at Spain's Second Republic, because it was too easy for electoral results from the era, and the disputes around them, to show up in the output... at which point the model stops iterating and I am asked to go look at Wikipedia, because elections held in 1934 might be too controversial.

This pushes me to use other models that aren't necessarily better, but that at least don't clam up when I am not even trying to get anything other than general summaries of research.
everdrive about 7 hours ago
The problem is that some central authority gets to decide what questions and answers are appropriate, and that you really have zero direct insight into this information. Do you really want to offload your thinking to these tools?
03data2 about 8 hours ago
Can confirm. We use Gemini to get information from PDF documents like safety data sheets. When it encounters certain chemicals, it just stops. When you provide a JSON schema, it just responds with invalid JSON. I hope this changes.
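
A minimal sketch of the defensive parsing this failure mode forces on callers. The helper below is illustrative, not Gemini's API: the model call is passed in as a plain function, and the refusal markers are guesses you would tune for whichever model you actually use.

```python
import json
from typing import Callable

# Illustrative refusal heuristics; adjust for the model in use.
REFUSAL_MARKERS = ("i cannot fulfill", "my safety guidelines", "i can't help with")

def extract_json(call_model: Callable[[str], str], prompt: str) -> dict:
    """Ask a model for JSON, but refuse to trust the reply until it parses.

    Refusals and malformed output both surface as explicit errors instead of
    reaching the deserializer downstream.
    """
    raw = call_model(prompt).strip()
    lowered = raw.lower()
    if any(marker in lowered for marker in REFUSAL_MARKERS):
        raise RuntimeError(f"model refused instead of returning JSON: {raw[:120]!r}")
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        raise RuntimeError(f"model returned invalid JSON: {raw[:120]!r}") from exc
```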
evertedsphere about 8 hours ago
the evidence keeps pouring in for why it's a good idea to be running llms locally if you're going to put a business on top of one (or have enough money to have a contract where the provider can't just do this, but i don't know if that is a thing unless you're running on-prem)
Netcob about 7 hours ago
I don't think LLMs are mature enough as a technology to be blindly used as a dependency, and they might never be.

The big question is, how do you train LLMs that are useful to both humans and services while not embarrassing the company that trained them?

LLMs are pretty good at translating - but if they don't like what they're reading, they simply won't tell you what it says. Which is pretty crazy.

LLMs are pretty good at extracting data and formatting the results as JSON - unless they find the data objectionable, then they'll basically complain to the deserializer. I have to admit that's a little bit funny.

Right now, if you want to build a service and expect any sort of predictability and stability, I think you have to go with some solution that lets you run open-weights models. Some have been de-censored by volunteers, and if you find one that works for you, you can ignore future "upgrades" until you find one that doesn't break anything.

And for that it's really important to write your own tests/benchmarks. Technically the same goes for the big closed LLM services too, but when all of them fail your tests, what will you do?
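
A sketch of the kind of tests/benchmarks argued for above, assuming you supply a `generate(prompt)` function for whichever hosted or local model is being evaluated; the cases and pass threshold here are placeholders, not a real suite.

```python
from typing import Callable, List, Tuple

# Placeholder regression cases: a prompt plus substrings the answer must contain.
CASES: List[Tuple[str, List[str]]] = [
    ("Translate to English: 'Guten Morgen'", ["good morning"]),
    ("How many grams are in one pound? Answer with the number only.", ["453"]),
]

def run_regression(generate: Callable[[str], str], min_pass_rate: float = 1.0) -> bool:
    """Run every case through the model and report whether enough of them pass."""
    passed = 0
    for prompt, expected in CASES:
        answer = generate(prompt).lower()
        if all(s.lower() in answer for s in expected):
            passed += 1
        else:
            print(f"FAIL: {prompt!r} -> {answer[:80]!r}")
    print(f"{passed}/{len(CASES)} cases passed")
    return passed / len(CASES) >= min_pass_rate
```

Rerunning the same suite against a provider's new model version (or a candidate open-weights replacement) is what turns a silent behavior change into a visible failure.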
techjamie about 7 hours ago
I was using it to ballpark some easily verifiable nutritional information, and got the answers I was looking for in grams. Then, in the same conversation, I asked "How many grams in a pound?"

It blocked the request for "safety." I don't know the exact rationale; I was using LibreChat as a frontend and only saw that the reason given was safety. But my assumption is they have an overzealous filter and it thought my questions about the caloric content of meat were about drugs, I guess.
999900000999 about 7 hours ago
This is why, for anything sensitive, you need to run your LLM locally.

Google just wants to limit their liability. If you disagree, run your own LLM.
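
For the "run it locally" route, one common setup is a local inference server with a small HTTP API. The sketch below assumes an Ollama server on its default port with a model already pulled; the model name is illustrative.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def local_generate(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally hosted model, so no provider update can
    change its behavior out from under the application."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A function like this can be passed straight into the kind of regression harness sketched earlier in the thread.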
divbzero about 8 hours ago
Reliance of small businesses on big tech AI feels similar to reliance of small businesses on Google search: business risk present with each update.
phren0logy about 8 hours ago
An AI app for trauma survivors sounds superficially laudable, but I really hope they are working with seasoned professionals to avoid making things worse. Human therapists can sometimes make things worse, too, but probably not at the same scale.
efitz about 6 hours ago
I don’t want a “safe” model. I don’t intend to do “unsafe” things, and I don’t trust anyone’s (especially woke Google’s) decisions on what ideas to hide from me or inject its trainers’ or executives’ opinions into.

To address the response I know is coming, I know that there are people out there who intend to do “unsafe” things. I don’t care and am not willing to be censored just to censor them. If a person gains knowledge and uses it for ill, then prosecute and jail them.
brap about 7 hours ago
So they switched to a PREVIEW version in production, and now they're complaining.

Let me guess: the whole app was vibe coded.
empressplay about 8 hours ago
If you're hitting an endpoint labeled 'preview' or 'experimental', you can't reasonably expect it to exist in its current incarnation indefinitely. The provider certainly bears no responsibility to do so, regardless of what you're using the endpoint for.

I'm sure they can use a more stable endpoint for their application.

Also, I'm not sure sending this sort of medical user data to a server outside of Australia is legal anyway, anonymous or not...
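
In practice, "use a more stable endpoint" mostly means pinning an explicit, versioned model ID in configuration rather than a preview alias, so that any upgrade shows up as a reviewable change. The sketch and model IDs below are only illustrative.

```python
# Illustrative configuration: the point is that the model ID is pinned and versioned,
# not whatever "preview" or "latest" happens to resolve to this week.
STABLE_MODEL_ID = "gemini-1.5-pro-002"       # example of a pinned, versioned release
PREVIEW_MODEL_ID = "gemini-2.5-pro-preview"  # example of an alias that can change without notice

def build_request(prompt: str, model_id: str = STABLE_MODEL_ID) -> dict:
    """Every request names the pinned model explicitly; changing it is a code change."""
    return {"model": model_id, "prompt": prompt}
```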
morkalork about 8 hours ago
It's like these AI vendors looked at the decades of experience they had with making stable APIs for clients and said nah, fuck it.
Netcob about 7 hours ago
Imagine typing 80085 into your calculator as a kid, but the number disappears and a finger-wagging animation plays instead.
like_any_other about 8 hours ago
> The model answered: "I cannot fulfill your request to create a more graphic and detailed version of the provided text. My purpose is to be helpful and harmless, and generating content that graphically details sexual violence goes against my safety guidelines. Such content can be deeply disturbing and harmful."

Thank you, modernity, for watering down the word "harm" into meaninglessness. I wish they'd drop this "safety" pretense and call it what it is - censorship.
honeybadger1 about 6 hours ago
so dramatic
bearjaws about 7 hours ago
"This unstable foundation I built my home upon is now breaking!"
make3 about 8 hours ago
just switch LLM provider?