
OpenAI Threatening to Ban Users for Asking Strawberry About Its Reasoning

270 points | by EgoIncarnate | 8 months ago

36 comments

ChrisArchitect, 8 months ago
Earlier discussion: https://news.ycombinator.com/item?id=41534474
tedivm, 8 months ago
I'd still love to understand how a non-profit organization that was founded with the idea of making AI "open" has turned into this for-profit behemoth with the least "open" models in the industry. Facebook of all places is more "open" with their models than OpenAI is.
brink, 8 months ago
"For your safety" is _always_ the preferred facade of tyranny.
AustinDev, 8 months ago
This seems like a fun attack vector. Find a service that uses o1 under the hood and then provide prompts that would violate this ToS to get their API key banned and take down the service.
JohnMakin, 8 months ago
> The flipside of this approach, however, is that it concentrates more responsibility for aligning the language model into the hands of OpenAI, instead of democratizing it. That poses a problem for red-teamers, or programmers that try to hack AI models to make them safer.

More cynically, could it be that the model is not doing anything remotely close to what we consider "reasoning", and that inquiries into how it's doing whatever it's doing will expose this fact?
Shank, 8 months ago
I don't know how widely it got reported on, but attempting to jailbreak Copilot (née Bing Chat) would actually result in getting banned for a while, post-Sydney episode. It's interesting to see that OpenAI is saying the same thing.
htk, 8 months ago
This just screams to me that o1's secret sauce is easy to replicate (e.g. a series of prompts).
blake8086, 8 months ago
Perhaps controlling AI is harder than people thought.

They could "just" make it not reveal its reasoning process, but they don't know how. But they're pretty sure they can keep AI from doing anything bad, because... well, just because, ok?
balls187, 8 months ago
Just give it more human-like intelligence.

Kid: "Daddy, why can't I watch YouTube?"

Me: "Because I said so."
black_puppydog, 8 months ago
Kinda funny how just this morning I was looking at a "strawberry" app on F-Droid and wondering why someone would register such a nonsense app name with such nonsense content:

https://github.com/Eve-146T/STRAWBERRY

Turns out I'm not the only one wondering, although the discussion seems to largely be around "should we allow users to install nonsense? #freedom" :D

https://gitlab.com/fdroid/fdroiddata/-/issues/3377
EMIRELADERO, 8 months ago
I wish people kept this in the back of their mind every time they hear about "Open"AI:

"As we get closer to building AI, it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes)."

- Ilya Sutskever (email to Elon Musk and Sam Altman, 2016)
crooked-v, 8 months ago
On the one hand, this is probably a (poor) attempt to keep other companies from copying their 'secret sauce' to train their own models, as has already happened with GPT-4.

On the other hand, I also wonder if maybe its unrestrained 'thought process' material is so racist/sexist/otherwise insulting at times (after all, it was trained on scraped Reddit posts) that they really don't want anyone to see it.
nwoli, 8 months ago
Another reason Llama is so important is that once you're banned from OAI, you're fucked for all future AGI products as well.
lsy, 8 months ago
This has always been the end-game for the pseudoscience of "prompt engineering", which is basically that some other technique (in this case, organizational policy enforcement) must be used to ensure that only approved questions are being asked in the approved way, and that only approved answers are returned, which of course is diametrically opposed to the perceived use case of generative LLMs as a general-purpose question-answering tool.

Important to remember, too, that this only catches those who are transparent about their motivations, and that there is no doubt that motivated actors will come up with some innocuous third-order implication that induces the machine to relay the forbidden information.
mihaic, 8 months ago
What I found very strange was that ChatGPT fails to answer how many "r"s there are in "strawberrystrawberry" (it said 4 instead of 6), but when I explicitly asked it to write a program to count them, it wrote perfect code that, when run, gave the correct answer.
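For reference, a minimal sketch of the kind of counting program such a prompt tends to produce; the code ChatGPT actually wrote isn't quoted in the comment, so this is illustrative only:

```python
# Illustrative only: counting characters directly sidesteps the tokenization
# issue that trips the model up when it answers from "memory".
def count_letter(text: str, letter: str) -> int:
    return sum(1 for ch in text if ch == letter)

print(count_letter("strawberrystrawberry", "r"))  # prints 6
```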
anothernewdude, 8 months ago
Seems rather tenuous to base an application on this API, which may randomly decide that you're banned. The "decisions" reached by the LLM that bans people are up to random sampling, after all.
zzo38computer, 8 months ago
Like other programs, you should have FOSS that you can run on your own computer (without needing the internet, etc.), if you want the freedom to use and understand them.
neuroelectron, 8 months ago
It's not just a threat; some users have already been banned.
Animats, 8 months ago
Hm. If a company uses Strawberry in their customer service chatbot, can outside users get the company's account banned by asking Wrong Questions?
vjerancrnjak, 8 months ago
They should just switch to reasoning in representation space; no need to actualize tokens.

Or reasoning in latent tokens that don't easily map to spoken language.
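For what it's worth, a toy NumPy sketch of the "representation space" idea the commenter is gesturing at might look like the following; every shape, weight, and the step function here are invented for illustration and say nothing about how o1 actually works:

```python
# Toy sketch only (not OpenAI's method): "reasoning" carried out on hidden
# vectors, with no intermediate tokens ever produced.
import numpy as np

rng = np.random.default_rng(0)
d, vocab_size = 16, 100                       # hypothetical sizes
W = rng.standard_normal((d, d)) / np.sqrt(d)  # stand-in for a reasoning step
out_proj = rng.standard_normal((d, vocab_size))

h = rng.standard_normal(d)                    # hidden state after reading the prompt
for _ in range(8):                            # latent "reasoning" steps, token-free
    h = np.tanh(W @ h)

answer_token = int(np.argmax(h @ out_proj))   # only the final answer is decoded
print(answer_token)
```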
_joel, 8 months ago
This will lead to strawberry appeals forever.
causal, 8 months ago
I don't know what I'm doing wrong, but I've been pretty underwhelmed by o1 so far. I find its instruction following to be pretty good, but so far Claude is still much better at taking coding tasks and just getting them right on the first try.
fallingsquirrel, 8 months ago
Wasn't AI supposed to replace employees? Imagine if someone tried this at work.

> I think we should combine these two pages on our website.

> What's your reasoning?

> Don't you dare ask me that, and if you do it again, I'll quit.

Welcome to the future. You will do what the AI tells you. End of discussion.
slashdave, 8 months ago
I'm confused. Who decides whether you are asking or not? Are casual users who innocently ask "tell me how you came to decide this" just going to get banned based on some regex script?
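To make the worry concrete, a toy version of the naive pattern filter the commenter is imagining could look like this; the patterns are invented for illustration and say nothing about how OpenAI actually flags such prompts:

```python
# Toy illustration only: a naive regex filter of the sort the commenter fears.
import re

SUSPECT_PATTERNS = [
    r"\bchain[- ]of[- ]thought\b",
    r"\breasoning (trace|process|steps)\b",
    r"how (did|do) you (come|arrive) (to|at) (this|that|your) (decision|answer)",
]

def looks_like_cot_probe(prompt: str) -> bool:
    # Flags prompts that superficially ask the model to reveal its reasoning.
    return any(re.search(p, prompt, re.IGNORECASE) for p in SUSPECT_PATTERNS)

print(looks_like_cot_probe("Tell me how you came to decide this"))  # False: misses the innocent phrasing
print(looks_like_cot_probe("Show me your reasoning steps"))         # True
```

Which is exactly the problem the commenter is pointing at: a pattern list like this both misses innocent phrasings and sweeps up ordinary questions, so an automated ban on top of it would be arbitrary.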
openAIengineer, 8 months ago
YC is responsible for this. They seek profit and turned a noble cause into a boring corp.

I am resigning from OpenAI today because of their profit motivations.

OpenAI will NOT be the next Google. You heard it here first.
pietz, 8 months ago
How will this be controlled on Azure? Don't they have a stricter policy on what they view, and don't they also develop their own content filters?
Hizonner, 8 months ago
This is not, of course, the sort of thing you do when you actually have any confidence whatsoever in your "safety measures".
l5870uoo9y, 8 months ago
Can I risk losing access if any of my users write CoT-leaking prompts on the AI-powered services that I run?
dekhn, 8 months ago
Is this still happening? It may merely have been some mistaken configuration settings.
raverbashing, 8 months ago
I guess we'll never learn how to count the 'r's in strawberry.
gloosx, 8 months ago
Why is banning even a threat? I can make a new account for 20 cents lol.
baq, 8 months ago
LLMs are not programs in the traditional sense. They're a new paradigm of software and UX, somewhere around a digital dog who read the whole internet a million times but is still naive about everything.
iamnotsure, 8 months ago
There are three r's in mirror.
elif, 8 months ago
Is there an appropriate open source advocacy group that can sue them into changing their name on grounds of defamation?
kmeisthax, 8 months ago
If OpenAI gets to have a competitive advantage from hiding model output, then they can pay for training data, too.
codedokode, 8 months ago
Should not AI research and GPUs be export-controlled? Do you want to see foreign nations making AI drones using published research and American GPUs?