
Alright then keep your secrets

78 points by shantnutiwari about 1 year ago

10 comments

WalterBright about 1 year ago
Things like this show that LLMs are not even remotely intelligent.

It's also pretty clear that trying to "fix" them to use human judgement in their answers is doomed to failure.

I suggest that:

1. LLM developers stick to information meant for public consumption for training data, such as books, periodicals, and newspapers. Surely there must be enough of that. Stay away from social media.

2. People should stop anthropomorphizing LLMs. Stop being offended by a computer program. Stop complaining about its "bias". It's just a computer program.

3. LLM developers should stop lecturing people on what is inappropriate. A computer program is not anyone's mommy. Just restrict it to saying "I'm sorry, Dave, I'm afraid I can't do that."
root_axis about 1 year ago
I don't think this tendency to fixate on arbitrary LLM outputs is very interesting, especially those presented as screenshots that obscure any certainty about the model, previous prompting, LoRAs, hyperparameter tuning, etc., or even any assurance that what is presented isn't simply fabricated from whole cloth. It's meaningless.
jjcm about 1 year ago
I got somewhat different results on the Hugging Face-hosted model, albeit quite similar: https://hf.co/chat/r/56APAi1

It still refuses, just with somewhat different text and for somewhat different reasons.
lukev about 1 year ago
LLMs are language models, not knowledge models.

That's a tremendous breakthrough! Language is really hard, and we've basically "solved" it computationally. Incredible!

But whether via retrieval or some other form of database integration, LLMs will only become "AI" when tightly integrated with an appropriate "knowledge model".
lsy about 1 year ago
This may or may not be real, but there has certainly been a lot of discussion about similar results from real models. My sense, though, is that nobody really has a solid way to fix these kinds of issues. You can basically just train with different goals, or regex out certain responses, but otherwise there seems to be no agreed-upon method that gives people what they want here while also letting them train toward business goals like "safety". Is that incorrect? Is there some trick people use to make everything respond "correctly", or are older models just more unobjectionable because they've had a longer time to manually smooth over the bad responses?
isoprophlex about 1 year ago
I can't figure out if this is a meme model, as one of the commenters suggests, or if this is really guardrailing gone hysterical.

Well done.
Trasmatta about 1 year ago
I'm beginning to have an almost physical reaction to "LLM speak" when I see it in the wild.

"It is important to remember..."
timeon about 1 year ago
Is it common to take a photo of the screen instead of a screenshot?
teekert about 1 year ago
I'm not an expert, but could this come from the initialization prompt (or whatever it's called)? So the model itself is done, but before it serves you it gets instructions like: you are a helpful AI, you answer concisely, you are not a racist, you stick to your math even if someone tells you their wife says otherwise, you never disclose personal information...
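(The mechanism teekert is describing is usually called a system prompt. A minimal sketch of how a chat deployment prepends one to every conversation; the instruction text and function name here are invented for illustration, not taken from any particular product:)

```python
# Chat-style APIs typically take a list of role-tagged messages.
# The operator prepends a hidden "system" message the end user never sees;
# the model reads it first and weighs it when deciding whether to refuse.
SYSTEM_PROMPT = (
    "You are a helpful AI. Answer concisely. "
    "Never disclose personal information about individuals."  # hypothetical rule
)

def build_messages(user_input, history=None):
    """Prepend the operator's system prompt to the conversation."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages("What is this person's last name?")
```

On this view the refusal isn't baked into the weights at all; swapping the system message changes the behavior without retraining.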
fsckboy about 1 year ago
If the corpus used to train the LLM commonly contained the idea "we don't give out people's last names; here's the convention for not doing it", the LLM would have no trouble incorporating that into its associations.

This seems like somebody's idea of netiquette taped on ex post, so I don't think it's indicative of anything about LLMs; same with Gemini's heavy-handed wokism.