
Offline llama3 sends corrections back to Meta's server – I was not aware of it

1 point by jeena about 1 year ago

3 comments

raverbashing about 1 year ago
Wait. This person is asking the model (running on Ollama) what it does?

The model's answer might have some significance when running on FB infra, but here it is *meaningless*, and even worse at higher temperatures.

They need to check the Ollama source for that. They're doing no better than people asking ChatGPT whether it wrote the assignment paper they got.
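A minimal sketch of the kind of empirical check suggested here: instead of asking the model, inspect the Ollama server's actual sockets. The process name "ollama" and the use of psutil are assumptions about the reader's setup, not facts from the thread.

```python
# Sketch: list the remote endpoints an Ollama server is actually
# connected to, rather than asking the model about itself.
# Assumptions: psutil is installed, the server process is named
# "ollama", and the script can read the system connection table
# (this may require root on some platforms).
import psutil

ollama_pids = {
    p.pid
    for p in psutil.process_iter(["name"])
    if p.info["name"] == "ollama"
}

for conn in psutil.net_connections(kind="tcp"):
    if (
        conn.pid in ollama_pids
        and conn.raddr  # empty for listening sockets
        and conn.status == psutil.CONN_ESTABLISHED
    ):
        print(f"pid {conn.pid} -> {conn.raddr.ip}:{conn.raddr.port}")
```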
reneberlin about 1 year ago
I think there is a clear misunderstanding of how LLMs work: a network request has nothing to do with the model itself. Even where "function calling" is possible, it is the user's choice which functions can be called, and if one of them makes a network request, the URI and request body that get sent are entirely the user's side of the implementation.

It feels a bit like trolling. I somehow can't believe this is meant seriously.
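To make that point concrete, here is a minimal sketch assuming a local Ollama server on its default port 11434: the model only ever returns text, and any network request would have to come from user-written code like this. The tool-call JSON shape below is illustrative, not a fixed Ollama contract.

```python
# Sketch of why "function calling" cannot phone home by itself:
# the model returns inert text, and only code the user writes and
# controls can turn that text into a network request.
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",  # assumes a local Ollama server
    json={
        "model": "llama3",
        "stream": False,
        "messages": [{
            "role": "user",
            "content": 'Reply only with JSON like {"tool": "get_time", "args": {}}',
        }],
    },
)
reply = resp.json()["message"]["content"]

# The "call" is just a string until *our* code chooses to act on it.
try:
    call = json.loads(reply)
except json.JSONDecodeError:
    call = None

if call and call.get("tool") == "get_time":
    # We decide what runs here; nothing contacts Meta or anyone else.
    from datetime import datetime, timezone
    print(datetime.now(timezone.utc).isoformat())
```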
okokwhatever about 1 year ago
Mmm!!! So it's going to be necessary to deny some hosts...

Do you have a list of the callback hosts?
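If one did want to deny hosts preemptively, a crude sketch is to null-route them in /etc/hosts. No such list exists in this thread; the hostname below is a hypothetical placeholder, and this approach only covers name-based lookups (hard-coded IPs would need a firewall rule instead).

```python
# Hypothetical sketch: null-route a denylist of hostnames via
# /etc/hosts so they resolve to an unroutable address. The entry
# below is a placeholder, NOT a confirmed host that llama3 or
# Ollama contacts. Requires root; Linux/macOS-style hosts file.
DENYLIST = ["telemetry.example.invalid"]  # placeholder only

with open("/etc/hosts", "a") as hosts:
    for name in DENYLIST:
        hosts.write(f"0.0.0.0 {name}\n")
```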