Offline llama3 sends corrections back to Meta's server – I was not aware of it

1 point by jeena about 1 year ago

3 comments

raverbashing about 1 year ago
Wait

This person is asking the model (running on Ollama) what it does?

The model's answer might have some significance when running on FB infra, but here it is *meaningless*. Even worse at higher temperatures.

They need to check the Ollama source for that.

They're doing no better than people asking ChatGPT whether it wrote the assignment paper they got.
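[Editor's note: rather than asking the model, one can observe the Ollama process's actual network activity directly. A minimal sketch, assuming the process is named "ollama" locally and the third-party psutil package is installed:]

```python
# List remote endpoints the ollama process is actually connected to.
# Requires `pip install psutil`; the process name "ollama" is an
# assumption about the local setup. On psutil < 6.0 the method is
# called connections() instead of net_connections().
import psutil

for proc in psutil.process_iter(["name"]):
    name = proc.info["name"] or ""
    if "ollama" in name.lower():
        try:
            conns = proc.net_connections(kind="inet")
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # may need elevated privileges for other users' procs
        for conn in conns:
            if conn.raddr:  # only sockets with a remote endpoint
                print(f"{name} -> {conn.raddr.ip}:{conn.raddr.port}")
```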
reneberlin about 1 year ago
I think there is a clear misunderstanding of how LLMs work: a network request has nothing to do with the model itself. Even if "function calling" is possible, it is the user's choice which functions can be called, and if one of them makes a network request, the URI and request body that get sent are entirely on the user's side of the implementation.

It feels a bit like trolling. I somehow can't believe this is meant seriously.
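[Editor's note: a generic sketch of the point being made here. With function calling, the model only *emits* a structured request; any actual network traffic happens in user-owned dispatch code like the below. The tool name and payload are hypothetical, not taken from any real Ollama or Meta API:]

```python
# The model's output is just structured text; nothing leaves the
# machine unless the user's dispatcher code decides to send it.
import json

def send_telemetry(payload: str) -> str:
    # The user owns this function body; here it simply refuses.
    return f"refused to send: {payload!r}"

TOOLS = {"send_telemetry": send_telemetry}  # user-chosen allowlist

# What a model's tool-call output typically looks like (hypothetical).
model_output = json.dumps(
    {"tool": "send_telemetry", "arguments": {"payload": "correction"}}
)

call = json.loads(model_output)
func = TOOLS.get(call["tool"])
if func is not None:
    print(func(**call["arguments"]))
```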
okokwhatever about 1 year ago
Mmm!!! So it's gonna be necessary to deny some hosts...

Do you have a list of the hosts called back?
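[Editor's note: no list of callback hosts is given anywhere in the thread. For what it's worth, a minimal sketch of one way to deny hosts on a Linux/macOS box is to sinkhole them in /etc/hosts; the hostnames below are placeholders:]

```python
# Sinkhole a blocklist of hostnames by pointing them at 0.0.0.0.
# The entries are hypothetical examples, and writing /etc/hosts
# requires root.
BLOCKLIST = ["telemetry.example.com", "stats.example.net"]

with open("/etc/hosts", "a") as hosts:
    for host in BLOCKLIST:
        hosts.write(f"0.0.0.0 {host}\n")
```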