Wait

This person is asking the model (running on Ollama) what it does?

The model's answer might mean something when it's running on FB's own infra, but here it is *meaningless*, and even more so at higher temperatures.

They need to check the Ollama source for that.

They're doing no better than people asking ChatGPT whether it wrote the assignment paper they just received.
I think there is a clear misunderstanding of how LLMs work: a network request has nothing to do with the model itself. Even where "function calling" is supported, it is the user's choice which functions can be called, and if one of them makes a network request, the URI and request body that get sent are entirely on the user's side of the implementation (see the sketch below).

It feels a bit like trolling. I somehow can't believe this is meant seriously.
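To make that concrete, here is a minimal sketch against Ollama's local /api/chat endpoint (the model name and the get_weather tool schema are illustrative assumptions). The model only returns a structured *suggestion* to call a function; nothing leaves the machine unless the client code decides to send it:

    import requests

    # The model never opens sockets itself. A "function call" arrives as
    # structured text in the response; executing it (or not) happens here,
    # in client code that the user controls.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3.1",  # assumption: any tool-capable local model
            "stream": False,
            "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
            "tools": [{
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool for illustration
                    "description": "Look up current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }],
        },
    )

    for call in resp.json().get("message", {}).get("tool_calls", []):
        # The model merely *asked* for this call. No request has been made;
        # the client chooses the URL and body, or simply refuses to act.
        print("model requested:", call["function"]["name"], call["function"]["arguments"])

The loop at the end is the whole point: the only network request in this program is the one to localhost, and anything beyond that would have to be written, deliberately, by the user.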