
Inner Loop Agents

14 points by tkellogg 30 days ago

2 comments

bob1029 30 days ago
> The software you're using to run your LLM, e.g. Ollama, vLLM, OpenAI, Anthropic, etc., is responsible for running this loop.

There's currently *always* this same man behind the curtain.

To me these are all effectively the same picture. The differences merely shuffle things around in the code that is invoking the LLM and handling its response.

Until we have something like CLR integration for ChatGPT, I don't see the significance of an inner loop agent.

Has anyone considered this model yet? E.g., shipping a DLL to the LLM provider that contains the actual implementations of the desired tool calls? Imagine how much easier it would be to provide your debugging symbols and XML doc files directly rather than re-documenting everything for some ever-shifting tool calling API surface.
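The "same loop" this comment describes, run by the client rather than the model, can be sketched as follows. This is a toy simulation under stated assumptions: the stub model, the message shape, and the `add` tool are all hypothetical stand-ins, not any provider's real API.

```python
# Minimal sketch of the outer tool-calling loop: the client code (not
# the model) repeatedly invokes the LLM, executes any tool call it
# emits, and feeds the result back in. A stub stands in for a real LLM.

def fake_llm(messages):
    """Stub model: requests a tool once, then answers using its result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "add", "args": {"a": 2, "b": 3}}}
    return {"content": f"The sum is {messages[-1]['content']}"}

TOOLS = {"add": lambda a, b: a + b}  # hypothetical tool registry

def run_agent(prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_llm(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]          # final answer; loop ends
        result = TOOLS[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": str(result)})

print(run_agent("What is 2 + 3?"))  # prints "The sum is 5"
```

Whatever runtime you swap in for `fake_llm`, the shape of the loop stays the same, which is the comment's point: the differences between providers only shuffle code around inside this invoking loop.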
Comment #43753598 not loaded.
jasonjmcghee 30 days ago
Is this the widely used term? Do you know of any open source models fine-tuned as an "inner loop" / native agentic LLM? Or what the training process looks like?

I don't see why any model couldn't be fine-tuned to work this way - i.e. tool use doesn't need to be followed by an EOS token or something - it could just wait for an output (or even continue with the knowledge there's an open request, and to take action when it comes back)
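The alternative this comment imagines, where a tool call does not end the generation, can be sketched as a decode loop that splices the tool result back into the token stream and keeps generating. Everything below is a toy simulation: the `<tool:...>` / `<result:...>` markers, the stub decoder, and the `add` tool are invented for illustration, not drawn from any real model or decoding API.

```python
# Hedged sketch of an "inner loop": instead of stopping at a tool call
# (no EOS emitted), the decode loop injects the tool's result into the
# context and generation simply continues from there.

TOOLS = {"add": lambda a, b: a + b}  # hypothetical tool registry

def fake_decode(context):
    """Toy next-chunk generator: emits a tool-call marker mid-sentence,
    then finishes once a result marker appears in the context."""
    if "<tool:add(2,3)>" not in context:
        return "2 + 3 = <tool:add(2,3)>"
    return " so the answer is " + context.split("<result:")[1].split(">")[0]

def inner_loop_generate(prompt, max_steps=10):
    context = prompt
    for _ in range(max_steps):
        chunk = fake_decode(context)
        context += chunk
        if "<tool:" in chunk:
            # Parse the call and splice its result into the stream; the
            # model never stopped generating, it only "paused" here.
            name, args = chunk.split("<tool:")[1].rstrip(">").split("(")
            a, b = map(int, args.rstrip(")").split(","))
            context += f"<result:{TOOLS[name](a, b)}>"
        else:
            break                            # no pending call: done
    return context
```

In this arrangement the fine-tuning question from the comment becomes: teach the model to emit the call marker and then continue conditioned on whatever result gets spliced in, rather than treating the tool call as a stopping point.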