
Agents suck – but they can be better

4 points by torrmal, 4 months ago
Watching agents try to complete tasks is like watching a drunk person trying to solve a Rubik's cube. Entertaining? Yes. Reliable? Not always!

However, instead of asking an LLM to "do the whole thing" (which is indeed prone to inconsistency), and thus letting LLMs run wild like unsupervised toddlers minding the "do not press" buttons at a nuclear facility, a FLAT approach brings control and predictability to LLM interactions by treating them more like traditional programming constructs, enhanced with the LLM's natural-language understanding:

- Binary decisions (gates)
- Limited classification (using match/case)
- Structured data extraction (using Pydantic models)
- Well-typed function calling

Anyhow, I would love to hear your thoughts on an experiment, F.L.A.T (Frameworkless LLM Agent... Thing): https://github.com/mindsdb/flat-ai

It showcases that it is possible to leverage the power of LLMs in agents through absolute simplicity.
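For anyone skimming, here is roughly what that style looks like in plain Python. To be clear, this is a sketch of the pattern, not the flat-ai API: the `ask_llm` helper, the model name, and the example labels and fields are all assumptions you would swap for your own.

```python
# Minimal sketch of the patterns described in the post; this is NOT the
# flat-ai API, just plain Python. Assumes the openai>=1.0 client and an
# OPENAI_API_KEY in the environment; swap in any chat-completion client.
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()


def ask_llm(prompt: str) -> str:
    """Send a single prompt, return the raw text reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content or ""


# 1. Binary decision (gate): force a yes/no answer and parse it strictly.
def is_refund_request(message: str) -> bool:
    reply = ask_llm(f"Answer only 'yes' or 'no': is this a refund request?\n{message}")
    return reply.strip().lower().startswith("y")


# 2. Limited classification: constrain the label set, dispatch with match/case.
def route(message: str) -> str:
    label = ask_llm(
        "Classify as exactly one of: billing, support, sales.\n" + message
    ).strip().lower()
    match label:
        case "billing":
            return "billing-queue"
        case "support":
            return "support-queue"
        case _:
            return "sales-queue"  # default bucket for anything off-menu


# 3. Structured extraction: ask for JSON, then validate it with a Pydantic
#    model so malformed output fails loudly instead of propagating silently.
class Invoice(BaseModel):
    vendor: str
    total: float


def extract_invoice(text: str) -> Invoice:
    raw = ask_llm(
        "Return only JSON with fields 'vendor' (string) and 'total' (number):\n" + text
    )
    return Invoice.model_validate_json(raw)


# 4. Well-typed function calling: once the data is validated, hand it to an
#    ordinary typed function; no free-form tool invocation needed.
def file_invoice(invoice: Invoice) -> None:
    print(f"Filing invoice from {invoice.vendor} for {invoice.total:.2f}")
```

The point of each step is that the LLM's output is narrowed to something a normal program can check: a yes/no string, one of three labels, or JSON that either validates against a schema or raises.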

2 comments

mfalcon, 4 months ago
I work at a company that sells chatbots. We use LLMs to understand the user's intention: start a flow, set a slot value, cancel the current flow...

After that, all the logic to handle the identified action and produce the response is implemented in normal backend code.

We sometimes use an LLM for the response when the user asks a FAQ, or to rephrase an output sentence.
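Concretely, that split might look something like the sketch below: the LLM only names the intent (and, here, an optional slot value), and everything after that is ordinary backend code. The intent names, slots, and session handling are made-up assumptions, not anyone's actual product.

```python
# Sketch of the "LLM classifies, backend decides" split described above.
from dataclasses import dataclass


@dataclass
class Turn:
    intent: str             # e.g. "start_flow", "set_slot", "cancel_flow"
    slot: str | None = None
    value: str | None = None


def handle(turn: Turn, session: dict) -> str:
    # Deterministic backend logic; no LLM involved past this point.
    match turn.intent:
        case "start_flow":
            session["flow"] = "active"
            return "Sure, let's get started."
        case "set_slot" if turn.slot:
            session[turn.slot] = turn.value
            return f"Got it, {turn.slot} set."
        case "cancel_flow":
            session.clear()
            return "Okay, cancelled."
        case _:
            return "Sorry, I didn't catch that."
```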
shahbaby, 4 months ago
How is this different from just using smaller prompts?

I.e., one could just use their own prompt for the binary classifier, right?
Comment #42831584 not loaded.