TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Llama-agents: an async-first framework for building production-ready agents

116 points · by pierre · 11 months ago

7 comments

ldjkfkdsjnv · 11 months ago

These types of frameworks will become abundant. I personally feel that the integration of the user into the flow will be so critical that a purely decoupled backend will struggle to encompass the full problem. I view the future of LLM application development as closer to:

https://sdk.vercel.ai/

which is essentially a Next.js app where SSR is used to communicate with the LLMs/agents. Personally, I used to hate Next.js, but its application architecture is uniquely suited to UX with LLMs.

Clearly the asynchronous tasks taken by agents shouldn't run on the Next.js server side, but the integration between the user and the agent will need to be so tight that it's hard to imagine the value in a purely asynchronous system. A huge portion of the system/state will need to be synchronously available to the user.

LLMs are not good enough to run purely on their own, and probably won't be for at least another year.

If I were to guess, agent systems like this will run on serverless AWS/cloud architectures.
Comment #40826283 not loaded
Comment #40823879 not loaded
cheesyFish · 11 months ago

Hey guys, Logan here! I've been busy building this for the past three weeks with the llama-index team. While it's still early days, I really think the agents-as-a-service vision is something worth building for.

We have a solid set of things to improve, and now is the best time to contribute and shape the project.

Feel free to ask me anything!
Comment #40823285 not loaded
Comment #40823272 not loaded
Comment #40823555 not loaded
dr_kretyn · 11 months ago

Can't really take it seriously seeing "production ready" next to a vague project that was started three weeks ago.
gmerc · 11 months ago

How do you overcome compounding error, given that average LLM call reliability peaks well below 90%, let alone triple-9s?
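The compounding-error point is simple to quantify: if each of the n calls in an agent chain independently succeeds with probability p, the whole chain succeeds with probability p^n. A minimal sketch (the function name is illustrative, not from llama-agents):

```python
def chain_reliability(p: float, n: int) -> float:
    """Probability that an n-step chain succeeds when each step
    independently succeeds with probability p."""
    return p ** n

# At 90% per-call reliability, a 10-step chain succeeds only ~35% of the time.
print(f"{chain_reliability(0.90, 10):.3f}")  # 0.349
```

This is why per-call reliability well below "triple-9" (99.9%) makes long unattended chains impractical without retries, verification, or a human in the loop.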
Comment #40829779 not loaded
jondwillis · 11 months ago

Why use the already-overloaded "llama"?
k__ · 11 months ago

I have yet to see a production-ready agent.
Comment #40823686 not loaded
Comment #40826566 not loaded
Comment #40824635 not loaded
williamdclt · 11 months ago

I must be missing something: isn't this just describing a queue? The fact that the workload is an LLM seems irrelevant; it's just async processing of jobs?
Comment #40825960 not loaded
Comment #40824791 not loaded
Comment #40827199 not loaded
Comment #40824637 not loaded