
Llama-agents: an async-first framework for building production ready agents

116 points | by pierre | 11 months ago

7 comments

ldjkfkdsjnv, 11 months ago
These types of frameworks will become abundant. I personally feel that integrating the user into the flow will be so critical that a purely decoupled backend will struggle to encompass the full problem. I view the future of LLM application development as being closer to:

https://sdk.vercel.ai/

which is essentially a Next.js app where SSR is used to communicate with the LLMs/agents. Personally, I used to hate Next.js, but its application architecture is uniquely suited to UX with LLMs.

Clearly the asynchronous tasks taken by agents shouldn't run on the Next.js server side, but the integration between the user and the agent will need to be so tight that it's hard to imagine the value in a purely asynchronous system. A huge portion of the system's state will need to be synchronously available to the user.

LLMs are not good enough to run purely on their own, and probably won't be for at least another year.

If I had to guess, agent systems like this will run on serverless AWS/cloud architectures.
cheesyFish, 11 months ago
Hey guys, Logan here! I've been busy building this for the past three weeks with the llama-index team. While it's still early days, I really think the agents-as-a-service vision is something worth building for.

We have a solid set of things to improve, and now is the best time to contribute and shape the project.

Feel free to ask me anything!
dr_kretyn, 11 months ago
Can't really take it seriously seeing "production ready" next to a vague project that was started three weeks ago.
gmerc, 11 months ago
How do you overcome compounding error, given that average per-call LLM reliability peaks well below 90%, let alone triple 9s?
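gmerc's compounding-error point is simple arithmetic: if each step in an agent chain succeeds independently with probability p, a chain of n steps succeeds with probability p^n. A minimal sketch (the independence assumption and the 90% figure are illustrative, not from the framework):

```python
def chain_reliability(p: float, n: int) -> float:
    """Probability that all n sequential LLM calls succeed,
    assuming each call succeeds independently with probability p."""
    return p ** n

# Even at 90% per-call reliability, a 10-step agent pipeline
# succeeds only about 35% of the time end to end.
print(round(chain_reliability(0.90, 10), 2))  # 0.35
```

This is why per-call reliability well below 90%, as gmerc notes, makes long unsupervised chains hard to trust without retries or human checkpoints.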
jondwillis, 11 months ago
Why use the already-overloaded name "llama"?
k__, 11 months ago
I have yet to see a production ready agent.
williamdclt, 11 months ago
I must be missing something: isn't this just describing a queue? The fact that the workload is an LLM seems irrelevant; it's just async processing of jobs?
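The pattern williamdclt describes can be sketched in a few lines of standard-library Python: an `asyncio.Queue` feeding worker tasks, where the "LLM call" is a hypothetical stub standing in for any model request (this is an illustration of the generic queue pattern, not llama-agents' actual architecture):

```python
import asyncio

async def fake_llm_call(prompt: str) -> str:
    # Hypothetical stub standing in for a real model request.
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"

async def worker(queue: asyncio.Queue, results: list) -> None:
    # Pull jobs off the queue until cancelled.
    while True:
        prompt = await queue.get()
        results.append(await fake_llm_call(prompt))
        queue.task_done()

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(3)]
    for i in range(5):
        queue.put_nowait(f"job {i}")
    await queue.join()  # block until every job has been marked done
    for w in workers:
        w.cancel()
    return results

print(len(asyncio.run(main())))  # 5
```

Whether a framework adds value over this boils down to what it layers on top of the queue: routing between agents, retries, and state tracking.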