
Ask HN: What are some actual use cases of AI Agents right now?

169 points by chenxi9649, over 1 year ago

There are quite a few startups/OSS projects working on making LLMs do things on your behalf and not just complete your words. These projects range from small atomic actions to web scrapers to more general, ambitious assistants.

That all makes sense to me and I think is the right direction to be headed. However, it's been a while since the inception of some of these projects/cool demos, but I haven't seen anyone who uses agents as a core/regular part of their workflow.

I'm curious if you use these agents regularly or know someone who does. Or if you're working on one of these, I'd love to know some of the hidden challenges to making a useful product with agents. What's the main bottleneck?

Any thoughts are welcome!

39 comments

PheonixPharts, over 1 year ago
> I'd love to know what are some of the hidden challenges to making a useful product with agents?

One thing that is still confusing to me is that we've been building products with *machine learning* pretty heavily for a decade now, yet we've somehow abandoned everything we learned about the process now that we're building "AI".

The biggest thing any ML practitioner realizes when they step out of a research setting is that for most tasks, accuracy has to be *very* high for it to be productizable.

You can do handwritten digit recognition with 90% accuracy? Sounds pretty good, but if you need to turn that into recognizing a 12-digit account number, you now have a 70% chance of getting *at least* one digit incorrect. This means a product-worthy digit classifier needs *much* higher accuracy.

Go look at some of the LLM benchmarks out there; even in these happy cases it's rare to see any LLM getting above 90%. Then consider that you want to chain these calls together to create proper agent-based workflows. Even with 90% accuracy on each task, chain three of these together and you're down to 0.9 x 0.9 x 0.9 = 0.73, or 73% accuracy.

This is by far the biggest obstacle to seeing more useful products built with agents. There are cases where lower-accuracy results are acceptable, but most people don't even consider this before embarking on their journey to build an AI product/agent.
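A back-of-the-envelope sketch of the compounding-error argument above (plain Python, no external libraries; the step counts and per-step accuracies are illustrative, not measured values):

```python
def chained_accuracy(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in an n-step chain succeeds,
    assuming each step fails independently."""
    return per_step_accuracy ** steps

# 12-digit account number read by a 90%-accurate digit classifier
print(f"all 12 digits correct: {chained_accuracy(0.90, 12):.2f}")  # ~0.28

# three chained agent calls, each 90% accurate
print(f"3-step agent workflow: {chained_accuracy(0.90, 3):.2f}")   # 0.73

# per-step accuracy needed for a 3-step chain to hit 99% overall
print(f"per-step accuracy for 99% overall: {0.99 ** (1 / 3):.4f}") # ~0.9967
```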
alexawarrior3, over 1 year ago
None of these I've seen actually works in practice. Having used LLMs for software development over the past year or so, even the latest GPT-4/Gemini doesn't produce anything I can drop in and have it work. I've got to go back and forth with the LLM to get anything useful, and even then I have to substantially modify it. I really hope there are some big advancements soon and this doesn't just collapse into another AI winter, but I can easily see that happening.

Some recent actual use cases for me where an agent would NOT be able to help me, although I really wish it would:

1. An agent to automate generating web pages from design images - Given an image, produce the HTML and CSS. LLMs couldn't do this for my simple page from a web designer. Not even close, even mixing up vertical/horizontal flex arrangement. When I cropped the image to just a small section, it still couldn't do it. Tried a couple of LLMs; none even came close. And these are pretty simple, basic designs! I had to do it all manually.

2. Story Generator Agent - Write a story from a given outline (for educational purposes). Even with a very detailed outline and a large context window, it kept forgetting key points, used repetitive language, and showed no plot development. I just have to write the story myself.

3. Illustrator Agent - Image generation for the above story. Images end up very "LLM"-looking and often miss key elements in the story, but one thing is worst of all: no persistent characters. This is already a big problem with text, but an even bigger problem with images. Every image for the same story has a character who looks different, but I want them to be the same.

4. Publisher Agent - Package the above together so I can get a complete set of illustrated stories on various topics, available on web/mobile for viewing and tracking progress, at varying levels.

Just some examples of where LLMs are currently not moving the needle much, if at all.
deathmonger5000, over 1 year ago
I taught https://github.com/KillianLucas/open-interpreter how to use https://github.com/ferrislucas/promptr

Then I asked it to add a test suite to a Rails side project. It created missing factories, corrected a broken test database configuration, and wrote tests for the classes and controllers that I asked it to.

I didn't have to get involved with mundane details. I did have to intervene here and there, but not much. The tests aren't the best in the world, but IMO they're adding value by at least covering the happy path. They're not as good as what an experienced person would write.

I did spend a non-trivial amount of time fiddling with the prompts I used to teach OI about Promptr, as well as the prompts I used to get it to successfully create the test suite.

The total cost was around $11 using GPT-4 Turbo.

I think in this case it was a fun experiment. I think in the future, this type of tooling will be ubiquitous.
hubraumhugo, over 1 year ago
We're using AI agents for the orchestration of our fully automated web scrapers. But instead of trying to have one large general-purpose agent that is hard to control and test, we use many smaller agents that basically just pick the right strategy for a specific sub-task in our workflows. In our case, an agent is a medium-sized LLM prompt that has a) context and b) a set of functions available to call.

For example, we use it for:

- Website loading: Automate proxy and browser selection to load sites effectively. Start with the cheapest and simplest way of extracting data, which is fetching the site without any JS or an actual browser. If that doesn't work, the agent tries to load the site with a browser and a simple proxy, and so on.

- Navigation: Detect navigation elements and handle actions like pagination or infinite scroll automatically.

- Network analysis: Identify desired data within network calls.

- Validation: Hallucination checks and verification that the data is actually on the website and in the right format. (This is mostly traditional code, though.)

- Data transformation: Clean and map the data into the desired format. Fine-tuned, small, performant LLMs are great at this task, with high reliability.

The main challenge:

We quickly realized that doing this for a few data sources with low complexity is one thing; doing it for thousands of websites in a reliable, scalable, and cost-efficient way is a whole different beast.

The integration of tightly constrained agents with traditional engineering methods effectively solved this issue for us.

Edit: You can try out a simplified version of this in our playground: https://www.kadoa.com/add
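The "medium-sized prompt with context and a set of functions" pattern described above might look roughly like this. This is a minimal sketch using the OpenAI Python client's tool-calling interface; the function names, model name, and prompt wording are assumptions for illustration, not Kadoa's actual implementation:

```python
import json
from openai import OpenAI  # assumes the openai>=1.x SDK

client = OpenAI()

# Each "function" is one loading strategy the agent may pick for a given site.
TOOLS = [
    {"type": "function", "function": {
        "name": "fetch_plain_http",
        "description": "Fetch the page without JavaScript or a real browser (cheapest).",
        "parameters": {"type": "object",
                       "properties": {"url": {"type": "string"}},
                       "required": ["url"]}}},
    {"type": "function", "function": {
        "name": "fetch_headless_browser",
        "description": "Load the page in a headless browser behind a simple proxy.",
        "parameters": {"type": "object",
                       "properties": {"url": {"type": "string"}},
                       "required": ["url"]}}},
]

def pick_loading_strategy(url: str, failed: list[str]) -> tuple[str, dict]:
    """Ask the model to choose the next-cheapest strategy given what already failed."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You orchestrate a web scraper. Pick the cheapest loading "
                        "strategy that has not already failed for this site."},
            {"role": "user",
             "content": f"URL: {url}\nFailed strategies so far: {failed}"},
        ],
        tools=TOOLS,
        tool_choice="required",
    )
    call = response.choices[0].message.tool_calls[0]
    return call.function.name, json.loads(call.function.arguments)

# e.g. pick_loading_strategy("https://example.com/products", ["fetch_plain_http"])
```

The key design choice is that the agent only selects among a small, pre-defined set of strategies; all the actual fetching stays in traditional, testable code.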
cl42, over 1 year ago
I'm working on research agents to help with economic, financial, and political research. These agents are open source (see: https://github.com/wgryc/emerging-trajectories).

The use cases are pretty straightforward and low risk:

1. Run a Google web search.

2. Query a news API.

3. Write a document based on the above, while citing sources.

Here's an example of something written yesterday, where I'm forecasting whether July 2024 will be the hottest on record: https://emergingtrajectories.com/a/forecast/74

This is working well in that the write-ups are great and there are some "aha" moments, like the agent finding and referencing the National Snow and Ice Data Center (NSIDC)... Very cool! I wouldn't have thought of it.

Then there's the part where the agent also tells me that the Oregon Department of Transportation has holidays during the summer, which doesn't matter at all.

So, YMMV, as they say... But I am more productive with these agents. I wouldn't publish anything formally without confirming and reviewing the content, though.
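The three-step pipeline above can be sketched roughly as follows. This is a hedged illustration, not the emerging-trajectories API; `run_web_search`, `query_news_api`, and `complete` are hypothetical stand-ins for whichever search, news, and LLM clients you actually use:

```python
def research_brief(question: str, run_web_search, query_news_api, complete) -> str:
    """Gather sources, then ask an LLM to write a cited brief.

    The three callables are injected so the sketch stays provider-agnostic:
    - run_web_search(q) -> list of {"title", "url", "snippet"}
    - query_news_api(q) -> list of {"title", "url", "summary"}
    - complete(prompt)  -> str (any LLM completion call)
    """
    sources = run_web_search(question) + query_news_api(question)
    numbered = "\n".join(
        f"[{i + 1}] {s['title']} ({s['url']}): "
        f"{s.get('snippet') or s.get('summary', '')}"
        for i, s in enumerate(sources)
    )
    prompt = (
        f"Question: {question}\n\n"
        f"Sources:\n{numbered}\n\n"
        "Write a short research brief answering the question. "
        "Cite sources inline using their [n] numbers and only make claims "
        "supported by the sources."
    )
    return complete(prompt)
```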
dongecko, over 1 year ago
The company I work for has tons of documentation and regulations for several areas. In some areas there are well over a thousand documents, and for ease of use we build RAG-based chatbots on top of them. This is why I have been playing with RAG systems on the scale of "build completely from scratch" to "connect the services in Azure". The retrieval part of a RAG system is vital for good/reliable answers, and if you build it naively, the results are not overwhelming.

You can improve on the retrieved documents in many ways, like:

- better chunking,

- better embedding,

- embedding several rephrased versions of the query,

- embedding a hypothetical answer to the prompt,

- hybrid retrieval (vector similarity + keyword/tf-idf/BM25 search; see the sketch after this comment),

- massively incorporating metadata,

- introducing additional (or hierarchical) summaries of the documents,

- returning not only the chunks but also adjacent text,

- re-ranking the candidate documents,

- fine-tuning the LLM, and much, much more.

However, at the end of the day a RAG system usually still has a hard time answering questions that require an overview of your data. Example questions:

- "What are the key differences between the new and the old version of document X?"

- "Which documents can I ask you questions about?"

- "How do the regulations differ between case A and case B?"

In these cases it is really helpful to incorporate LLMs to decide how to process the prompt. This can be something simple like query routing, or rephrasing/enhancing the original prompt until something useful comes up. But it can also be agents that come up with sub-queries and a plan for how to combine the partial answers. You can also build a network of agents with different roles (like coordinator/planner, reviewer, retriever, ...) to come up with an answer.

* edited the formatting
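A minimal sketch of the hybrid-retrieval idea from the list above, combining dense vector similarity with BM25 keyword scores via simple reciprocal-rank fusion. It uses numpy and the rank-bm25 package; the `embed` callable is a hypothetical stand-in for whatever embedding model you use:

```python
import numpy as np
from rank_bm25 import BM25Okapi  # pip install rank-bm25

def hybrid_retrieve(query: str, chunks: list[str], embed, top_k: int = 5) -> list[str]:
    """Rank chunks by fusing dense-vector and BM25 keyword rankings.

    `embed(text) -> np.ndarray` is a placeholder for your embedding model.
    Reciprocal-rank fusion (RRF) avoids calibrating the two score scales.
    """
    # Dense ranking: cosine similarity between query and chunk embeddings.
    chunk_vecs = np.array([embed(c) for c in chunks])
    q_vec = embed(query)
    dense_scores = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    dense_rank = np.argsort(-dense_scores)

    # Sparse ranking: BM25 over whitespace-tokenized chunks.
    bm25 = BM25Okapi([c.lower().split() for c in chunks])
    sparse_scores = bm25.get_scores(query.lower().split())
    sparse_rank = np.argsort(-sparse_scores)

    # RRF: score(d) = sum over rankings of 1 / (60 + rank position of d).
    fused = np.zeros(len(chunks))
    for ranking in (dense_rank, sparse_rank):
        for position, doc_idx in enumerate(ranking):
            fused[doc_idx] += 1.0 / (60 + position)

    return [chunks[i] for i in np.argsort(-fused)[:top_k]]
```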
furyofantares, over 1 year ago
Agents are possible basically because the input to the LLM and the output of the LLM are both text. The loop is trivially closed.

But they're universally garbage, because they require the LLM to do a lot of things that LLMs are completely incompetent at. It's just way too early to expect to be able to remove that work and have it be done by an LLM.

The fact is, LLMs are useful because they easily do some work that you're terrible at, and you easily do a lot of work that they're terrible at, and this makes the LLM a good tool because you+LLM is better than either part of that equation alone.

It's natural to think of the things that come effortlessly to you as easy, and to not even notice you're doing any work. But that doesn't change the fact that the LLM is completely incompetent at many of these things. It's way too early to remove the human from the loop.
minimaxir, over 1 year ago
That depends on your definition of "agent": the term has been warped by AI hypesters from the original ReAct paper to the point of being meaningless, because it sounds cool.

The more notable common paradigm of agent workflows that will persist even if there's an AI crash is retrieval-augmented generation (RAG), which at a high level is essentially few-shot text generation based on prior existing examples. There will always be value in aligning LLM output to be much more expected, such as "generate text in the style of these examples" or "use these examples to answer the user's question."

Startups that just market themselves as "chat with your data!", even though they are RAG-based, are gimmicks, though, and won't survive because they have no moat.
thoughtlede, over 1 year ago
Answering the second part of your question, about hidden challenges:

If you are using AI agents to automate the execution of a workflow [1], then the question to ask is where the non-determinism in the workflow lies. As in, where do humans scratch their heads, as opposed to relying on deterministic computations.

It turns out that a lot of the time, as humans, we scratch our heads just once for a given kind of objective to come up with a plan. Once we devise a plan, we execute the same plan over and over again without much difficulty.

This inherent pattern in how humans solve problems somewhat diminishes the value of AI agents, because even in the best-case scenario the agents would only be solving a one-time, front-loaded pain. The value-add would have been immense if the pain were recurrent for a given objective.

That is not to say there is no role for AI agents. We are trying to infuse AI agents into an environment to which we as humans have adapted pretty well. AI agents will have to create newer objectives and goals that we humans have not realized. Finding that uncharted territory, or blue ocean, is where the opportunity is.

[1] By 'workflow' I mean a series of steps to take in order to achieve an overall objective.
janlukacs, over 1 year ago
I keep asking the "experts" on LinkedIn all the time to show me real-life uses - radio silence.
lebean, over 1 year ago
Don't downplay the value of watching agents talk to each other for amusement. I got a lot of mileage out of that and will continue to do so.
jonasnelle, over 1 year ago
I think there are two main reasons the fully "self-driving" end-to-end agents that demo well don't work.

1. Planning is hard, and errors compound exponentially: Most demos try to start with a single sentence, e.g. "order me a Dominos pizza", and go do the whole thing. It turns out planning has been one of the things that LLMs are not that good at. Also, even for a low probability p of failure at a given step, you get all steps right with probability (1-p)^n, which gets bad as n grows.

2. Reliability matters, and vision is not quite there yet: GPT-4V is great, and there have been a handful of domain-specific open-source models more focused on understanding screenshots, but most of them are not good enough yet to work reliably. And for most applications, reliability is key if you are going to trust the agent to do things on your behalf.

Disclaimer: I'm one of the founders of Autotab (https://www.autotab.com/); we're building a desktop app that lets anyone teach an AI to do a task just by showing it once. We've gone all in on reliability, building our own browser on top of Chromium to give us the bare-metal control needed to deliver 98%+ reliability without any site-specific fine-tuning.

The other opinionated thing we've done is to focus on "show, don't tell". We've found that for most important automations it is easier to show the agent the workflow than it would be to write a paragraph describing the steps. If you were to train a human, would you explain where to click, or just share your screen and explain with a voice-over?

Some stories from our users: one works in IT and sometimes spends hours on- and off-boarding employees (a 60,000-person company); they need to do 20 different steps across 8 different software applications. Another example is a recruiting company that has many employees looking for candidates and sending messages on LinkedIn all day. In general we mostly see automations that take action or sync data across different software applications.
Liron, over 1 year ago
There are countless use cases for a *good* AI agent.

The problem is temporary: good AI agents don't exist, because sufficiently intelligent AI doesn't yet exist.

(Agency and broad-domain intelligence are basically the same thing. Being able to answer questions relevant to planning is planning.)

This state of affairs is in stark contrast to the crypto/Web3 space, where no one ever presented a use case even conditional on the existence of good blockchain technology.
RobotToaster, over 1 year ago
There are now multiple AI models built specifically to solve 4chan CAPTCHAs, because AI is now better at solving CAPTCHAs than humans are.
a_wild_dandan, over 1 year ago
A few personal uses:

1. Find, annotate, aggregate, organize, summarize, etc. all of my knowledge from notes

2. A Google substitute with direct answers in place of SEO junk text and countless ads

3. Writing boilerplate code, especially in unfamiliar languages

4. Dynamic, general, richly nuanced multimodal content moderation without the human labor bill

5. An extremely effective personal tutor for learning nearly anything

I view AI as commoditizing general intelligence. You can supply it, like turning on the tap, wherever intelligence helps. I inject intelligence into moderating Discord message harassment, detecting when my 3D prints fail, filtering fluff from articles, cleaning up unstructured data, flagging inappropriate images, etc. (All with the same model!) The world is overwhelmingly starved of intelligence. What extremely limited supply we have of this scarce resource (via humans) is woefully insufficient, and often extreme overkill where deployed. I now have access to a pennies-on-the-dollar supply of (low/mediocre-quality) intelligence. Bet that I'll use it anywhere possible to unlock personal value and free up *my* intelligence for use where it's *actually* needed.
blueboo, over 1 year ago
Joining the chorus of “applications exist but functional agents don’t”. There is one proven application: raising credulous VC money—and hoping that funding lasts until someone else’s foundation model makes it work
simonw, over 1 year ago
Which definition of agents are you interested in?

I'm pretty convinced at this point that the term "agents" is almost useless, because so many people are carrying entirely different mental models of what the term means - so it invites conversations where no one is actually talking about the same exact idea.
usgroup, over 1 year ago
Some of the comments reminded me of LeCun's claim regarding the error distribution of an LLM's output conditional on content length. Namely, if "e" is the probability of an error, the probability of a sequence of length "n" being error-free is p = (1-e)^n. That is to say, there is exponentially less chance that an LLM sequence is "within the distribution of correct answers" as token length increases.

This is a consequence of the "auto-regressive" model and its lack of built-in self-correction, and it is a limiting factor in actual applications.

LeCun's tweet: https://twitter.com/ylecun/status/1640122342570336267
choeger, over 1 year ago
I am not aware of anything that works today, but I think there's room for shopping agents. Say you need a new USB stick or a pair of shoes: something between $10 and $1000 that you simply have to buy ASAP but that doesn't warrant spending one or more evenings on research. A language model could sift through the descriptions and comments and try to eliminate trash and even outright fraud.

But then again, it's just another search engine, essentially. So how long would it stay useful before it accepts payments to promote certain offers?
jmull, over 1 year ago
Some code-completion bots are helpful to me, but since you put it as "...and not just complete your words", I don't think I've seen anything.

Well, except customer service bots (assuming the goal is to inexpensively absorb the energy of unhappy customers so they give up, rather than actually getting the result they want or leaving, both of which cost the company money).
dmezzetti, over 1 year ago
The fully autonomous agents that call tools work OK. I don't think any of them are ready for prime time.

I've had success building multi-agent workflows, which in a sense are an ensemble of experts with different prompts that bounce answers off each other and validate them. For example, with one LLM prompt you can ask a question, and with another you can validate the answer. A bit of strength-in-numbers defense against hallucinations.

I wrote an example of doing this in this article: https://medium.com/neuml/ai-powered-parenting-can-ai-help-you-communicate-with-your-grumpy-teen-4ff691fd7061
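A rough sketch of the ask-then-validate pattern described above (not the code from the linked article; `complete(prompt) -> str` is a hypothetical stand-in for any LLM completion call):

```python
def answer_with_validation(question: str, complete, max_retries: int = 2) -> str:
    """One prompt answers; a second prompt acts as a reviewer.

    `complete(prompt) -> str` is any LLM completion function.
    The reviewer is asked for a strict verdict so we can branch on it.
    """
    answer = complete(f"Answer the following question concisely:\n{question}")

    for _ in range(max_retries):
        verdict = complete(
            "You are a strict reviewer. Given the question and a proposed answer, "
            "reply with exactly 'OK' if the answer is correct and well-supported; "
            "otherwise briefly explain what is wrong.\n\n"
            f"Question: {question}\nProposed answer: {answer}"
        )
        if verdict.strip().upper().startswith("OK"):
            return answer
        # Feed the critique back and try again.
        answer = complete(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Reviewer feedback: {verdict}\nWrite an improved answer."
        )
    return answer
```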
sjhatfield, over 1 year ago
I use Duet AI from Google in VS Code. It is quite good at completing my code as I'm writing it. I almost exclusively write Python code. I am not prompting for a whole file or anything, but it can often complete multiple lines at once.
bediashpreet, over 1 year ago
Almost all the AI apps we build for our clients now use autonomous assistants.

They're simply better than naive RAG, especially when you need to access APIs, format content, or compare different sections of the knowledge base.

Here are a few demos we have in the open:

> HackerNews AI: Interacts with the Hacker News API - https://hn.aidev.run

> ArXiv AI: Reads, summarizes, and compares arXiv papers - https://arxiv.aidev.run

(love that it can give you a comparison between two papers)

These use cases are only possible using agents (or whatever that means).
Art9681, over 1 year ago
It's a search engine in a box: a snapshot of a corner of the internet, or some archive, or information generated via other automated processes, compressed via clever algorithms. It is a highly useful tool that gets more useful the more you use it. A good LLM + retrieval setup can save a lot of time. It's a tool that brings information to you. A single pane of very fragile glass, today.

I can honestly say that my use of search engines has decreased drastically and been replaced with SOTA LLMs + web retrieval.
burnte, over 1 year ago
We're using Dragon's DAX Copilot with our providers. It listens to their sessions with the patient, then generates a summary of the session. It's amazingly good.
molave, over 1 year ago
From a creative writing perspective, I can set personalities or quirks for a character and it can come up with in-character responses and dialogue.
GolfPopper, over 1 year ago
Via Bing, Microsoft seems to be using AI agents to make me laugh. Most recently when it told me the surface of Ganymede was covered with Cavorite.
digitcatphd, over 1 year ago
Right now, in my opinion, the most potential lies in the large action model designed by Rabbit, or a similar general learning framework that can be rapidly configured without a ton of code. I anticipate such a tool or model, and therefore will not invest significantly in building things the hard way. I already learned that lesson with LLMs.
PaulHoule, over 1 year ago
My RSS reader is an A.I. agent; I have written a huge number of comments mentioning it:

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&query=yoshinon&sort=byDate&type=comment
vergessenmir, over 1 year ago
Reasoning across many stages, converging on a user-provided goal with the required level of accuracy, is beyond commercially available LLMs. Take the travel agent use case: a recent paper showed that the LLMs tested would get dates and prices wrong. So the promise of AutoGPTs, GodGPTs, etc. is still quite far away.
mise_en_place, over 1 year ago
The only one I've found useful so far is a documentation agent, similar to what LangChain has in their docs. It is useful to be able to interface with an agent, instead of having to scour the man pages to find the relevant information.
geor9e, over 1 year ago
A similar post, if you want to read the comments there: https://news.ycombinator.com/item?id=39263664
brendongeils, over 1 year ago
The majority of our users are seeing value from heavy co-pilot workflows in documents, Jupyter notebooks, and form generation. We built a data analytics platform, for context. Early use was chatting with your SQL database and web research; now we are seeing more multi-modal uses for chart analysis. We have a whole list of tasks on our application homepage: https://app.athenaintelligence.ai/
NicoJuicy, over 1 year ago
- Suggesting better variable names

- Cleaning up / changing something in bulk (e.g. cleaning attributes from a class)

- Generating unit tests! (just follow up on what it actually tests, though)
rpmisms, over 1 year ago
Google Pixel's Hold For Me feature. Not a typical LLM, but it's a phenomenal AI agent.
wepple, over 1 year ago
Prioritization of work (security).

Feed in a collection of docs about applications in use at an organization, including their user guides; summarize what the capability of each application is; identify which capabilities are high risk; prioritize which applications need the most security visibility.

Usually this is a classic, difficult problem of inventory and 100 meetings.

Perfect? Nope. A huge leap forward? Yes.
jdmccarty, over 1 year ago
A big problem thus far has been singular agents trying to solve all aspects of the task, which, as others have noted, can cause a 90% success rate to compound into 0.9 x 0.9 x 0.9. I expect this spring and summer we will see the first batches of agents working together to solve problems. ChatGPT announced the ability for their paywalled GPTs to call upon other GPTs, which is an elementary version of this process. As teams experiment with these concepts, and as compute costs fall in parallel, I believe we will see potentially thousands or millions of them working together. Doing so will bring a more deterministic outcome to the process while also encouraging the unexpected and variable output that is inherent in LLMs.
crowdyriver, over 1 year ago
I am surprised no one is building an LLM code linter.
robertrocha884, over 1 year ago
Nice read