科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.


What We Learned from a Year of Building with LLMs

324 points | by 7d7n | 12 months ago

18 comments

wokwokwok · 12 months ago
Mildly surprised to see no mention of my top 2 LLM fails:

1) You're sampling a distribution; if you only sample once, your sample is not representative of the distribution.

For evaluating prompts and running in production, your hallucination rate is inversely proportional to the number of times you sample.

Sampling many times and voting is a highly effective (but slow) strategy.

There is almost zero value in evaluating a prompt by running it only once.

2) Sequences are generated in order.

Asking an LLM to make a decision and then justify its decision, in that order, is literally meaningless.

Once the "decision" tokens are generated, the justification does not influence them. It's not like they happen "all at once"; there is a specific sequence to generating output, where the later output *cannot magically* influence the output which has already been generated.

This is true for sequential outputs from an LLM (obviously), but it is also true *inside single outputs*. The sequence of tokens in the output is a *sequence*.

So if you're generating structured output (e.g. JSON, XML), which is not semantically ordered, and your output is something like {decision: …, reason: …}, the reason field literally does nothing for the decision.

…but it *is* valuable to "show the working out" when, as above, you then evaluate multiple solutions to a single request and pick the best one(s).
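The sample-many-and-vote strategy described above can be sketched in a few lines (a minimal sketch; `sample_fn` is a hypothetical stand-in for one LLM call at nonzero temperature that returns a normalized answer string):

```python
import collections
from typing import Callable

def sample_and_vote(sample_fn: Callable[[], str], n: int = 5) -> str:
    """Run the same prompt n times and return the most common answer.

    A single sample is not representative of the output distribution,
    but the mode of many samples usually is.
    """
    answers = [sample_fn() for _ in range(n)]
    # Majority vote over the sampled answers.
    return collections.Counter(answers).most_common(1)[0][0]
```

The trade-off the comment notes is visible here: latency and cost scale linearly with `n`.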
DubiousPusher · 12 months ago
Pretty good. Despite my high scepticism of the technology, I have spent the last year working with LLMs myself. I would add a few things.

The LLM is like another user, and it can surprise you just like a user can. All the things you've done over the years to sanitize user input apply to LLM responses.

There is power beyond the conversational aspects of LLMs. Always ask: do you need to pass the actual text back to your user, or can you leverage the LLM and constrain what you return?

LLMs are the best tool we've ever had for understanding user intent. They obsolete the hierarchies of decision trees and spaghetti logic we've written for years to classify user input into discrete tasks (realizing this and throwing away so much code has been the joy of the last year of my work).

Being concise is key, and these things suck at it.

If you leave a user alone with the LLM, some users will break it. No matter what you do.
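Treating the model's reply like untrusted user input can be as simple as an allow-list check with a safe fallback (a sketch; the intent labels are hypothetical):

```python
# Hypothetical intent labels for an imagined support-bot classifier.
ALLOWED_INTENTS = {"refund", "order_status", "cancel", "other"}

def parse_intent(llm_response: str) -> str:
    """Sanitize an LLM reply the way you would sanitize user input:
    normalize it, and fall back to a safe default for anything
    outside the allow-list (chatty preambles, invented labels, etc.).
    """
    cleaned = llm_response.strip().strip('"').lower()
    return cleaned if cleaned in ALLOWED_INTENTS else "other"
```

This also illustrates the "constrain what you return" point: the raw model text never reaches the user, only a validated label does.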
mloncode · 12 months ago
Hello, this is Hamel, one of the authors (among a list of other amazing authors). Happy to answer any questions, as well as tag any of my colleagues to answer!

(Note: this is only Part 1 of a 3-part series that has already been written; the other two parts will be released shortly.)
__loam · 12 months ago
I feel like an insane person every time I look at the LLM development space and see what the state of the art is.

If I'm understanding this correctly, the standard way to get structured output seems to be to retry the query until the stochastic language model produces the expected output. RAG also seems like a hilariously thin wrapper over traditional search systems, and it still might hallucinate in that tiny distance between the search result and the user. We're talking about writing sentences and coaching what amounts to an autocomplete system into magically giving us something we want. How is this industry getting hundreds of billions of dollars in investment?

Also, the error rate is about 5-10% according to this article. That's pretty bad!
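The retry-until-valid pattern being criticized here does look roughly like this in practice (a sketch; `call_llm` is a hypothetical stand-in for a real model client):

```python
import json

def get_structured(call_llm, prompt: str,
                   required_keys=("decision", "reason"),
                   retries: int = 3) -> dict:
    """Re-query the model until it returns parseable JSON
    containing the expected keys, or give up after `retries` tries.
    """
    last_err = None
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as err:
            last_err = err  # malformed JSON: try again
            continue
        if all(k in data for k in required_keys):
            return data
    raise ValueError(f"no valid output after {retries} tries: {last_err}")
```

Constrained decoding and function-calling APIs reduce the need for this loop, but schema-level validation in post is still common.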
elicksaur · 12 months ago
Upon loading the site, a chat bubble pops up and auto-plays a loud ding. Is the innovation of LLMs really a regression to 2000s spam sites? Can’t say I’m excited.
Havoc · 12 months ago
Surely step one is to carefully consider whether LLMs are the solution to your problem? That, to me, is the part where this is likely to go wrong for most people.
l5870uoo9y · 12 months ago
> Thus, you may expect that effective prompting for Text-to-SQL should include structured schema definitions; indeed.

I found that the simpler the better when testing lots of different SQL schema formats on https://www.sqlai.ai/. CSV (table name, table column, data type) outperformed both a JSON-formatted and an SQL schema dump, not to mention consumed fewer tokens.

If you need the database schema in a consistent format (e.g. CSV), just have the LLM extract the data and convert whatever the user provides into CSV. It shines at this.
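The compact CSV rendering described above could be produced like this (a sketch; the table and column names are illustrative, and the schema is assumed to already be parsed into a dict):

```python
def schema_to_csv(tables: dict) -> str:
    """Render a parsed schema as compact CSV: one line per column,
    in (table, column, type) form, with a header row.
    """
    lines = ["table_name,column_name,data_type"]
    for table, columns in tables.items():
        for col, dtype in columns:
            lines.append(f"{table},{col},{dtype}")
    return "\n".join(lines)
```

Compared with a full `CREATE TABLE` dump, this drops constraints and defaults, which is exactly why it spends fewer tokens.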
surfingdino · 12 months ago
One thing I am getting from this is that you need to be able to write prompts in well-structured English. That may be a challenge for a significant percentage of the population.

I am curious to know whether the authors tried to build with LLMs in languages other than English, and what they learned while doing so.

An excellent post, reminding me of the best O'Reilly articles from the past. Looking forward to parts 2 and 3.
CuriouslyC · 12 months ago
One thing that wasn't mentioned that works pretty well: if you have a RAG process running async rather than in a REPL loop, you can retrieve documents, then perform a pass with another LLM to do summarization/extraction first. This saves input token costs for expensive LLMs and lets you cram more information into the context; you just have to deal with additional latency.
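That summarize-before-stuffing pass might be sketched as follows (`summarize` is a hypothetical async call to the cheaper model; the concurrency cap is an assumption):

```python
import asyncio

async def compress_context(docs, summarize, max_concurrency: int = 5):
    """Summarize each retrieved document with a cheaper model before
    placing the results into the expensive model's context window.
    """
    sem = asyncio.Semaphore(max_concurrency)  # bound concurrent calls

    async def one(doc: str) -> str:
        async with sem:
            return await summarize(doc)

    # Summaries run concurrently; results keep the input order.
    summaries = await asyncio.gather(*(one(d) for d in docs))
    return "\n\n".join(summaries)
```

Because the summaries run concurrently, the added latency is roughly one cheap-model call (per batch of `max_concurrency`), not one per document.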
7thpower · 12 months ago
This is excellent and matches my experience, especially the part about prioritizing deterministic outputs. They are not as sexy as agentic chain of thought, but they actually work.
hubraumhugo · 12 months ago
A comprehensive and practical write-up that aligns with most of my experience.

One controversial point that has led to discussions in my team is this:

> A common anti-pattern/code smell in software is the "God Object," where we have a single class or function that does everything. The same applies to prompts too.

In theory, a monolithic agent/prompt with infinite context size, a large toolset, and perfect attention would be ideal.

Multi-agent systems will always be less effective and more error-prone than monolithic systems on a given problem, because each agent has less context on the overall problem. Individual agents work best when they have entirely different functionalities.

I wrote down my thoughts about agent architectures here: https://www.kadoa.com/blog/ai-agents-hype-vs-reality
anon373839 · 12 months ago
Is anyone using DSPy? It seems like a really interesting project, but I haven’t heard much from people building with it.
hugobowne · 12 months ago
Hey there, Hugo here, and a big fan of this work. Such a fan that I'm actually doing a livestream podcast recording with all the authors, if you're interested in hearing more from them: https://lu.ma/e8huz3s6?utm_source=hn

Should be fun!
lagrange77 · 12 months ago
Can anyone recommend resources, preferably books, on this whole topic of building applications around LLMs? It feels like running after an accelerating train to hop on.
msp26 · 12 months ago
Thanks for sharing. I've followed these authors for a while and they're great.

Some notes from my own experience using LLMs for NLP problems:

1) The output schema is usually more impactful than the text part of a prompt.

a) Field order matters a lot. At inference, the earlier tokens generated influence the next tokens.

b) Just have the CoT as a field in the schema too.

c) PotentialField and ActualField pairs allow the LLM to generate some broad options and then select the best. This somewhat mitigates the fact that models can't backtrack. If you have human evaluation in your process, this also makes it easier for reviewers to correct mistakes.

`'PotentialThemes': ['Surreal Worlds', 'Alternate History', 'Post-Apocalyptic'], 'FinalThemes': ['Surreal Worlds']`

d) Most well-defined problems should be solvable zero-shot on a frontier model. Before rushing off to add examples, really check that you're solving the correct problem in the most ideal way.

2) Defining the schema as TypeScript types is flexible and reliable and takes up minimal tokens. The output JSON structure is pretty much always correct (as long as it fits in the context window); the only issue is that the language model can pick values outside the schema, but that's easy to validate in post.

3) "Evaluating LLMs can be a minefield." Yeah, it's a pain in the ass.

4) Adding too many examples increases the token cost per item a lot. I've found that it's possible to process several items in one prompt and, despite it seeming silly and inefficient, it works reliably and cheaply.

5) Example selection is not trivial and can cause very subtle errors.

6) Structuring your inputs with XML is very good. Even if you're trying to get JSON output, XML input seems to work better. (Haven't extensively tested this because eval is hard.)
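Points 1a-1c above can be combined into a schema whose field order mirrors generation order, plus a post-hoc check that the final pick stays inside the model's own shortlist (a sketch; the field names follow the comment's example, and the schema notation is illustrative):

```python
# Field order mirrors generation order: reasoning first, then broad
# options, then the final selection, so earlier tokens can inform
# later ones (the reverse ordering would be useless).
OUTPUT_SCHEMA = {
    "ChainOfThought": "string",
    "PotentialThemes": "string[]",
    "FinalThemes": "string[]",
}

def validate_themes(output: dict) -> bool:
    """Post-hoc validation: the final themes must be a subset of the
    shortlist the model generated for itself."""
    return set(output["FinalThemes"]) <= set(output["PotentialThemes"])
```

This is the "easy to validate in post" step from point 2: out-of-schema values are caught by a set check rather than a retry.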
goldemerald · 12 months ago
"Ready to -dive- delve in?" is an amazingly hilarious reference. For those who don't know, LLMs (especially ChatGPT) use the word "delve" significantly more often than human-created content does. It's a primary tell-tale sign that someone used an LLM to write the text. Keep an eye out for delving, and you'll see it everywhere.
mark_l_watson · 12 months ago
Fantastic advice. While reading the article I kept running across advice I had seen before or figured out myself, then forgot about. I am going to summarize this article and add the summary to my own Apple Notes (there are better tools, but I just use Apple Notes as a pile-of-text for research notes).
beepbooptheory · 12 months ago
Is every "AI product" a piece of software where the end user interfaces with an LLM? Or is an application that used AI to be built an "AI product"?

Is it the thing itself, or is it the thing that enables us?