
LLM Prompt Tuning Playbook

2 points by aberoham 6 months ago

1 comment

kingkongjaffa 6 months ago
What's surprising is that this guidance comes from someone doing machine learning research at Google, according to their GitHub, and there's nothing in the guidance that a curious person couldn't have figured out themselves just by playing with it.

With LLMs, the gap between novice and expert insight doesn't seem very large.

I think we have yet to see new UX patterns emerge for this tech.

We have anchored closely on chatbot behaviour and started looking at multi-agent systems, but these might be local maxima, and other paradigms may get more value out of these tools.

I'll use this repo as a handy example, but there isn't anything new in it compared to my own independent playing with the tools.

One interesting part was about zero-shot sometimes being better than few-shot prompting. Many of us (me included) assumed the opposite was intuitively true: that helping the LLM anchor on some examples is always better than zero-shot.

It would have been nice to see some test examples of this.
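The zero-shot versus few-shot distinction the commenter mentions is easy to make concrete. Below is a minimal sketch in plain Python, with no LLM API calls; the classification task, example reviews, and labels are hypothetical placeholders, not taken from the linked playbook. The only structural difference between the two prompt styles is whether labeled examples are prepended before the query.

```python
# Minimal sketch contrasting zero-shot and few-shot prompt construction.
# The task and examples below are hypothetical; no model is actually called.

TASK = "Classify the sentiment of the review as positive or negative."

FEW_SHOT_EXAMPLES = [
    ("The battery died within a week.", "negative"),
    ("Setup took two minutes and it just works.", "positive"),
]


def zero_shot_prompt(review: str) -> str:
    """Build a prompt containing only the instruction and the query."""
    return f"{TASK}\n\nReview: {review}\nSentiment:"


def few_shot_prompt(review: str) -> str:
    """Build a prompt that prepends labeled examples before the query."""
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}"
        for text, label in FEW_SHOT_EXAMPLES
    )
    return f"{TASK}\n\n{shots}\n\nReview: {review}\nSentiment:"


if __name__ == "__main__":
    review = "The screen is gorgeous but it overheats constantly."
    print(zero_shot_prompt(review))
    print("---")
    print(few_shot_prompt(review))
```

Testing the claim would amount to running both prompt variants over the same labeled set and comparing accuracy, which is the kind of side-by-side evidence the commenter says is missing from the repo.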