
LLM Prompt Tuning Playbook

2 points by aberoham 6 months ago

1 comment

kingkongjaffa 6 months ago
What's surprising is that this guidance comes from someone doing machine learning research at Google, according to their GitHub, and there's nothing in it that a curious person couldn't have figured out themselves just by playing with the tools.

With LLMs, the gap between novice and expert insight doesn't seem very large.

I think we have yet to see new UX patterns emerge for this tech. We have anchored closely on chatbot behaviour and started looking at multi-agentic systems, but these might be local maxima, and some other paradigm may get more value out of these tools.

I'll use this repo as a handy example, but there isn't anything new in it compared to my own independent playing with the tools.

One interesting part was the observation that zero-shot prompting is sometimes better than few-shot prompting. Many people (me included) assumed the opposite made intuitive sense: that helping the LLM anchor on some examples is always better than giving none. It would have been nice to see some test examples of this.
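To make the zero-shot vs few-shot comparison concrete, here is a minimal sketch of how the two prompt variants differ for the same task. The task, the example reviews, and the `complete` placeholder are all illustrative assumptions, not anything from the playbook itself; swap `complete` for whatever LLM client you actually use.

```python
# Minimal sketch comparing a zero-shot prompt with a few-shot prompt
# for the same classification task. Everything here is illustrative.

TASK = "Classify the sentiment of the review as positive or negative."

# Hypothetical worked examples used only by the few-shot variant.
FEW_SHOT_EXAMPLES = [
    ("The battery died within a week.", "negative"),
    ("Setup took two minutes and it just works.", "positive"),
]


def zero_shot_prompt(review: str) -> str:
    """Instruction only, no examples."""
    return f"{TASK}\n\nReview: {review}\nSentiment:"


def few_shot_prompt(review: str) -> str:
    """Same instruction, preceded by examples the model can anchor on."""
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in FEW_SHOT_EXAMPLES
    )
    return f"{TASK}\n\n{shots}\n\nReview: {review}\nSentiment:"


def complete(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call; replace with a real client."""
    raise NotImplementedError("wire up your LLM API here")


if __name__ == "__main__":
    review = "The screen is gorgeous but the keyboard flexes badly."
    print("--- zero-shot ---\n" + zero_shot_prompt(review))
    print("\n--- few-shot ---\n" + few_shot_prompt(review))
```

Running both variants over a small labelled set and comparing accuracy would be the kind of test example the comment is asking for.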