
Ask HN: Can You Prompt an LLM to Solve a Simple Puzzle Clearly?

1 point by jeeybee about 1 month ago

1 comment

jeeybee about 1 month ago
I've observed something interesting: my toddler can quickly and somewhat effortlessly solve a simple physical puzzle, yet when I try to prompt large language models (LLMs) to guide me clearly through the solution, they struggle.

Even with clear photographs and detailed instructions, LLMs tend to give general advice or indirect methods rather than precise, actionable steps that clearly demonstrate correctness.

Have you tried something similar? Have you successfully "prompt-engineered" an LLM into giving clear, precise, step-by-step solutions for physical puzzles? If yes, what's your approach?