科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声. All rights reserved.

Ask HN: How to save an LLM state achieved with a series of prompts?

3 points | by Exorust | almost 2 years ago
I'd like to load up an LLM that I've taught about something through a series of prompts.

2 comments

armchairhacker | almost 2 years ago
I'm not sure exactly how it's formatted (and it may differ between GPT/LLaMA/others), but when an LLM "remembers" your chat history, it's actually getting all of the previous prompts and its responses prepended to your next prompt as its input.

Something like:

    System: You are a helpful assistant.
    User: What is the country with the largest population?
    Assistant: India.
    User: Ok, what is the next largest?
    Assistant: China.
    User: How about the next largest?
    Assistant: United States.
    User: And the next?
    Assistant:

and the model outputs

    Indonesia.

But you can't just put your history in a single prompt because it would be formatted differently. If you're using a model like ChatGPT, there may be a way to copy the conversation. Otherwise, you can definitely edit the conversation (including AI responses) in the API. See https://platform.openai.com/playground for easy access; there's probably a command-line tool or alternative frontend you can give your API key to make it easier.
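Since the "state" built up by a series of prompts is just this message list, saving it amounts to plain serialization. A minimal Python sketch (the file name and the model name in the comment are placeholder assumptions, not anything from the thread):

```python
import json
from pathlib import Path

# A conversation is just a list of {"role", "content"} messages.
# The model itself is stateless between API calls, so persisting
# this list is what "saving the LLM state" means in practice.

def save_conversation(messages, path):
    """Write the message history to a JSON file."""
    Path(path).write_text(json.dumps(messages, indent=2, ensure_ascii=False))

def load_conversation(path):
    """Read a previously saved message history back into a list."""
    return json.loads(Path(path).read_text())

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the country with the largest population?"},
    {"role": "assistant", "content": "India."},
]

save_conversation(history, "conversation.json")
restored = load_conversation("conversation.json")
assert restored == history

# To "resume", append the next prompt and send the whole list as the
# `messages` parameter of a chat-completion request, e.g. (requires the
# `openai` package and an API key; shown only for illustration):
#
#   restored.append({"role": "user", "content": "Ok, what is the next largest?"})
#   client.chat.completions.create(model="gpt-3.5-turbo", messages=restored)
```

The round trip is lossless because roles and content are plain strings, so the restored list can be fed back to the API exactly as the original conversation was.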
brucethemoose2 | almost 2 years ago
The LLM has no memory. What you actually feed it is the entire conversation (truncated to its input limit) every time you respond.

But you are bound to the interface, especially if you are not running a local LLM like Llama.

Different models have different prompting syntax.
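As an illustration of model-specific prompting syntax, here is a rough sketch of flattening a saved message list into a Llama-2-style chat prompt. This is an approximation of that template for illustration only; real deployments should use the chat template shipped with the model's tokenizer rather than hand-rolling the format:

```python
# Chat models expect the conversation flattened into one string using
# their own template, which is why a history saved in one format can't
# be pasted verbatim into a different model's prompt.

def to_llama2_prompt(messages):
    """Approximate the Llama-2 chat format for a {"role", "content"} list."""
    system = ""
    turns = []
    for m in messages:
        if m["role"] == "system":
            system = m["content"]
        else:
            turns.append(m)

    prompt = ""
    # Turns alternate user/assistant; each user turn opens an [INST] block.
    for i in range(0, len(turns), 2):
        user = turns[i]["content"]
        if i == 0 and system:
            # The system prompt is folded into the first user turn.
            user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        prompt += f"<s>[INST] {user} [/INST]"
        if i + 1 < len(turns):
            prompt += f" {turns[i + 1]['content']} </s>"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the next largest country by population?"},
]
print(to_llama2_prompt(messages))
```

The same message list rendered for a GPT-style API stays as structured JSON, while a local Llama expects this single flattened string, which is the incompatibility the comment is pointing at.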