
I wrote a meta mode for ChatGPT

194 points | by airesearcher | over 1 year ago

19 comments

furyofantares | over 1 year ago
I have a number of "slash commands" listed amidst my custom instructions, which I usually use naked to amend the previous reply, but sometimes include in my query to begin with.

"There are some slash commands to alter or reinforce behavior.

/b Be as brief as possible

/d Be as detailed as possible

/eli5 or just /5 Explain like I'm 5. And you're Richard Feynman.

/r Research or fact-check your previous reply. Identify the main claims, do a web search, cite sources.

/c Complement your prior reply by writing and running code.

/i Illustrate it."
dr_dshiv | over 1 year ago
It's neat but I would love to see some more examples of why you think it is good. I tend to be skeptical of adding anything to the context window that isn't closely aligned with my goals. My biggest issue is usually getting ChatGPT to get out of the "blurry center" (where it just blathers pablum) to a "productive spike" where it can genuinely do specific work.
fnordpiglet | over 1 year ago
Sort of surprised at the negativity in the responses here. I think the idea of a meta-programming prompt for ChatGPT, evaluated by GPT, is pretty clever and useful, particularly if you're working to iteratively refine some content in context. It's not my natural mode of using ChatGPT, but I can certainly see the use, particularly for content creators.

I am curious from the author about the specific motivation and the use case for the GPT that was in their mind - which in some ways is probably asking "what do you typically use ChatGPT for, and what is your way of doing that?"

Nonetheless, this is clever and a novel way of manipulating instruction to surface a meta language within an LLM.
bongodongobob | over 1 year ago
I don't see why this is good. You're clogging your context with a bunch of unnecessary clutter. Just tell it what you want it to do, no? Like why am I spending 1500 characters per message on the hello world loop example? I get the same output from just asking it to do that.

The message indexing is kind of interesting, but again, it's a huge waste. Just write a wrapper rather than wasting all those tokens and muddying your context.

I think in the end this is just eye candy and is going to get you worse results. Granted, I haven't tested thoroughly, but neither has OP.
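
(For illustration, a minimal sketch of the "wrapper" idea above, assuming the official openai Python client; the command table and expansions here are hypothetical examples, not taken from the original post:)

    # Hypothetical local wrapper: expand short slash commands into full
    # instructions client-side, so the long text never has to live in the
    # model's custom instructions at all.
    from openai import OpenAI

    COMMANDS = {
        "/b": "Be as brief as possible.",
        "/d": "Be as detailed as possible.",
        "/r": "Fact-check your previous reply and cite sources.",
    }

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    history = []

    def send(user_text: str) -> str:
        # If the message starts with a known command, replace the command
        # with its full instruction before it reaches the model.
        for cmd, expansion in COMMANDS.items():
            if user_text.startswith(cmd):
                user_text = expansion + user_text[len(cmd):]
                break
        history.append({"role": "user", "content": user_text})
        resp = client.chat.completions.create(model="gpt-4", messages=history)
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply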
zomglings | over 1 year ago
The hate this is getting is not justified, it's a very cool idea. I tried the GPT you created and it is impressively consistent.

Why do you think that GPT-4 so consistently respects these instructions?

It's not perfect, though -- I asked it to explain a bit more about functions and it slipped. Its usage examples had function invocations in the form "//f <function_name> <*args>"

I asked it about this and it responded:

    In reply to: 3
    Thank you for pointing out the discrepancy. You are correct. According to
    the //N syntax rules, functions should be invoked using the format
    "//<function name>," not "//f <function name>." The "//f" prefix is used
    for defining a new function, not for invoking it. Here's the corrected
    way to invoke the functions I mentioned earlier:

Anyway, I don't think I would use this exactly the way that you define it, but I really like the idea of basically defining macros to the conversation and invoking them using a special syntax like this. Will have to see if it sticks or not, but it's a valuable idea.

I'm curious - do you know how custom instructions work? A lot of people are implying that the custom instructions simply get prepended to each message, but is that true? Or are they fed into the model as context in a more opaque manner?
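
(As a point of comparison only: one common assumption is that custom instructions behave roughly like a system message at the start of the conversation. The sketch below shows what that assumption looks like with the API, where the instructions are resent with every request because the API is stateless; it is not a statement of how ChatGPT itself implements the feature.)

    # Sketch of the "custom instructions are just prepended context" theory,
    # expressed as a system message via the openai Python client.
    from openai import OpenAI

    CUSTOM_INSTRUCTIONS = "..."  # e.g. the //N command definitions from the post

    client = OpenAI()
    messages = [{"role": "system", "content": CUSTOM_INSTRUCTIONS}]

    def ask(question: str) -> str:
        messages.append({"role": "user", "content": question})
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        answer = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        return answer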
WhitneyLand | over 1 year ago
Nova, I don't think people are wanting to respond negatively, the concept is just not coming across easily even with the GPT.

Maybe a short YouTube video where you can demo it and show some examples of how it adds value, or post something like this:

"Normally in ChatGPT to do x you would have to do this... But with my approach you can do x like this... Notice how this saved y amount of time or typing"
airesearcher | over 1 year ago
Update:

Thanks for all your comments!

I've made the GPT version and added a LOT more (because there is no 1500-character restriction there).

https://chat.openai.com/g/g-tcXXGxXmA-nova-mode-pro-ai-authoring-productivity-tool

Type //? for the user manual

Type //?? for usage examples

Type //fun to get it to generate a random new command

Also I updated the basic code block on my OP blog post, with a few little fixes: https://tinyurl.com/mryn42te
EMM_386 | over 1 year ago
I'm not much interested in the prompt itself, but I am continually amazed that ChatGPT is able to make sense of prompts like this.

That is pretty far from "language", and I can't see how any of that has been seen in its "training data".

I mean ... you can add something like "//! = loop mode = do loop: (ask USR for instruct (string, num); iterate string num times); Nesting ok." and it can not only parse that (or "tokenize" it), but then somehow find relationships in its internal high-dimensional vector space that are sufficient to ... pseudo-"execute" it?

I don't know. Obviously, not my area of expertise, although I can say I've spent a lot of time *trying* to understand even the basics. But then I'll see an example like this, and be reminded of how little I understand any of it.
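
(For contrast, a literal, deterministic reading of that quoted "//!" definition would be something like the toy Python below. The point is that the model pseudo-executes behaviour like this from a one-line natural-language spec, with no actual interpreter involved; the code is illustrative only.)

    # Toy illustration: what the quoted "loop mode" definition would mean
    # if it were real code instead of a prompt the model emulates.
    def loop_mode() -> None:
        text = input("String to repeat: ")
        count = int(input("Number of repetitions: "))
        for _ in range(count):
            print(text)

    if __name__ == "__main__":
        loop_mode()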
czhu12 | over 1 year ago
I'm a bit confused -- are these instructions that are compiled and executed deterministically by some ChatGPT runtime engine, or is it just a prompt that's prepended to every input?
airesearcher | over 1 year ago
Wow ... I figured out a great trick to compress the custom instructions!!!

The potential benefit of this new approach is that (a) it causes ChatGPT to only load the instructions once per chat, instead of prepending them to every message, which can save token space, and (b) the length of the custom instructions is not limited by the 1500-character limit - it can be as long as the instructions on the target page.

Instead of adding my custom instructions, just add this text below as your Custom Instruction:

"If you did not already do this yet in this chat session before this, then you must first use web browser to read https://tinyurl.com/app and then print "Custom Instructions loaded" and learn the instructions and use them to interpret any //N commands in this chat."

UPDATE - it seems that ChatGPT has some policies in place that limit it from using more than 90 words from a web page it fetches. I am investigating this to see if I can find a way around it....
airesearcher | over 1 year ago
I wrote a custom instruction for ChatGPT Pro that radically improves productivity inside ChatGPT. Add it to your custom instructions and enjoy!
simonw | over 1 year ago
I suggest also making this available as a GPT. I don't like pasting random stuff into my custom instructions, because that will affect all of my future usage. I'd much rather try out a GPT where the effects of those instructions stay limited to that one place.
miohat | over 1 year ago
I am using custom instructions and have built several GPTs.

I have discovered that ChatGPT just keeps forgetting my system prompt.

For example, I ask my custom GPT[0] to always print 3 follow-up questions regarding the current topic after responding. But it just keeps forgetting.

I found that if I upload images, there's a 99% chance that ChatGPT will forget to print the follow-up questions. I don't know why. And I'm wondering, when it forgets to print the follow-up questions, does it still remember the other system prompts I gave it?

[0]: https://chat.openai.com/g/g-IehmFtJh3-great-explainer
airesearcher | over 1 year ago
Here it is as a GPT if you want to try it that way:

https://chat.openai.com/share/3ebde0ee-5db6-44c3-a836-2b4ee35b944b
airesearcher | over 1 year ago
Try the //fun command in the GPT version... it's fun

https://chat.openai.com/g/g-tcXXGxXmA-nova-mode-pro-ai-authoring-productivity-tool
joshcho | over 1 year ago
This is cool; I have been experimenting with Forth to control chats (chat as a stack). Surprisingly useful - demoed to around 50 friends.
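
(Purely as a guess at what "chat as a stack" could mean - Forth-style words that push and pop conversation turns before each model call - here is a toy sketch; none of the names or behaviour below come from the comment author.)

    # Toy guess at a Forth-flavoured "chat as a stack": conversation turns live
    # on a stack, and short words manipulate the stack before the next call.
    from openai import OpenAI

    client = OpenAI()
    stack: list[str] = []  # most recent turn on top

    def word(token: str) -> None:
        if token == "drop":            # discard the last turn
            stack.pop()
        elif token == "dup":           # repeat the last turn
            stack.append(stack[-1])
        elif token == "swap":          # reorder the last two turns
            stack[-1], stack[-2] = stack[-2], stack[-1]
        else:                          # anything else is pushed as a new turn
            stack.append(token)

    def run() -> str:
        # The whole stack, bottom to top, becomes the prompt for the model.
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": "\n".join(stack)}],
        )
        reply = resp.choices[0].message.content
        stack.append(reply)
        return reply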
sitkack | over 1 year ago
Neat. I did something similar and created a stack-based NLP language; I used it mainly for synthesizing prompts for image generation.
airesearcher | over 1 year ago
Type //? for the manual
airesearcher | over 1 year ago
Main URL has been corrected thanks to the moderators.