I have a number of "slash commands" listed amidst my custom instructions, which I usually use naked to amend the previous reply, but sometimes include in my query to begin with.<p>"There are some slash commands to alter or reinforce behavior.<p>/b Be as brief as possible<p>/d Be as detailed as possible<p>/eli5 or just /5 Explain like I'm 5. And you're Richard Feynman.<p>/r Research or fact-check your previous reply. Identify the main claims, do a web search, cite sources.<p>/c Complement your prior reply by writing and running code.<p>/i Illustrate it."
It’s neat, but I would love to see some more examples of why you think it is good. I tend to be skeptical of adding anything to the context window that isn’t closely aligned with my goals. My biggest issue is usually getting ChatGPT out of the “blurry center” (where it just blathers pablum) and into a “productive spike” where it can genuinely do specific work.
Sort of surprised at the negativity in the responses here. I think the idea of a meta-programming prompt for ChatGPT, evaluated by the GPT itself, is pretty clever and useful, particularly if you’re working to iteratively refine some content in context. It’s not my natural mode of using ChatGPT, but I can certainly see the use, particularly for content creators.<p>I am curious from the author about the specific motivation and use case for the GPT they had in mind - which in some ways is probably asking “what do you typically use ChatGPT for, and what is your way of doing that?”<p>Nonetheless, this is clever and a novel way of manipulating instructions to surface a meta-language within an LLM.
I don't see why this is good. You're clogging your context with a bunch of unnecessary clutter. Just tell it what you want it to do, no? Like why am I spending 1500 characters per message on the hello world loop example? I get the same output from just asking it to do that.<p>The message indexing is kind of interesting, but again, it's a huge waste. Just write a wrapper rather than wasting all those tokens and muddying your context.<p>I think in the end this is just eye candy and is going to get you worse results. Granted, I haven't tested thoroughly, but neither has OP.
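To make the wrapper suggestion concrete: this is a purely hypothetical sketch of what I mean by expanding commands client-side instead of burning context on them. The command names and expansion strings are made up to match the examples upthread, and the actual API call is only indicated in a comment.

```python
# Hypothetical client-side expander: slash commands are rewritten into
# full instructions locally, so the model never sees the command table.
COMMANDS = {
    "/b": "Be as brief as possible.",
    "/d": "Be as detailed as possible.",
    "/r": "Fact-check your previous reply and cite sources.",
}

def expand(message: str) -> str:
    """Replace a leading slash command with its full instruction text."""
    head, _, rest = message.partition(" ")
    if head in COMMANDS:
        instruction = COMMANDS[head]
        return f"{instruction} {rest}".strip() if rest else instruction
    return message

# The expanded text is what actually gets sent, e.g.:
# client.chat.completions.create(
#     model=...,
#     messages=[{"role": "user", "content": expand(user_input)}],
# )
```

That way the "instruction table" lives in your code, not in every request's context window.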
The hate this is getting is not justified; it's a very cool idea. I tried the GPT you created and it is impressively consistent.<p>Why do you think GPT-4 respects these instructions so consistently?<p>It's not perfect, though -- I asked it to explain a bit more about functions and it slipped: its usage examples had function invocations in the form "//f <function_name> <*args>"<p>I asked it about this and it responded:<p><pre><code> In reply to: 3
Thank you for pointing out the discrepancy. You are correct. According to
the //N syntax rules, functions should be invoked using the format
"//<function name>," not "//f <function name>." The "//f" prefix is used for
defining a new function, not for invoking it. Here's the corrected way to
invoke the functions I mentioned earlier:
</code></pre>
Anyway, I don't think I would use this exactly the way that you define it, but I really like the idea of basically defining macros to the conversation and invoking them using a special syntax like this. Will have to see if it sticks or not, but it's a valuable idea.<p>I'm curious - do you know how custom instructions work? A lot of people are implying that the custom instructions simply get prepended to each message, but is that true? Or are they fed into the model as context in a more opaque manner?
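Sketched deterministically, the macro idea looks something like the toy below. To be clear, this is my own invention for illustration — the "//f name = body" grammar is hypothetical, not Nova Mode's actual syntax, and the real GPT interprets commands "in-model" rather than with a parser like this.

```python
import re

class MacroTable:
    """Toy deterministic version of the //N macro idea:
    "//f name = body" defines a macro, "//name" invokes it."""

    def __init__(self):
        self.macros = {}

    def handle(self, line: str):
        # Definition: "//f greet = Say hello warmly"
        define = re.match(r"//f\s+(\w+)\s*=\s*(.+)", line)
        if define:
            self.macros[define.group(1)] = define.group(2)
            return None  # definitions produce no output
        # Invocation: "//greet"
        invoke = re.match(r"//(\w+)\b", line)
        if invoke and invoke.group(1) in self.macros:
            return self.macros[invoke.group(1)]
        return line  # ordinary text passes through unchanged
```

The interesting part is that the LLM does this expansion without any such parser existing anywhere.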
Nova, I don’t think people want to respond negatively; the concept is just not coming across easily, even with the GPT.<p>Maybe make a short YouTube video where you demo it and show some examples of how it adds value, or post something like this:<p>“Normally in ChatGPT, to do x you would have to do this… But with my approach you can do x like this… Notice how this saved y amount of time or typing.”
Update:<p>Thanks for all your comments!<p>I've made the GPT version and added a LOT more (because there is no 1500-character restriction there).<p><a href="https://chat.openai.com/g/g-tcXXGxXmA-nova-mode-pro-ai-authoring-productivity-tool" rel="nofollow noreferrer">https://chat.openai.com/g/g-tcXXGxXmA-nova-mode-pro-ai-autho...</a><p>Type //? for the user manual<p>Type //?? for usage examples<p>Type //fun to get it to generate a random new command<p>Also, I updated the basic code block in my OP blog post with a few little fixes: <a href="https://tinyurl.com/mryn42te" rel="nofollow noreferrer">https://tinyurl.com/mryn42te</a>
I'm not much interested in the prompt itself, but I am continually amazed that ChatGPT is able to make sense of prompts like this.<p>That is pretty far from "language", and I can't see how anything like it appeared in its "training data".<p>I mean ... you can add something like "//! = loop mode = do loop: (ask USR for instruct (string, num); iterate string num times); Nesting ok." and it can not only parse that (or "tokenize" it), but then somehow find relationships in its internal high-dimensional vector space that are sufficient to ... pseudo-"execute" it?<p>I don't know. Obviously, not my area of expertise, although I can say I've spent a lot of time <i>trying</i> to understand even the basics. But then I'll see an example like this, and be reminded of how little I understand any of it.
I'm a bit confused -- are these instructions compiled and executed deterministically by some ChatGPT runtime engine, or is it just a prompt that's prepended to every input?
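In other words, is my mental model below roughly right? This is just a guess at how it might work, assuming the standard chat-completions message format — the instruction text is made up, and nothing here is executed as code.

```python
def build_request(custom_instructions: str, history: list, user_input: str) -> list:
    """Guess at how custom instructions might reach the model: as an
    ordinary system message prepended to every request, not as compiled code."""
    return (
        [{"role": "system", "content": custom_instructions}]  # sent with every request
        + history
        + [{"role": "user", "content": user_input}]
    )
```

If so, the "//N commands" are just text the model has been conditioned to pattern-match against, with no runtime engine involved.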
Wow … I figured out a great trick to compress the custom instructions!<p>The potential benefit of this new approach is that (a) it causes ChatGPT to load the instructions only once per chat, instead of prepending them to every message, which can save token space, and (b) the length of the custom instructions is not limited by the 1500-character limit - it can be as long as the instructions on the target page.<p>Instead of adding my custom instructions, just add this text below as your Custom Instruction:<p>“If you have not already done this in this chat session, you must first use the web browser to read <a href="https://tinyurl.com/app" rel="nofollow noreferrer">https://tinyurl.com/app</a>, then print “Custom Instructions loaded”, learn the instructions, and use them to interpret any //N commands in this chat.”<p>UPDATE - it seems that ChatGPT has some policies in place that limit it to using no more than 90 words from a web page it fetches. I am investigating to see if I can find a way around it....
I suggest also making this available as a GPT. I don't like pasting random stuff into my custom instructions, because that will affect all of my future usage. I'd much rather try out a GPT where the effects of those instructions stay limited to that one place.
I am using custom instructions and have built several GPTs.<p>I have discovered that ChatGPT just keeps forgetting my system prompt.<p>For example, I ask my custom GPT[0] to always print 3 follow-up questions about the current topic after responding. But it just keeps forgetting.<p>I found that if I upload images, there's a 99% chance that ChatGPT will forget to print the follow-up questions. I don't know why. And I'm wondering: when it forgets to print the follow-up questions, does it still remember the other system prompts I gave it?<p>[0]: <a href="https://chat.openai.com/g/g-IehmFtJh3-great-explainer" rel="nofollow noreferrer">https://chat.openai.com/g/g-IehmFtJh3-great-explainer</a>
Here it is as a GPT, if you want to try it that way:<p><a href="https://chat.openai.com/share/3ebde0ee-5db6-44c3-a836-2b4ee35b944b" rel="nofollow noreferrer">https://chat.openai.com/share/3ebde0ee-5db6-44c3-a836-2b4ee3...</a>
Try the //fun command in the GPT version... it's fun<p><a href="https://chat.openai.com/g/g-tcXXGxXmA-nova-mode-pro-ai-authoring-productivity-tool" rel="nofollow noreferrer">https://chat.openai.com/g/g-tcXXGxXmA-nova-mode-pro-ai-autho...</a>