Depending on the use case and frequency, I either:<p>- Save them as a ChatGPT custom GPT or a Claude Project.<p>- Create a Raycast AI Command. <a href="https://manual.raycast.com/ai">https://manual.raycast.com/ai</a><p>- Save them as text snippets in my Obsidian notes. <a href="https://obsidian.md" rel="nofollow">https://obsidian.md</a>
Mostly plain text files saved locally for easy copy-pasting.<p>I'll occasionally use prompts from the Anthropic library (<a href="https://docs.anthropic.com/en/prompt-library/library" rel="nofollow">https://docs.anthropic.com/en/prompt-library/library</a>) and make minor modifications to them. E.g. I'll adapt the "prose polisher" prompt from the library to refine written text in specific ways.
For ChatGPT I've found this search extension useful to find previously used prompts: <a href="https://chromewebstore.google.com/detail/gpt-search-chat-history/glhkbfoibolghhfikadjikgfmaknpelb?hl=en" rel="nofollow">https://chromewebstore.google.com/detail/gpt-search-chat-his...</a><p>Source code: <a href="https://github.com/polywock/gpt-search">https://github.com/polywock/gpt-search</a>
I keep it ad hoc - models change so frequently that prompts break constantly. Most of the ones I used last year are no longer relevant.<p>"Prompt engineering" may be a thing of the past. These days, you can sketch a vague table on a piece of paper, take a photo of it with a phone, and AI will figure out exactly what you're trying to do.
Maybe I'm hijacking the thread, but I see a more general problem - how do you keep snippets of text that you reuse in your browser?<p>My current kludge is to edit long text fields in an external editor via a browser add-on, and have the editor save all such edits locally.
I’m thinking of making a simple wrapper around the APIs, because web-based AIs tend to dump tons of text due to monetary incentives. For now I prepend a standard pseudo-system stub to all my chats, which works fine for my use case.
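The "prepend a standard stub" trick can be sketched in a few lines. This is a minimal illustration, not the commenter's actual code: the stub text and the `build_messages` helper are made up, and the resulting list is shaped for a chat-completions-style API.

```python
# Illustrative "pseudo-system stub" wrapper: every request gets a fixed
# system message prepended before the user's prompt.
BREVITY_STUB = (
    "Be concise. Answer directly, skip preambles and summaries, "
    "and do not restate the question."
)

def build_messages(user_prompt, history=None):
    """Return a chat messages list with the stub always in first position."""
    messages = [{"role": "system", "content": BREVITY_STUB}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("Explain tail recursion in one paragraph.")
```

The same list can then be handed to whichever provider's chat endpoint you use, so the brevity instruction travels with every request instead of being retyped.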
Tired of OpenAI account deletions and Gemini template hiccups? Frustrated with manually typing or copy-pasting prompts every time you switch between LLM clients? If you're like me and want a smoother way to manage your prompts, I built a tool that might be just what you need.<p>*The Problem:*<p>* OpenAI accounts can be deleted unexpectedly.<p>* Gemini templates sometimes fail to work.<p>* Re-typing or copy-pasting prompts across multiple clients is tedious.<p>*The Solution: DryPrompt*<p>DryPrompt lets you create reusable prompt templates with variable fields. You set up the framework once, and then simply fill in the variables to generate the full prompt.<p>*How It Works:*<p>1. *Go to:* dryprompt.go123.live<p>2. *Sign up:* It's free and allows you to sync your prompts across devices.<p>3. *Create a template:* Define your prompt structure and mark the parts you want to change with variables.<p>4. *Use it:* Copy the template, replace the variables with your specific content, and you've got your ready-to-use prompt!<p>*Example:*<p>Let's say you need to internationalize multiple code files. With DryPrompt, you can create a template that includes the file code as a variable. Each time, just copy the template, paste in the new file's code, and you'll instantly get the internationalization prompt. No more tedious copying and manual concatenation!<p>*Give it a try and make your LLM workflow more efficient:* dryprompt.go123.live