Hi Everyone! I've just extracted this from our codebase.<p>As LLMs have become ubiquitous in web applications, I've noticed that prompts intended for Claude or GPT have become scattered throughout our codebase or buried within objects. Often, these prompts were built inline through string manipulation. My thinking was two-fold: 1) Let's come up with a simple pattern for organizing and rendering these prompts, and 2) Let's make them easy to review.<p>This draws heavy inspiration from ActionMailer::Preview.
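<p>To give a feel for the pattern, here's a simplified sketch (names are illustrative rather than the exact API): the prompt body lives in an ERB view instead of inline string interpolation, and a preview class renders it for review, the way ActionMailer::Preview does for mail.<p><pre><code>  # app/prompts/support_prompt.rb
  class SupportPrompt < ActionPrompt::Base
    def ask(question)
      @question = question
      prompt # renders the ERB view below
    end
  end

  # app/views/prompts/support/ask.text.erb
  You are a helpful support agent. Answer the customer's question:
  <%= @question %>

  # test/prompts/previews/support_prompt_preview.rb
  class SupportPromptPreview < ActionPrompt::Preview
    def ask
      SupportPrompt.new.ask("How do I reset my password?")
    end
  end</code></pre>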
From the codebase you extracted this from, how are you sending these prompt strings onward? I feel like it would really benefit from being inspired by all of ActionMailer, not just ActionMailer::Preview. Being able to just fire these off in a job would be useful. Even better if you could act on the responses as part of the job, so kind of like a little controller/agent thing?<p><pre><code>  config.action_prompt.delivery_method = :chatgpt_api
  config.action_prompt.chatgpt_api_settings = {
    api_key: ENV["OPENAI_API_KEY"]
  }
...
  class ApplicationPrompt < ActionPrompt::Base
    before_action :set_user

    def set_user
      @user = current_user || AnonUser.new
    end
  end
  class SupportAgentPrompt < ApplicationPrompt
    def ask(question)
      @question = question.strip.downcase

      # renders `views/prompts/support_agent/ask.text.erb`;
      # ideally recognizes JSON and strips whitespace in the response
      if (result = prompt(question:))
        broadcast_append_to @user,
          target: :support_chat,
          partial: 'support_chat/message',
          locals: { message: result, from: SupportAgentUser.new }
      end
    end
  end
...
  SupportAgentPrompt.ask("What's the best menu item?").prompt_later</code></pre>
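<p>To be concrete, I'd picture prompt_later enqueueing through ActiveJob the same way deliver_later does. A rough sketch (everything here is hypothetical):<p><pre><code>  # The job re-renders the prompt and hands it to the configured
  # delivery method (e.g. :chatgpt_api), mirroring ActionMailer::MailDeliveryJob.
  class ActionPrompt::DeliveryJob < ActiveJob::Base
    queue_as :prompts

    def perform(prompt_class, action, *args)
      prompt_class.constantize.new.public_send(action, *args)
    end
  end

  # ...and prompt_later, on the lazy object returned by
  # SupportAgentPrompt.ask(...), would just enqueue that job:
  def prompt_later
    ActionPrompt::DeliveryJob.perform_later(@prompt_class.name, @action.to_s, *@args)
  end</code></pre>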