Hey HN! I'm working on OpenPipe, an open-source prompt workshop. I wanted to share a feature we recently released: prompt translations. Prompt translations let you quickly convert a prompt between GPT 3.5, Llama 2, and Claude 1/2 compatible formats. The common case is that you're using GPT 3.5 in production and want to evaluate a Claude or Llama 2 model for your use case. Here's a screen recording showing how it works in our UI: https://twitter.com/OpenPipeLab/status/1687875354311180288

We've found that many of our users are interested in evaluating Claude or Llama 2, but aren't sure what changes they need to make to their prompts to get the best performance out of those models. Prompt translations make that easier.

A bit more background: OpenPipe is an open-source prompt studio that lets you test your LLM prompts against scenarios from your real workloads. We currently support GPT 3.5/4, Claude 1/2, and Llama 2. The full codebase (including prompt translations) is available at https://github.com/OpenPipe/OpenPipe. If you'd prefer a managed experience, you can also sign up for our hosted version at https://openpipe.ai/.

Happy to answer any questions!
The thing I'm keen on is keeping my OpenAI function definitions and having Claude (or Llama) return the same "call this function with these arguments" syntax. It takes a little prompting, but it works when I do it by hand; I just need a wrapper so I can talk to Claude with the same function-calling inputs as OpenAI. Does this do that?
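For what it's worth, the wrapper described above can be approximated in plain prompting: embed the OpenAI-style function definitions in the prompt, ask the model to reply with only a JSON function call, and parse that reply back into OpenAI's `function_call` shape. A minimal sketch, assuming nothing about OpenPipe's actual implementation (the helper names here are made up for illustration):

```python
import json

def render_functions_prompt(functions, user_message):
    """Embed OpenAI-style function definitions in a plain-text prompt so a
    model without native function calling can imitate the syntax.
    (Hypothetical helper, not part of any library.)"""
    return (
        "You may call one of these functions by replying with JSON of the "
        'form {"name": ..., "arguments": {...}} and nothing else.\n\n'
        f"Functions:\n{json.dumps(functions, indent=2)}\n\n"
        f"User: {user_message}"
    )

def parse_function_call(completion):
    """Parse the model's JSON reply into OpenAI's function_call shape,
    where `arguments` is a JSON-encoded string."""
    call = json.loads(completion)
    return {"name": call["name"], "arguments": json.dumps(call["arguments"])}

# Example: build the prompt, then parse a (simulated) model reply.
functions = [{"name": "get_weather", "parameters": {"city": {"type": "string"}}}]
prompt = render_functions_prompt(functions, "What's the weather in SF?")
call = parse_function_call('{"name": "get_weather", "arguments": {"city": "SF"}}')
```

The fragile part in practice is the parsing step: models sometimes wrap the JSON in prose or code fences, so a production wrapper would need more defensive extraction than `json.loads` alone.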
Nice!

So it sounds like this takes a GPT-formatted prompt and adds all that Llama 2 prompt template stuff (<s>, [INST], etc.) — is that right?

I'm guessing no conversion is needed between GPT-3.5 and Claude 1/2, but I'd like to know whether that's right or not too.
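The template-wrapping step the comment above describes can be sketched directly. This is a simplified illustration of Meta's Llama 2 chat format (not OpenPipe's actual translation code): the system message goes inside `<<SYS>>` tags in the first turn, and each user/assistant exchange is wrapped in `<s>[INST] ... [/INST] ... </s>`:

```python
def to_llama2_prompt(messages):
    """Render OpenAI-style chat messages as a Llama 2 chat prompt string.
    Assumes messages alternate user/assistant after an optional system turn."""
    msgs = list(messages)
    system = ""
    if msgs and msgs[0]["role"] == "system":
        system = msgs.pop(0)["content"]

    prompt = ""
    for i in range(0, len(msgs), 2):
        user = msgs[i]["content"]
        if i == 0 and system:
            # Llama 2 expects the system prompt folded into the first user turn.
            user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        prompt += f"<s>[INST] {user} [/INST]"
        if i + 1 < len(msgs):  # append the assistant reply, if present
            prompt += f" {msgs[i + 1]['content']} </s>"
    return prompt

print(to_llama2_prompt([
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "Hi"},
]))
```

Claude 1/2's format differs from GPT's too (it expects `\n\nHuman:` / `\n\nAssistant:` turns rather than a message array), so some conversion is still needed there, even if it's less elaborate than Llama 2's.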