Started working on an LLM application for language learning. Are there any good resources that show, in a reliable way, how structuring prompts differently gives different results? I guess I'm looking for prompt best practices, if something like that exists.
Many of the resources you'll find are misleading and overengineered.

The basic idea is to talk to it like you would talk to a human. It's trained on human language. Act like a kindergarten teacher. Be iterative.

Give it plenty of context. If your prompt would get downvoted on Stack Overflow, it's probably low quality. This takes real effort, and most of the high-level tooling around GPT (e.g. LangChain, Voiceflow) is just a way to feed it better context.

It reads tokens, not words. It's pretty bad with numbers because of this: "10000" might be read as "one ten-thousand" or "one-hundred hundred" or "one-thousand ten" by the LLM. It does badly with some other things for similar reasons; for example, you can't ask it to write an essay under 2000 words, because it can't count the words it's producing.

Because of this, it often has trouble with markdown even after being trained on plenty of it. Try to simplify your formatting into something a human could grasp at a glance; break tables down into nested bullet points, etc.

Give it examples. Garbage in, garbage out. If you give it none, it will pull something from its memory, probably some crappy poetry from Wattpad or code written by some bloated consulting company. If you want it to write like Rumi, give it examples of Rumi poetry you like and then ask it to work from those.

An example of a bad prompt is "Write a poem about a cat." A human wouldn't do well with that brief either. Of course it sounds soulless. What is it about cats you want to say? What is this poem for?
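If you want to see the token thing for yourself, here's a quick sketch using OpenAI's tiktoken tokenizer (pip install tiktoken). The exact splits are just whatever the encoding happens to do; cl100k_base is the one GPT-3.5/GPT-4 use:

    import tiktoken

    # cl100k_base is the encoding used by gpt-3.5-turbo and gpt-4
    enc = tiktoken.get_encoding("cl100k_base")

    for text in ["10000", "2000 words", "cat"]:
        tokens = enc.encode(text)
        # show the byte chunks the model actually sees, not the words you typed
        pieces = [enc.decode_single_token_bytes(t).decode("utf-8", errors="replace")
                  for t in tokens]
        print(f"{text!r} -> {pieces}")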
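And to make the "give it examples" point concrete, here's a rough sketch of a few-shot prompt using the openai Python client. The model name is a placeholder (use whatever you're on), and the example poems are placeholders you'd paste in yourself:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Instead of "Write a poem about a cat", give it context, constraints,
    # and a couple of examples of the style you actually want.
    messages = [
        {"role": "system",
         "content": "You write short poems in the style of the examples the user provides."},
        {"role": "user",
         "content": (
             "Here are two Rumi poems I like:\n\n"
             "<paste poem 1 here>\n\n"
             "<paste poem 2 here>\n\n"
             "In this style, write a short poem about an old cat "
             "watching the rain from a windowsill. Keep it under 8 lines."
         )},
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    print(response.choices[0].message.content)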
There are a huge number of GPTs for language learning in the ChatGPT store - see if you can coax any of them into spitting out their system prompts and documents.