Towards the end they state: ‘… just adding “do not hallucinate” has been shown to reduce the odds a model hallucinates.’ I find this surprising, and it doesn’t fit with my understanding of how a language model works. But I’m very much a novice. Would this be due to later training on feedback that marks bad responses with the term “hallucinate”?
Prompt engineering doesn't feel like an activity that creates sustainable AI advancement. A prompt may work well with one model in most situations, but even the best practices seem too experimental.

For their competition to avoid a PR disaster, wouldn't it be better to look inside the model? Perhaps observe the model's internals when the AI says something you want to avoid in the future, and trigger a safeguard if the model starts heading in that direction.
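For what it's worth, here is a minimal sketch of what that kind of internal safeguard might look like, assuming a PyTorch / Hugging Face stack. The monitored layer, the threshold, and the "unwanted direction" vector are placeholder assumptions (in practice the direction would come from something like a linear probe trained on examples of the behaviour to avoid), not anything from the article or the comment above:

```python
# Sketch: watch a transformer layer's activations during generation and
# flag when they drift toward a direction associated with unwanted output.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small model, for illustration only
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

hidden_size = model.config.hidden_size
# Placeholder "unwanted" direction; a real one would be learned, e.g. via a probe.
unwanted_direction = F.normalize(torch.randn(hidden_size), dim=0)
THRESHOLD = 0.25  # arbitrary trigger level for this sketch

flagged = []

def safeguard_hook(module, inputs, output):
    # GPT-2 blocks return a tuple; the first element is (batch, seq, hidden).
    hidden = output[0] if isinstance(output, tuple) else output
    last_token = hidden[:, -1, :]  # activation for the newest token
    score = F.cosine_similarity(last_token, unwanted_direction.unsqueeze(0)).item()
    if score > THRESHOLD:
        flagged.append(score)  # here one could abort or redirect generation

# Watch a middle layer; which layer carries the signal is an empirical question.
handle = model.transformer.h[6].register_forward_hook(safeguard_hook)

inputs = tok("Prompt engineering is", return_tensors="pt")
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=20, pad_token_id=tok.eos_token_id)

handle.remove()
print("safeguard triggered" if flagged else "no trigger")
```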
Last year I saw a live “prompt battle” and it was great: a single-elimination tournament with an applause meter, a hype man and music!

https://promptbattle.com
Interesting analogue to books like "How to Win Friends and Influence People". This genre of self-help books includes a lot of things that, when you squint, look like prompt engineering on humans.
Am I the only one annoyed by the term “prompt engineer(ing)”?

I thought this was a meme, but I have actually seen some job postings for “prompt engineer”.