I'm extremely amused that one of the trials here was "how do I measure 6 liters of water with a 12-liter jug and a 6-liter jug?" and the article completely glosses over the fact that the framework doesn't help GPT find the simple answer: "fill the six-liter jug".
The whole “prompt engineering” space has a smell of secret sauce that is ultimately not IP or even particularly clever. Reading the SOTA system prompts and other prompt optimisations leads me to believe that any business model based on prompt manipulation is ultimately offering a zero-marginal-value product, not a zero-marginal-cost one.
The best way to always know the prompt is to write it yourself. 90% of what these libraries do is act as a complex front end to a templating system, so you're often better off just using f-strings or template literals.
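To illustrate the point above, here's a minimal sketch (hypothetical function and field names) of doing the "templating" yourself with plain f-strings, so the full prompt is always visible in your own code:

```python
# A sketch: most prompt "frameworks" reduce to string templating.
# Plain f-strings keep the entire prompt visible, with no hidden layers.

def make_prompt(question: str, context: str) -> str:
    # The whole prompt lives right here in the source.
    return (
        "You are a helpful assistant. Use the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = make_prompt("What is the capital of France?",
                     "France is a country in Europe.")
print(prompt)
```

Nothing clever, which is the point: there's no secret sauce a framework could hide from you here.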
> Furthermore, the prompt has a spelling error (Let'w) and also overly focuses on the negative about identifying errors - which makes me skeptical that this prompt has been optimized or tested.<p>Fixed in <a href="https://github.com/langchain-ai/langchain/commit/7c6009b76f04628b1617cec07c7d0bb766ca1009">https://github.com/langchain-ai/langchain/commit/7c6009b76f0...</a>
I want some kind of genetic algorithm where I can choose between perhaps three images/renders and then have the AI iterate on the one I chose to make three more variations. Perhaps the software varies my prompt for me in clever ways?<p>Otherwise I have no idea how to take my prompt and a disappointing outcome and massage it.
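The loop described above could be sketched roughly like this (all names hypothetical; the actual image renderer is stubbed out, only the prompt-variation step is shown):

```python
import random

# Genetic-style prompt iteration: produce N variants of the chosen prompt by
# mutating it, render each, let the user pick a winner, repeat. Here mutation
# is just appending a random style modifier -- a real system would vary the
# prompt in cleverer ways.

MODIFIERS = ["cinematic lighting", "watercolor", "high detail",
             "wide angle", "muted palette"]

def mutate(prompt: str, rng: random.Random) -> str:
    # Vary the prompt by tacking on a random modifier.
    return f"{prompt}, {rng.choice(MODIFIERS)}"

def next_generation(chosen: str, rng: random.Random, n: int = 3) -> list[str]:
    # Three candidate prompts for the user to choose between.
    return [mutate(chosen, rng) for _ in range(n)]

rng = random.Random(42)
generation = next_generation("a lighthouse at dusk", rng)
print(generation)  # three variants to render and choose from
```

Selecting one of the three and feeding it back into `next_generation` gives the iterate-on-the-winner workflow.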
I wrote some code to annotate the image with the text of the prompt. I couldn't find a convenient way to do it in EXIF.<p>For Midjourney: <a href="https://github.com/ernop/social-ai/tree/main">https://github.com/ernop/social-ai/tree/main</a>
This one just downloads and annotates all the images in a discord channel/server you have admin in, and can backfill.<p>For Dalle3 Api: <a href="https://github.com/ernop/cmdline-dalle3-csharp">https://github.com/ernop/cmdline-dalle3-csharp</a>
This one also submits prompts and does permutations, powersets, block checking, etc. Warning: very addictive.
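For anyone curious what "permutations and powersets" of a prompt looks like, here's a rough stdlib sketch (not the linked repo's code): take a base prompt plus a set of modifier terms and enumerate every subset and ordering to submit.

```python
from itertools import chain, combinations, permutations

# Expand one base prompt into every subset (powerset) of modifier terms,
# in every order, to submit as separate generations.

def powerset(terms):
    return chain.from_iterable(
        combinations(terms, r) for r in range(len(terms) + 1))

def expand(base: str, terms: list[str]) -> list[str]:
    prompts = []
    for subset in powerset(terms):
        for order in permutations(subset):
            prompts.append(", ".join((base, *order)) if order else base)
    return prompts

variants = expand("a red fox", ["oil painting", "studio light"])
print(len(variants))  # empty set + 2 singletons + 2 orderings of the pair = 5
```

The count grows factorially with the number of terms, which is presumably where the "very addictive" part comes in.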
We felt this 100%, so we built a DSL (called BAML) to solve the "prompt transparency" problem (amongst other issues). We have a VSCode playground that always shows you the full prompt -- kinda like how a markdown preview works.<p>We are still in beta (and open source), but feel free to check it out! <a href="https://docs.boundaryml.com/">https://docs.boundaryml.com/</a> .<p>Some of these frameworks, like instructor, use ~80% more tokens or only work with OpenAI, so we aim to tackle all these problems from the ground up.
Takeaway: get the LLM to populate its mental space with ideas, then have it evaluate mashups for creativity... and it's good that a framework is doing it.
HN guidelines:<p>> <i>If the title contains a gratuitous number or number + adjective, we'd appreciate it if you'd crop it. E.g. translate "10 Ways To Do X" to "How To Do X," and "14 Amazing Ys" to "Ys." Exception: when the number is meaningful, e.g. "The 5 Platonic Solids." Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.</i><p>The post title is "Fuck You, Show Me The Prompt", but the HN title is "Show Me The Prompt". The removal of "Fuck You, " has significantly changed the title IMO, and is a form of editorialization. Did the HN mods do this, or was it submitted in this form?