Hey HN, I recently built a plan-validate-solve agent and struggled to write high-quality prompts, so I decided to build a Grammarly for prompts. I used Go, HTMX, a-h/templ, and Tailwind, as well as the new OpenAI Assistants — wanted to share a few learnings with you here:<p>- The overall “weight” of the solution: basically no external dependencies are needed. I haven’t even touched web frameworks like Gin so far
- The small final image size when packaging everything into an Alpine-based image
- Super-fast hot reload with cosmtrek/air compared to TS & React
- Basically no learning curve moving from React to templ + Tailwind templating
- Almost no JS is needed (except for interacting with Web APIs like the Permissions and Clipboard APIs)<p>For now, handling state on the server side is no big deal when using HTMX
So far I’ve only faced some minor issues: in some cases Templ’s wrappers didn’t transpile correctly, and hx-vals doesn’t work due to Templ’s lack of support for single quotes (I used URL query params instead for now).<p>Re the OpenAI Assistants — they worked quite well for my use case. But going forward I don’t see myself using them for very long, since they are “just” isolated model instances that you can instruct and connect to your internal documents in a no-code fashion. I’ll probably drop them for a self-hosted Mistral 7B in the coming days, especially since fine-tuning has become drastically easier these days. Excited to see how this scales!<p>An interesting direction I’ve been thinking about for the solution is building something like this as a proxy that sits in front of LLMs and optimizes user prompts on the fly. Would appreciate any feedback on this approach.