Prompting LLMs to turn search queries like "red loveseat" into structured search filters like {"item_type": "loveseat", "color": "red"} is a neat trick.

I tried Doug's prompt out on a few other LLMs:

Gemini 1.5 Flash 8B handles it well and costs about 1/1000th of a cent: https://gist.github.com/simonw/cc825bfa7f921ca9ac47d7afb6eab1ce

Llama 3.2 3B is a very small local model (a 2GB file) that can handle it too: https://gist.github.com/simonw/d18422ca24528cdb9e5bd77692531cfd

An even smaller model, the 1.1GB deepseek-r1:1.5b, thought about it at length and then confidently spat out the wrong answer! https://gist.github.com/simonw/c37eca96dd6721883207c99d25aec49d

All three tests were run with https://llm.datasette.io using the llm-gemini or llm-ollama plugins.
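
If you want to reproduce this, the invocations look roughly like the sketch below. The system prompt here is a simplified stand-in for Doug's actual prompt (his full version is in the linked article/gists), and the exact model IDs depend on which plugin versions you have installed:

  # assumes: llm installed, llm-gemini plugin, and a key set via `llm keys set gemini`
  # the system prompt is a stand-in, not Doug's exact wording
  llm -m gemini-1.5-flash-8b-latest \
    -s 'Turn the search query into JSON with keys "item_type" and "color". Output JSON only.' \
    'red loveseat'

  # same idea against a local model via the llm-ollama plugin
  # (assumes you've already run `ollama pull llama3.2:3b`)
  llm -m llama3.2:3b \
    -s 'Turn the search query into JSON with keys "item_type" and "color". Output JSON only.' \
    'red loveseat'

Both should print something like {"item_type": "loveseat", "color": "red"}, modulo the usual small-model flakiness the deepseek-r1:1.5b gist demonstrates.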