If anyone needs more powerful output constraints, llama.cpp supports GBNF grammars:

https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md
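For a flavour of what GBNF looks like, here's a toy grammar (made up for illustration, not one of the examples shipped with llama.cpp) that restricts the model to emitting a comma-separated list of integers:

```
# toy GBNF grammar: output must be a comma-separated list of integers
root ::= num ("," num)*
num  ::= [0-9]+
```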
This is wonderful news.

I was actually scratching my head over how to structure a regular prompt to produce CSV data without extra nonsense like "Here is your data" and "Please note blah blah" at the beginning and end. This is much welcome: I can define exactly what I want returned, then just push the structured output to CSV.
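Roughly what I have in mind, as a sketch against Ollama's local /api/chat endpoint (the model name and schema fields are placeholders I made up):

```python
import csv
import json
import requests

# Schema for the rows I want back; field names here are just an example.
schema = {
    "type": "object",
    "properties": {
        "rows": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "country": {"type": "string"},
                    "capital": {"type": "string"},
                    "population": {"type": "integer"},
                },
                "required": ["country", "capital", "population"],
            },
        }
    },
    "required": ["rows"],
}

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "List 5 countries with their capitals and populations."}],
        "format": schema,   # constrain the output to this JSON schema
        "stream": False,
    },
)

# The constrained output parses directly, so it can go straight into a CSV file.
rows = json.loads(resp.json()["message"]["content"])["rows"]
with open("out.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["country", "capital", "population"])
    writer.writeheader()
    writer.writerows(rows)
```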
Yay! It works. I used gemma2:2b and gave it the text below:

```
You have spent 190 at Fresh Mart. Current balance: 5098
```

and it gave this output:

```
{\n\"amount\": 190,\n\"balance\": 5098 ,\"category\": \"Shopping\",\n\"place\":\"Fresh Mart\"\n}
```
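For anyone trying to reproduce this, my guess at a matching schema (field names taken from the output above), passed as the `format` field of the chat request:

```python
# Hypothetical schema matching the fields shown in the output above.
schema = {
    "type": "object",
    "properties": {
        "amount":   {"type": "number"},
        "balance":  {"type": "number"},
        "category": {"type": "string"},
        "place":    {"type": "string"},
    },
    "required": ["amount", "balance", "category", "place"],
}
```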
No way. This is amazing and one of the things I actually wanted. I love Ollama because it makes using an LLM feel like using any other UNIX program. It makes LLMs feel like they belong on UNIX.

Question though. Has anyone had luck running it on AMD GPUs? I've heard it's harder, but I really want to support the competition when I get cards next year.
Has anyone seen how these constraints affect the quality of the LLM's output?

In some instances, I'd rather parse Markdown or plain text if it means the quality of the output is higher.
What's the value-add compared to `outlines`?

https://www.souzatharsis.com/tamingLLMs/notebooks/structured_output.html#outlines
Is there a best approach for providing structured input to LLMs? Example: feed in 100 sentences and get each one classified in different ways. It's easy to get structured data out, but my approach of prefixing line numbers seems clumsy.
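For concreteness, this is roughly what the line-number approach looks like with structured output on the other end (a sketch only; the model name and schema are made up):

```python
import json
import requests

sentences = ["The food was great.", "Delivery took forever.", "Support never replied."]

# "Structured input": number each sentence in the prompt...
numbered = "\n".join(f"{i}: {s}" for i, s in enumerate(sentences, start=1))

# ...and ask for the numbers back in the output so results can be joined to the inputs.
schema = {
    "type": "object",
    "properties": {
        "results": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "line": {"type": "integer"},
                    "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
                },
                "required": ["line", "sentiment"],
            },
        }
    },
    "required": ["results"],
}

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Classify the sentiment of each numbered sentence:\n" + numbered}],
        "format": schema,
        "stream": False,
    },
)
print(json.loads(resp.json()["message"]["content"])["results"])
```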
That's very useful. To see why, try to get an LLM to _reliably_ generate JSON output without this. Sometimes it will, but sometimes it'll just YOLO and produce something you didn't ask for, that can't be parsed.
I must say it is nice to see the curl example first. As much as I like Pydantic, I still prefer to hand-code the schemas, since it makes it easier to move my prototypes to Go (or something else).
Could someone explain how this is implemented? I saw on Meta's Llama page that the model has intrinsic support for structured output. My 30,000 ft mental model of an LLM is as a text completer, so it's not clear to me how this is accomplished.

Are llama.cpp and Ollama leveraging Llama's intrinsic structured output capability, or is this something bolted onto the output after the fact? (And if the former, how is the capability guaranteed across other models?)
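My best guess at the general trick, as a toy sketch (an assumption on my part, not Ollama's or llama.cpp's actual code): the sampler masks out any candidate token that would take the text outside the allowed grammar, so a plain text completer can only complete in valid ways.

```python
# Toy illustration of grammar-constrained decoding -- not a real implementation.
# Before picking each next token, candidates that can no longer lead to a valid
# output are masked out, so the "text completer" is forced to stay in-grammar.

TARGETS = {"yes", "no"}                              # the "grammar": output must be exactly yes or no
VOCAB = ["y", "e", "s", "n", "o", "Here", " is", " your", " data"]

def still_valid(prefix: str, token: str) -> bool:
    # A real implementation walks a grammar/JSON-schema state machine here.
    return any(t.startswith(prefix + token) for t in TARGETS)

def constrained_decode(scores: dict) -> str:
    out = ""
    while out not in TARGETS:
        # Mask: keep only tokens the grammar still permits, then take the highest-scoring one.
        legal = [tok for tok in VOCAB if still_valid(out, tok)]
        out += max(legal, key=lambda tok: scores.get(tok, 0.0))
    return out

# "Here" has the highest raw score, but the mask forces a grammatical answer.
print(constrained_decode({"Here": 5.0, "n": 1.0, "y": 0.9}))   # -> "no"
```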
Wow, neat! The first step to format ambivalence! Curious to see how well this performs on the edge, where overhead is always so scarce!

Amazing work as always, looking forward to taking this for a spin!
This is fantastic news!
I spent hours fine-tuning my prompt to summarise text and output JSON, and it still has issues sometimes.
Is this feature also available in Go?