Jsonformer: Generate structured output from LLMs

340 points by yunyu, about 2 years ago

31 comments

kcorbitt, about 2 years ago
I've thought about building this for a while, glad it's out there!

Not only does this guarantee your output is JSON, it lowers your generation cost and latency by filling in many of the repetitive schema tokens without passing them through the LLM.

For the very common case of "extracting multiple structured fields from a piece of unstructured text," I believe there's an even stronger optimization possible that would further decrease costs and latency, and potentially even improve accuracy.

Assuming the fields you want to extract are independent (and they often are), you don't *need* to generate them all in one go autoregressively. E.g. instead of running the following pseudo-prompt:

    "Input: 'It's sunny and cold today'
    Output schema: {"sunny": boolean, "temperature": string}"

you could instead run the following two:

    "Input: 'It's sunny and cold today'
    Output schema: {"sunny": boolean}"

    "Input: 'It's sunny and cold today'
    Output schema: {"temperature": string}"

We don't do that today because, done naively, it's very inefficient -- you'd be tokenizing, passing to the GPU, and computing the KV cache of the shared part of the prompt twice. But a library with the right abstraction could run the two queries in a batch in parallel and reuse the same tokenization and KV cache for both. It would actually be *more* efficient than generating both fields in one go, since once you factor out the shared prefixes, both the generated text and its context are shorter!

I mentioned above that this could also improve accuracy. Of course it doesn't do that by default (except that by excluding all the irrelevant fields it makes self-attention's job easier). But what it *does* do is give you an independent prompt for each field you're interested in. So for particularly tricky fields you're trying to extract, you have the flexibility to, e.g., add several examples to make the generation N-shot.
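The per-field extraction idea above can be sketched in a few lines. This is a minimal illustration, not any library's real API: `generate_field` is a hypothetical stand-in for one constrained generation call (in a real system, the calls would run as one batched forward pass sharing the prefix's KV cache), and the canned values are placeholders for model output.

```python
import json

def generate_field(shared_prefix: str, field: str, field_type: str) -> str:
    # Stub standing in for the LLM; a real implementation would decode a
    # single value here, constrained to the field's type.
    canned = {"sunny": "true", "temperature": '"cold"'}
    return canned[field]

def extract_independent_fields(text: str, schema: dict) -> dict:
    # The prompt prefix is identical across fields, so its tokenization and
    # KV cache can be computed once and reused; only the short per-field
    # suffix differs.
    shared_prefix = f"Input: {text!r}\nOutput schema: "
    out = {}
    for field, field_type in schema.items():
        raw = generate_field(shared_prefix, field, field_type)
        out[field] = json.loads(raw)
    return out

result = extract_independent_fields(
    "It's sunny and cold today",
    {"sunny": "boolean", "temperature": "string"},
)
```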
newhouseb, about 2 years ago
Oh nice! I built a similar system a few weeks ago: https://github.com/newhouseb/clownfish

I think the main differentiating factor here is that this is better if you have a simpler JSON schema without enums or oneOf constraints. If you do have those constraints -- say you wanted an array of different types representing items on a menu, { kind: pizza, toppings: [pepperoni] } or { kind: ice_cream, flavor: vanilla | strawberry } -- then you would need something more sophisticated like clownfish that can ask the LLM to pick specific properties (and an ability to do some backtracking so you can do proper beam search).

For completeness, another common approach can be found here: https://github.com/ShreyaR/guardrails, which essentially boils down to "provide the schema in the prompt and ask the LLM to correct things if it fails to get the schema right the first time."
sundarurfriend, about 2 years ago
> Bulletproof JSON generation: Jsonformer ensures that the generated JSON is always syntactically correct and conforms to the specified schema.

This is an important definition to take note of: "bulletproof" doesn't mean you'll get good or correct data. It only means that the output will be valid JSON in the particular schema you specify (because the LLM isn't building the JSON in the first place; the library is).

It's an interesting idea. But it's not clear whether they've validated their heuristics to see how well they perform, accuracy-wise, against, say, a BeautifulSoup-like attempt to make sense of the JSON-ish that the LLM produces and correct it into valid JSON, or any other approach to the problem.
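The "the library builds the JSON, not the LLM" point can be made concrete with a small sketch. Everything here is hypothetical, not Jsonformer's actual code: `model_value` is a stub for one constrained generation call, and the canned values stand in for model output. The structural tokens (braces, quotes, colons, commas) never touch the model, which is why the result always parses.

```python
import json

def model_value(prompt: str, expected_type: str) -> str:
    # Stand-in for a constrained LLM call that decodes a single typed value.
    return {"boolean": "true", "string": '"Ada"', "number": "36"}[expected_type]

def fill_schema(prompt: str, schema: dict) -> str:
    # The library emits every structural token itself and only consults the
    # model for values, so the output is syntactically valid by construction.
    parts = [
        f'"{key}": {model_value(prompt, expected_type)}'
        for key, expected_type in schema.items()
    ]
    return "{" + ", ".join(parts) + "}"

out = fill_schema("Describe Ada.", {"name": "string", "age": "number"})
parsed = json.loads(out)  # always parses: the syntax came from the library
```

Note that nothing here checks whether the *values* are correct, which is exactly the caveat the comment raises.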
Der_Einzige, about 2 years ago
Love to see further work on constrained decoding like this and the other systems introduced in the comments!

See my work and the paper about it. I've got a lot of y'all beat on this (constrained decoding, not the templating and structuring) by about a year: https://github.com/hellisotherpeople/constrained-text-generation-studio
andrewcamel, about 2 years ago
I've seen a lot of things trying to do this by pressure-testing the outputs, but they all feel like anti-patterns. This is the first that seems like the "right" way to do it. Better to manage how the model generates than to create one more potentially faulty "glue" layer.
motoboi, about 2 years ago
I found it rather strange that the new Andrew Ng course about prompting, which features an OpenAI employee, says nothing about templated output.

To me this is a killer feature of GPT: being able to turn a document into JSON or any other template.

This kind of prompt is just amazing for GPT (try it with a blog post, a document, or anything else): "Analyze this document and transform it into the following format:

    <title>
    <summary (text conciseness: 5/10)>
    <content bullet points (text conciseness: 3/10)>
    <content_item 1>
    <content_item 2>
    <content_item N>"

You can also ask for the same prompt as JSON, and GPT will gladly transform a PDF into JSON.
tough, about 2 years ago
I know of a similar one called GPTyped; I just posted it on HN: https://news.ycombinator.com/item?id=35793056#35793057
benob, about 2 years ago
How about going one step further and constraining transformer output with a context-free grammar? That way you could generate more conformant code, such as Python or C.
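The grammar-constrained decoding being proposed can be sketched at the character level with a toy grammar (balanced parentheses, i.e. S → "(" S ")" | ""). This is an illustration only, with hypothetical function names: a real implementation would mask the model's logits before sampling rather than filter a candidate list, and would track grammar state incrementally instead of re-validating the prefix.

```python
def valid_prefix(s: str) -> bool:
    # A string is a valid prefix of the toy grammar iff it contains only
    # parentheses and never closes more than it has opened.
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
        else:
            return False
    return True

def constrained_step(prefix: str, candidates: list) -> list:
    # Keep only the proposed next "tokens" that leave the output derivable
    # by the grammar; everything else is forbidden regardless of its logit.
    return [c for c in candidates if valid_prefix(prefix + c)]

allowed = constrained_step("(()", ["(", ")", "x"])
```

The same shape works for JSON or a programming language, just with a real parser's "can this prefix still be completed?" check in place of `valid_prefix`.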
rickcarlino, about 2 years ago
Has anyone seen a tool like this that uses Node rather than Python? I have this exact problem in a GPT-based web application I am building and have had to resort to some "creative" solutions. At the very least, I am glad to see people tackling this problem.
aligajani, about 2 years ago
Nice tool, I'll check it out. I had to go through a painstaking trial-and-error process to generate valid, deterministic JSON for my AI presentation tool, Slide Genie (https://slidegenie.vercel.app/). The hard part was making it work when temperature > 0.
ianbutler, about 2 years ago
Nice, this codifies something similar to what I've been doing in my prompts! I'll be using this instead.

What I currently do:

    The JSON template for your response is provided below. The parts to
    fill out are capitalized. Please do not modify the template. Please
    fill in the template with one of the above options for your response.

    <result>
    {
      "rating": "N. RATING",
      "reason": "REASON"
    }
    </result>
xephoid42, about 2 years ago
I actually did this with a silly little app I made that generates fake social media profiles (https://lookface.app). I gave it a prompt telling it what to generate, plus an example JSON. As long as you say it must be in JSON, I haven't had any problems with it generating bad JSON.
tanepiper, about 2 years ago
Nice job. I've tried massaging the outputs into a structured form, and sometimes it works, but sometimes it fails badly. Having a more specific set of constraints around it will definitely make it more effective.
visarga, about 2 years ago
I wanted to see the opposite: parsing JSON and YAML generated by LLMs. It doesn't happen much with GPT-4, but lesser models might mess up the format, and then you can't simply parse it.
diginova, about 2 years ago
Something like this should be integrated with a library like https://fakerjs.dev/. With LLM-based (or AI-based generally) generation, the fake data could be more diverse and generalized for many more applications, which would help developers. My bad if I'm unaware of Faker already having AI-based generation, but as far as I know it does not right now.
drbojingle, about 2 years ago
I like the idea of getting ChatGPT to return something easily parseable by a program. I've been using an XML derivative for that: https://github.com/ColinRyan/Chat-Markup-Language

I never thought to use JSON Schema. I'll check this out!
layoric, about 2 years ago
I might be reading the code wrong, but it looks like it crawls the schema, making one generation per primitive type. While that's a clever way to ensure valid JSON, I don't know if I'd go as far as to describe it as efficient.

That said, if the model is unable to generate JSON due to its training/fine-tuning, this is indeed a clever solution!
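The "one generation per primitive" reading can be made precise with a rough model of the crawl. This is a sketch of the idea, not Jsonformer's actual traversal: nested objects recurse, and each primitive leaf costs one constrained generation call.

```python
def count_generation_calls(schema: dict) -> int:
    # Each primitive leaf triggers one constrained generation; nested
    # objects contribute the sum of their leaves. Arrays, enums, etc.
    # are omitted from this toy model.
    calls = 0
    for value in schema.values():
        if isinstance(value, dict):
            calls += count_generation_calls(value)
        else:
            calls += 1
    return calls

n = count_generation_calls(
    {"name": "string", "address": {"city": "string", "zip": "string"}}
)
```

So a deeply nested schema means many short generation calls rather than one long one, which is the efficiency trade-off the comment questions.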
yawnxyz, about 2 years ago
> Efficiency: By generating only the content tokens and filling in the fixed tokens, Jsonformer is more efficient than generating a full JSON string and parsing it.

I was excited to try this in Replit... and realized it required PyTorch. Ouch. Replit was not happy about that!
syntaxing, about 2 years ago
Is there a way to do something like this with fine-tuning? For example, I want to train a LoRA to become an email spam classifier. I have the training data, with the email as the prompt and {Boolean: True/False} as the response.
wy35, about 2 years ago
Very interesting. I've only been using OpenAI APIs, so this logit stuff is new to me.
apalmer, about 2 years ago
Trying to understand why this is necessary: can't LLMs reliably generate valid JSON?
pankajdoharey, about 2 years ago
It's not very hard through prompting. You can just ask the LLM to generate on these parameters. I did this exact same thing and never wrote any code for it.
Jayakumark, about 2 years ago
It's great that this does not use OpenAI and runs locally.
pklee, about 2 years ago
This is pretty cool. I tried it with Dolly and then with T5-base, and neither gave me a result; it broke for me. Has anyone else tried it?
zaptheimpaler, about 2 years ago
How does this work? I guess it's different from something like fine-tuning, because that wouldn't 100% guarantee the right schema?
Aerbil313, about 2 years ago
Is it possible to use this with OpenAI's models? I.e., do they support this kind of inline token generation?
EGreg, about 2 years ago
Fantastic! This makes it easy to let humans write prompts and generate requests an API can consume.
rain1, about 2 years ago
There is no point in constructing a fixed-template JSON object like that just to parse it again.
91Jacob, about 2 years ago
I'm trying to imagine what a possible use case for this would be. Any simple examples?
phh, about 2 years ago
I hope this is new to no one generating JSON with an LLM, because it felt like the first thing you'd do when I implemented that kind of stuff. That said, it's nice to have it as a ready-to-go library.
phate334, about 2 years ago
It may be easier to use with Pydantic.
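For context on the Pydantic suggestion: Pydantic models can emit JSON Schema directly (`model_json_schema()` in v2, `.schema()` in v1), which could then feed a schema-driven generator. As a dependency-free sketch of the same idea using only the standard library, with a hypothetical `to_json_schema` helper mapping a typed dataclass to the flat schema shape such a generator might consume:

```python
from dataclasses import dataclass, fields

# Toy mapping from Python annotations to JSON-schema type names;
# nested models and containers are deliberately omitted.
TYPE_MAP = {bool: "boolean", str: "string", int: "number", float: "number"}

@dataclass
class Weather:
    sunny: bool
    temperature: str

def to_json_schema(cls) -> dict:
    # Derive a flat object schema from the dataclass's field annotations.
    return {
        "type": "object",
        "properties": {f.name: {"type": TYPE_MAP[f.type]} for f in fields(cls)},
    }

schema = to_json_schema(Weather)
```

Defining the schema once in a typed model and deriving the JSON Schema from it avoids keeping two descriptions of the same structure in sync.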