Hey HN!<p>After struggling with complex prompt engineering and unreliable parsing, we built L1M, a simple API that lets you extract structured data from unstructured text and images.<p><pre><code> curl -X POST https://api.l1m.io/structured \
-H "Content-Type: application/json" \
-H "X-Provider-Url: demo" \
-H "X-Provider-Key: demo" \
-H "X-Provider-Model: demo" \
-d '{
"input": "A particularly severe crisis in 1907 led Congress to enact the Federal Reserve Act in 1913",
"schema": {
"type": "object",
"properties": {
"items": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": { "type": "string" },
"price": { "type": "number" }
}
}
}
}
}
}'
</code></pre>
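The same request can be assembled from Python. A minimal sketch that only builds the body (POST it with any HTTP client); the endpoint and header names are taken from the curl example above, and nothing here is l1m-specific beyond the top-level "input" and "schema" keys:

```python
import json

# Build the same request body as the curl example above.
# The schema is ordinary JSON Schema describing the shape you want back.
payload = {
    "input": (
        "A particularly severe crisis in 1907 led Congress to enact "
        "the Federal Reserve Act in 1913"
    ),
    "schema": {
        "type": "object",
        "properties": {
            "items": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "year": {"type": "number"},
                    },
                },
            }
        },
    },
}

body = json.dumps(payload)
# POST `body` to https://api.l1m.io/structured with the X-Provider-* headers
# shown above, using any HTTP client.
```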
This is actually a component we unbundled from our larger product because we think it's useful on its own.<p>It's fully open source (MIT license) and you can:<p>- Use it with text or images
- Bring your own model (OpenAI, Anthropic, or any compatible API)
- Run locally with Ollama for privacy
- Cache responses with customizable TTL<p>The code is at <a href="https://github.com/inferablehq/l1m">https://github.com/inferablehq/l1m</a> with SDKs for Node.js, Python, and Go.<p>Would love to hear if this solves a pain point for you!
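For the local Ollama case, the provider headers might look like the sketch below. The header names come from the curl example above; the URL, key, and model values are assumptions on my part rather than documented defaults, so treat them as placeholders:

```python
# Hypothetical provider settings for a local Ollama instance.
# Header names match the curl example; the values are assumptions:
# Ollama exposes an OpenAI-compatible API under /v1 and does not
# require a real API key for local use.
headers = {
    "Content-Type": "application/json",
    "X-Provider-Url": "http://localhost:11434/v1",  # assumed local Ollama endpoint
    "X-Provider-Key": "ollama",                     # placeholder; no real key needed locally
    "X-Provider-Model": "llama3.2",                 # any model you have pulled locally
}
```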