
Ask HN: How to Safeguard an LLM-to-SQL Prompt from SQL Injection?

3 points by halamadrid 5 months ago
I have seen posts about how you can use an LLM to proxy queries into your dataset. This seems powerful, but how do you prevent it from being abused or hacked? Are there other strategies for deploying a text-to-SQL use case?

4 comments

lunarcave 5 months ago
Yes! I've worked on something [1] in the space lately, and can share learnings. It generates queries, which get fed to the application layer, which executes them based on some deterministic controls.

Basically, if you're working with a database that has some actually useful/critical data, put an application layer in the middle that can gatekeep LLM access.

- Depending on your risk appetite, make a decision on providing a read-only URL to the thing that interfaces with the LLM.

- For maximum control over the queries, you can wrap them in functions (tools) and let the LLM only call the tools. But this takes away a lot of the power. What we've basically done is allow the LLM to read the context progressively [2] and give it free rein over the queries.

- The said application layer can ask for approval on all of the queries (human in the loop; a sketch follows below). [3]

- (Optionally) If your query returns a lot of data, putting all of that through the LLM context might be expensive. You need a way to return data out of band from a query, directly to the user.

[1] https://github.com/inferablehq/inferable/tree/main/data-connector

[2] https://github.com/inferablehq/inferable/blob/ec504b8593f41b8e0697bf0ffd2972c8ce465f52/data-connector/src/postgres/postgres.ts#L75

[3] https://github.com/inferablehq/inferable/tree/main/data-connector/src/postgres
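A minimal sketch of such an application-layer gatekeeper, assuming Postgres via the node-postgres (pg) client; the looksReadOnly check and the console approval prompt are illustrative stand-ins, not Inferable's actual controls:

    import { Pool } from "pg";
    import * as readline from "node:readline/promises";

    // Connect with read-only credentials so the database enforces the
    // policy even if the application-layer checks are bypassed.
    const pool = new Pool({ connectionString: process.env.READONLY_DATABASE_URL });

    // Deterministic control: only a single SELECT statement passes.
    function looksReadOnly(sql: string): boolean {
      const trimmed = sql.trim().replace(/;+\s*$/, "");
      return /^select\b/i.test(trimmed) && !trimmed.includes(";");
    }

    // Human in the loop: show the generated query and wait for approval.
    async function approvedByHuman(sql: string): Promise<boolean> {
      const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
      const answer = await rl.question(`Run this query?\n${sql}\n[y/N] `);
      rl.close();
      return answer.trim().toLowerCase() === "y";
    }

    // The only path from LLM output to the database.
    export async function runLlmQuery(sql: string): Promise<unknown[]> {
      if (!looksReadOnly(sql)) throw new Error("Rejected: not a single SELECT");
      if (!(await approvedByHuman(sql))) throw new Error("Rejected by reviewer");
      const result = await pool.query(sql);
      return result.rows;
    }

Note that a SELECT can still call functions with side effects, so the read-only connection is the control that actually matters; the textual check only filters the obvious cases early.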
laxk 5 months ago
DB level:

- Create SQL credentials for LLM queries that have limited, read-only access (exclude sensitive fields, tables, views, etc.).

- Use SQL views to avoid/exclude access to specific fields.

App level:

- Use SQL linters and treat the input as regular user input.

- Convert the SQL query string into a query tree, escape ALL fields, and then assemble the query back (see the sketch after this list).
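A rough sketch of that parse-and-reassemble step, assuming the node-sql-parser package and its astify/sqlify API; the table whitelist is a made-up example:

    import { Parser } from "node-sql-parser";

    const parser = new Parser();
    const ALLOWED_TABLES = new Set(["orders", "products"]); // hypothetical whitelist

    export function sanitize(sql: string): string {
      // astify throws on anything it cannot parse, which already rejects
      // a lot of injection-style input.
      const ast = parser.astify(sql, { database: "postgresql" });
      const statements = Array.isArray(ast) ? ast : [ast];

      for (const stmt of statements) {
        if (stmt.type !== "select") {
          throw new Error(`Rejected: ${stmt.type} statement`);
        }
      }

      // tableList entries look like "select::null::orders".
      for (const entry of parser.tableList(sql, { database: "postgresql" })) {
        const table = entry.split("::").pop();
        if (table && !ALLOWED_TABLES.has(table)) {
          throw new Error(`Rejected: table ${table} is not whitelisted`);
        }
      }

      // Re-serialize from the tree so only what the parser understood survives.
      return parser.sqlify(ast, { database: "postgresql" });
    }

Round-tripping through the AST means comments, stacked statements, and vendor tricks the parser does not model get dropped or rejected rather than passed through verbatim.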
BWStearns 5 months ago
Treat your LLM input like it's user input, and don't execute SQL directly from LLM output.
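One way to honor that, as a sketch: the LLM never emits SQL at all, it only names a prepared query and supplies parameters, which then go through the driver's ordinary parameterized path. The query catalog here is hypothetical:

    import { Pool } from "pg";

    const pool = new Pool({ connectionString: process.env.READONLY_DATABASE_URL });

    // The LLM's output is constrained to this shape (e.g. via JSON mode)
    // and is still treated as untrusted user input.
    interface LlmQueryRequest {
      queryName: string;
      params: (string | number)[];
    }

    // Hypothetical catalog: the SQL text is fixed, only parameters vary.
    const QUERIES: Record<string, string> = {
      ordersByCustomer: "SELECT id, total FROM orders WHERE customer_id = $1",
      topProducts: "SELECT name, sold FROM products ORDER BY sold DESC LIMIT $1",
    };

    export async function execute(req: LlmQueryRequest): Promise<unknown[]> {
      const sql = QUERIES[req.queryName];
      if (!sql) throw new Error(`Unknown query: ${req.queryName}`);
      // Parameters are bound server-side, exactly as with form input,
      // so they cannot change the structure of the query.
      const result = await pool.query(sql, req.params);
      return result.rows;
    }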
fullstackwife 5 months ago
Instead of plain SQL, the LLM should output an AST.
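A minimal sketch of what that could look like: the model fills in a tightly typed JSON structure, and the application compiles it to parameterized SQL. The shape of the AST and the identifier whitelist are illustrative:

    // A deliberately tiny query AST the LLM is asked to emit as JSON.
    interface QueryAst {
      table: string;
      columns: string[];
      where?: { column: string; op: string; value: string | number };
      limit?: number;
    }

    // Identifiers come from a whitelist, never from free-form model text.
    const SCHEMA: Record<string, Set<string>> = {
      orders: new Set(["id", "total", "customer_id"]),
    };
    const OPS = new Set(["=", "<", ">"]);

    function ident(table: string, column: string): string {
      if (!SCHEMA[table]?.has(column)) throw new Error(`Unknown column: ${column}`);
      return `"${column}"`;
    }

    // Compile the AST to SQL text plus bound parameters.
    export function compile(ast: QueryAst): { text: string; values: unknown[] } {
      if (!SCHEMA[ast.table]) throw new Error(`Unknown table: ${ast.table}`);
      const cols = ast.columns.map((c) => ident(ast.table, c)).join(", ");
      let text = `SELECT ${cols} FROM "${ast.table}"`;
      const values: unknown[] = [];

      if (ast.where) {
        if (!OPS.has(ast.where.op)) throw new Error(`Unknown operator: ${ast.where.op}`);
        values.push(ast.where.value); // literal values are always parameters
        text += ` WHERE ${ident(ast.table, ast.where.column)} ${ast.where.op} $${values.length}`;
      }
      if (ast.limit !== undefined) {
        values.push(ast.limit);
        text += ` LIMIT $${values.length}`;
      }
      return { text, values };
    }

Since the model can only name whitelisted identifiers and every literal becomes a bound parameter, there is no string position where injected SQL text could land.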