
Ask HN: How to Safeguard an LLM-to-SQL Prompt from SQL Injection?

3 points by halamadrid 5 months ago
I have seen posts about how you can use an LLM to proxy queries into your dataset. This seems powerful, but how do you prevent it from being abused or hacked? Are there other strategies for deploying a text-to-SQL use case?

4 comments

lunarcave 5 months ago
Yes! I've worked on something [1] in this space lately and can share learnings. It generates queries, which get fed to the application layer, and the application executes them based on some deterministic controls.

Basically, if you're working with a database that has actually useful/critical data, put an application layer in the middle that can gatekeep LLM access.

- Depending on your risk appetite, decide whether to provide a read-only URL to the thing that interfaces with the LLM.

- For maximum control over the queries, you can wrap them in functions (tools) and let the LLM only call the tools. But this takes away a lot of the power. What we've basically done is allow the LLM to read the context progressively [2] and give it free rein over the queries.

- The application layer can ask for approval on all of the queries (human in the loop). [3]

- (Optionally) If your query returns a lot of data, putting all of that through the LLM context might be expensive. You need a way to return data out of band from a query, directly to the user.

[1] https://github.com/inferablehq/inferable/tree/main/data-connector

[2] https://github.com/inferablehq/inferable/blob/ec504b8593f41b8e0697bf0ffd2972c8ce465f52/data-connector/src/postgres/postgres.ts#L75

[3] https://github.com/inferablehq/inferable/tree/main/data-connector/src/postgres
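A minimal sketch of that gatekeeping layer, assuming Postgres via the `pg` client; `runLlmQuery`, `approveQuery`, and `MAX_ROWS` are invented names for illustration, not part of Inferable's API:

```ts
// Application-layer gate: read-only transaction, human approval, row cap.
import { Client } from "pg";
import * as readline from "node:readline/promises";

const MAX_ROWS = 100; // cap what flows back into the LLM context

// Human-in-the-loop approval: show the generated SQL and wait for a yes/no.
async function approveQuery(sql: string): Promise<boolean> {
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question(`Run this query?\n${sql}\n[y/N] `);
  rl.close();
  return answer.trim().toLowerCase() === "y";
}

// Execute an LLM-generated query behind deterministic controls.
async function runLlmQuery(connectionString: string, sql: string) {
  if (!(await approveQuery(sql))) throw new Error("Query rejected by operator");

  const client = new Client({ connectionString }); // ideally a read-only role
  await client.connect();
  try {
    // READ ONLY transaction rejects writes even if the role is misconfigured.
    await client.query("BEGIN TRANSACTION READ ONLY");
    const result = await client.query(sql);
    await client.query("COMMIT");
    // Return rows out of band; only a summary needs to reach the LLM.
    return result.rows.slice(0, MAX_ROWS);
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    await client.end();
  }
}
```

The `READ ONLY` transaction is belt-and-braces on top of a read-only role: even if the credentials are over-privileged, the database itself refuses writes.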
laxk 5 months ago
DB level:

- Create SQL credentials for LLM queries that have limited, read-only access (exclude sensitive fields, tables, views, etc.).

- Use SQL views to exclude access to specific fields.

App level:

- Use SQL linters and treat the input as regular user input.

- Convert the SQL query string into a query tree, escape ALL fields, and then reassemble the query.
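A sketch of that app-level parse/inspect/reassemble step, assuming the node-sql-parser package; the allowlisted table names are made up, and exact option spellings may vary by version:

```ts
// Parse LLM output into an AST, inspect it, then serialize it back.
import { Parser } from "node-sql-parser";

const parser = new Parser();
const ALLOWED_TABLES = new Set(["orders", "customers"]); // illustrative allowlist

function sanitizeLlmSql(raw: string): string {
  // astify() throws on anything that does not parse as SQL at all.
  const ast = parser.astify(raw, { database: "postgresql" });
  const statements = Array.isArray(ast) ? ast : [ast];

  if (statements.length !== 1) throw new Error("Exactly one statement allowed");
  const stmt = statements[0];
  if (stmt.type !== "select") throw new Error("Only SELECT is allowed");

  // tableList() yields entries like "select::null::orders".
  for (const entry of parser.tableList(raw, { database: "postgresql" })) {
    const table = entry.split("::")[2];
    if (!ALLOWED_TABLES.has(table)) throw new Error(`Table not allowed: ${table}`);
  }

  // Reassembling from the AST normalizes identifier quoting and escaping.
  return parser.sqlify(stmt, { database: "postgresql" });
}
```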
BWStearns 5 months ago
Treat your LLM input like it's user input and don't execute SQL directly from an LLM output.
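One way to honor that, sketched below: never splice LLM output into SQL text at all, and only let it fill the parameters of fixed templates (close to the "tools" idea above). Assumes the `pg` client; the template and its name are illustrative:

```ts
import { Pool } from "pg";

const pool = new Pool(); // reads connection settings from PG* env vars

// The SQL text is fixed at build time; the LLM only supplies values.
const TEMPLATES = {
  ordersByCustomer:
    "SELECT id, total, created_at FROM orders WHERE customer_id = $1 LIMIT $2",
} as const;

async function runTemplate(name: keyof typeof TEMPLATES, params: unknown[]) {
  // $1, $2, ... placeholders are escaped by the driver, exactly as they
  // would be for untrusted user input.
  const result = await pool.query(TEMPLATES[name], params);
  return result.rows;
}
```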
fullstackwife 5 months ago
Instead of plain SQL, the LLM should output an AST.
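A sketch of what that could look like: the LLM emits a small JSON query shape instead of SQL, and the application compiles it to parameterized SQL. The schema is invented for illustration, and zod is assumed for validation:

```ts
import { z } from "zod";

// Only shapes the schema admits can ever reach the database.
const QueryAst = z.object({
  table: z.enum(["orders", "customers"]),
  columns: z.array(z.enum(["id", "total", "created_at", "name"])).min(1),
  where: z
    .object({
      column: z.enum(["id", "total", "created_at", "name"]),
      op: z.enum(["=", "<", ">"]),
      value: z.union([z.string(), z.number()]),
    })
    .optional(),
  limit: z.number().int().positive().max(100).default(50),
});

// Compile the validated AST to parameterized SQL; values never enter the text.
function compile(raw: unknown): { text: string; values: unknown[] } {
  const ast = QueryAst.parse(raw); // throws on anything off-schema
  const values: unknown[] = [];
  let text = `SELECT ${ast.columns.join(", ")} FROM ${ast.table}`;
  if (ast.where) {
    values.push(ast.where.value);
    text += ` WHERE ${ast.where.column} ${ast.where.op} $${values.length}`;
  }
  text += ` LIMIT ${ast.limit}`;
  return { text, values };
}
```

Because identifiers come from closed enums and values travel as bind parameters, injection has no string to ride in on; the trade-off is that the LLM can only express queries the schema anticipates.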