Securing LLM Systems Against Prompt Injection – Nvidia Technical Blog
2 points
by
yandie
almost 2 years ago
1 comment
yandie
almost 2 years ago
Who executes LLM-generated code (or non-PR'd code in general) against trusted environments/databases? I hope that's just a bad pattern introduced by LangChain and not the norm... (A rough sketch of the pattern in question follows below.)
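
For context, the pattern the commenter is describing looks roughly like the sketch below. This is a minimal illustration, not any specific library's code; generate_sql is a hypothetical stand-in for the model call, and the guarded variant is just one possible mitigation (read-only connection plus a simple SELECT-only check).

import sqlite3

def generate_sql(user_question: str) -> str:
    # Hypothetical placeholder for an LLM call. A prompt-injected input could
    # cause the model to return something like "DROP TABLE users;".
    return "SELECT name FROM users LIMIT 5;"

def run_unsafely(db_path: str, user_question: str) -> list:
    # Anti-pattern: whatever the model emits runs with the connection's full rights.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(generate_sql(user_question)).fetchall()
    finally:
        conn.close()

def run_with_guardrails(db_path: str, user_question: str) -> list:
    # One mitigation: open the database read-only and reject anything that is
    # not a single SELECT statement before executing it.
    sql = generate_sql(user_question).strip().rstrip(";")
    if not sql.lower().startswith("select") or ";" in sql:
        raise ValueError("refusing to run non-SELECT or multi-statement SQL")
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

Even the guarded version only limits blast radius; it does not stop an injected prompt from exfiltrating whatever the read-only connection can see, which is why running model output against trusted systems without review is the core concern.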