Securing LLM Systems Against Prompt Injection – Nvidia Technical Blog
2 points by yandie, almost 2 years ago | 1 comment
yandie · almost 2 years ago
Who executes LLM-generated code (or non-PR'd code in general) against trusted environments/databases? I hope that's just a bad pattern introduced by LangChain and not the norm...
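
A minimal sketch of the contrast being drawn here, assuming the common case of LLM-generated SQL executed from Python; the function names and the SQLite example are hypothetical illustrations, not taken from the article:

```python
import sqlite3

def run_llm_sql_risky(conn: sqlite3.Connection, llm_generated_sql: str):
    # Risky pattern: executes whatever the model produced with the
    # connection's full privileges -- a prompt-injected model can emit
    # DROP/DELETE statements against the trusted database.
    return conn.execute(llm_generated_sql).fetchall()

def run_llm_sql_guarded(db_path: str, llm_generated_sql: str):
    # Safer sketch: open the database read-only and allow only a single
    # SELECT statement, so injected writes or multi-statement payloads fail.
    statement = llm_generated_sql.strip().rstrip(";")
    if ";" in statement or not statement.lower().startswith("select"):
        raise ValueError("Only single SELECT statements are allowed")
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(statement).fetchall()
    finally:
        conn.close()
```

The guarded version is only a sketch of the principle (least privilege plus validation before execution), not a complete defense against prompt injection.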