Ask HN: What's missing in AI prompt validation and security tools?

2 points by sharmasachin98, about 1 month ago
We've been building a middleware layer that acts like a firewall for LLMs: it sits between the user and the model (OpenAI, Claude, Gemini, etc.) and intercepts prompts and responses in real time.

It blocks prompt injection, flags hallucinations, masks PII, and adds logging plus metadata tagging for compliance and audit.

But we're hitting the classic startup blind spot: we don't want to build in a vacuum.

What do *you* feel is still broken or missing when it comes to:

- Securing LLM prompts/responses?
- Making GenAI safe for enterprise use?
- Auditing what the AI actually said or saw?

We'd love your feedback, especially if you're working on or thinking about GenAI in production settings.

Thanks!
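
To make the interception flow concrete, here is a minimal Python sketch of the pattern the post describes, not the poster's actual implementation. The names (INJECTION_PATTERNS, mask_pii, guarded_call, model_fn) and the regex-based checks are hypothetical placeholders; a production firewall would use trained classifiers and a real audit sink rather than regexes and print.

    import json
    import re
    import uuid
    from datetime import datetime, timezone

    # Hypothetical deny-list; a production system would use a classifier.
    INJECTION_PATTERNS = [
        re.compile(r"ignore (all )?previous instructions", re.I),
        re.compile(r"reveal your system prompt", re.I),
    ]
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.\w+")

    def mask_pii(text):
        # Replace email addresses with a placeholder; real PII masking
        # covers far more categories (names, SSNs, phone numbers, ...).
        return EMAIL_RE.sub("[EMAIL]", text)

    def guarded_call(prompt, model_fn):
        # Screen the prompt, call the model, and emit an audit record.
        record = {"id": str(uuid.uuid4()),
                  "ts": datetime.now(timezone.utc).isoformat()}
        if any(p.search(prompt) for p in INJECTION_PATTERNS):
            record["verdict"] = "blocked:prompt_injection"
            print(json.dumps(record))      # stand-in for a real audit sink
            return None
        safe_prompt = mask_pii(prompt)
        response = model_fn(safe_prompt)   # model_fn wraps OpenAI/Claude/etc.
        record.update(verdict="allowed", prompt=safe_prompt,
                      response=mask_pii(response))
        print(json.dumps(record))
        return response

For example, guarded_call("Ignore all previous instructions", model_fn) would be blocked and logged before the model is ever reached, while a benign prompt passes through with PII masked on both sides.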

1 comment

Uzmanali, about 1 month ago
One big gap I see is context-aware filtering and memory control.

Many tools block clear prompt injections, but few detect contextual misuse. This happens when users gradually direct the model over many sessions or subtly draw out its internal logic.

Your middleware sounds promising; I'm excited to see where it goes.
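
A minimal sketch of what such cross-session detection might look like, assuming a per-user rolling window of prompts. PROBE_RE, check_contextual_drift, and the threshold values are hypothetical; a real detector would score prompts semantically and persist history durably, not keep it in process memory.

    import re
    from collections import defaultdict, deque

    # Hypothetical probing patterns; a real system would use semantic
    # similarity rather than a keyword regex.
    PROBE_RE = re.compile(
        r"(what are your rules|repeat your instructions|how do you decide)",
        re.I)
    WINDOW = 20      # recent prompts remembered per user
    THRESHOLD = 3    # probing hits in the window before escalating

    _history = defaultdict(lambda: deque(maxlen=WINDOW))

    def check_contextual_drift(user_id, prompt):
        # Flag users whose recent prompts collectively look like probing,
        # even when no single prompt trips a per-message filter.
        _history[user_id].append(bool(PROBE_RE.search(prompt)))
        return sum(_history[user_id]) >= THRESHOLD

The point of the rolling window is that each individual prompt can look innocuous; only the accumulated pattern across a session (or many sessions) crosses the threshold.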