Hi HN,

I'd like to introduce GalenAI, an AI search engine for clinicians that combines an expert-curated knowledge base, search-engine context, and generative AI to give clinicians the human-friendliness of AI, backed by legitimate references and scientific evidence. (Make an account to see the references; they are far more valuable and provide excellent context for complicated questions. The product still works without an account for busy folks.)

A couple of cool technical things about GalenAI:

1. We figured out how to block prompt injection/leaking with an approach similar to a paper released just today [0]. Ours adds an extra layer and is far stricter about blocking out-of-scope questions where there isn't good data to answer. There are improvements to be made here, since it sometimes blocks valid questions, but we erred on the side of safety rather than letting the model answer questions that could be out of scope. (Paid/organizational accounts can bypass that restriction.) There is a rough sketch of this gating idea further down.

2. Because it's connected to the internet and also uses a vetted knowledge base, GalenAI is always up to date, with access to newly approved drugs and new literature. GalenAI is smart enough to infer from the question which references and evidence to consult, and it returns the relevant ones: a simple question about a drug's dose follows a very different pathway than a complicated comparative-efficacy question in a niche disease state (second sketch below).

3. A big problem with using AI in healthcare is that you can't rely on arbitrary studies for truthful context. We figured out a way to limit where the information can come from to only clinically significant studies, unlike a typical PubMed search, where everything about the topic comes back, clinically relevant or not.

You're probably wondering how this differs from all the new AI tools out there. For one, GalenAI is a knowledge-first product: if there isn't knowledge, it will NOT give you an answer. This can be frustrating at times, but we believe it makes the product a lot safer. It's okay, and necessary, for AI products to say "I don't know".

Second, almost all of our effort went into getting the right context/evidence and passing it to the LLM. There is a lot of interesting engineering in combing through millions of documents for the right context while still giving the user that instant-answer feeling. We are laser-focused on providing clinicians with an AI product they can trust, with heavy emphasis on backing every answer with science.

The goal is not to give a simple fifth-grade-level answer to a complex question (because sometimes in medicine you really can't!), but to compress the 20-minute literature-search process into 3 seconds. And if the question is simple enough, it will give a straightforward answer, just as a normal LLM would.
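For the technically curious, here is roughly the shape of the guard from point 1 combined with the knowledge-first refusal behavior. Everything below is an illustrative Python sketch, not our actual code: the names and the 0.75 threshold are made up, and the real scope check is a separate classifier pass (which also screens for injection attempts), not a keyword match.

    from dataclasses import dataclass

    @dataclass
    class Evidence:
        source_id: str
        relevance: float              # retrieval similarity score, 0..1
        clinically_significant: bool  # flagged by curation, not by the LLM

    REFUSAL = "I don't have enough vetted evidence to answer that safely."

    def in_clinical_scope(question: str) -> bool:
        # Placeholder: in a real pipeline this is its own classifier pass
        # that runs before any generation and also screens for injection.
        return any(w in question.lower() for w in ("dose", "drug", "interaction"))

    def guard(question: str, retrieved: list[Evidence]) -> str | None:
        """Return a refusal message, or None if generation may proceed."""
        # Layer 1: refuse anything the scope classifier flags.
        if not in_clinical_scope(question):
            return REFUSAL
        # Layer 2: refuse unless curated, clinically significant evidence
        # actually supports an answer ("knowledge-first").
        supporting = [e for e in retrieved
                      if e.clinically_significant and e.relevance >= 0.75]
        return None if supporting else REFUSAL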
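And a similarly hand-wavy sketch of the routing from point 2 plus the clinical-significance filter from point 3. Again, every function here is a stub I made up for illustration; the real router is a model call and the indexes are not hardcoded lists.

    from dataclasses import dataclass

    @dataclass
    class Study:
        title: str
        clinically_significant: bool  # flagged upstream by curation

    def classify(question: str) -> str:
        # Placeholder router; in practice a cheap model call picks the pathway.
        simple = ("dose", "dosing", "renal adjustment")
        return "lookup" if any(s in question.lower() for s in simple) else "review"

    def knowledge_base_lookup(question: str) -> list[Study]:
        return [Study("Curated monograph entry", True)]  # stub index

    def literature_search(question: str) -> list[Study]:
        return [Study("Pivotal RCT", True), Study("Unreviewed preprint", False)]  # stub

    def build_context(question: str) -> list[Study]:
        if classify(question) == "lookup":
            # Simple dose question: hit the curated knowledge base only.
            return knowledge_base_lookup(question)
        # Niche comparative-efficacy question: wider literature search, then
        # keep only clinically significant studies before prompting the LLM.
        return [s for s in literature_search(question) if s.clinically_significant]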
I spent the first half of my career as a clinical pharmacist and published some of the literature that could potentially be cited in GalenAI; the second half was all very interesting software engineering work. I have been working on GalenAI for the last year with a very focused vision: making AI safe to use in accuracy-first domains. Given my inherent biases as the founder, you will find that GalenAI is especially good at answering drug-related questions.

I look forward to hearing your ideas, feedback, and comments, and to learning what would be helpful for you if you're in this field!

[0] https://arxiv.org/abs/2306.03423