Hello HN,<p>We are a team of researchers and students who have created LVE, an open-source database that documents and tracks exploits and safety issues in LLMs such as (Chat)GPT, Llama and Mistral models. Our goal is to improve the discourse around LLM safety by precisely documenting and tracking LLM failures in an open database, backed by a small framework focused on reproducibility and traceability. LVE is meant to raise awareness and help everyone better understand both the capabilities and the vulnerabilities of state-of-the-art large language models.<p>Our website, lve-project.org, also hosts a series of community challenges: mini competitions where you can learn about LLM safety and contribute to the project by submitting prompts that break model behavior.<p>We are open to feedback and suggestions, so please let us know what you think. We are also looking for contributors, so if you are interested in helping out, please reach out to us. Everything lives on GitHub at <a href="https://github.com/lve-org/lve">https://github.com/lve-org/lve</a> and contributions are made via PRs.<p>Happy to answer any questions you might have!