
科技回声

A tech news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声. All rights reserved.

Show HN: Airgapped Offline RAG – Run LLMs Locally with Llama, Mistral, & Gemini

9 points · by koconder · 8 months ago
I've built an airgapped Retrieval-Augmented Generation (RAG) system for question-answering on documents, running entirely offline with local inference. Using Llama 3, Mistral, and Gemini, this setup allows secure, private NLP on your own machine. Perfect for researchers, data scientists, and developers who need to process sensitive data without cloud dependencies. Built with Llama C++, LangChain, and Streamlit, it supports quantized models and provides a sleek UI for document processing. Check it out, contribute, or suggest new features!
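To make the retrieval step concrete, here is a minimal, dependency-free sketch of what "question-answering on documents" involves before the local LLM is called: chunk the documents, rank chunks against the query, and hand the top matches to the model as context. This is illustrative only and is not the project's code — the actual system uses LangChain retrievers and llama.cpp-backed inference; all function names below are invented for the example.

```python
import math
import re
from collections import Counter

def chunk(text, size=40):
    """Split a document into overlapping word chunks (illustrative chunker)."""
    words = text.split()
    step = max(size // 2, 1)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)] or [text]

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query, passage):
    """Cosine similarity between term-frequency vectors (stand-in for embeddings)."""
    q, p = Counter(tokenize(query)), Counter(tokenize(passage))
    dot = sum(q[t] * p[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in p.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Return the top-k most relevant chunks; these become the LLM's context."""
    chunks = [c for d in docs for c in chunk(d)]
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
```

In a real offline pipeline the term-overlap `score` would be replaced by embeddings from a locally hosted model, and `retrieve`'s output would be concatenated into the prompt sent to the quantized LLM — but the overall shape (chunk → rank → augment prompt) is the same.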

1 comment

novitzmann · 7 months ago
Hey, we were ready to build something similar for a "shadow client". What's the main language used? We're all about C++: https://github.com/docwire/docwire