TechEcho

Ask HN: I have many PDFs – what is the best local way to leverage AI for search?

257 points by phodo · 12 months ago
As the title says, I have many PDFs (mostly scans via ScanSnap, but also non-scans). These are sensitive in nature, e.g. bills, documents, etc. I would like a local-first AI solution that lets me say things like "show me all tax documents for August 2023" or "show my home title". Ideally it is Mac software that can access iCloud too, since that's where I store it all. I would prefer not to do any tagging. I would like to optimize for recall over precision, so false positives in the search results are OK. What are modern approaches to do this, without hacking one up on my own?

38 comments

bastien2 · 12 months ago
You don't. You use a full-text indexer and normal search tools. A chatbot is only going to decrease the integrity of query results.
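The full-text-indexer approach can be sketched in a few lines of stdlib Python; this is a toy inverted index (file names and documents are made up), not any particular indexer's implementation:

```python
import re
from collections import defaultdict

def tokenize(text):
    # Lowercase word tokens; a real indexer would also stem and drop stopwords.
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs):
    # docs: {doc_id: raw_text} -> {token: set of doc_ids containing it}
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for tok in tokenize(text):
            index[tok].add(doc_id)
    return index

def search(index, query):
    # AND-match every query token; matches anywhere in the doc, favoring recall.
    sets = [index.get(tok, set()) for tok in tokenize(query)]
    return set.intersection(*sets) if sets else set()

# Hypothetical corpus standing in for the OP's scanned PDFs (post-OCR text).
docs = {
    "bill_2023_08.pdf": "Electric bill August 2023 amount due",
    "title_deed.pdf": "Home title deed recorded 2019",
}
index = build_index(docs)
print(search(index, "august 2023"))  # finds the electric bill
```

Real tools (Recoll, ripgrep-all, Spotlight) add OCR, ranking, and persistence on top of essentially this structure.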
pierre · 12 months ago
The RAG CLI from LlamaIndex allows you to do it 100% locally when used with Ollama or llama.cpp instead of OpenAI: https://docs.llamaindex.ai/en/stable/getting_started/starter_tools/rag_cli/
m0shen · 12 months ago
Paperless supports OCR + full-text indexing: https://docs.paperless-ngx.com/

As far as AI goes, not sure.
Ey7NFZ3P0nzAe · 12 months ago
I am a medical student with thousands and thousands of PDFs and was unsatisfied with RAG tools, so I made my own. It can consume basically any type of content (PDF, EPUB, YouTube playlist, Anki database, MP3, you name it) and does a multi-step RAG: first using embeddings, then filtering with a smaller LLM, then answering by feeding each remaining document to the strong LLM, then combining those answers.

It supports virtually all LLMs and embeddings, including local LLMs and local embeddings. It scales surprisingly well, and I have tons of improvements to come when I have some free time or procrastinate. Don't hesitate to ask for features!

Here's the link: https://github.com/thiswillbeyourgithub/DocToolsLLM/
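The first (embedding) stage of a multi-step pipeline like this boils down to cosine-similarity top-k retrieval. A minimal sketch with hand-made 3-d vectors standing in for a real embedding model (all names and numbers are illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, doc_vecs, k=2):
    # doc_vecs: {doc_id: vector}; return the k doc ids most similar to the query.
    ranked = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]), reverse=True)
    return ranked[:k]

# Toy "embeddings"; a real system would get these from a local embedding model.
doc_vecs = {"tax_2023.pdf": [0.9, 0.1, 0.0],
            "recipe.pdf":   [0.0, 0.2, 0.9],
            "deed.pdf":     [0.7, 0.3, 0.1]}
print(top_k([1.0, 0.0, 0.0], doc_vecs, k=2))
```

The later stages (small-LLM filtering, per-document answering, combining) then only see the survivors of this cheap first pass.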
constantinum · 12 months ago
The primary challenge is not just about harnessing AI for search; it's about preparing complex documents of various formats, structures, designs, scans, multi-layout tables, and even poorly captured images for LLM consumption. This is a crucial issue.

There is a 20-minute read on why parsing PDFs is hell: https://unstract.com/blog/pdf-hell-and-practical-rag-applications/

To parse PDFs for RAG applications, you'll need tools like LLMWhisperer[1] or unstructured.io[2].

Now back to your problem: this solution might be overkill for your requirement, but you can try the following. To set things up quickly, try Unstract[3], an open-source document processing tool. You can set it up and bring your own LLM models; it also supports local models. It has a GUI to write prompts to get insights from your documents.[4]

[1] https://unstract.com/llmwhisperer/
[2] https://unstructured.io/
[3] https://github.com/Zipstack/unstract
[4] https://github.com/Zipstack/unstract/blob/main/docs/assets/prompt_studio.png
elrostelperien · 12 months ago
For macOS, there's this: https://pdfsearch.app/

Without AI, but searching the PDF content, I use Recoll (https://www.recoll.org/) or ripgrep-all (https://github.com/phiresky/ripgrep-all).
hm-nah · 12 months ago
You have to find a good OCR tool that you can run locally on your hardware. RAG depends on your doc-processing pipeline.

It's not local, but the Azure Document Intelligence OCR service has a number of prebuilt models. The "prebuilt-read" model is $1.50/1k pages. Once you OCR your docs, you'll have a JSON of all the text AND you get breakdowns by page/word/paragraph/tables/figures, all with bounding boxes.

Forget the Lang/Llama/Chain theory. You can do it all in vanilla Python.
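Once you have per-page, per-word JSON like that, flattening it back into searchable text is only a few lines of vanilla Python. This sketch assumes a simplified, hypothetical output shape; the real Azure response schema is richer and differently named:

```python
import json

# Hypothetical, simplified OCR output: pages of words with bounding boxes.
ocr_json = json.loads("""
{"pages": [{"number": 1,
            "words": [{"content": "Invoice", "box": [0, 0, 50, 10]},
                      {"content": "2023-08", "box": [60, 0, 110, 10]}]}]}
""")

def page_text(result):
    # Join each page's words back into one searchable string per page.
    return {p["number"]: " ".join(w["content"] for w in p["words"])
            for p in result["pages"]}

print(page_text(ocr_json))  # -> {1: 'Invoice 2023-08'}
```

The flattened text can then feed any of the full-text indexers mentioned elsewhere in the thread.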
Kikawala · 12 months ago
Quivr: https://github.com/QuivrHQ/quivr

SecureAI-Tools: https://github.com/SecureAI-Tools/SecureAI-Tools
pixelmonkey · 12 months ago
rga, aka ripgrep-all, is my go-to for this. I suppose grep is a form of AI, or at least an advanced intelligence that's wiser than it looks. ;)

https://github.com/phiresky/ripgrep-all
SoftTalker · 12 months ago
If you haven't given some serious thought to getting rid of most of the documents, then consider it. There is very little need to keep most routine documents for more than a few years. If you think you need your electric bill for March 2006 at your fingertips, why?
Kikobeats · 12 months ago
You can use Microlink to turn PDF into HTML and combine it with another service for processing the text data.

Here's an example turning an arXiv paper into real text:

https://api.microlink.io/?data.html.selector=html&embed=html&meta=false&url=https://arxiv.org/pdf/2104.12871

It looks like a PDF, but if you open devtools you can see it's actually a very precise HTML representation.
theolivenbaum · 12 months ago
If you're looking for something local, we develop an app for macOS and Windows that lets you search and talk to local files and data from cloud apps: https://curiosity.ai

For the AI features, you can use OpenAI or local models (the app uses llama.cpp in the background, it ships with Llama 3 and a few other models, and we're soon going to let you use any .gguf model).
brailsafe · 12 months ago
Like many others have suggested, local indexing is what I use for this, although some more natural interface may be better for structured search and querying.

What I haven't seen suggested, though, is built-in Spotlight. Press Cmd+Space, type some unique words that might appear in the document, and Spotlight will search it. This also works surprisingly well for non-OCR'd images of text, anything inside a zip file, an email, etc.
yousnail · 12 months ago
PrivateGPT is a great starting point for using a local model and RAG. Text-generation-webui (oobabooga) with superbooga V2 is very nice and more customizable.

I've used both for sensitive internal SOPs, and both work quite well. PrivateGPT excels at ingesting many separate documents; the other excels at customization. Both are totally offline and can use mostly whatever models you want.
ssahoo · 11 months ago
This could be humor or a real hack: get a Copilot+ PC with Recall enabled and quickly scan through the documents by opening them in Adobe Acrobat Reader. Voilà! You will have an SQLite DB that has your index. A few days later, Adobe could have your data in their LLM.
gibsonf1 · 12 months ago
https://graphmetrix.com/trinpod-server
pawelduda · 12 months ago
Try https://github.com/phiresky/ripgrep-all before going down the rabbit hole of AI and advanced indexers. It's quick to set up and undo if that's not what you want, but I'm pretty sure you'll be surprised how far it can get you.
ilaksh · 12 months ago
If you want to run locally, you can look into this: https://github.com/PaddlePaddle/PaddleOCR

https://andrejusb.blogspot.com/2024/03/optimizing-receipt-processing-with.html

But I suggest that you just skip that and use gpt-4o. They aren't actually going to steal your data. Sort through it ahead of time to find anything with a credit card number.

Or you could look into InternVL. Or a combination of PaddleOCR first and then a strong LLM via API, like gpt-4o or Llama 3 70B via together.ai.

If you truly must do it locally, then if you have two 3090s or 4090s it might work out. Otherwise the LLMs may not be smart enough to give good results. Leaving out the details of your hardware makes it impossible to give good advice about running locally. Other than: it's not really necessary.
bendsawyer · 12 months ago
I looked into this for sensitive material recently. In the end I got a purpose-built local system built and am having it remotely maintained. Cost: around 5k a year. I used http://www.skunkwerx.ai, who are US-based.

The result is a huge step up from "full-text search" solutions for my use case. I can have conversations with decades of documents, and it's incredibly helpful. The support scheme keeps my original documents unconnected from the machine, which I own, while updates are done over a remote link. It's great, and I feel safe.

Things change so fast in this space that there did not seem to be a cheap, stable, local alternative. I honestly doubt one is coming. This is not a one-size-fits-all problem.
skapa_flow · 12 months ago
Google Drive. It doesn't fulfill the "local" criterion, but it works for us (a small engineering firm). We synchronize our local file server with Google Drive nightly and use it only for searching. Google is just good when it comes to search.
phodo · 12 months ago
Thank you all for the comments. Got a lot of good input and ways to think through the tried-and-true tools (enjoying ripgrep-all + fzf) plus the standard AI/RAG-style tools. I do think there is room for a bridge or an integrated way to pipe similarity/embeddings into the ripgreps of the world, maybe something close to fzf's piping model. Will explore if I have some time.
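One stdlib-only approximation of that bridge: re-rank grep-style line hits by fuzzy similarity to the query, with difflib standing in for a real embedding model. The threshold and sample data are made up; this only illustrates the piping idea:

```python
from difflib import SequenceMatcher

def fuzzy_rank(query, lines, threshold=0.3):
    # Score each candidate line against the query; keep loose matches,
    # best first (recall over precision, as the OP asked for).
    scored = [(SequenceMatcher(None, query.lower(), ln.lower()).ratio(), ln)
              for ln in lines]
    return [ln for score, ln in sorted(scored, reverse=True) if score >= threshold]

# Imagine these lines arriving on stdin from ripgrep-all, fzf-style.
lines = ["tax document august 2023", "grocery list", "tax return 2022"]
print(fuzzy_rank("tax documents for august 2023", lines))
```

Swapping `SequenceMatcher` for cosine similarity over locally computed embeddings gives the semantic version of the same pipe.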
westcort · 12 months ago
Use Recoll on Linux or FileLocator Lite on Windows to do regex searches. Design the regex searches with GPT or a locally running Llama (or write them yourself).
hulitu · 12 months ago
> Ask HN: I have many PDFs – what is the best local way to leverage AI for search?

Adobe Reader can search all PDFs in a directory. They hide this function though.
kkfx · 12 months ago
Honestly? ocrmypdf + ripgrep-all, or Recoll (a GUI+CLI Xapian wrapper) if you prefer an indexed version. For mere full-text search, currently nothing gives better results. Semantic search is still not there; Paperless-ngx, TagSpaces and so on demand way too much time for adding just a single document to be useful at a certain scale.

My own personal version is org-mode. I keep all my stuff org-attached, so instead of searching the PDFs I search my notes linking them: a kind of metadata-rich, taggable, quick full-text search. Even though org-ql is there, I almost never use it, just org-roam-node-find and counsel-rg on notes. Once done, this allows for quick enough manual and variously automated archiving. Doing it on a large home directory is very long and tedious manual work. For me it's worth it, since I keep adding documents and using them, but it took more than a year to be "almost done enough" and it's still unfinished after 4 years.
treetalker · 12 months ago
On macOS, use HoudahSpot. It's awesome. Not AI, but as others have said, you likely want plain text search, not "AI" or a chatbot, for something like this.

If you're having trouble thinking of search terms to plug into HoudahSpot (or grep etc.), then I suppose you could ask a chatbot to assist your brainstorming, and then plug those terms into HoudahSpot/grep/etc.
epirogov · 12 months ago
A cheap but full-featured solution for batch AI processing of PDF documents locally is the Aspose.PDF ChatGPT plugin: https://products.aspose.org/pdf/net/chat-gpt/
dudus · 12 months ago
I tried Google's NotebookLM for this use case and was very pleased with the experience. If you trust Google, that is.
jesterson · 12 months ago
The best tool I found for a similar goal was DEVONthink. I have been using it for many years and am quite happy with it.

There is no AI or any other modern fad, but full-text search (including OCR for image files inside PDFs) works great.
1123581321 · 12 months ago
DEVONthink would do this with a tiny model to translate your natural-language search prompts into its syntax and your folder/tag tree.

If you're okay with some false positives, DEVONthink would work as is, actually.
edgyquant · 12 months ago
Use Python to dump the PDF to text, then use Llama 3 (8B) to parse it.
jeffreyq · 12 months ago
Tangentially related, but you can try https://macro.com/ for reading your PDFs.
hypefi · 12 months ago
Check out my app "Chofane"; it does local batch OCR scanning for PDFs and PNG files. I am just launching it. You can export results to JSON and CSV and do some text-based search on the results: https://chofane-landing.pages.dev/
Tylast · 12 months ago
You can try https://gpt4all.io/index.html
sciencesama · 12 months ago
You can tabulate the info; 90% of your info will be from a single source. There are online tools that sort Costco and Walmart bills!
gandalfthepink · 12 months ago
I use Curiosity AI. Good interface.
vrighter · 12 months ago
You use a tool intended for accurate searching, which is not AI-based.
finack · 12 months ago
OCR and pattern matching on text are computationally cheap and incredibly easy to do. For example, tax documents often bear the name of your government's tax authority, which presumably you are familiar with and can search for. They also tend to have years on them.
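That pattern matching fits in a couple of regexes. The authority names below are illustrative (swap in your own country's), and the classifier is deliberately recall-oriented, so either signal alone surfaces a file:

```python
import re

# Hypothetical patterns: replace with your own tax authority's name(s).
TAX_AUTHORITY = re.compile(r"\b(IRS|Internal Revenue Service)\b", re.IGNORECASE)
YEAR = re.compile(r"\b(19|20)\d{2}\b")  # any plausible 4-digit year

def looks_like_tax_doc(text):
    # Recall over precision: one matching signal is enough to flag the doc.
    return bool(TAX_AUTHORITY.search(text)) or bool(YEAR.search(text))

print(looks_like_tax_doc("Internal Revenue Service Form 1040, tax year 2023"))
print(looks_like_tax_doc("Grocery receipt"))
```

Run over OCR'd text, this is enough to pre-sort scans into rough buckets before any AI gets involved.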
adyashakti · 12 months ago
getcody.ai