TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Ask HN: How to build a LLM model on local files?

2 points by recvonline, almost 2 years ago
I am part of a larger community that organizes itself through loads of emails, PDFs, etc. Many questions about the current state of affairs could, in my opinion, be answered through a ChatGPT-like interface.

How would one go about training a model based on local files? Is it possible? What would I have to do?

2 comments

brucethemoose2, almost 2 years ago
For non-commercial use? To answer your question: finetune a LLaMA-based instruction model, maybe using the lit-llama repo. For this you will need to rent a pretty beefy cloud instance, and you will need to resume the finetuning (or use a LoRA) to put new data in. Then host it on a cheaper server with a llama.cpp frontend.

But what you *really* might want is a vector search. This seems like a better fit.
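The vector-search idea the comment suggests is: embed each document, embed the query, and return the most similar documents instead of finetuning anything. Here is a minimal, self-contained sketch of that retrieval loop; the toy bag-of-words "embedding" stands in for a real embedding model, and the document texts and function names are invented for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: a bag-of-words count vector. A real system would
    # call an embedding model (e.g. a sentence-transformer) here.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs, k=1):
    # Rank all documents by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical community documents (emails, PDFs flattened to text).
docs = [
    "Minutes of the 2023 annual meeting of the community",
    "Invoice for venue rental, March 2023",
    "Email thread about the upcoming election of board members",
]

print(search("when is the board election?", docs))
```

In practice the retrieved documents would then be pasted into an LLM prompt as context (retrieval-augmented generation), so no model ever has to be trained on the local files themselves.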
[Comment #36024646 not loaded]
tikkun, almost 2 years ago
There are some "drag and drop" type solutions, like https://www.chatbase.co/. There are various more - search for "custom chatgpt" on Product Hunt and you'll find a lot.
[Comment #36176275 not loaded]