
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Ask HN: How to Keep Up with LLMs? [Linux, Self-Hosting, Info]

16 points | by conor_f | almost 2 years ago
I've accepted that LLMs are useful when taken with precautions. A few tools I've used recently have convinced me of this. Alongside that, I've heard first-hand anecdotal accounts of LLMs providing great benefit, from asking about APIs all the way to practising languages or summarizing texts.

I'm now interested enough to try running one myself and see how it suits my personal workflow. So I have a few questions:

1) How can I set up an LLM locally with a good effort/reward ratio? I don't want to spend hours setting up something unreliable that needs constant modification - more something I can just interact with easily from a web UI/CLI when I need to.

2) Is there an easy way to keep up to date with LLMs, so I can move to newer models as they become popular and get the best results?

Note that I'm only looking for self-hosted, Linux-compatible solutions!

2 comments

version_five | almost 2 years ago
My first stop would be llama.cpp and compatible models on your own machine. You should be able to run quantized 7B and 13B models; try them out and see if they work.

Though for "personal workflow", unless you want to be able to play with the internals of the models or are worried about privacy, I'd just use ChatGPT (in fact I do: despite having llama.cpp set up to run various models, I always use ChatGPT for personal stuff and programming questions).
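As a rough back-of-envelope for whether those quantized 7B/13B models fit in your machine's RAM, weight storage is roughly parameter count times bits per weight. This is a sketch, not from the comment: the function name is mine, and it counts weights only, ignoring the KV cache and runtime overhead.

```python
def quantized_model_bytes(n_params: float, bits_per_weight: float) -> float:
    """Approximate bytes needed to hold a model's weights at a given
    quantization level. Ignores KV cache and runtime overhead."""
    return n_params * bits_per_weight / 8

GIB = 1024 ** 3

# 7B model at 4-bit quantization: about 3.3 GiB of weights
print(round(quantized_model_bytes(7e9, 4) / GIB, 1))   # 3.3

# 13B model at 4-bit quantization: about 6.1 GiB of weights
print(round(quantized_model_bytes(13e9, 4) / GIB, 1))  # 6.1
```

By this estimate, a 4-bit 7B model fits comfortably on an 8 GB machine, while 13B wants 16 GB once you add context and overhead.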
tikkun | almost 2 years ago
1) What do you want to use it for?

2) /r/localllama is good, and then also the "Open LLM Leaderboard" and the "LMSYS LLM Leaderboard".