TechEcho


A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Has anyone trained a specific offline LLM model to act as a Linux helpdesk?

3 points · by mofosyne · over 1 year ago
I'm imagining booting to the Linux command-line recovery console as a Linux noob and being able to bring up, on a split screen, an offline chatbot that can parse all the man pages and tools onboard and walk you through solving various problems.

(But alternatively you could just make a more comprehensive and recovery-focused `tl;dr` program.)
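For illustration, a recovery-focused `tl;dr`-style helper could start as nothing more than a keyword-to-snippet table. This is a hypothetical sketch, not an existing tool; the symptom table and `suggest` function are invented for the example:

```python
# Hypothetical recovery-focused tl;dr: map failure symptoms to fix snippets.
# The entries below are illustrative examples, not an exhaustive database.
RECOVERY_TIPS = {
    "read-only filesystem": "mount -o remount,rw /   # remount root writable",
    "broken grub": "grub-install /dev/sda && update-grub",
    "forgot root password": "passwd root   # from a chrooted recovery shell",
}

def suggest(symptom: str) -> list[str]:
    """Return tips whose key shares at least one word with the symptom."""
    words = set(symptom.lower().split())
    return [tip for key, tip in RECOVERY_TIPS.items()
            if words & set(key.split())]
```

No model needed for the easy cases; an LLM would only have to handle the long tail that a lookup table misses.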

2 comments

brucethemoose2 · over 1 year ago
It's definitely doable right now. llama.cpp (and frontends) can run in CLI on most CPUs. Mistral 7B is a "good enough" base model to run on most machines. Training is good, but you can also inject current relevant man pages into the context with a vectordb or maybe even a simple grep.

One issue is that LLMs tend to lie when they don't really know a particular answer, and you usually solve that by double-checking with a web search.
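The grep-style context injection mentioned above can be sketched roughly like this. The function names, the keyword-overlap scoring, and the sample man-page excerpts are all assumptions for illustration, not a real llama.cpp API:

```python
def grep_man_chunks(query: str, chunks: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank man-page excerpts by crude keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(chunks.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

def build_prompt(query: str, chunks: dict[str, str]) -> str:
    """Inject the best-matching excerpts ahead of the user's question."""
    context = "\n\n".join(chunks[n] for n in grep_man_chunks(query, chunks))
    return f"Relevant man page excerpts:\n{context}\n\nQuestion: {query}"

# Illustrative stand-in for man pages parsed off the recovery image.
MAN_CHUNKS = {
    "mount(8)": "mount attach filesystem remount read-only rw options",
    "fsck(8)": "fsck check repair filesystem errors",
    "passwd(1)": "passwd change user password",
}
```

The resulting prompt string would then be fed to whatever local model is on disk; grounding the answer in on-disk docs also helps with the hallucination problem, since the model is quoting rather than recalling.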
smoldesu · over 1 year ago
It sounds potentially useful, but there's pretty much zero friction if you want to do this with Llama or ChatGPT today. I doubt there will be enough demand that people eventually keep bespoke "recovery models" on-disk. If it *does* become a desired feature, I think maintainers would prefer a deterministic GUI for recovery instead of a language model.