This is my take on the common "use LLMs to generate shell commands" utility. Emphasis is placed on good CLI UX, simplicity, and flexibility.<p>`llm2sh` supports multiple LLM providers and lets LLMs generate multi-command sequences to handle complex tasks. There is also limited support for commands requiring `sudo` and other basic input.<p>I recommend using Groq llama3-70b for day-to-day use. The ultra-low latency is a game-changer: its near-instant responses help `llm2sh` integrate seamlessly into everyday tasks without breaking you out of the 'zone'. For more advanced tasks, swapping to a smarter model is just a CLI option away.
Cool! I’m experimenting with something like this that uses Docker containers to ensure it’s sandboxed. And, crucially, rewindable. That way I can just let it do ~whatever it wants without having to verify commands myself. Obviously it’s still risky to let it touch network resources, but there are workarounds for that.
Some really nice things:<p>+ GPLv3<p>+ Defaults to listing commands and asking for confirmation<p>+ Install is just "pip install"<p>+ Good docs with examples<p>Is there a way to point it at an arbitrary API endpoint? IIRC llama.cpp can serve an OpenAI-compatible API, so it should be drop-in?
This looks great! I would use this if you had a dispatcher for a custom/local OpenAI-compatible API, e.g. a llama.cpp server. If I can make some time I'll take a stab at writing one and submit a PR :)
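As a rough sketch of what such a dispatcher might look like: llama.cpp's `llama-server` exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so the core is just building a standard chat-completion POST against a user-supplied base URL. The `build_chat_request` helper name, base URL, and model name below are my own illustration, not llm2sh's actual internals:

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Construct (but don't send) an OpenAI-style chat completion request
    for any OpenAI-compatible server, e.g. a local llama.cpp server."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Point at a locally running llama.cpp server (default port 8080):
req = build_chat_request("http://localhost:8080", "llama3", "list all files")
print(req.full_url)
```

Sending it is then just `request.urlopen(req)` and parsing `choices[0].message.content` out of the JSON response, same as any other OpenAI-compatible backend.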
This looks good.<p>I created something similar using blade a while back, but I found that using English to express what I want was actually really inefficient. It turns out that for most commands, the command syntax is already a pretty expressive format.<p>So nowadays I'm back to using a chat UI (Claude) for the scenarios where I need help figuring out the right command. Being able to iterate is essential in those scenarios.
Nice tool. I am using ai-shell for that purpose.<p><a href="https://github.com/BuilderIO/ai-shell">https://github.com/BuilderIO/ai-shell</a>