Nice! I use a combination of an endless bash (zsh) history with timestamps that I navigate via fzf and ctrl+r, plus comments I occasionally add to commands via # at the end, followed by an annotation so that I can rediscover the command later.<p>I've been doing this ever since I switched to a Mac in 2015, and my history is over 60,000 lines. So that's basically my knowledge base :)<p>But your project looks nice. Will check it out.
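For anyone who wants the same setup, here's a sketch of the ~/.zshrc pieces involved (assuming fzf is installed; the exact history limits are arbitrary):

```shell
# ~/.zshrc: effectively endless, timestamped history
HISTFILE=~/.zsh_history
HISTSIZE=1000000000          # entries kept in memory
SAVEHIST=1000000000          # entries written to $HISTFILE
setopt EXTENDED_HISTORY      # record timestamp and duration per command
setopt INC_APPEND_HISTORY    # write each command as it runs, not on shell exit
setopt INTERACTIVE_COMMENTS  # allow trailing "# annotation" on interactive commands

# fzf's shell integration rebinds ctrl-r to fuzzy history search
source <(fzf --zsh)          # fzf >= 0.48; older versions source key-bindings.zsh instead

# Annotated command, rediscoverable later via ctrl-r:
#   tar -xzf backup.tgz -C /restore  # restore nightly backup
```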
The gif in your README features a prompt asking to "show all files in this directory", but the 'ls -lh' returned and selected in the demo does not show all files, just the ones that aren't hidden. I'd have chosen a more accurate interaction for the demo.
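For reference, plain ls skips dotfiles; it's the -a (or -A) flag that actually shows all files:

```shell
cd "$(mktemp -d)"             # scratch directory
touch visible.txt .hidden.txt

ls -lh     # long listing, human-readable sizes; dotfiles are omitted
ls -lah    # -a adds .hidden.txt (plus the . and .. entries)
ls -lAh    # -A adds dotfiles but leaves out . and ..
```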
I'm trying to get this to work with ollama. I'm on Arch Linux, fish shell, new to ollama, and only very rarely used pipx. I get:<p>raise ValueError("OPENAI_BASE_URL and OPENAI_API_KEY must be set. Try running `zev --setup`.")
ValueError: OPENAI_BASE_URL and OPENAI_API_KEY must be set. Try running `zev --setup`<p>even when I run (for example) set -x ZEV_USE_OLLAMA 1; zev 'show all files and all permissions'
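My guess from the error is that zev wants those two variables set even in ollama mode. In fish, exporting them would look like this; the base URL is ollama's standard OpenAI-compatible endpoint, but whether zev accepts it this way is an assumption on my part:

```shell
# fish: -g makes the variable global, -x exports it to child processes
set -gx ZEV_USE_OLLAMA 1
set -gx OPENAI_BASE_URL http://localhost:11434/v1  # ollama's OpenAI-compatible API
set -gx OPENAI_API_KEY ollama                      # ollama ignores the key; any non-empty value works
zev 'show all files and all permissions'
```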
I don't like most of these tools because they just execute the command. This one is nice because the command ends up in your history. The current trick I use is copilot.vim at the command line; it fits naturally into my flow.<p>Recently some of my friends reported that it just wants to write comments, and I've noticed that it does bias toward that nowadays, so I start it off with something to get it going.<p>I've been trying to figure out what in the prompt makes it act like that, but for the moment that little workaround gives me both the comment and the command in my history, so it's easier to reverse-i-search for later.<p><a href="https://x.com/arjie/status/1575201117595926530" rel="nofollow">https://x.com/arjie/status/1575201117595926530</a><p>You just set up copilot for neovim normally and set it as your EDITOR. <a href="https://wiki.roshangeorge.dev/index.php/AI_Completion_In_The_Shell" rel="nofollow">https://wiki.roshangeorge.dev/index.php/AI_Completion_In_The...</a>
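A sketch of the EDITOR wiring for zsh (the linked wiki has the full setup; bash already binds ctrl-x ctrl-e to this out of the box):

```shell
# Use a copilot-enabled neovim to edit the current command line.
export EDITOR=nvim

# zsh: wire up the edit-command-line widget explicitly
autoload -Uz edit-command-line
zle -N edit-command-line
bindkey '^X^E' edit-command-line

# Type a comment describing what you want, hit ctrl-x ctrl-e,
# let copilot.vim complete the command, then save and quit;
# both the comment and the command land in history.
```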
You may be interested in copying some of the usage patterns from my similar project: <a href="https://github.com/CGamesPlay/llm-cmd-comp">https://github.com/CGamesPlay/llm-cmd-comp</a><p>Instead of being a separate command, it's a set of key bindings you can press that start the LLM prompt with your current command line; if you accept the suggestion, it replaces your command line with the result, bypassing the manual clipboard step and making the result go into your shell history as a normal command.
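The core of that pattern can be sketched as a zsh widget. The `my-llm` command below is hypothetical, standing in for whichever LLM CLI you use, and the real project's bindings are more elaborate:

```shell
# Minimal zsh widget: send the current command line to an LLM CLI
# and replace the editing buffer with the suggestion.
llm-complete-line() {
  local suggestion
  suggestion=$(my-llm "turn this into a shell command: $BUFFER") || return
  BUFFER=$suggestion    # replace the command line in place
  CURSOR=${#BUFFER}     # move the cursor to the end
  zle redisplay
}
zle -N llm-complete-line
bindkey '^G' llm-complete-line   # ctrl-g triggers the completion
```

Because the suggestion replaces the buffer and you accept it the normal way, it's recorded in history like any typed command.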
I really like how it gives you multiple options to choose from. I've been using <a href="https://github.com/simonw/llm-cmd">https://github.com/simonw/llm-cmd</a>
<a href="https://docs.aws.amazon.com/codewhisperer/latest/userguide/command-line-conversation.html" rel="nofollow">https://docs.aws.amazon.com/codewhisperer/latest/userguide/c...</a><p>Looks like cw from AWS.
Nice!
Little plug for what I did too, in a similar vein: it has a web version <a href="https://gencmd.com/" rel="nofollow">https://gencmd.com/</a> and also a command-line version.
Since it's generating terminal commands dynamically, what safeguards (if any) are in place to avoid generating destructive or insecure commands (like rm -rf /, etc.)?
Somewhat related, here's a little project I've done with LLMs: <a href="https://github.com/regnull/how.sh">https://github.com/regnull/how.sh</a><p>It uses locally hosted (or remote) LLMs to generate and execute shell commands that you describe. You can go as far as writing "shell scripts" in natural language.