I've taken a similar approach, rooted in the Unix philosophy.<p>Look at the savebrace screenshot here:<p><a href="https://github.com/kristopolous/Streamdown?tab=readme-ov-file#as-well-as-everything-else">https://github.com/kristopolous/Streamdown?tab=readme-ov-fil...</a><p>There's a markdown renderer that can extract code samples, a code-sample viewer, and a tool that does the tmux handling, and it all uses things like fzf and simple tools like simonw's llm. It's all I/O, so it's all swappable.<p>It sits adjacent, and you can go back and forth, using the chat when you need it but not doing everything through it.<p>You can also make it go away, and when it comes back it has the same context, so you're not starting over.<p>Since I offload the actual LLM loop, you can use whatever you want. The hooks are at the interface and parsing level.<p>When rendering the markdown, streamdown saves the code blocks as null-delimited chunks in the configurable /tmp/sd/savebrace. This lets xargs, fzf, or a whole suite of Unix tools manipulate them in sophisticated chains.<p>Again, it's not a package; it's an open architecture.<p>I know I don't have a slick pitch site, but it's intentionally dispersive, the way Unix is supposed to be.<p>It's ready to go, just ask me. Everyone I've shown it in person has followed up with things like "This has changed my life".<p>I'm trying to make LLM workflow components. The WIMP of the LLM era. Things that are flexible, primitive in a good way, and also very easy to use.<p>Bug reports, contributions, and even opinionated designers are highly encouraged!
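To make that concrete, here's a minimal sketch of consuming such a file. It assumes only what's described above (NUL-delimited code blocks in a savebrace file); the demo file path is made up for illustration:

```shell
#!/usr/bin/env bash
# Simulate a savebrace file: each extracted code block is a NUL-delimited chunk.
demo=$(mktemp)
printf 'echo first\0echo second\0' > "$demo"

# bash 4.4+: mapfile -d '' reads NUL-delimited entries into an array
mapfile -t -d '' blocks < "$demo"
echo "count: ${#blocks[@]}"    # count: 2
echo "last:  ${blocks[1]}"     # last:  echo second

# Interactive selection would look like:  fzf --read0 < /tmp/sd/savebrace
# Running a chosen block:                 xargs -0 -I{} bash -c '{}' < "$demo"
rm -f "$demo"
```

The NUL delimiter is what makes multi-line blocks safe to pipe around; newline-delimited storage would split them apart.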
Had a terrible experience with warp. I personally don't use warp, but I know one colleague who uses it. One day, he ran `kubectl describe <resource> <resource name>` and warp suggested `kubectl delete <resource> <resource name>` and he pressed enter. He was lucky the resource was not critical and could be recreated without any damage. Think about what would have happened if the same thing had happened for the namespace resource. People go into automatic accept mode after some time, and this is very dangerous when you do anything at the terminal, because there is no UNDO button.
My first instinct is that this is super useful.<p>But then I realise that I do enough sensitive stuff in the terminal that I don't really want this unless I have a model running locally.<p>Then I worry about all the times I have seen a junior run a command from the internet and brick a production server.
> TmuxAI » I'll help you find large files taking up space in this directory.<p>Get rid of this bit, so the user asks a question and gets a command.<p>Make it so the user can ask a follow-up question if they want, but this is just noise, taking up valuable terminal space.
Instead of showing:<p><pre><code> Do you want to execute this command? [Y]es/No/Edit
</code></pre>
perhaps also add an "Explain" option, because for some commands it is not immediately obvious what they do (or are supposed to do).
A WIP but evolving: it watches your active tmux panes and lets you work with AI agents that can interact with those panes. For command-line folk, this could feel like a pretty good way to bring AI into your working life.
I already use aider and VS Code Agent Mode (which occasionally asks me to run commands for libraries, etc.)<p>This seems… like an amazing attack vector. Hope it integrates with litellm/ollama without fuss so I can run it locally.
This looks interesting and I’m eager to try it, but my concern is this could easily send sensitive information such as API keys I paste to my terminal to the AI providers. How do you remedy that?
I would usually alt-tab to browser, open up any good LLM in 1 keystroke, write a short prompt, optionally paste the output of "ls" or "find" if context matters, then just copy and paste the result. This tool adds context but I'm fine without it.
The "non-intrusive" part is interesting. I've bitten the bullet with AI assistance when coding - even when it feels like it gets in the way sometimes, overall I find it a net benefit. But I briefly tried AI in the shell with the warp terminal and found it just too clunky and distracting. I wasn't even interested in the AI features, just wanted to try a fancy new terminal. Not saying warp isn't useful for some people; it just wasn't for me. So far I've found explicitly calling for assistance with a CLI command (I've used aichat for this, but there are several out there) to be more useful in those occasional instances where I can't remember some obscure flag combination.
I do love this, but I haven't managed to actually try it out. (I stopped trying and moved on.)<p>But well done for launching (the following is not hate, just onboarding feedback).<p>Who else had issues with the API key?<p>1. What is a TMUXAI_OPENROUTER_API_KEY? (Is it like an OPENAI key?)<p>2. If it's an API key for TMUXAI, where do I find it? I can't see it on the website (probably haven't searched properly, but why make me search?).<p>3. SUPER simple instructions to install, but ZERO (discoverable) instructions on where/how to find and set the API key.<p>4. When running tmuxai, instead of just telling me I need an API key, how about putting an actual link to where I can find one?<p>Again, well done for launching... I'm sure it took hard work and effort.
Interesting. I've been working on a similar project, though with a more 'agentic' workflow. It's also written in Go and CLI-native, but it additionally supports MCP and "just finishing" agentic tasks.
Potentially a nice overlap :) <a href="https://github.com/laszukdawid/terminal-agent">https://github.com/laszukdawid/terminal-agent</a>
Shellsage has provided this functionality for quite a while. I've been using it for months, and it's been a game-changer for me.<p>It was created by one of my colleagues, Nathan Cooper.<p><a href="https://www.answer.ai/posts/2024-12-05-introducing-shell-sage.html" rel="nofollow">https://www.answer.ai/posts/2024-12-05-introducing-shell-sag...</a>
Just got this running. It took a minute to figure out where the config file is, but once I got it set up with openrouter keys... wow! This plus speech-to-text = Look ma, no hands!
Can this be aimed at ollama or some other locally hosted model? It wasn't clear from the docs, since the config examples seem to presume you want to use a third-party hosted API.
So I have yet to use any tool that needs an API key because I am concerned about costs. Does anyone have any idea what the daily usage of something like this would cost?
Thanks iaresee! Yes, the non-intrusive observation of panes is the central idea, trying to integrate AI help without breaking the command-line workflow.<p>Appreciate the feedback as it evolves.
I feel like heuristics would be a much better way to do this. Just a "newb assistant" with a long list of useful commands. But I guess this frees up experts' time from doing something so boring.