One issue I've had with personal shell scripts is that the dependencies are often implicit, or command-line arguments change, leading to (sometimes silent and critical) changes in behavior. With Nix you can write shell scripts with a nix-shell shebang[0]: you specify the dependencies, and the rest of the script runs with those dependencies satisfied. For instance, this will execute GNU Hello regardless of whether it is already on the PATH:<p><pre><code> #! /usr/bin/env nix-shell
#! nix-shell --pure -i bash -p "hello"
hello
</code></pre>
A more realistic example is [1], which generates [2]. If necessary, the Nixpkgs revision can be pinned to fix the dependencies in time (a sketch of that follows the links below). This approach also extends across languages: some of my scripts are written in Haskell and run interpreted!<p>The upshot of all this is that you can write reproducible shell scripts as if you had every package in Nixpkgs available to you, and share them easily with others.<p>[0] <a href="https://nix.dev/tutorials/ad-hoc-developer-environments.html#reproducible-executables" rel="nofollow">https://nix.dev/tutorials/ad-hoc-developer-environments.html...</a><p>[1] <a href="https://edef.eu/~qyliss/nix/lib/gen.sh" rel="nofollow">https://edef.eu/~qyliss/nix/lib/gen.sh</a><p>[2] <a href="https://edef.eu/~qyliss/nix/lib/" rel="nofollow">https://edef.eu/~qyliss/nix/lib/</a>
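<p>A minimal sketch of the pinning idea, relying on nix-shell's support for merging multiple "#! nix-shell" lines; REVISION is a placeholder, not a value taken from the linked script:<p><pre><code> #! /usr/bin/env nix-shell
#! nix-shell --pure -i bash -p "hello"
#! nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/REVISION.tar.gz
# REVISION above is a placeholder: substitute a Nixpkgs commit hash or branch tarball to pin.
hello
</code></pre>
The same shebang trick covers the Haskell case with "-i runghc" and a haskellPackages.ghcWithPackages expression for the libraries.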
Most instances of the organising principle I find here are amazing. I think "I would never have done it that way", followed almost immediately by "but damn: it's good".<p>For comparison: I have ~/bin and I don't distinguish between compiled and scripted personal commands.
I used to have very full ~/bin and ~/$(hostname) directories. In the end I pared them back and started bundling things together in one binary.<p>The end result is very similar to this approach: I run "sysbox blah", or "sysbox help", and use integrated subcommands.<p>It's very helpful, and deployment is easy because there's only a single binary:<p><a href="https://github.com/skx/sysbox" rel="nofollow">https://github.com/skx/sysbox</a><p>Not bash/shell, but a similar and useful idea to experiment with.
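<p>For anyone who wants the shell-script analogue of that pattern, a rough sketch of a single entry point dispatching to subcommands could look like this (the "sys" name and the commands are hypothetical, not sysbox internals):<p><pre><code> #!/usr/bin/env bash
# "sys": one entry point, integrated subcommands (illustrative only).
set -euo pipefail

cmd_ip()   { curl -s https://ifconfig.co; echo; }
cmd_tidy() { find "$HOME/Downloads" -type f -mtime +30 -print; }
cmd_help() { declare -F | awk '{ sub(/^cmd_/, "", $3); print "  sys " $3 }'; }

sub="${1:-help}"
shift || true
"cmd_${sub}" "$@"   # "sys ip" runs cmd_ip; an unknown subcommand fails loudly
</code></pre>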
My setup lets me create a shell environment specific to the project associated with the working directory. For example, I can invoke<p><pre><code> edit-env -ds dump-data.sh
</code></pre>
and it will open a script for editing that will only be available to run in the project's directory, callable as the bash function `dump-data`. I've got lots of little things like that now which I find useful but wouldn't necessarily want stored in the project's version control.
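<p>A rough sketch of how directory-scoped functions like that can be wired up, assuming a PROMPT_COMMAND hook rather than whatever edit-env actually does (and it doesn't unload functions when you leave the directory, which something like direnv handles properly):<p><pre><code> # In ~/.bashrc (hypothetical layout: one env.sh per project directory)
_load_project_env() {
  local envfile="$HOME/.project-envs/${PWD//\//%}/env.sh"
  if [[ -f "$envfile" && "$envfile" != "${_LOADED_ENV:-}" ]]; then
    source "$envfile"      # defines project functions such as dump-data
    _LOADED_ENV="$envfile"
  fi
}
PROMPT_COMMAND="_load_project_env;${PROMPT_COMMAND:-}"
</code></pre>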
While the organization seems nice, the organization itself can have a cognitive overhead. I often find myself abandoning an organization scheme after spending time perfecting it. Lately I've mostly been using aliases to manage my personal scripts and commands. What the aliases do may change from time to time, but the aliases themselves can stabilize. Aliases that fail to stabilize, or that don't fit in the top of my head, probably aren't worth keeping anyway; I delete those every once in a while.
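<p>For example, something like this in ~/.bash_aliases, where the names are the stable interface and the bodies get rewritten as tools change (the commands are just illustrations):<p><pre><code> alias serve='python3 -m http.server 8000'
alias myip='curl -s https://ifconfig.co'
alias dclean='docker system prune -f'
</code></pre>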
'Personal monorepo' is how I like to think of this kind of organization. I have templated starter projects, direnv configs, etc., all in a hierarchy of folders in my home directory.
This is really similar to something I've recently created: <a href="https://github.com/simonmeulenbeek/Eezy" rel="nofollow">https://github.com/simonmeulenbeek/Eezy</a>, although in my project's case it's scoped to the specific PWD you're in.<p>I really like using the folder structure to get 'subcommands' (e.g. 'sd blog publish'). Very neat!
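<p>If it helps to picture the folder-as-subcommand idea: nested directories become the command path and the leaf file is what runs. An illustrative layout (not the actual tree of either project):<p><pre><code> scripts/
 ├── blog/
 │   ├── publish    # "sd blog publish" runs this executable
 │   └── preview    # "sd blog preview"
 └── backup         # "sd backup"
</code></pre>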
I made a ~/bin folder to do something similar, so I don't have to mess with any of the PATH stuff and can just call scripts like anything else found in a /bin dir.<p>EDIT: Oh, and autocomplete works with it by default. Just 'mkdir ~/bin' and you're good to go.
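<p>One caveat: whether ~/bin is picked up without touching PATH depends on the distro; Debian/Ubuntu's default ~/.profile adds it when the directory exists, otherwise you'd need a line like this in ~/.profile or ~/.bashrc:<p><pre><code> export PATH="$HOME/bin:$PATH"
</code></pre>
(You may need to log out and back in, or re-source the file, for it to take effect.)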
If you want an actual project for this, I use this one:
<a href="https://github.com/knqyf263/pet" rel="nofollow">https://github.com/knqyf263/pet</a>