A lot of commenters suggesting "pipefail" aren't realising the full extent of the problem. Here's an example that might be clearer, from <a href="https://david.rothlis.net/shell-set-e" rel="nofollow">https://david.rothlis.net/shell-set-e</a><p>This prints “a”, as you’d expect:<p><pre><code> set -e
myfun() { printf a; false; printf b; }
myfun
printf c
</code></pre>
...because the “set -e” terminates the whole script immediately after running the “false” command.<p>But this script prints “a-b-True”, where some people (myself included) might have expected it to print “a-False”:<p><pre><code> set -e
myfun() { printf a-; false; printf b-; }
if myfun; then
    printf True
else
    printf False
fi
</code></pre>
The “set -e” is ignored completely. Putting another “set -e” inside the definition of myfun doesn’t make any difference.
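One workaround (a sketch, not from the linked article): run the function body in a separate shell process, so that its own “set -e” is not suppressed by the surrounding “if”:<p><pre><code> if bash -ec 'printf a-; false; printf b-'; then
    printf True
else
    printf False
fi
</code></pre>
This prints “a-False”: the inner bash exits at the “false”, and the outer “if” only sees its nonzero status.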
In the 2000s, I was running extensive sets of simulations and data reduction scripts for a scientific experiment, and I was relying heavily on scripts to run the programs, collect the results, and distribute them over several servers. At first I developed those scripts using bash, but I needed to do math and complex iterations over file names and different parameter files, and I continuously stumbled upon weird behaviors and had to rely on hard-to-understand quirks like the ones explained in the article (which bit me more than once!).<p>After a while I stumbled upon scsh [1], which at first didn't impress me because I ran it as an interactive shell, and from this point of view it was really ugly. But then I realized that scsh was primarily meant as a way to run shell <i>scripts</i>, and I immediately fell in love. I had the power of a Scheme interpreter, the ability to easily use mathematical expressions (the awesomeness of Scheme's numerical tower!) and macros, and a very effective way to redirect inputs and outputs and pipe commands that was embedded in the language [2]!<p>In those years I used scsh a lot and developed quite complex scripts with it; it was really a godsend. Unfortunately the program was abandoned around 2006 because it was not trivial to add support for 64-bit architectures. However, while writing this post I've just discovered that somebody has revived the project and enabled 64-bit compilation [3]. I would love to see a revamp! Nowadays I use Python for complex scripts, but it's not the same as a language with native support for redirection and pipes!<p>[1] <a href="https://scsh.net/" rel="nofollow">https://scsh.net/</a><p>[2] <a href="https://scsh.net/docu/html/man-Z-H-3.html#node_chap_2" rel="nofollow">https://scsh.net/docu/html/man-Z-H-3.html#node_chap_2</a><p>[3] <a href="https://github.com/scheme/scsh" rel="nofollow">https://github.com/scheme/scsh</a>
All of these problems are fixed in OSH.<p>It runs your bash scripts but you can also opt into correct error handling. The simple invariant is that it doesn't lose an exit code, and non-zero is fatal by default.<p>See <a href="https://www.oilshell.org/release/latest/doc/error-handling.html" rel="nofollow">https://www.oilshell.org/release/latest/doc/error-handling.h...</a><p><i>Oil 0.10.0 - Can Unix Shell Error Handling Be Fixed Once and For All?</i><p><a href="https://www.oilshell.org/blog/2022/05/release-0.10.0.html" rel="nofollow">https://www.oilshell.org/blog/2022/05/release-0.10.0.html</a><p>This took quite a while, and I became aware of more error handling gotchas than I knew about when starting the project:<p>e.g. it's impossible in bash to even see the error status of a process sub like diff <(sort left.txt) <(sort OOPS)<p>If you have bash scripts that you don't want to rewrite, try<p>1) Run them with OSH<p>2) Add shopt --set oil:upgrade at the top to get the error handling fixes.<p>Tell me what happens :) <a href="https://github.com/oilshell/oil" rel="nofollow">https://github.com/oilshell/oil</a><p>I spent a long time on that, but the post didn't get read much. I think it's because it takes a lot of bash experience to even understand what the problem is.<p>(rehash of <a href="https://news.ycombinator.com/item?id=33075915" rel="nofollow">https://news.ycombinator.com/item?id=33075915</a> which came up the other day :) )
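A minimal sketch of the lost process-sub status in plain bash (OOPS is just a stand-in for a file that makes sort fail):<p><pre><code> cat <(sort OOPS)   # sort fails (no such file), but cat reads an empty stream
echo $?            # prints 0 -- the process sub's failure is invisible
</code></pre>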
Mainly, the reason why Bash's "set -e" doesn't do what you expected is this.<p>set -e comes from the POSIX shell language, descended from the Bourne Shell.<p>The examples you're using rely on obscure Bash features. For instance, let turns an arithmetic result into a termination status, where 0 is fail (opposite to the POSIX convention and all).<p>set -e works to the extent that your commands have a sane termination status, which they generally do if they are standard built-ins or well-behaved utilities.<p>One Bash feature improves the effectiveness of set -e (or exit status testing in general). In a command pipe:<p><pre><code> a | b | .. | z
</code></pre>
the termination status is obtained from z. That's standard. If z indicates success, the pipeline is successful no matter how a through y terminate. Bash has a "pipefail" option to help with this.
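A quick sketch of both behaviors:<p><pre><code> let x=0; echo $?       # 1: an arithmetic result of 0 maps to a failure status
false | true; echo $?  # 0: only the last stage's status counts
set -o pipefail
false | true; echo $?  # 1: any failing stage now fails the pipeline
</code></pre>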
I can highly recommend using Shellcheck [0] when writing Bash; it also has extensions for VS Code and other IDEs. It makes writing Bash much easier.<p>[0] <a href="https://github.com/koalaman/shellcheck" rel="nofollow">https://github.com/koalaman/shellcheck</a>
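Basic usage is just this (the script name is a placeholder):<p><pre><code> shellcheck deploy.sh   # prints SC-numbered findings with line numbers
</code></pre>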
You don't have to be hyper-aware of false positives with "set -euo pipefail". False positives bring themselves to your attention during testing, while false negatives don't announce themselves at all. It's almost always better for your code to incorrectly fail and force you to stick "|| true" on the end than to incorrectly succeed and let you miss the bug.
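For example (a sketch; the file name is a placeholder), a grep that legitimately finds nothing would abort the script under set -e, and the fix is explicit:<p><pre><code> set -euo pipefail
count=$(grep -c ERROR app.log || true)   # "no matches" (status 1) is expected here
echo "errors: $count"
</code></pre>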
All set -e does is halt further execution of your script if any command exits with a nonzero status. There is really nothing bash can do about this. It can't perform a psychological evaluation on why a program is not giving the expected output. And most of the time, when something fails, if it was written by a less-than-serious programmer (like myself), it doesn't bother to exit with a nonzero code on error. But if you are building a script collection, or a Python utility, or even a binary, simply exit 1 if you encounter a general error. Then further up the chain in your shell scripts the error can be handled properly. There are more specific error codes you can use that might be helpful to your shell script.<p>If you are wondering what in the world I'm talking about and I've missed your question entirely, sorry about that. My feet are tired.
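A minimal sketch of what that looks like in a script (the variable names are placeholders):<p><pre><code> if ! cp "$src" "$dst"; then
    echo "copy failed: $src -> $dst" >&2
    exit 1   # nonzero, so callers and set -e can react
fi
</code></pre>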
Shell scripts can be thought of like C++: they can be sanely managed only if one adopts a strict subset of functionalities.<p>The page is probably oriented at situations where one needs to support any version of bash and any obscure/inadvisable functionality (mind that there are still devs that mix sh and bash syntax in the same script), which is very inconvenient and error prone, so manual error handling <i>may</i> make sense.<p>While there is always something insane around the corner in Bash, with a restricted subset (e.g. bash 4.2+, strict shell options, and shellcheck) it's possible to progressively write reasonably solid shell scripts.<p>The document's conclusion is somewhat biased: "to handle errors properly" implies that `-e` is inherently unreliable, which is not fair - strict shell options do remove certain classes of errors, which doesn't hurt.
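One possible version of such a restricted-subset preamble (a sketch; the exact options are a matter of taste, and inherit_errexit needs bash 4.4+):<p><pre><code> #!/usr/bin/env bash
set -Eeuo pipefail
shopt -s inherit_errexit   # bash 4.4+: command substitutions inherit set -e
IFS=$'\n\t'                # avoid word splitting on plain spaces
</code></pre>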
Also noteworthy, this post of mine with authoritative text from GNU docs. Read "(Un)Portable Shell Programming"<p><a href="https://news.ycombinator.com/item?id=31678176" rel="nofollow">https://news.ycombinator.com/item?id=31678176</a>
Annoyingly, when a process is terminated by an unhandled signal (say, SIGTERM), it is treated as if it exited with a nonzero exit code. This can make it tricky to use non-builtin commands as conditions in "if" statements, since there's always the potential edge case where the "if" block is skipped because of a signal that the condition received.
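A sketch of what that looks like: a process killed by SIGTERM reports status 128+15=143, which an "if" can't tell apart from an ordinary failure:<p><pre><code> sleep 30 &
kill -TERM $!
wait $!
echo $?   # 143 (128 + SIGTERM), indistinguishable from a plain nonzero exit in an if-test
</code></pre>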
Because bash error handling is a thousand blades and no handle.<p><a href="https://blog.habets.se/2021/06/The-uselessness-of-bash.html" rel="nofollow">https://blog.habets.se/2021/06/The-uselessness-of-bash.html</a><p>I've reviewed a lot of code, bash and otherwise. I have never, not once, reviewed bash code that didn't have subtle bugs. And this is code written by smart people.
If you modify your PS1 to include $?, it makes writing shell scripts that rely on exit codes a lot easier. It should be the default in my opinion.<p>Build scripts that fail and return 0 are my nemesis.
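Something like this in ~/.bashrc (a sketch; adjust to taste):<p><pre><code> PS1='[$?] \u@\h:\w\$ '   # shows the previous command's exit status in every prompt
</code></pre>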
This is really interesting. The case pointed out in Ex.3 is pretty hilarious:<p>(from the bash manual)<p><pre><code> The ERR trap [same rules as set -e] is not executed if the
failed command is part of the command list immediately following
a while or until keyword, part of the test in an if statement,
part of a command executed in a && or || list except the command
following the final && or || [...]</code></pre>
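For instance (a sketch), the && exception in that rule means this keeps running even under set -e:<p><pre><code> set -e
false && echo "not printed"   # 'false' precedes the final &&, so no exit here
echo "still running"          # this line is reached
</code></pre>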
I use "set -Eeuo pipefail" in pretty much all bash scripts.
(and often -x as well)<p>Makes things much saner.<p>This post is a better introduction than the submission: <a href="https://vaneyckt.io/posts/safer_bash_scripts_with_set_euxo_pipefail/" rel="nofollow">https://vaneyckt.io/posts/safer_bash_scripts_with_set_euxo_p...</a>
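The -E matters if you also install an ERR trap; a minimal sketch (myfun is just an example function):<p><pre><code> set -Eeuo pipefail
trap 'echo "error near line $LINENO (status $?)" >&2' ERR
myfun() { false; }
myfun   # without -E, the ERR trap would not fire inside the function
</code></pre>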
One of the reasons I created Next Generation Shell. It has exceptions, so "if $(grep ...)" works correctly, unlike in bash. grep exit codes: 0 - found, 1 - not found, 2 - error. bash cannot handle this correctly in an if: there are just two branches for three exit codes. NGS has two branches plus an exception that can be thrown. Yep, every single "if grep ..." in bash is a bomb.
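The bash side of that problem, as a sketch (pattern and file names are placeholders):<p><pre><code> if grep -q "$pattern" "$file"; then
    echo "found"
else
    echo "not found"   # also taken when grep itself errors (status 2), e.g. unreadable file
fi
</code></pre>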
I am using zsh, but only because it comes with a little more out of the box and because of zle's excellent vi mode (supports text objects, for the win).<p>But I am wondering: does zsh fare better when it comes to writing more correct scripts, or is it plagued by the same issues?
Few people understand that if...then in bash is actually a form of try...catch.<p>The -e terminates the script at every failure not caught by a try...catch.<p>With this in mind, it's easier to predict bash's behavior.
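In that framing, a sketch (the file names are placeholders): wrapping a command in an if "catches" its failure instead of letting set -e terminate the script:<p><pre><code> set -e
if ! rm "$tmpfile"; then                # the failure is "caught": the script keeps going
    echo "cleanup failed, continuing" >&2
fi
rm "$other_tmpfile"                     # an uncaught failure here terminates the script
</code></pre>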
mountain out of a molehill imo. there’s a certain point where you’ll be fighting shell more than it’s helping—choosing another language is the better choice. i’m not a wizard but i feel like i’ve developed a decent intuition for what’s sane to do in shell and what needs something else.<p>and even then, shelling out from another language when absolutely necessary can be a better option
The `read -r foo < configfile` case might be obscure, but every line in a POSIX text file ought to have a newline terminator. Whether this is worth erroring out on probably depends on context.
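To make the behavior concrete (a sketch):<p><pre><code> printf 'value' > configfile   # note: no trailing newline
if read -r foo < configfile; then
    echo "got: $foo"
else
    echo "read failed, but foo=$foo"   # status 1 (EOF before newline), yet foo still holds "value"
fi
</code></pre>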
I'm not sure what this person <i>did</i> expect. That bash magically parsed the output and memory of running programs, and read the user's thoughts, to determine if the state of the program indicates a condition that the user would consider an error?<p>set -e does exactly what you'd expect, arguably, with the exception of subshells and conditions.<p>And those rules are extremely simple to learn too. If you understand when a statement (which might itself be composed of other statements) would have an error, you can predict what set -e will do.