Let's all agree that it is impossible to write code that will never have an unexpected outcome. Imagine that we somehow write a function that is totally bullet-proof: it can't fail, and it always does precisely what it was intended to do. Further, let's say that whenever we run this function, we run it on N computers and take the consensus result if any of the computers disagree. No matter how large N is, if we run the function enough times, eventually a majority of the computers will agree on the same result, and that result will be wrong. Whether it's from a cosmic ray flipping a bit in memory, or multiple strikes flipping multiple bits and thus defeating ECC, sooner or later things will break no matter what you do. Even if you shield all the computers with 5 meters of pre-nuclear-age lead, eventually something will slip through.

The point is that perfect reliability is simply not achievable. Your actual reliability is always going to be less than 100%. You can invest money and effort to get closer to 100%, but you are obviously going to get diminishing returns.

The correct analysis is to decide where the optimal trade-off is between investing in reliability and the return on that investment.

Example 1: You are calling a web service that checks the weather. The service might be down. If it is down, you wait a few seconds and try again. The 'cost' of it being down is that a user doesn't see the current weather. Is it worthwhile to carefully determine whether the error when calling the service is due to the server returning a 500 status code versus returning invalid JSON?

No, it's not worth it. Either way the client can't use the response. In fact, it doesn't matter what caused the exception, since nothing that can go wrong here is correctable by the client. Whether it's bad JSON, a network failure, a DNS failure, the server being rebooted, a misconfigured web server, or the device being in airplane mode, the resolution is always the same: wait and try again in a few seconds. Exceptions work pretty much ideally in this case; you only have to code the 'happy' path and handle all exceptions the same way, generically (see the first sketch below).

Example 2: You are writing code to update a database containing financial transactions. If something goes wrong in an unexpected way, you need to make sure the financial data isn't updated or left partially updated.

Again, you don't care about unexpected exceptions. Failures you expect, you code to work around, possibly by catching the generic exception where it happens deep in the call stack and then raising your own exception class, one that properly identifies the error and contains the context necessary to perform the recovery. For example, suppose you need to send an email receipt for the transaction, so you call some function that formats and sends the email, and that function fails because the email server is unreachable. The network exception is caught and an EmailCantBeSent exception is raised with the relevant details in it (the user_id you were emailing, the transaction_id the email is for). The resolution is to log a critical error and insert a record into the database with the relevant details of the email so that someone can make sure it is sent later. Then processing continues with the transaction (see the second sketch below).
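To make Example 1 concrete, here is a minimal Python sketch. The endpoint URL, the retry counts, and the fetch_weather name are all hypothetical; the point is only that every failure mode funnels into one generic handler:

    import time

    import requests  # third-party HTTP client (pip install requests)

    WEATHER_URL = "https://example.com/api/weather"  # hypothetical endpoint

    def fetch_weather(retries=5, delay_seconds=3):
        """Return the current weather, or None if the service stays unreachable."""
        for _ in range(retries):
            try:
                response = requests.get(WEATHER_URL, timeout=5)
                response.raise_for_status()  # turns a 500 into an exception
                return response.json()       # raises on invalid JSON too
            except Exception:
                # Bad JSON, a 500, DNS failure, airplane mode: the client
                # can't fix any of them, so the resolution is identical.
                time.sleep(delay_seconds)
        return None  # caller shows "weather unavailable" and moves on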
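And Example 2 might look something like the sketch below, again in Python. The EmailCantBeSent class comes straight from the description above; smtp_send, format_receipt, and the db helpers are hypothetical stand-ins for whatever your stack actually provides:

    import logging

    log = logging.getLogger(__name__)

    class EmailCantBeSent(Exception):
        """Raised with enough context to recover from a failed send later."""
        def __init__(self, user_id, transaction_id):
            super().__init__(f"receipt email failed: user={user_id} txn={transaction_id}")
            self.user_id = user_id
            self.transaction_id = transaction_id

    def send_receipt(user_id, transaction_id):
        try:
            smtp_send(format_receipt(user_id, transaction_id))  # hypothetical helpers
        except OSError as exc:  # e.g. the email server is unreachable
            # Translate the low-level network error into an exception that
            # identifies the failure and carries the recovery context.
            raise EmailCantBeSent(user_id, transaction_id) from exc

    def record_payment(db, user_id, transaction_id, amount):
        with db.transaction():  # hypothetical: commits on success, rolls back otherwise
            db.insert("payments", user_id=user_id, txn=transaction_id, amount=amount)
            try:
                send_receipt(user_id, transaction_id)
            except EmailCantBeSent as exc:
                # An expected failure: log it, queue the email so someone can
                # make sure it is sent later, and continue the transaction.
                log.critical("receipt not sent: %s", exc)
                db.insert("pending_emails", user_id=exc.user_id, txn=exc.transaction_id)
            # Anything unexpected escapes the 'with' block uncaught, so the
            # database transaction is never committed.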
If something unexpected happens, the database transaction is never committed.

My point is that there are two kinds of exceptions you run into: the ones you are carefully trapping and resolving as part of your application's design, and the ones you aren't trying to resolve, which just result in a generic 'this failed' situation.

So, finally getting back to finding the optimal trade-off between investment in reliability and the payback on that investment: you just need to make sure your generic failures are rare enough that you aren't pushed far from that optimal point, which is almost always going to be the case, even if you handle essentially no exceptions and only code for the happy path. Obviously there are tons of counter-examples, and sometimes you need to make sure things work even when something goes wrong (if you are working on an autopilot, you will require much higher reliability, and much more careful planning to reach it, than for a Twitter client, where you just need to not lose what the person typed).

Ok, that's a lot longer than I intended.

TLDR; If you do any kind of analysis of why code fails and what you should do about it, you quickly realize that this library doesn't help at all. The library isn't even bad; the problem it is meant to solve is not well posed.