Having recently been tasked at work with upgrading some dependencies, I've realised what a nightmare it is.

We have unit tests, but they aren't sufficient to give us confidence that nothing is subtly broken. This is a problem across many ecosystems (Maven, NuGet, npm, etc.). There was a story recently where (I think) a Ruby library subtly changed behaviour between versions and broke a company's payment system.

Our current process is to not update unless a security vulnerability is raised (we have tools to check for these) or a new version has features we want. Then we run the automated unit tests and do some manual testing, but not to a level that would have caught, for instance, that payments bug. We don't audit the source code of open-source libraries.

Is this a reasonable process? How does your company handle it? How could we do better?
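To make the "subtly broken" problem concrete, here's roughly the kind of characterization (golden-master) test we'd need far more of: pin the exact observable output of a library call, so an upgrade that quietly changes behaviour fails the build instead of slipping through. This is only a sketch, using Jackson and JUnit 5 as stand-ins; the Payment record and the expected JSON string are invented for illustration, not from our codebase:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Characterization test: pin the exact output of a third-party
    // library call, so a dependency upgrade that subtly changes
    // behaviour fails loudly instead of going unnoticed.
    class JacksonCharacterizationTest {

        record Payment(String id, double amount) {}

        @Test
        void serializationUnchangedAcrossUpgrades() throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            String json = mapper.writeValueAsString(new Payment("p-1", 19.99));
            // Expected string captured against the currently pinned
            // Jackson version; if an upgrade changes field order,
            // number formatting, null handling, etc., this fails and
            // flags the change for review.
            assertEquals("{\"id\":\"p-1\",\"amount\":19.99}", json);
        }
    }

Even tests like this only pin the behaviour you thought to pin, which is a big part of why upgrades feel so risky to us.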