I'd add that, in terms of tactical implementation, production tests can be implemented in at least two different ways (rough sketches of both follow the effort breakdown below):<p>(1) You set up an outside service to send an HTTP request (or run a headless browser session) every minute; your endpoint runs some internal assertions that everything looks good and returns 200 on success.<p>(2) You set up a scheduled job that runs every minute inside your service. This job runs some internal assertions that everything looks good and sends a heartbeat to an outside service on success.<p>For #2: most apps of any complexity already have some system for background and scheduled jobs, so #2 can make a lot of sense. It also serves as a production assertion that your background job system (Sidekiq, Celery, Resque, crond, systemd, etc.) is healthy and running! But it doesn't test the HTTP side of your stack at all.<p>For #1: it has the advantage that you also get to assert that all the layers between your user and your application are up and running: DNS, load balancers, SSL certificates, etc. But this means that on failure, it may be less immediately clear whether the failure is internal to your application or somewhere else in the stack.<p>My personal take has been to lean toward #2 more heavily (lots of individual check jobs that run once per minute inside Sidekiq, then check in on success), but with a little bit of #1 sprinkled in as well (some lightweight health-check endpoints, others that do more intense checks on various parts of the system, and a few that monitor redirects like www->root domain or http->https). And for our team we implement both #1 and #2 with Heii On-Call <a href="https://heiioncall.com/" rel="nofollow">https://heiioncall.com/</a>: for #2, sending heartbeats from the cron-style check jobs to the "Inbound Liveness" triggers, and for #1, implementing a bunch of "Outbound Probe" HTTP uptime checks with various assertions on the response headers etc.<p>And this production monitoring is all in addition to a ton of rspec and capybara tests that run in CI before a build gets deployed. In terms of effort or lines of code, it's probably:<p><pre><code> 90% rspec and capybara tests that run on CI (not production tests)
9% various SystemXyzCheckJob tests that run every minute in production and send a heartbeat
1% various health check endpoints with different assertions that are hit externally in production
</code></pre>
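<p>To make #2 concrete, here's a minimal sketch (not our actual code) of one of those cron-style check jobs, assuming a Rails app with Sidekiq and something like sidekiq-cron scheduling it every minute; the class name, the check itself, and the heartbeat URL are all illustrative placeholders:<p><pre><code> require "net/http"

 # Hypothetical check job, scheduled (e.g. via sidekiq-cron) to run every minute.
 class SystemDatabaseCheckJob
   include Sidekiq::Job

   # Placeholder URL for the external monitoring service's heartbeat trigger.
   HEARTBEAT_URL = URI("https://example.invalid/heartbeat/database-check")

   def perform
     # Internal assertion: raising here means no heartbeat gets sent, and the
     # outside service alarms after enough missed minutes.
     raise "database unreachable" unless ActiveRecord::Base.connection.select_value("SELECT 1").to_i == 1

     # Only check in once every assertion has passed.
     Net::HTTP.get_response(HEARTBEAT_URL)
   end
 end
</code></pre>The key point is that the heartbeat only fires after the assertions pass, so "job system down" and "check failed" both surface the same way: as missed heartbeats.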
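<p>And a similarly hedged sketch of the #1 side: a lightweight health-check endpoint for the external probe to hit, here as a hypothetical Rails controller (the specific checks and route are, again, just illustrations):<p><pre><code> # config/routes.rb would have something like:
 #   get "/health", to: "health_checks#show"
 class HealthChecksController < ApplicationController
   def show
     # Internal assertions; add whatever matters for your app here.
     checks = {
       database: ActiveRecord::Base.connection.select_value("SELECT 1").to_i == 1,
       cache:    Rails.cache.write("health_check_probe", Time.now.to_i).present?
     }

     if checks.values.all?
       head :ok   # the external probe asserts on this 200
     else
       render json: checks, status: :service_unavailable
     end
   end
 end
</code></pre>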
And absolutely agree about requiring multiple consecutive failures before an alarm! Whenever I'm woken up by a false positive, my default threshold (i.e. # of consecutive failures required) gets a little bit higher :)