Most of what he's writing about, and much more, is made substantially easier with systemd timers.<p>E.g. want errors to cause e-mails, but everything else to just go to logs? Use a timer to activate a service, and have systemd activate another service on failure.<p>Want to avoid double execution? That's the default: timers are usually used to activate another unit, and as long as that unit doesn't start something that double-forks, it won't get activated twice.<p>(Some) protection against thundering herd is built in: you specify the level of accuracy (default 1m), and on boot each machine randomly selects an offset, in seconds, that is applied to its timers. You can set this per timer or for the entire host.<p>And if you're using fleet, you can use fleet to automatically re-schedule cluster-wide jobs if a machine fails.<p>And the journal will capture all the output and timestamp it.<p>systemctl list-timers will show you which timers are scheduled, when they're next due to run, how long until then, when they last ran, and how long ago that was:<p><pre><code> $ systemctl list-timers
 NEXT                         LEFT     LAST                         PASSED        UNIT
 Sat 2015-10-17 01:30:15 UTC  51s left Sat 2015-10-17 01:29:15 UTC  8s ago        motdgen.timer
 Sat 2015-10-17 12:00:34 UTC  10h left Sat 2015-10-17 00:00:33 UTC  1h 28min ago  rkt-gc.timer
 Sun 2015-10-18 00:00:00 UTC  22h left Sat 2015-10-17 00:00:00 UTC  1h 29min ago  logrotate.timer
 Sun 2015-10-18 00:15:26 UTC  22h left Sat 2015-10-17 00:15:26 UTC  1h 13min ago  systemd-tmpfiles-clean.timer
</code></pre>
And the timer specification itself is extremely flexible. E.g. you can schedule a timer to run x seconds after a specific unit was activated, or x seconds after boot, or x seconds after the timer itself fired, or x seconds after another unit was deactivated. Or combinations.
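As a rough sketch (unit names, paths, and the failure handler here are hypothetical, not from the post), a timer/service pair could look like:

```ini
# /etc/systemd/system/mytask.timer
[Unit]
Description=Run mytask every 15 minutes

[Timer]
OnCalendar=*:0/15
# coalesce activations within a 1-minute window (the default)
AccuracySec=1m
# catch up after boot if a scheduled run was missed
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/mytask.service
[Unit]
Description=mytask
# activate another unit when this one fails
OnFailure=notify-email@%n.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mytask
```

A `mytask.timer` activates the unit of the same name by default, so no explicit link between the two files is needed.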
Something we've found to be fairly lightweight (compared to e.g. Chronos), but incredibly featureful is using Jenkins (the CI server) as a cron runner. We use <a href="http://docs.openstack.org/infra/jenkins-job-builder/" rel="nofollow">http://docs.openstack.org/infra/jenkins-job-builder/</a> to configure it at deploy-time so it lives as part of the deploy rather than system config.<p>Here's a small list of things we're getting out of it:<p>- concurrent run protection (& queue management via <a href="https://wiki.jenkins-ci.org/display/JENKINS/Concurrent+Run+Blocker+Plugin" rel="nofollow">https://wiki.jenkins-ci.org/display/JENKINS/Concurrent+Run+B...</a> )<p>- load balancing (e.g. max concurrent tasks) and remote execution with jenkins slaves [sounds complicated, but really jenkins just knows how to SSH]<p>- job timeouts. No more hanging jobs.<p>- failure notifications via slack/hipchat/email/whatever. [email only on status change via <a href="https://wiki.jenkins-ci.org/display/JENKINS/Email-ext+plugin" rel="nofollow">https://wiki.jenkins-ci.org/display/JENKINS/Email-ext+plugin</a> ]<p>- log/history management: rotation & compression.<p>- fancy scheduling: e.g. run this job once every 24h, but if it fails keep retrying in 5 minute increments (<a href="https://wiki.jenkins-ci.org/display/JENKINS/Naginator+Plugin" rel="nofollow">https://wiki.jenkins-ci.org/display/JENKINS/Naginator+Plugin</a> ). You could also use project dependencies for pipelines, but we've been staying away from that.<p>- monitoring: we use the datadog reporter & alert on time since last success. Given how mature Jenkins is, this likely translates to whatever system you're using just as well.<p>It's worked incredibly well for us. We migrated to Jenkins from crontabs with cronwrap (<a href="https://github.com/zomo/cronwrap" rel="nofollow">https://github.com/zomo/cronwrap</a>). We're never going back.
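For reference, a jenkins-job-builder job with a cron-style trigger and a timeout is only a few lines of YAML. This is a hedged sketch: the job name and script are made up, and the exact wrapper options depend on your JJB and plugin versions:

```yaml
- job:
    name: nightly-report
    triggers:
      # Jenkins cron syntax; H spreads the start time to avoid herds
      - timed: "H 2 * * *"
    wrappers:
      # build-timeout plugin: kill hanging jobs after 30 minutes
      - timeout:
          timeout: 30
          fail: true
    builders:
      - shell: ./run-report.sh
```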
I've been using Dead Man's Snitch[0] in production for a few years. It's been a life saver. Not affiliated, just a happy customer.<p>[0] <a href="https://deadmanssnitch.com/" rel="nofollow">https://deadmanssnitch.com/</a>
This is not an argument against cron. It is a demonstration of people not abstracting code. One of thousands I've come across.<p>Take all of the features he mentions and abstract them into a launch_from_cron.sh file. Make that file accept a script path as an argument and voilà! All of the safety added to cron, without the need for code duplication or the massive-overhead solutions listed in these comments.
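A minimal sketch of such a wrapper (paths are illustrative; `flock` from util-linux is assumed, and mailing is left to cron's own output handling):

```shell
#!/bin/sh
# launch_from_cron.sh (hypothetical): wrap any script with locking,
# logging, and output-only-on-failure, so cron mails only on errors.
run_from_cron() {
    cmd=$1; shift
    name=$(basename "$cmd")
    lock="/tmp/${name}.lock"
    log="/tmp/${name}.log"
    (
        # skip this run silently if the previous one still holds the lock
        flock -n 9 || exit 0
        "$cmd" "$@" >>"$log" 2>&1
        status=$?
        if [ "$status" -ne 0 ]; then
            # only failures produce output, so cron only mails failures
            echo "cron job failed ($status): $cmd $*"
            tail -n 20 "$log"
        fi
        exit "$status"
    ) 9>"$lock"
}
```

In the real script you'd end with `run_from_cron "$@"` and point every cron entry at the wrapper, so all the safety lives in one place.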
I work for Yelp, and we use cron for purposes similar to those mentioned in this article, mostly synchronizing small bits of configuration or data that we want local to the machine. We're heavy Puppet users, and we made a module to assist us in the management of our crons [1]. If you're a Puppet shop, I highly recommend checking it out. It provides answers to each of the problems mentioned in the article, often using the same mechanisms. I especially like its integration with Sensu, which we use for monitoring the jobs.<p>We've found that deploying cronjobs onto individual hosts is quite powerful, and helps us fill a niche between configuration management tools (like Puppet) and specialized coprocesses (like Smartstack). We have cronjobs for downloading code deploys, showing Sensu state within the motd, reconfiguring daemons (especially the Smartstack ones), and (of course) cleaning up unused data.<p>Of course, there's also the separate problem of scheduling and coordinating tasks across an entire cluster. In most cases we don't use our cron daemons for this, although we do have some jobs that run on multiple hosts and enforce mutual exclusion by grabbing a lock in Zookeeper.<p>[1] <a href="https://github.com/Yelp/puppet-cron#puppet-cron" rel="nofollow">https://github.com/Yelp/puppet-cron#puppet-cron</a>
No one has mentioned Rundeck: <a href="http://rundeck.org/" rel="nofollow">http://rundeck.org/</a><p>I've been using it for two years now. It has replaced cron on about 200 nodes.<p>Not only does it do cron, it also helps deploy artefacts (integrated with Jenkins) through simple forms. We now have ops with zero Linux experience deploying code.
Having local mailboxes on each server is not really useful in a cloud setup with hundreds of machines. But that's not a reason to silence the output; something bad might happen, and only stdout/stderr might give you an answer to what exactly is going wrong.<p>Instead use <a href="https://github.com/zimbatm/logmail" rel="nofollow">https://github.com/zimbatm/logmail</a>. It's a `sendmail` replacement that forwards everything to syslog. Then forward all your syslogs to a central place and you can capture and analyze these messages.
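Short of replacing sendmail, the same idea can be applied per job by piping to syslog with `logger` (the tag and schedule here are illustrative):

```
*/10 * * * * /usr/local/bin/sync-thing 2>&1 | logger -t sync-thing -p cron.info
```

Note the pipe eats the job's exit status, which is one reason a sendmail-level shim like logmail is cleaner.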
I use Jenkins instead of cron. I get an rss feed of processes that exited with non-zero, it captures the output but doesn't e-mail it to me. This is totally not what it's designed for, but it is closer to what I want than cron is.
The problem isn't cron; cron is just a dumb execution tool.<p>The problem is that we don't have any way of alerting our monitoring systems from a cron job.<p>This is exactly what I've been implementing: a simple curl API call to our monitoring system when a cron job has run is all that we need. This puts the monitoring of cron into the same sphere as all other monitoring, and puts the alert on a webpage where it can be found eventually by our 2nd line or our on-call personnel, instead of in someone's mailbox.<p>Edit: And you don't need a fancy REST based API for your monitoring system to do this, ye ol' nagios agent could do it with some hacks.<p>The hard part is having the discipline to fix all your cron jobs in this way, but adding || true is already tantamount to this.
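Concretely, that kind of check-in is a one-liner in the crontab (the URL and job id here are made up; point it at whatever your monitoring system exposes):

```
0 3 * * * /usr/local/bin/nightly-job && curl -fsS -m 10 https://monitoring.example.com/checkin/nightly-job >/dev/null
```

With `&&`, the check-in only fires on success, so an alert on "time since last check-in" catches both jobs that failed and jobs that never ran at all.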
These all seem like issues you'd run into with any task scheduler. Error emails, overloading a central resource with many tasks. Most of these aren't particular/limited to cron at all.
Gee, this is a difficult concept: how to get cron to send only important emails and not one every time it runs. You think maybe you should have just used<p><pre><code> > /dev/null
</code></pre>
and not<p><pre><code> > /dev/null 2>&1
</code></pre>
Why is this a full blog post?
Very interesting discussion. I've been doing some related work where I needed to run some tasks in a non-overlapping way, and while flock was an initial option, I later moved to a redis queue (i.e. an rpush & blpop mix) to guarantee a certain (and needed) order of execution. This is mixed with a 'send email in case of error' check, and so far it's doing fine, though I'll definitely look into Jenkins if I ever feel this current approach proves not to be reliable enough.
Great read, and I'll definitely keep this in my toolbox. The whole article is explaining why you end up with something like the line below when you need to use cron:<p><pre><code> 15 * * * * ( flock -w 0 200 && sleep `perl -e 'print int(rand(60))'` && nice /command/to/run && date > /var/run/last_successful_run ) 2>&1 200> /var/run/cron_job_lock | while read line ; do echo `date` "$line" ; done >> /path/to/the/log || true
</code></pre>
Excessive use of cron jobs is a devops (hate the word) smell. You get reliant on their side effects, and migrating to other solutions requires enormous amounts of testing and legacy interfaces. The most obvious downside to cron is the minimum 1-minute interval: on average you are waiting 30s for something that should already be there. Of course it's perfect for things like reporting, which makes sense at fixed intervals. Using it for mail queues and stuff? Bad times.
Something else: the cron service is a one-hit wonder. All it does is schedule. It places responsibility for handling output and setting semaphores for use by other applications on the person who wrote the command cron calls. You can't really blame cron if the command/script doesn't do these things; you just need to look to another type of scheduler/batch facility that provides a richer feature set for workflow, monitoring and reporting.
I've been pretty happy with shush[1], a similar script that helps with a lot of this, including random delays, locking to avoid overlapping runs, e-mailing only on errors (or other criteria as you see fit), and so forth.<p>[1] <a href="http://web.taranis.org/shush/" rel="nofollow">http://web.taranis.org/shush/</a>
<a href="https://github.com/Yipit/cron-sentry" rel="nofollow">https://github.com/Yipit/cron-sentry</a> is also quite nice as a wrapper to capture failing cron jobs and forward them to <a href="https://getsentry.com/" rel="nofollow">https://getsentry.com/</a>
CFEngine also provides a scheduling capability that can be used in conjunction with other factors using boolean expressions. Something like "Run at midnight on Saturday if you are a production linux server." The splaytime parameter can spread out the execution of a command across a cluster based on its name hash.
We use <a href="https://wiki.jenkins-ci.org/display/JENKINS/Monitoring+external+jobs" rel="nofollow">https://wiki.jenkins-ci.org/display/JENKINS/Monitoring+exter...</a> to monitor the cron jobs.
Is there any good open source distributed scheduler that blends both timer based tasks and event based tasks?<p>Chronos is the only one I'm aware of, but I don't believe it supports event based tasks.
Another cron trick is to put chronic/cronic before your command. It silences the command except in error states; cron likes to report any text it sees, which you don't want for non-error states. It also detects errors better than just assuming all errors happen on STDERR.
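Usage is just a prefix in the crontab (job path illustrative; chronic ships in moreutils):

```
0 * * * * chronic /usr/local/bin/hourly-job
```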