
Ask HN: How do you monitor your systemd services?

127 points by wh33zle almost 2 years ago
I am using systemd on my machine and try to configure most things through it. For example, I have a backup job that is triggered by a timer. I want to know when that job fails so I can investigate and fix it. Over time, I've had multiple solutions for this:

- Send a notification via notify-send
- Add `systemctl --failed` to my shell startup script
- Send myself emails

None of these are quite ideal. Notifications are disruptive to the current workflow and ephemeral, meaning I might forget about a failure if I don't deal with it immediately. Similarly, reading `systemctl --failed` in every new terminal is also disruptive, but at least it keeps me from forgetting. Both of these are also not really applicable to server systems. Sending myself emails feels a bit wrong but has so far been the best solution.

How are other people solving this? I did some research and I am surprised that there isn't a more rounded solution. I'd expect that pretty much every Linux user runs into this problem.
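For concreteness, the kind of timer-driven backup job being described might look roughly like this; the unit names, path, and schedule are assumptions for illustration, not the poster's actual setup:

    # /etc/systemd/system/backup.service (hypothetical)
    [Unit]
    Description=Nightly backup

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup

    # /etc/systemd/system/backup.timer (hypothetical)
    [Unit]
    Description=Run backup nightly

    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target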

31 comments

gjulianm almost 2 years ago
Short answer: Prometheus + Grafana + Alertmanager. prometheus_node_exporter has an option to export SystemD service status, you can alert on failed services, and you can use Alertmanager to configure multiple types of alarms, including repeats so you don't forget.

Long answer: Whenever I've started to add alerting and monitoring to a system, I end up wanting to add more things each time, so I find it valuable to start from the beginning with an extensible system. For me, Prometheus has been the best option: easy to configure, lightweight, doesn't even need to run on the host, and can monitor multiple systems. You just have to configure which exporters you want it to pull data from. In this case, prometheus_node_exporter has a massive amount of stats about a system (including SystemD), and there are default alarms and dashboards out there that will help you create basic monitoring in a minute.

You can choose to use Grafana for visualization, and then either the integrated Grafana alerting or Prometheus alerting + Prometheus Alertmanager. I think in the latest versions Grafana Alerting basically includes an embedded Alertmanager, so it should have the same features.

Regarding the type of alert itself, I send myself mails for persistence/reminders, plus Telegram messages for instant notifications. I find it the best option, tbh.
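As a rough illustration of the alerting this enables (assuming node_exporter runs with its non-default --collector.systemd collector enabled; the rule name, duration, and labels are purely illustrative), a Prometheus rule could look like:

    # systemd-failed.rules.yml (illustrative)
    groups:
      - name: systemd
        rules:
          - alert: SystemdUnitFailed
            # node_systemd_unit_state is exported by node_exporter's systemd collector
            expr: node_systemd_unit_state{state="failed"} == 1
            for: 5m
            labels:
              severity: warning
            annotations:
              summary: "systemd unit {{ $labels.name }} failed on {{ $labels.instance }}"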
gregmac almost 2 years ago
I don't monitor services at that level at all, because it means basically nothing. More acutely: the lack of a notification doesn't mean everything is "ok".

I tend to monitor the actual service. If it's a web server, have something checking that a specific URL is working (tip: use something specific, not /). Likewise, any other network service is pretty easy to monitor.

For backups, check the date on the most recent file in the backup target location. If that date is older than "x", something is broken. This can apply to most other types of backend apps too -- everything has *some* kind of output.

It's when these checks fail that you can investigate deeper and start diagnosing systemd or whatever. It's also possible there's a bigger problem -- like DNS got messed up, or the hardware died -- and checking the final outcome will catch all this.

Basically, explicitly checking systemd is a lot of extra work for no real added benefit. If your systemd service is failing often enough that knowing that immediately (at the alert level) is the problem, IMHO you'd be better off spending the time fixing the service definition so it *doesn't* fail.
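A rough shell sketch of both kinds of outcome check; the URL, backup path, and alert command are assumptions:

    #!/bin/sh
    # Check a specific health endpoint rather than /
    curl -fsS --max-time 10 https://example.com/healthz > /dev/null \
      || /path/to/my/failure_alert "web health check failed"

    # Check that the newest file in the backup target is younger than a day
    if [ -z "$(find /srv/backups -type f -mtime -1 -print -quit)" ]; then
        /path/to/my/failure_alert "no backup written in the last 24 hours"
    fi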
tehalex almost 2 years ago
If you are OK with a SaaS, and it's just scheduled jobs you are monitoring, there are a number of monitoring tools where you tell them when a job completes (with an HTTP request), and a missing ping (after a grace period) means that it failed.

I think https://deadmanssnitch.com/ may have been the original service for this.

https://healthchecks.io/ has a fairly generous free tier that I use now.

There are others that do the same thing: Sentry, Uptime Robot, ...
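With healthchecks.io, for instance, the wrapper around the job just pings a per-check URL (the UUID below is a placeholder); a missed ping after the grace period is what raises the alert:

    #!/bin/sh
    # Ping the check on success; signal failure explicitly otherwise.
    if /path/to/my/backup_job; then
        curl -fsS -m 10 --retry 3 https://hc-ping.com/your-uuid-here > /dev/null
    else
        curl -fsS -m 10 --retry 3 https://hc-ping.com/your-uuid-here/fail > /dev/null
    fi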
chasil almost 2 years ago
> "I have a backup job that is triggered by a timer. I want to know when that job fails so I can investigate and fix it."

This is really more in the realm of a shell script.

You could do this verbosely:

    #!/bin/sh
    /path/to/my/backup_job
    if [ $? -ne 0 ]
    then
        /path/to/my/failure_alert
    fi

...or you could do this tersely:

    #!/bin/sh
    /path/to/my/backup_job || /path/to/my/failure_alert

The wrapper script would go into the service unit that your timer triggers. I like dash.
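For completeness, a sketch of how that wrapper would be wired in (unit and script names are assumed): the service activated by the timer simply runs it.

    # backup.service, activated by backup.timer (hypothetical names)
    [Service]
    Type=oneshot
    ExecStart=/path/to/my/backup_wrapper.sh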
PhilipRoman almost 2 years ago
I was building an elaborate job monitoring system, but then I realized that what I *really* need is to monitor the actual end-to-end functionality.

For example, instead of monitoring my Minecraft server process that OpenRC spawns, I have a dedicated monitoring server that actually queries the server for version, number of players, etc. Same for websites, etc. Think of it as periodically running an integration test on a live system.

This way I get much more confidence that the service is doing what it should.

I'm not a big fan of overcomplicated monitoring systems - I simply have a script that builds an HTML status page with enough information to know when something goes wrong.
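A bare-bones sketch of that approach (the endpoints and output path are assumptions): a periodically run script that probes each service end to end and rewrites a static status page.

    #!/bin/sh
    # Probe each endpoint and render a one-line status per service.
    out=/var/www/html/status.html
    {
        echo "<html><body><h1>Status $(date -u)</h1><ul>"
        for url in https://example.com/healthz https://blog.example.com/; do
            if curl -fsS --max-time 10 "$url" > /dev/null; then
                echo "<li>$url: OK</li>"
            else
                echo "<li>$url: <b>FAILED</b></li>"
            fi
        done
        echo "</ul></body></html>"
    } > "$out"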
arjvik almost 2 years ago
I love https://ntfy.sh/ for my services running on headless servers - it lets me ping my phone with messages of varying urgency, and even duplicate the notifications to email for particularly information-dense messages.
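Publishing to ntfy is just an HTTP request, so a failure hook can be a one-liner; in this sketch the topic name is a placeholder and the failed unit name is assumed to be passed as the first argument:

    #!/bin/sh
    # $1 is assumed to be the failed unit name (e.g. passed from a failure hook).
    curl -fsS \
      -H "Title: systemd unit failed" \
      -H "Priority: urgent" \
      -d "$1 failed on $(hostname)" \
      https://ntfy.sh/my-alert-topic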
2bluesc almost 2 years ago
I use the `OnFailure` property to trigger a service that emails me about failed services, like backups that run as systemd timers + services.

I also use `failure-monitor`, which is a Python service that monitors `journald`.

Files on GitHub for those interested:

https://github.com/kylemanna/systemd-utils
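The wiring for that pattern is a single directive in the unit being watched (the alert unit name here is illustrative), and it can be added to units you don't own via a drop-in (systemctl edit backup.service):

    # In backup.service, or a drop-in override
    [Unit]
    OnFailure=email-alert@%n.service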
mcpherrinm almost 2 years ago
I run the Prometheus node_exporter on my servers. That has a systemd collector for the state of services.

That reports the state of all systemd services to a central Prometheus and Alertmanager cluster, which has various alert rules.
kelnos almost 2 years ago
> *Notifications are [...] ephemeral, meaning I might forget about it if I don't deal with it immediately.*

If you do like the notification method aside from this issue, try passing "--urgency=critical" or "--expire-time=0" to notify-send. Either (or both) of those should make the notifications stay popped up, assuming your notification daemon is doing something reasonable with those hints.

(Disclosure: I'm the author of xfce4-notifyd, which does behave in this way; other daemons may do other things.)
bravetraveler almost 2 years ago
This thread is one of those cases where you read something and realize you've been completely missing something. I *don't* monitor these as much as I should.

Servers/services? Definitely - take your pick. Timers/jobs, particularly those on my system? Nothing!

With the right directives laid out *('Wants/Requires/Before/After')*, they can be pretty robust/easily forgotten.

I've been lucky in this regard; I check *'systemctl list-timers'* just to be sure - but they always run.
mxuribe almost 2 years ago
@wh33zle For work, well, I have to follow already-established convention (some of which others have noted). But for personal machines, I have not rolled out too many comprehensive monitoring solutions or platforms. Rather, I focus on monitoring specific jobs/tasks, and as such leverage cron to run the job and use basic, old-school bash scripts to assess success or failure. I'm starting to look into leveraging systemd more, as you noted.

Now, specific to alerting, well, I have rolled out my own solution... Caution: self-promotion coming next...

I stopped relying on email being sent from servers since I've had too many annoyances and constraints in my history. Also, nowadays email is a medium that is slow for me... that is, I treat it as non-time-sensitive messaging (for the majority of the time). So, for system alert-style messaging, I use my own little Python script that sends messages into a dedicated Matrix room. Since I'm always on Matrix, it's a place where I can quickly see a new system alert message (Matrix clients like Element allow you to adjust visibility - I think they call it noise level - of which messages are given higher or lower priority in the client view, etc.). And those messages tend to be ephemeral, since they're just alerts, and they do not pollute my email inbox. There are plenty of options in this space, of course. Mine is not the only one, but I also wanted to learn how to make apps for the Matrix ecosystem, etc. Here's a link to my little notification app/script that leverages the Matrix chat ecosystem: https://github.com/mxuribe/howler
veyh almost 2 years ago
Uptime-Kuma [1] with ntfy [2]. Most of my services expose HTTP, so I just have Uptime-Kuma monitor that. But if you have something that is not exposed to the public, you can still use a "push" type monitor and, in a cron job on your server(s), send a heartbeat to it when everything is working.

[1] https://github.com/louislam/uptime-kuma

[2] https://ntfy.sh/
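For the push-type monitor, the heartbeat is just a periodic request to the push URL Uptime-Kuma generates for the monitor; the host and token below are placeholders:

    # crontab entry on the monitored server (illustrative)
    */5 * * * * curl -fsS -m 10 "https://uptime.example.com/api/push/yourPushToken?status=up&msg=OK" > /dev/null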
Phelinofist almost 2 years ago
I use Nagios: easy, lean, and it gets the job done.
mike_hearn almost 2 years ago
Sadly there are lots of basic must-have tasks that Linux distros simply do not support out of the box. It's not so much an OS as a kit for making operating systems. Backup is another.

Here's how I set up email monitoring of systemd services, for anyone who wants it:

https://gist.github.com/mikehearn/f1db694f24eaa05c753e5a759878772a

It consists of three parts. Firstly, a shell script that will email the unit status, colorized, to your preferred email address. Secondly, a service file that tells systemd how to call it. And finally, an OnFailure line in each service that you want to monitor. You can use systemd's support for overlays to add this to existing services you didn't write yourself.

You also have to make sure that your server can actually send mail to you. Installing default-mta will get you an SMTP relay that's secure out of the box, but your email service will consider it spam. If you use Gmail it's typically sufficient to just create a filter that ensures emails from your server are never marked as spam.
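Not the linked gist itself, but a minimal sketch of the same shape (addresses, unit, and script names are placeholders): a templated alert service plus a script that mails the unit's status, referenced from each watched unit via OnFailure=email-alert@%n.service.

    # /etc/systemd/system/email-alert@.service
    [Unit]
    Description=Email alert for %i

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/email-alert.sh %i

    # /usr/local/bin/email-alert.sh
    #!/bin/sh
    # Mail the failed unit's status to the admin address.
    unit="$1"
    systemctl status --full --no-pager "$unit" \
      | mail -s "[$(hostname)] $unit failed" admin@example.com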
nurettin almost 2 years ago
I have a carefully designed alert service for every project, checking various aspects of the system. It periodically checks heartbeats from various systems to make sure everything is in order. It sends alerts to UI via websocket, and to slack channels and makes calls to twilio numbers if things do not self-recover in time. I only check if the alert system is running via cron.
eternityforest almost 2 years ago
For monitoring and alerts I look to how industrial SCADA does it.

Unfortunately I have no code to share, because... I'm a dev rather than a sysadmin, I do backups and such at home with the GUI, and I don't work on anything microservicey, so I've only done monitoring of features within one monolithic application.

My preferred way to monitor a backup task would just be to use a backup tool that has its own monitoring built in, or integrations with a popular monitoring solution. I've done DIY backup scripts; it always seems so simple that you might as well just write a few lines... but it's also such a common use case that there are lots of really nice options.

I've done the systemctl --failed thing on every new terminal, and probably should go back to doing so, but it doesn't do much if you're not logging in regularly. Although it does help when you're logging in to see what went wrong.

The general idea, when I have actually implemented monitoring, is that you have state-machine alerts. They go from normal, to tripped, to active.

If you acknowledge it, it becomes acknowledged; if the bad condition goes away, it becomes cleared, and returns to normal when acknowledged (or instantly, if auto-ack is selected).

Every alert has a trip condition, which can be any function on one or more "tag points" (think observable variables with lots of extra features).

A tripped alert only becomes active if it remains tripped for N seconds, to filter out irrelevant things caused by normal dropped packets and such, while still logging them.

While an alert is active, it shows in the list on the server's admin page, and can periodically make a noise or do some reminder. Eventually I'd like to find some kind of MQTT dashboard solution that shows everything in one place and sends messages to an app, but I haven't needed anything like that yet.

Under the hood the model is fairly complex, but you don't have to think about it much to use it.
INTPenis almost 2 years ago
Only send alerts from the end-user perspective. In your case the end user would most likely go into the backups and list them. So I would have a job that lists the backups every day and alerts if something is missing.

See, the difference here is that you don't monitor the systemd backup job; you monitor the backup backend instead. Because systemd can be configured to retry a job, the end result is in the backend.

And in other cases I do have monitoring for individual services, but I only send alerts if the end user experiences an issue. So a web server process/systemd unit is being monitored, but the alert is on a different monitor that checks whether the website returns 200, or whether it contains a keyword indicating it works.
speedyapoc almost 2 years ago
I push telemetry to Amazon CloudWatch (my infrastructure is on AWS) and then set up alarms accordingly. If I'm concerned about a service failing or becoming unresponsive, it's easy to create an alarm based on the existence or non-existence of data.
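For example, the job can push a success datapoint, and a CloudWatch alarm configured to treat missing data as breaching fires when the datapoints stop arriving; the namespace and metric name here are illustrative:

    #!/bin/sh
    # Report a success datapoint after the backup completes.
    /path/to/my/backup_job && aws cloudwatch put-metric-data \
      --namespace "Custom/Backups" \
      --metric-name BackupSucceeded \
      --value 1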
kiririn almost 2 years ago
`0 * * * * journalctl --since="61 minutes ago" --priority=warning --quiet`

In crontab, piped to a bunch of grep -v for the things I want to ignore.

So basically the email approach; I just have to be religious about marking messages unread if not immediately actioned.
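Spelled out with the mail delivery and ignore filters it relies on (the address and patterns are placeholders), the crontab might look like:

    # cron mails any output of the job to MAILTO; the grep -v filters drop known noise
    MAILTO=me@example.com
    0 * * * * journalctl --since="61 minutes ago" --priority=warning --quiet | grep -v -e 'known-noisy-unit' -e 'another-ignorable-pattern'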
jiehong almost 2 years ago
I'd go with sending myself a push notification on the phone through a dedicated service file, and then call it with this in my unit file:

    OnFailure=send-push-notification.service

Perhaps via a WhatsApp notification or any other instant message [0], or any other service such as Matrix, as said in another comment.

[0] https://developers.facebook.com/docs/whatsapp/cloud-api/get-started#sent-test-message
SoftTalker almost 2 years ago
Run the job via cron; if it fails you'll get an email sent according to your system's alias file. You can also grep the logs for failure if you think you didn't get the email.
m3047 almost 2 years ago
In general this evolves into a SIEM-like solution in IT, or gets added to the tag menagerie in OT.

If you're focused on "notifications are bad", note that notifications are push, and pull solutions are possible. Tail logs (or journalctl) and post significant events to Redis, for example (https://github.com/m3047/rkvdns_examples/tree/main/totalizers).
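A bare-bones sketch of that pull-style idea (the Redis host and key are assumptions): follow warning-and-above journal entries and queue them in Redis for something else to poll.

    #!/bin/sh
    # Follow warning-and-above journal entries and queue them in a Redis list.
    journalctl --follow --priority=warning --output=cat \
      | while IFS= read -r line; do
            redis-cli -h redis.example.com RPUSH journal:warnings "$line" > /dev/null
        done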
OJFord almost 2 years ago
Everyone seems to be talking about production services & headless servers, but *my* impression is that you meant on the desktop?

I wrote a little script that puts a failed service count in waybar, and throws up a dismissable swaynag message with buttons to 'toggle details', and reset or restart the failed system/user units.

It's a bit noisy at the moment - but I think that's probably just a helpful indication of units I need to sort out/make a bit more robust anyway.
dig1 almost 2 years ago
This combo does the job for me: Grafana + Riemann + InfluxDB, with collectd as the main agent. collectd bundles many plugins, so you can watch logs, monitor running processes, or add something custom [1]. This setup is very light to start with and can scale well (up until you hit InfluxDB limits :D).

[1] https://github.com/mbachry/collectd-systemd
javajosh almost 2 years ago
I think this is a great question. Consider https://blog.wesleyac.com/posts/how-i-run-my-servers. His unit files do not mention monitoring.
HankB99 almost 2 years ago
I've been using Checkmk (Raw edition, i.e. free) to monitor stuff in my home lab (mostly for other things). It has notified me of some failed systemd services.
dsr_ almost 2 years ago
We don't use systemd, so we haven't had issues with it.

Things get deployed by the automatic deployment system. If they go in cron, they are supervised by a program called errorwatch which does all the things that you want in a one-shot supervisor: logging, error codes, time bounds, checking for right output, checking for wrong output. If they are daemonic, they get /etc/init.d/ start/stop scripts that have been tested.

If they have a habit of dying *and* we can't afford that *and* we can't fix it, we run them from daemontools instead of init.d.
cyfex almost 2 years ago
> Sending myself emails feels a bit wrong but has so far been the best solution.

Why does email feel wrong? I find it a pretty viable solution.
kazinator almost 2 years ago
I have an /etc/inittab entry with:

    stmd:2345:respawn:/bin/systemd
renewiltord almost 2 years ago
We just fire off a Slack message. It does the trick.
johnea almost 2 years ago
I generally just call up goggle and ask them how my system is doing...