
Ask HN: What do you monitor on your servers?

346 points · by gorkemcetin · 9 months ago
We've been developing the BlueWave Uptime Manager [1] for the past 5 months with a team of 7 developers and 3 external contributors, and until now we have mostly flown under the radar.

As we move from basic uptime tracking toward a comprehensive monitoring solution, we're interested in getting insights from the community.

For those of you managing server infrastructure:

- What are the key assets you monitor beyond the basics like CPU, RAM, and disk usage?

- Do you also keep tabs on network performance, processes, services, or other metrics?

Additionally, we're debating whether to build a custom monitoring agent or leverage existing solutions like OpenTelemetry or Fluentd.

- What's your take: would you trust a simple, bespoke agent, or would you feel more secure with a well-established solution?

- Lastly, what's your preference for data collection: do you prefer an agent that pulls data or one that pushes it to the monitoring system?

[1] https://github.com/bluewave-labs/bluewave-uptime

79 comments

kevg123 · 9 months ago

> What are the key assets you monitor beyond the basics like CPU, RAM, and disk usage?

* Network is another basic that should be there

* Average disk service time

* Memory is tricky (even MemAvailable can miss important anonymous memory pageouts with a mistuned vm.swappiness), so also monitor swap page-out rates

* TCP retransmits as a warning sign of network/hardware issues

* UDP & TCP connection counts by state (for TCP: established, time_wait, etc.), broken down by incoming and outgoing

* Per-CPU utilization

* Rates of operating-system warnings and errors in the kernel log

* Application average/max response time

* Application throughput (both total and broken down by error rate, e.g. HTTP response code >= 400)

* Application thread-pool utilization

* Rates of application warnings and errors in the application log

* Application up/down with heartbeat

* Per-application & per-thread CPU utilization

* Periodic on-CPU sampling for a bit of time, then a flame graph of that

* DNS lookup response times/errors

> Do you also keep tabs on network performance, processes, services, or other metrics?

Per-process and over time, yes, which is useful for post-mortem analysis.

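Several of the items above are cheap to collect straight from /proc. As one illustration (not from the comment itself), here is a minimal sketch in Python, assuming Linux, of the "TCP connection counts by state" item, parsed from /proc/net/tcp:

    #!/usr/bin/env python3
    """Count TCP connections by state by parsing /proc/net/tcp (Linux)."""
    from collections import Counter

    # TCP state codes, per the kernel's include/net/tcp_states.h
    TCP_STATES = {
        "01": "ESTABLISHED", "02": "SYN_SENT", "03": "SYN_RECV",
        "04": "FIN_WAIT1", "05": "FIN_WAIT2", "06": "TIME_WAIT",
        "07": "CLOSE", "08": "CLOSE_WAIT", "09": "LAST_ACK",
        "0A": "LISTEN", "0B": "CLOSING",
    }

    def tcp_state_counts(path="/proc/net/tcp"):
        counts = Counter()
        with open(path) as fh:
            next(fh)  # skip the header line
            for line in fh:
                fields = line.split()
                # field 3 is the hex state code ("st" column)
                counts[TCP_STATES.get(fields[3], "UNKNOWN")] += 1
        return counts

    if __name__ == "__main__":
        for state, n in sorted(tcp_state_counts().items()):
            print(f"{state}: {n}")

Running the same loop over /proc/net/tcp6 covers IPv6 sockets.
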
aflukasz · 9 months ago

When it comes to "what" to monitor, many of the usual suspects have already been posted in this thread, so rather than repeat them I will mention just the following (somewhat assuming Linux/systemd):

- systemd unit failures: I install a global OnFailure hook that applies to all units and triggers an alert via the mechanism of choice for a given system (see the sketch below),

- restarts of key services: you typically don't want to miss those, but if they are silent, you quite likely will,

- netfilter reconfigurations: the nftables CLI has a useful `monitor` subcommand for this,

- unexpected ingress or egress connection attempts,

- connections from unknown/unexpected networks (if you can't outright block them for any reason).

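The global OnFailure= drop-in described in the first item is event-driven; as a rough illustration of the same check done by polling instead (a sketch only, assuming systemctl on PATH and Python 3), you can scrape `systemctl --failed` and alert on anything it reports:

    #!/usr/bin/env python3
    """Alert on failed systemd units; a polling alternative to a global OnFailure= hook."""
    import subprocess

    def failed_units():
        # --plain/--no-legend yield one "UNIT LOAD ACTIVE SUB DESC" row per unit
        out = subprocess.run(
            ["systemctl", "--failed", "--plain", "--no-legend", "--full"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line.split()[0] for line in out.splitlines() if line.strip()]

    if __name__ == "__main__":
        units = failed_units()
        if units:
            # replace print with your alerting mechanism of choice (mail, webhook, ...)
            print("ALERT: failed units:", ", ".join(units))
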
uaas · 9 months ago

You cannot go wrong with the most popular choice: the Prometheus/Grafana stack. That includes node_exporter for anything host-related, and optionally Loki (and one of its agents) for logs. All of this can run anywhere, not just on k8s.

dfox · 9 months ago

My two cents: monitoring RAM usage is completely useless, as whatever number you consider "used/free RAM" is meaningless (and the ideal state is that all of the RAM is somehow "used" anyway). You should monitor for page faults and cache misses in block-device reads instead.

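A minimal sketch of what monitoring faults rather than "used RAM" could look like (assumes Linux and Python 3): sample the pgmajfault and pswpout counters from /proc/vmstat and alert on their rates rather than on a free-memory number:

    #!/usr/bin/env python3
    """Sample major-page-fault and swap-out rates from /proc/vmstat (Linux)."""
    import time

    def vmstat_counters(keys=("pgmajfault", "pswpout")):
        counters = {}
        with open("/proc/vmstat") as fh:
            for line in fh:
                name, value = line.split()
                if name in keys:
                    counters[name] = int(value)
        return counters

    if __name__ == "__main__":
        interval = 10  # seconds between samples
        before = vmstat_counters()
        time.sleep(interval)
        after = vmstat_counters()
        for name in before:
            rate = (after[name] - before[name]) / interval
            print(f"{name}: {rate:.1f}/s")
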
cmg · 9 months ago

With Icinga, for webservers:

- apt status (security/critical updates that haven't been applied yet)

- reboot needed (presence of /var/run/reboot-required)

- fail2ban jail status (how many are in each of our defined jails)

- CPU usage

- MySQL active and long-running processes, number of queries

- iostat numbers

- disk space

- SSL cert expiration date (see the sketch below)

- domain expiration date

- reachability (ping, domain resolution, specific string in an HTTP response)

- application-specific checks (WordPress, Drupal, CRM, etc.)

- postfix queue size

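As an illustration of one check from this list, here is a minimal Nagios/Icinga-plugin-style sketch for the SSL cert expiration item (Python 3 stdlib only; the warn/crit thresholds are arbitrary). It follows the plugin convention of exiting 0/1/2 for OK/WARNING/CRITICAL:

    #!/usr/bin/env python3
    """Nagios/Icinga-style check: days until a TLS certificate expires."""
    import datetime, socket, ssl, sys

    def cert_days_left(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # notAfter looks like "Jun  1 12:00:00 2025 GMT"
        not_after = datetime.datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return (not_after - datetime.datetime.utcnow()).days

    if __name__ == "__main__":
        host, warn, crit = sys.argv[1], 30, 7
        days = cert_days_left(host)
        if days <= crit:
            print(f"CRITICAL - {host} cert expires in {days} days"); sys.exit(2)
        if days <= warn:
            print(f"WARNING - {host} cert expires in {days} days"); sys.exit(1)
        print(f"OK - {host} cert expires in {days} days"); sys.exit(0)
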
mmarian · 9 months ago

I use netdata, works like a charm: https://github.com/netdata/netdata

zie · 9 months ago

I monitor periods between naps. The longer the naps, the happier I am :)

Seriously though, the server itself is not the part that matters; what matters is the application(s) running on it. So it depends heavily on what the application(s) care about.

If I'm doing CPU-heavy calculations on one server and streaming HTTPS off a different server, I'm going to care about different things. Sure, there are some common denominators, but for streaming static content I barely care about CPU, while I care a lot about IO.

I'm mostly agnostic on push vs. pull; they both have their weaknesses. Ideally I would get to decide given my particular use case.

The lazy metrics, like you mentioned, are not that useful. As another commenter mentioned, "free" RAM is a mostly pointless number, since these days most OSes wisely use it for caching. But information on OS-level caching can be very useful, depending on the workloads I'm running on the system.

As for agents, what I care about is how stable, reliable, and resource-intensive the agent is. I want it to take zero resources and be rock solid and reliable. Many agents fail spectacularly at all three of those things. CrowdStrike is the most recent example of failure in agent-based monitoring.

The point of monitoring systems, to me, is two-fold:

    * Trying to spot problems before they become problems (i.e. we have X days before the disk is full given current usage patterns).
    * Trying to track down a problem as it is happening (i.e. app Y is suddenly slow in scenario X; why?).

Focus on the point of monitoring and keep your agent as simple, solid, and idiot-proof as possible. CrowdStrike's recent failure mode was completely preventable had the agent been written differently. Architect your agent as much as possible to never be another CrowdStrike.

Yes, I know CrowdStrike hit user machines, not servers, but server agent failures happen all the time too, in roughly the same ways; they just don't make the news quite as often.

mgbmtl · 9 months ago

I like Icinga's model, which can run a small agent on the server, but it doesn't run as root. I grant specific sudo rules for checks that need elevated permissions.

I find it easier to write custom checks for things where I don't control the application. My custom checks often make API calls to the applications they monitor (using curl locally against their own API).

There are also lots of existing scripts I can re-use, either from the Icinga or the Nagios community, so that I don't have to write my own.

For example, I recently added systemd monitoring. There is a package for the check (monitoring-plugins-systemd), so I used Ansible to install it everywhere and then "apply" a conf to all my Debian servers. It helped me find a bunch of failing services and timers that had previously gone unnoticed, including backups: my backup monitoring said everything was OK, but the systemd service for borgmatic was running a "check" and found some corruption.

For logs I use promtail/loki. Also very much worth the investment: useful for detecting elevated error rates, and for finding slow HTTP queries (again, I don't fully control the code of the applications I manage).

LeoPanthera · 9 months ago

Perhaps I can hijack this post to ask some advice on *how* to monitor servers.

I don't do this professionally. I have a small homelab that is mostly one router running opnsense, one fileserver running TrueNAS, and one container host running Proxmox.

Proxmox does have about 10-15 containers though, almost all Debian, and I feel like I should be doing more to keep an eye on both them and the physical servers themselves. Any suggestions?

mekster · 9 months ago

Make sure whatever information you provide is actionable.

For example, a CPU metric alone is just for alerting. If it exceeds a threshold, make sure it gives insight into which process/container was using how much CPU at that moment. Bonus points if you can link logs from that process/container at that time.

For disks, tell me which directory is large, and what kinds of file types are using the space.

Pretty graphs that don't tell you where to look next are worth nothing.

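A sketch of that idea for CPU (using the third-party psutil package; the threshold is arbitrary): when total CPU crosses the line, capture the top per-process consumers so the alert already names a suspect:

    #!/usr/bin/env python3
    """When total CPU crosses a threshold, snapshot the top consumers so the alert is actionable."""
    import time
    import psutil  # third-party: pip install psutil

    THRESHOLD = 90.0  # percent, arbitrary

    def top_processes(n=5):
        procs = list(psutil.process_iter(["pid", "name"]))
        for p in procs:
            try:
                p.cpu_percent(None)  # prime the per-process counters
            except psutil.Error:
                pass
        time.sleep(1.0)  # measurement window
        usage = []
        for p in procs:
            try:
                usage.append((p.cpu_percent(None), p.pid, p.info["name"]))
            except psutil.Error:
                pass  # process may have exited mid-scan
        return sorted(usage, reverse=True)[:n]

    if __name__ == "__main__":
        total = psutil.cpu_percent(interval=1.0)
        if total >= THRESHOLD:
            print(f"ALERT: CPU at {total:.0f}%; top consumers:")
            for pct, pid, name in top_processes():
                print(f"  pid={pid} {name}: {pct:.1f}%")
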
1oooqooq · 9 months ago

None of the things you list are for logs; metrics are a different use case. Do not use OpenTelemetry or you will suffer (and everyone who has suffered will try to bring you into their hell).

Look for guides written before 2010. Seriously, it's that bad. Then, after you have everything in one syslog somewhere, dump it into a fancy dashboard like o2.

aleda145 · 9 months ago

For my home server I just have a small Python script dumping metrics (CPU, RAM, disk, temperature, and network speed) into a database (TimescaleDB).

Then I visualize it with Grafana. It's actually live here if you want to check it out: https://grafana.dahl.dev

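This is not the commenter's script, but a minimal sketch of the same pattern (assumes the third-party psutil and psycopg2 packages, a reachable PostgreSQL/TimescaleDB instance, and a hypothetical `metrics` database):

    #!/usr/bin/env python3
    """Dump basic host metrics into TimescaleDB (PostgreSQL) once per interval."""
    import time
    import psutil    # third-party: pip install psutil
    import psycopg2  # third-party: pip install psycopg2-binary

    DSN = "dbname=metrics user=metrics host=localhost"  # adjust for your setup

    def sample():
        return (
            psutil.cpu_percent(interval=1.0),
            psutil.virtual_memory().percent,
            psutil.disk_usage("/").percent,
        )

    if __name__ == "__main__":
        conn = psycopg2.connect(DSN)
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.execute("""CREATE TABLE IF NOT EXISTS host_metrics (
                               ts TIMESTAMPTZ DEFAULT now(),
                               cpu_pct REAL, mem_pct REAL, disk_pct REAL)""")
            # on TimescaleDB you would also run:
            # SELECT create_hypertable('host_metrics', 'ts', if_not_exists => TRUE);
        while True:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO host_metrics (cpu_pct, mem_pct, disk_pct) VALUES (%s, %s, %s)",
                    sample())
            time.sleep(60)
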
valyala · 9 months ago

node_exporter (https://github.com/prometheus/node_exporter) and process_exporter (https://github.com/ncabatoff/process-exporter) expose most of the useful metrics needed for monitoring server infrastructure together with the running processes. I'd also recommend taking a look at the Coroot agent, which uses eBPF to export the essential host and process metrics: https://github.com/coroot/coroot-node-agent

As for the agent, from an operations perspective it is better to run a single observability agent per host. This agent should be small in size and lightweight on CPU and RAM usage, should have no external dependencies, and should have close to zero configs that need tuning; e.g. it should automatically discover all the apps and metrics that need to be monitored and send them to the centralized observability database.

If you don't want to write the agent yourself, take a look at vmagent (https://docs.victoriametrics.com/vmagent/), which scrapes metrics from the exporters mentioned above. vmagent satisfies most of the requirements stated above except for configuration: you need to provide configs for scraping metrics from the separately installed exporters.

oriettaxx · 9 months ago

Don't forget the "CPU steal" state, and AWS CPU burst credits.

In general I would also suggest monitoring server costs (AWS EC2 costs, e.g.).

For example, you should be aware that T3 EC2 instances will simply cost *double* if your CPU is actually used, since the "unlimited" credit flag is ON by default. I personally hate the whole AWS "CPU credit" model... it is an instrument totally in their (AWS) hands to just make more money.

holowoodman · 9 months ago

* Services that should be running (enabled/autostart) but aren't. This is easier and more comprehensive than things like "monitor httpd on webservers", because all necessary services should be on autostart anyway, and everything that autostarts should either work or be disabled.

* In our setup, container status is included in this thanks to quadlets. If using e.g. Docker, however, separate container monitoring is necessary, but complex.

* apt/yum/fwupd/... pending updates

* Mail queue length and root's mailbox size: an indicator of things going wrong silently

* Pending reboot after a kernel update

* Certain kinds of log entries (block-device read errors, OOM kills, core dumps)

* Network checksum errors, dropped packets, martians

* Presence or non-presence of USB devices: desktops should have a keyboard and mouse, servers usually shouldn't, and USB storage is sometimes forbidden.

arcbyte · 9 months ago

Whatever directly and materially affects cost, and that's it.

For some of my services on DigitalOcean, for instance, I monitor RAM because using a smaller instance can dramatically save money.

But for the most part I don't monitor anything: if it doesn't make me money, why do I care?

waynenilsen · 9 months ago

Available file descriptors

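A short sketch of checking system-wide descriptor headroom on Linux (Python 3, reads /proc/sys/fs/file-nr); per-process limits would be checked against RLIMIT_NOFILE instead:

    #!/usr/bin/env python3
    """Check system-wide file-descriptor headroom via /proc/sys/fs/file-nr (Linux)."""

    def fd_usage():
        # file-nr: <allocated> <allocated-but-unused (0 on modern kernels)> <max>
        with open("/proc/sys/fs/file-nr") as fh:
            allocated, _unused, maximum = (int(x) for x in fh.read().split())
        return allocated, maximum

    if __name__ == "__main__":
        used, limit = fd_usage()
        print(f"file descriptors: {used}/{limit} ({100 * used / limit:.1f}%)")
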
usernamed7 · 9 months ago

Sounds like you're reinventing Nagios, which has addressed all of the above well. If nothing else, there are lots of good solutions in that ecosystem, including for push/pull.

tiffanyh · 9 months ago

Q1: Can someone ELI5 when you'd use:

- Nagios

- VictoriaMetrics

- Monit

- Datadog

- Prometheus/Grafana

- etc...

Q2: Also, is there something akin to "SQLite" for monitoring servers? Meaning a simple, tested, reliable tool to use.

Q3: If you ran a small SaaS business, which simple tool would you use to monitor your servers' and services' health?

Izkata · 9 months ago

> For those of you managing server infrastructure:

As a developer who has often had to look into problems and performance issues, rather than an infrastructure person, this is basically the bare minimum of what I want to see:

* CPU usage

* RAM breakdown, at least used/disk cache/free

* Disk fullness (preferably in absolute numbers; percentages get screwy when the total size changes)

* Disk reads/writes

* Network reads/writes

And this is high on the list but not required:

* Number of open TCP connections, possibly broken down by state

* Used/free inodes (for relevant filesystems); we have actually used them up before (thanks npm)

jiggawatts · 9 months ago

A lot of people here are suggesting metrics that are easy to collect but nearly useless for troubleshooting a problem, or even detecting it.

CPU and memory are the easiest and most obvious to collect but the most irrelevant.

If nobody has looked at any metrics on the server fleet before, then basic metrics have some utility: you can find the under- or over-provisioned servers and fix those issues... once. And then that well will very quickly run dry. Unfortunately, everyone will have seen this method "be a success" and will then insist on setting up dashboards or whatever. This might find one issue annually, if that, at great expense.

In practice, modern distributed tracing or application performance monitoring (APM) tools are vastly more useful for day-to-day troubleshooting. These things can find infrequent crashes and expired credentials, correlate issues with software versions or users, and on and on.

I use Azure Application Insights because of the native integration with Azure, but New Relic and Datadog are also fine options.

Some sysadmins might respond to suggestions like this with "other people manage the apps!", not realising that therein lies their failure. Apps and their infrastructure should be designed and operated as a unified system. Auto-scale on metrics relevant to the app, monitor health relevant to the app, collect logs relevant to the app, etc.

Otherwise, when a customer calls about their failed purchase order, the only thing you can respond with is: "From where I sit everything is fine! The CPUs are nice and cool."

xorcist · 9 months ago

Active monitoring is a different animal from passive metrics collection, which is in turn different from log transport.

The Nagios ecosystem was fragmented for the longest time, but now most users seem to have drifted towards Icinga, so that is what I use for monitoring. There is some basic integration with Grafana, so that is what I use for metrics panels. There is good reason not to spend your innovation budget on monitoring; instead use simple software that will continue to be around for a long time.

As for what to monitor, that is application-specific and should go into the application manifest or configuration management. But generally there should be some sort of active operation that touches the common data path, such as a login, creation of a dummy object (for example an empty order), validation of said object, and destruction/clean-up; a sketch of such a check follows below.

Outside the application there should be checks for whatever the application relies on: working DNS, NTP drift, Ansible health, certificate validity, applicable APT/RPM packages, database vacuums, log transport health, and the exit status or last file date of scheduled or background jobs.

Metrics should be collected for total connections, their return status, all types of I/O latency and throughput, and system resources such as CPU, memory, and disk space.

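As mentioned above, here is a sketch of such an end-to-end synthetic check (Python 3 with the third-party requests package; every URL, payload, and endpoint name is a hypothetical placeholder, not a real API):

    #!/usr/bin/env python3
    """Synthetic end-to-end check: log in, create a dummy object, validate it, clean up.
    All URLs and payloads here are placeholders for your own application's API."""
    import sys
    import requests  # third-party: pip install requests

    BASE = "https://app.example.com/api"  # placeholder

    def check():
        s = requests.Session()
        r = s.post(f"{BASE}/login", json={"user": "synthetic-monitor", "password": "..."}, timeout=10)
        r.raise_for_status()
        r = s.post(f"{BASE}/orders", json={"items": []}, timeout=10)  # create an empty dummy order
        r.raise_for_status()
        order_id = r.json()["id"]
        r = s.get(f"{BASE}/orders/{order_id}", timeout=10)            # validate it round-trips
        r.raise_for_status()
        s.delete(f"{BASE}/orders/{order_id}", timeout=10)             # clean up

    if __name__ == "__main__":
        try:
            check()
            print("OK - data path healthy"); sys.exit(0)
        except Exception as exc:
            print(f"CRITICAL - data path check failed: {exc}"); sys.exit(2)
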
29athrowaway · 9 months ago

Follow Brendan Gregg's USE method: https://www.brendangregg.com/usemethod.html

kkfx · 9 months ago

Essentially, at a very generic level (from SOHO up to not-that-critical services at SME level):

- Automated alerts on unusual load. I don't care about CPU/RAM/disk usage as long as there are no specific spikes, so the monitor just sends alerts (emails) on significant or protracted spikes, tuned after a bit of experience. There is no need to collect such data over long periods: you size your infra for the expected load, deploy, and see whether you got it right; after that you just need to filter out the usual readings and keep alerts only for anomalies.

- Log alerts for errors, warnings, access logs, etc. Same principle: you deploy and collect for a bit, then you have "the normal logs" and create alerts for unusual things. Retention depends on the log types and services you run; some retention may be constrained by law.

Performance metrics are a totally different thing that should be decided more by dev than by operations, and much of their design depends on the kind of development and services you have. They are much more complex because the monitor itself affects the performance of the system far more than generic alerting and the casual ping or similar service-availability check. Push and pull are mixed: for alerts, push is the obvious go-to; for availability checks, pull is much more sound, and so on. There is no single choice.

Personally I tend to go slowly with fine-grained monitoring at the start. It's important, of course, but it should not become an analysis-paralysis trap, nor waste too many human and IT resources collecting potential garbage in not-so-marginal batches.

elashri · 9 months ago

I think this is a chance for me to go somewhat off-topic and ask how people handle combining the monitoring of different logs in one place. There are many solutions, but most of them are geared toward the enterprise. What do people use as the poor man's approach (for personal usage like selfhosting/homelab) that doesn't require you to be VC-funded or take a lot of time to actually implement?

tgtweak · 9 months ago

Using Datadog these days, New Relic previously: basically every metric you can collect.

Disk I/O and network I/O are particularly important, but most of the information you truly care about lies in application traces and application logs. Database metrics are a close second, particularly cache/index usage, disk activity, and query profiling. Network too, if your application is bandwidth-heavy.

Jedd · 9 months ago

Server infrastructure is mostly a solved problem: hardware (SNMP/IPMI etc.) and the OS layer.

I think it would be *very hard* at this point to come up with compelling alternatives to the incumbents in this space.

I'd certainly not want a non-free, non-battle-tested, potentially incompatible, lock-in agent that wouldn't align with the agents I currently utilise (all free, in the good sense).

Push vs. pull is an age-old conundrum. At dayjob we pull: Prometheus scraping Telegraf for OS metrics.

Traces, front-end, RUM, SaaS metrics, logs, etc., are obviously more complex, though.

Whether to pull or push often comes down to how static your fleet is, but mostly to whether you've got a decent CMDB you can rely on to tell you the state of all your endpoints: registering and decommissioning endpoints, as well as coping with scheduled outages.

dangus · 9 months ago

I don't know if it's just me, but I would never buy a monitoring solution from a company that has to ask a web forum this kind of question.

If you're building a product from scratch, you must have some kind of vision, based on deficiencies in existing solutions, that motivates you to build a new product, right?

nrr · 9 months ago

One thing to be aware of is that up/down alerting bakes downtime into the incident detection and response process, so literally anything anyone can do to get away from that will help.

A lot of the details are pretty application-specific, but the metrics I care about can be broadly classified as "pressure" metrics: CPU pressure, memory pressure, I/O pressure, network pressure, etc.

Something that's "overpressure" can manifest as, e.g., excessive paging in and out, a lot of processes/threads stuck in the "defunct" state, DNS resolutions failing, and so on.

I don't have much of an opinion about push versus pull metrics collection as long as it doesn't melt my switches. They both have their place. (That said, programmable aggregation on the metrics exporter is nice to have.)

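On Linux, most of these pressure metrics are exposed directly as PSI (pressure stall information) under /proc/pressure. A minimal reader, assuming a 4.20+ kernel with PSI enabled:

    #!/usr/bin/env python3
    """Read Linux PSI (pressure stall information) for CPU, memory, and I/O."""

    def read_pressure(resource):
        # each line looks like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=12345"
        stats = {}
        with open(f"/proc/pressure/{resource}") as fh:
            for line in fh:
                kind, *fields = line.split()  # kind is "some" or "full"
                stats[kind] = {k: float(v) for k, v in (kv.split("=") for kv in fields)}
        return stats

    if __name__ == "__main__":
        for res in ("cpu", "memory", "io"):
            psi = read_pressure(res)
            print(f"{res}: some avg10={psi['some']['avg10']}%")

Alerting on the avg10 numbers creeping above zero is a reasonable first pass at "overpressure" detection.
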
klinquist · 9 months ago

I use monit and the M/Monit server to measure CPU/load/memory/disk, processes, and HTTP endpoints.

koliber · 9 months ago

In addition to the things already mentioned, there are a few higher-level things I find helpful:

- HTTP request counts vs. total non-200-response counts vs. 404-and-30x counts.

- For whatever asynchronous jobs you run, a graph of jobs started vs. jobs finished will show you rough resource utilization and highlight gross bottlenecks.

whalesalad · 9 months ago

Netdata on all our boxes. It's incredible. It provides automagic statsd capture and Redis support, identifies systemd services, and covers all the usual stuff like network performance, memory, CPU, etc. Recently they introduced log capture, which is also great, broken down by systemd service too.

madaxe_again · 9 months ago

Don't reinvent the wheel. There are many mature monitoring agents out there that you could ingest from, and that allows easy migration for customers.

As for what I monitor: normally, as little as humanly possible; when needed, everything possible.

ralferoo · 9 months ago

https://hetrixtools.com/

It's free if you don't have too many servers: 15 uptime monitors (the most useful) and 32 blacklist monitors (useful for e-mail, though I don't know why you'd need so many compared to uptime).

It's fairly easy to hit the free limits with not many servers if you're also monitoring VMs, but I've found it reliable so far. It's nice that you can run ping tests from different locations, and it collects pretty much any metrics that are useful, such as CPU, RAM, network, and disk. The HTTP and SMTP tests are good too.

Mojah · 9 months ago

For web-application monitoring, we [1] have gone the outside-in monitoring route. There are many approaches to monitoring, and depending on your role in a team, you might care more about the individual health of each server, or about the application as a whole, independent of its underlying (virtual) hardware.

For web applications, for instance, we care about uptime & performance, TLS certificates, DNS changes, crawled broken links/mixed content, and SEO/Lighthouse metrics.

[1] https://ohdear.app

mikewarot · 9 months ago

Suggestion: if you can adapt your monitoring servers to push data out through a data diode, you might be able to make some unique security guarantees with respect to ingress of control.

azthecx · 9 months ago

> What's your take: would you trust a simple, bespoke agent, or would you feel more secure with a well-established solution?

No, and I have specifically pushed back against monitoring offerings like Datadog and Dynatrace, especially the latter, because running OneAgent and the Dynakube CRDs means things like downloading tarballs from Dynatrace and listening to absolutely everything they can, from processes to the network.

sroussey · 9 months ago

Latencies. They are a sure-fire flag that something is amiss.

mnahkies · 9 months ago

I think OOM kills are an important one, especially with containerized workloads. I've found that RAM used/limit metrics aren't sufficient, as the spike that leads to the OOM event often happens faster than the metric resolution, giving misleading charts.

Ideally I'd see these events overlaid on the time series, to make it obvious that a restart was caused by the OOM killer rather than some other form of crash.

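One way to surface those events (a sketch, assuming a systemd host; journalctl's -g/--grep option requires a build with pattern matching) is to grep the kernel journal for oom-kill lines and alert on any hits:

    #!/usr/bin/env python3
    """Count recent OOM-kill events from the kernel log via journalctl (systemd hosts)."""
    import subprocess

    def oom_events(since="-1h"):
        # no check=True: journalctl exits 1 when the grep matches nothing
        out = subprocess.run(
            ["journalctl", "-k", "--since", since, "--no-pager", "-g", "oom-kill"],
            capture_output=True, text=True,
        ).stdout
        return out.splitlines()

    if __name__ == "__main__":
        events = oom_events()
        if events:
            print(f"ALERT: {len(events)} OOM kill(s) in the last hour")
            for e in events:
                print(" ", e)

Emitting these as timestamped events into the same store as your RAM time series gives the overlay the comment asks for.
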
natmaka · 9 months ago

I could not find a satisfying way to detect an unusual log entry, qualitatively (a new message) or quantitatively (an abnormal number of occurrences of a given message, neglecting any variable part), and therefore developed a dirty hack. It works quite well for me: https://gitlab.com/natmaka/jrnmnt

bearjaws · 9 months ago

Tasks are the most annoying things to track.

Did it run? Is it still running? Did it have any errors? Why did it fail? Which step did it fail on?

My last job built a job tracker for "cron" tasks that supported the actual crontab and could also schedule hitting an HTTPS endpoint.

Of course, it requires code modification to ensure the job writes *something* so you can tell it ran in the first place. But that was part of modernizing a 10-year-old LAMP stack.

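A sketch of the wrapper pattern this describes (Python 3 with the third-party requests package; the ping URL scheme mimics healthchecks.io-style dead-man's-switch services and is a placeholder): instead of modifying each job, wrap its invocation so start, success, and failure are all reported:

    #!/usr/bin/env python3
    """Wrap a cron job so it reports start/success/failure to a heartbeat endpoint."""
    import subprocess, sys
    import requests  # third-party: pip install requests

    PING = "https://hc.example.com/ping/your-check-uuid"  # placeholder

    def run_tracked(cmd):
        requests.get(f"{PING}/start", timeout=10)                  # "it started"
        proc = subprocess.run(cmd)
        if proc.returncode == 0:
            requests.get(PING, timeout=10)                         # "it succeeded"
        else:
            requests.get(f"{PING}/{proc.returncode}", timeout=10)  # "it failed, and how"
        return proc.returncode

    if __name__ == "__main__":
        # usage in crontab: track_job.py /usr/local/bin/backup.sh --full
        sys.exit(run_tracked(sys.argv[1:]))

The receiving side then alerts both on failure pings and on checks that go silent.
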
malkosta · 9 months ago

The number of 4xx, 500, and 2xx responses from an HTTP application can tell you a lot about application anomalies. Other protocols have their error responses too.

I also keep a close eye on the throughput vs. response time ratio, especially the 95th percentile of response time.

It's also great to have this same ratio for whatever DBs you use.

Those are my go-to daily metrics; the rest can be zoomed into in their own dashboards after I first check these.

blueflow · 9 months ago

I used to have a Nagios setup, but after years of continuous uptime (except for planned maintenance) I felt it was not worth it. If your tech stack is simple enough and runs on VPSes (whose physical availability is your hoster's responsibility), there isn't much that can happen.

If I were to set up metrics, the first thing I would go for is pressure stall information.

justinclift · 9 months ago

> disk usage

There are a bunch of ways to measure "usage" for disks besides "how much space is used": how many IOPS (vs. the total available for the disk), how much wear % is used/left (specific to flash), how much read/write bandwidth is being used (vs. the disk's maximum), and so on.

dig1 · 9 months ago

I try to monitor everything, because it makes debugging weird issues much easier when sh*t hits the fan.

> Do you also keep tabs on network performance, processes, services, or other metrics?

Everything :)

> What's your take: would you trust a simple, bespoke agent, or would you feel more secure with a well-established solution?

I went with collectd [1] and Telegraf [2], simply because they support tons of modules and are very stable. However, I have a couple of bespoke agents where neither collectd nor Telegraf would fit.

> Lastly, what's your preference for data collection: do you prefer an agent that pulls data or one that pushes it to the monitoring system?

We can argue to death, but I'm for push-based agents all the way down. They are much easier to scale, and things are painless to manage when the right tool is used (I'm using Riemann [3] for shaping, routing, and alerting). I used to run a Zabbix setup, and scaling was always the issue (Zabbix is pull-based). I'm still baffled that pull-based monitoring gained traction; probably because modern generations need to repeat the mistakes of the past.

[1] https://www.collectd.org/

[2] https://www.influxdata.com/time-series-platform/telegraf/

[3] https://riemann.io/

giuliomagnifico · 9 months ago

Grafana, Prometheus, Mimir and Loki. Here's my monitoring setup: https://giuliomagnifico.blog/post/2024-07-08-home-setup-v5/

metadat · 9 months ago

1. System temperatures, with a custom little Python server I wrote that gets polled by HomeAssistant (for all machines on my tailnet, thanks Tailscale).

2. Hard drive health monitoring with Scrutiny: https://github.com/AnalogJ/scrutiny

Everything else doesn't matter to me for home use.

Good luck with your endeavor!

udev4096 · 9 months ago

For monitoring Proxmox host(s), I use InfluxDB to store all the Proxmox metrics and then Grafana for a beautiful display.

As for the servers, I use Uptime Kuma (notifies me whenever a service goes down), glance (htop on the web), vnstat (network traffic usage), and Loki (log monitoring).

veryrealsid · 9 months ago

Something I've noticed a need for is usage vs. requested utilization. Since we roll our own kube cluster, I'm trying to right-size our pods, and that hasn't been as straightforward as it could be, since I have to do a lot of the math and recalculation myself.

jimnotgym · 9 months ago

I'm not in that game at the moment. I used to run some background services that could be down for an hour without causing major difficulty (by design). I was much more focused on checking that the application was running than on the server.

sebazzz · 9 months ago

Is this from a sysops perspective? Because Nagios and its fork Icinga are still a thing.

1oooqooq · 9 months ago

I'm actually considering Linux+k8s log/audit consulting or a SaaS (there still isn't a minimally decent journald log collector), but I'm not sure who would even pay for it... as you can see from the low attention this will get.

damonll · 9 months ago

Look at the major cloud providers and what they offer in monitoring, such as AWS CloudWatch, etc.

Be warned, though: there are a ton of monitoring solutions already. Hopefully yours brings something special to the table.

imperialdrive · 9 months ago

Used PRTG for many years. Works OK, and has a free offering too. It's a bit of an artistic process figuring out what to log and how to interpret it in an actionable way. Good luck, and try to have fun.

londons_explore · 9 months ago

Sounds like you might be reinventing a wheel...

Could you simply include some existing open-source tooling in your uptime monitor, and then contribute any desired new features back to those open-source projects?

sunshine-o · 9 months ago

I might sound weird, but I got tired of the whole Prometheus thing, so I just put my hosts on a NATS cluster and push the metrics I really care about there.

maxboone · 9 months ago

Take a look at Vector; I personally prefer it over fluentd, and I don't think you'll need a custom monitoring agent with it.

jcrites · 9 months ago

For servers, I think the single most important statistic to monitor is the percentage of concurrent capacity in use, that is, the percentage of your thread pool or task pool that is processing requests. If you could only monitor one metric, this is the one to monitor.

For example, say a synchronous server has 100 threads in its thread pool, or an asynchronous server has a task pool of size 100; then concurrent capacity is an instantaneous measurement of what percentage of those threads/tasks are in use. You can measure this when requests begin and/or end. If 50 out of 100 threads/tasks are in use when a request begins, then the metric reads 0.5 = 50% concurrent-capacity utilization. It's a percentage measurement like CPU utilization, but better! (A minimal sketch follows below.)

I've found this is the most important metric to monitor and understand because it's (1) what you have the most direct control over, as far as tuning, and (2) its behavior encompasses most other performance statistics anyway (such as CPU, RAM, etc.).

For example, if your server is overloaded on CPU and can't process requests fast enough, then requests will pile up, and your concurrent capacity will rise until it hits the 100% cap. At that point, requests begin to queue and performance is impacted. The same is true for any other type of bottleneck: under load, they all show up as unusually high concurrent-capacity usage.

Metrics that measure 'physical' (ish) properties of servers, like CPU and RAM usage, can be quite noisy, and they are not necessarily actionable; spikes in them don't always indicate a bottleneck. To the extent that you need to care about these metrics, they will be reflected in a rising concurrent-capacity metric, so concurrent capacity is what I prefer to monitor primarily, relying on the secondary metrics to diagnose problems when concurrent capacity is higher than desired.

Concurrent capacity most directly reflects the "slack" available in your system (when properly tuned; see the next paragraph). For that reason, it's a great metric to use for scaling, particularly automated dynamic auto-scaling. If your system approaches 100% concurrent-capacity usage in a sustained way (on average, fleet-wide), that's a good sign that you need to scale up. Metrics like CPU or RAM usage do not so directly indicate whether you need to scale, but concurrent capacity does. And even if a particular stat (like disk usage) reflects a bottleneck, it will show up in concurrent capacity anyway.

Concurrent capacity is also the best metric to tune. You want to tune your maximum concurrent capacity so that your server can handle all requests normally when at 100% of concurrent capacity. That is, if you decide on a thread or task pool of size 100, then it's important that your server can handle 100 concurrent tasks normally, without exhausting any other resource (such as CPU, RAM, or outbound connections to another service). This tuning also reinforces the metric's value for monitoring, because you can be reasonably confident that your machines will not exhaust their other resources before concurrent capacity, and so you can focus on monitoring concurrent capacity primarily.

Depending on your service's SLAs, you might set the concurrent capacity conservatively or aggressively. If performance is really important, you might tune it so that at 100% of concurrent capacity the machine still has CPU and RAM in reserve as a buffer. If throughput and cost matter more than performance, you might set it so that at 100% the machine is right at the limits of what it can process.

And it's a great metric to tune because you can adjust it in a straightforward way: maybe you're leaving CPU on the table with a pool size of 100, so bump it up to 120, etc. Part of tuning your application for each hardware configuration is determining what concurrent capacity it can safely handle. This does require some form of load testing to figure out, though.

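A minimal sketch of the start-of-request measurement described above (Python 3; `record_metric` is a hypothetical stand-in for a real metrics client):

    #!/usr/bin/env python3
    """Concurrent-capacity gauge: sample what fraction of a fixed worker pool
    is busy at the moment each request starts."""
    import threading

    POOL_SIZE = 100  # must match the server's actual thread/task pool size
    _active = 0
    _lock = threading.Lock()

    def record_metric(name, value):
        # stand-in for your metrics client (statsd, a Prometheus gauge, ...)
        print(f"{name}={value:.1f}")

    def handle_request(work):
        global _active
        with _lock:
            _active += 1
            utilization = _active / POOL_SIZE  # sample at request start
        record_metric("concurrent_capacity_pct", 100.0 * utilization)
        try:
            return work()  # the actual request handler
        finally:
            with _lock:
                _active -= 1

Sustained readings near 100%, averaged fleet-wide, are the scale-up signal the comment describes.
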
PeterZaitsev · 9 months ago

Check out Coroot: using eBPF and other modern technologies, it can do advanced monitoring with zero configuration.

itpragmatik · 9 months ago

For any web apps or API services, we monitor:

- Uptime

- Error rate

- Latency

Prometheus/Grafana.

holoduke · 9 months ago

syslog, kern.log, messages, htop, iotop, df -h, fail2ban.log, pm2 log, netstat -tulpn

fragmede · 9 months ago

For data collection, Veneur is pretty nice: open source and vendor-agnostic, by Stripe.

https://github.com/stripe/veneur

s09dfhks · 9 months ago

Grafana/Prometheus stack

NovemberWhiskey · 9 months ago

Don't monitor your servers; monitor your application.

magarnicle · 9 months ago

My servers send a lot of emails, so: postfix failures.

kemalunel · 9 months ago

Unless your resources are ephemeral, there is no need to push metric data anywhere; collecting it (pull) makes more sense.

doctorpangloss · 9 months ago

Traces are valuable. But otherwise, I feel like most monitoring information is noise: unactionable, or better collected elsewhere.

lakomen · 9 months ago

Nothing at all. Why should I waste energy, storage, and bandwidth on this? To watch a few graphs when bored?

kazinator · 9 months ago

Shitheads trash-talking Lisp, followed by disk space, followed by unexplained CPU spikes and suspicious network activity.

jalcine · 9 months ago

*looks around* I use `htop`

Ologn · 9 months ago

Just echoing some of what others have said... iostat... temperature (sometimes add-on boards have temperature readings as well as the machine)... plus just hitting web pages or REST APIs and searching the response for expected output... file descriptors...

In addition to disk space: running out of inodes on your disk, even if you don't plan to. If you have swap, seeing whether you are swapping more than expected. Other things people said make sense too, depending on your needs.

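Inode exhaustion is easy to check alongside disk space; a short sketch using os.statvfs (Python 3, POSIX):

    #!/usr/bin/env python3
    """Report inode usage for a filesystem via os.statvfs."""
    import os

    def inode_usage(path="/"):
        st = os.statvfs(path)
        # f_files = total inodes, f_ffree = free inodes
        # (f_files can be 0 on filesystems without a fixed inode table, e.g. btrfs)
        used = st.f_files - st.f_ffree
        return used, st.f_files

    if __name__ == "__main__":
        used, total = inode_usage()
        if total:
            print(f"inodes on /: {used}/{total} ({100 * used / total:.1f}% used)")
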
whirlwin · 9 months ago

node_exporter all the way: https://github.com/prometheus/node_exporter

entrepy123 · 9 months ago

RAID health

layer8 · 9 months ago

Counter-question: why do you think another product is needed in this space?

geocrasher · 9 months ago

Honestly? Look at Netdata for comparison: everything from nginx hostname requests (we run web hosting servers) to CPU/RAM/disk data, but also network data and more. If you can somehow do better than that, by all means do it and make it better.

But there's more to it than just collecting data into a dashboard. Having a reliable agent, and being able to monitor the agent itself (for example, not just saying "server down!" if the agent is offline, but probing the server remotely for verification), would be nice.

dijksterhuis · 9 months ago

In my limited experience, at a small biz running some SaaS web apps with New Relic for monitoring:

> What are the key assets you monitor beyond the basics like CPU, RAM, and disk usage?

Not much, tbh. Those were the key things: alerts for high CPU and memory. Being able to track those per container etc. was useful.

> Do you also keep tabs on network performance, processes, services, or other metrics?

Services, 100%. We did containerised services with Docker Swarm, and one of the bugbears with New Relic was having to sort out container label names and such to be able to filter things in the UI. That took me a day or two to standardise (along with the fluentd logging labels, so everything had the same labels).

Background Linux processes less so, but they were still useful, although we had to turn them off in New Relic as they significantly increased data ingestion (I tuned the NR agent configs to minimise the data we sent, just so we could stay on the free tier as best we could).

> Additionally, we're debating whether to build a custom monitoring agent or leverage existing solutions like OpenTelemetry or Fluentd.

I like fluentd, but I hate setting it up; I can never remember the filter and match syntax. Once it's running I just leave it, though, so that's nice.

Never used OpenTelemetry.

Not sure how useful that info is for you.

> What's your take: would you trust a simple, bespoke agent, or would you feel more secure with a well-established solution?

Ehhhh, it depends. New Relic was pretty established, with a bunch of useful features, but it definitely felt like overkill for what was essentially two containerised Django apps with some extra backend services. There was a lot of bloat in NR we probably never touched, including in the agent itself, which took up quite a bit of memory.

> Lastly, what's your preference for data collection: do you prefer an agent that pulls data or one that pushes it to the monitoring system?

Personally, push, mostly because I can set it up and probably forget about it: run it and add egress firewall rules, job done. It probably also helps with the network effect, since it's easy to start.

I can see pull being the preference for bigger enterprises, though, who would only want to allow x, y, z data out to a third party, especially for security reasons, because setting a New Relic agent running with root access to the host (as the New Relic container agent asks for) is probably never going to fly in that environment.

What New Relic kind of got right with their pushing agent was the configs. But finding the right settings was a bear, as the docs are a bit of a nightmare.

(Edited)

linuxdude314 · 9 months ago

Why would you not use OTel?

It's clearly the industry-standard protocol and the present and future of o11y.

The whole point is that o11y vendors can stop reinventing lower-level protocols and actually offer unique value propositions to their customers.

So why would you want to waste your time on such an endeavor?

selim17 · 9 months ago

Good luck with your project, @gorkemcetin! I hope you achieve your goals. While I'm not a server manager, I've read through most of the comments in this thread and would like to suggest a few features that might help evolve your project:

- I noticed some discussion about alerting. It could be beneficial to integrate with alerting services like AWS SES and SNS, providing a seamless alerting mechanism.

- Consider adding a feature that allows users to compare their server metrics with others'. This could provide valuable context and benchmarking capabilities.

- Integrating AI for log analysis could be a game-changer. An AI-powered tool that reads, analyzes, and reports on logs could help identify configuration errors that might otherwise be easily overlooked.

I hope these suggestions help with the development of BlueWave Uptime Manager!