Tips for analyzing logs

300 points by historynops over 2 years ago

43 comments

digitalsushi over 2 years ago
One of my pet peeves is "The Useless Use of cat Award". Someone awarded it to me as a teenager in the late 90s and I've been sore ever since.

Yup, it's often a waste of resources to run an extra 'cat'. It really demonstrates that you don't have the usage of the command receiving the output completely memorized. You know, the thousand or so commands you might be piping it into.

But if you're doing a 'useless' use of cat, you're probably just doing it in an interactive session. You're not writing a script. (Or maybe you are, but even still, I bet that script isn't running thousands of times per second. And if it is, ok, time to question it.)

So you're wasting a few clock cycles. The computer is doing a few billion of these per second. By the time you explain the 'useless' use of cat to someone, the time you wasted explaining why they are wrong is greater than the total time their lifetime usage of cat was ever going to waste.

There's a set of people who correct the same three pairs of homophones that get used incorrectly, but don't know what the word 'homophone' is. (Har har, they're/their/there.) I put the people who are so quick to chew someone out for using cat in the same batch: what if I just want to use cat because it makes my command easier to edit? I can press up, jump to the front of the line, and change it real quick.

Sorry. I did say it is a pet peeve.
heliostatic over 2 years ago
My biggest quality of life improvement for understanding logs has been lnav (https://lnav.org/) -- does everything mentioned in this post in a single tool with interactive filtering and quick logical and time based navigation.
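For instance, a minimal way to try it (the file paths are just examples; lnav auto-detects most common log formats):

    # open one or more logs; lnav merges and time-sorts them
    lnav /var/log/nginx/access.log /var/log/nginx/error.log

Inside lnav, if I recall correctly, the :filter-in and :filter-out commands narrow the view interactively without leaving the tool.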
chaps over 2 years ago
One thing I've done to identify infrequent log entries within a log file is to remove all numbers from the file and print out a frequency count of each line. Basically it just helps to disregard timestamps (not just at the beginning of the line), line numbers, etc.

    cat file.log | sed 's/[0-9]//g' | sort | uniq -c | sort -nr

This has been incredibly helpful in quickly resolving outages more than once.
Severian over 2 years ago
My tips:

1) Fuck grep, use ripgrep, especially if you have to scour an entire directory.

2) Get good with regex. Seriously, it will shave hours off your searching.

3) For whatever application you are using, get to know how the logging is created. Find the methods where said logs are made, and understand why each log line exists.

4) Get good with piping into awk when you need some nicely readable output.
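A rough sketch of how 1), 2) and 4) combine (the directory and the patterns are assumptions, not from the comment):

    # recursively search a log directory, then count hits per file with awk
    rg -n --no-heading 'ERROR|timed out' /var/log/myapp/ \
      | awk -F: '{count[$1]++} END {for (f in count) printf "%6d  %s\n", count[f], f}' \
      | sort -rn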
jldugger over 2 years ago
Honestly, the most amazing thing I did with logs was learn how to do subtraction. Any time you have multiple instances of a thing and only some of them are bad, you can easily find the problem (if anyone bothered to log it) by computing bad - good.

The way you do this is by aggregating logs by fingerprints. Removing everything but punctuation is a generic approach to fingerprinting, but it is not exactly human friendly. For Java, log4j can use the class in your logging pattern, and that plus log level is usually pretty specific.

Once you have a fingerprint, the rest is just counting and division. Over a specific time window, count the number of log events for every fingerprint, for both good and bad systems. Then score every fingerprint as (1 + # of bad events) / (1 + # of good events), and everything at the top is most strongly bad. The more often it's logged, the further up it will be. No more lecturing people about "correct" interpretations of ERROR vs INFO vs DEBUG. No more "this ERROR is always logged, even during normal operations".
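A minimal shell sketch of that scoring, assuming good.log and bad.log cover the same time window (my own approximation, not the commenter's tooling):

    # fingerprint = line with everything except punctuation removed
    tr -cd '[:punct:]\n' < bad.log  | grep . | sort | uniq -c | awk '{print $2, $1}' | sort > bad.fp
    tr -cd '[:punct:]\n' < good.log | grep . | sort | uniq -c | awk '{print $2, $1}' | sort > good.fp
    # score = (1 + bad count) / (1 + good count); highest scores are most "bad-specific"
    join -a1 -a2 -e 0 -o 0,1.2,2.2 bad.fp good.fp \
      | awk '{printf "%8.2f  %s\n", (1 + $2) / (1 + $3), $1}' \
      | sort -rn | head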
geocrasher over 2 years ago
One thing I didn't see was how to use grep to view the lines before and after a match:

    grep regex /var/log/logfile -A5   # to view the next 5 lines
    grep regex /var/log/logfile -B5   # to view the previous 5 lines
    grep regex /var/log/logfile -C5   # to view the 5 lines before *and* after the match

This is super handy to find out what happened just before a service crashed, for example.
sisk over 2 years ago
Loosely related: a few years ago I wanted a simpler alternative to some of the more feature-full log viewers out there, so I threw together a tiny (50kb) app that might be useful to some folks in here.

All it does is consistently color the first field in a line from stdin so you can quickly see which log lines have the same first field.

I used it in combination with the parallel[0] command to prefix log lines by replica name when tailing logs across machines: https://github.com/jasisk/color-prefix-pipe

[0]: https://www.gnu.org/software/parallel/
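Something along these lines, if I'm reading the setup right (the host list, the log path, and the exact prefixing are assumptions):

    # tag each line with the host it came from, then colorize that prefix
    parallel --line-buffer --tag ssh {} 'tail -f /var/log/app.log' :::: hosts.txt \
      | color-prefix-pipe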
tetha over 2 years ago
As much as I approve of the skillset to analyze local logs, after a relatively small scale (10-20 systems) a decent central log aggregation like OpenSearch or ELK brings so much value, even on 1-3 nodes. It'd be one of the first changes I make to an infrastructure because it's so powerful.

And it's not just the log searching and correlation value. At work, the entire discussion "oh but we need access to all servers because of logs" just died when all logs were accessible via one web interface. I added log aggregation and suddenly only ops needed access to servers.

Designing that thing with accessibility and discoverability in mind is a whole other topic, though.
chrisweekly over 2 years ago
Glad to see lnav (https://lnav.org) already getting some love in the comments. Hands-down the most reliable source of "thank you! I wish I'd known about this tool sooner!" responses, even from very experienced sysadmins / SREs.
chasil over 2 years ago
A minor optimization is collapsing the grep -v, from this:

    cat file | grep -v THING1 | grep -v THING2 | grep -v THING3 | grep -v THING4

to this:

    egrep -v 'THING1|THING2|THING3|THING4' file

That gets rid of the cat and three greps. Both POSIX and GNU encourage grep -E to be used in preference to egrep.

A pcregrep utility also used to exist, if you want expansive Perl-compatible regular expressions. This has been absorbed into GNU grep with the -P option.
kerblang over 2 years ago
> you'll get overwhelmed by a million irrelevant messages because the log level is set to INFO

I know this happens, but I think it's because programmers are abusing INFO. In principle it's reserved for messages that are informative at a level sysadmins and a few others can make sense of and use. Unfortunately abuse often leads to "We turned INFO off", making it much harder to diagnose things after the fact.
kjellsbells over 2 years ago
When I was training sysadmins back in the dark ages, one of the rules I taught was: know what good looks like in your logs. If you are scanning hundreds of lines of logging under duress to find a smoking gun, and you don't know the difference between what the logs normally show and what you are seeing, you'll waste a lot of time.

The corollary is that good-day logs should be minimal and "clean", e.g. not logging a lot, or logging nicely and predictably (which makes them easy to strip out via grep -v, etc.)
Manjuuu over 2 years ago
> Often log lines will include a request ID.

Yes, always include a request id in every request structure you create, include it in the response too, and print it. You would think it's something obvious that everyone does by default, but no, apparently it's not so obvious.
linsomniac over 2 years ago
I recently used clickhouse-local to do some log analysis on a lot of Elastic Load Balancer logs (~10s of GBs) and it was spectacular.

In short, you can add clickhouse-local to a shell pipeline and then run SQL queries on the data. An example from the docs:

    ps aux | tail -n +2 | awk '{ printf("%s\t%s\n", $1, $4) }' \
      | clickhouse-local --structure "user String, mem Float64" \
        --query "SELECT user, round(sum(mem), 2) as memTotal FROM table GROUP BY user ORDER BY memTotal DESC FORMAT Pretty"
ljw1004 over 2 years ago
I wrote https://github.com/ljw1004/seaoflogs - an interactive filtering tool, for similar ends to what's described here. I wrote it because my team was struggling to analyze LSP logs (that's the protocol used by VSCode to communicate with language servers). But I made it general-purpose, able to analyze more log formats too - for instance, we want to correlate LSP logs with server logs and other traffic logs.

(1) I wanted something where colleagues could easily share links in workplace chat with each other, so we could cooperatively investigate bugs.

(2) For LSP we're often concerned with responsiveness, and I thought the best way to indicate times when viewing a log is with whitespace gaps between log messages in proportion to their time gap.

(3) For LSP we have lots of interleaved activity going on, and I wanted to have visual "threads" connecting related logs.

(4) As the post and lnav say, interactivity is everything. I tried to take it a step further with (1) javascript, (2) playground-style updates as you type, (3) autocomplete which "learns" what fields are available from structured logs.

My tool runs all in the browser. (I spent effort figuring out how people can distribute it safely and use it for their own confidential logs too.) It's fast enough up to about 10k lines of logs.
hayst4ck over 2 years ago
Some extra tips:

    Keep access logs, both when a service receives a request and finishes a request. Record request duration.
    Always rotate logs.
    Ingest logs into a central store if possible.
    Ingest exceptions into a central store if possible.
    Always use UTC everywhere in infra.
    Make sure all (semantic) lines in a log file contain a timestamp.
    Include thread ids if it makes sense to.
    It's useful to log the unix timestamp alongside human readable time because it is trivially sortable.
    Use head/tail to test a command before running it on a large log file.

If you find yourself going to logs for time series data then it is definitely time to use a time series database. If you can't do that, at least write a `/private/stats` handler that displays in-memory histograms/counters/gauges of relevant data.

Know the difference between stderr and stdout and how to manipulate them on the command line (2>/dev/null is invaluable, 2>&1 is useful), and use them appropriately for script output.

Use atop, it makes debugging machine-level/resource problems 10 fold easier.

Have a general knowledge of log files (sometimes /var/log/syslog will tell you exactly your problem, often in red colored text).

If you keep around a list of relevant hostnames:

    cat $hostname_list_file | xargs -P $parallelness -I XHOSTNAME ssh XHOSTNAME -- grep <request_id> <log_file>

This needs to be used carefully and deliberately. This is the style of command that can test your backups. This style of command *has* caused multiple _major_ outages. With it, you can find a needle in a haystack across an entire fleet of machines quickly and trivially. If you need to do more complex things, `bash -c` can be the command sent to ssh.

I've had an unreasonable amount of success opening up log files in vim and using vim to explore and operate on them. You can do command line actions one at a time (:!$bash_cmd), and you can trivially undo (or redo) anything to the logs. Searching and sorting, line jumping, pagedown/up, diffing, jumping to the top or bottom of the file, a status bar telling you how far into a file you are or how many lines it has without having to wc -l, etc.

Lastly, it's great to think of the command line in terms of map and reduce. `sed` is a mapping command, `grep` is a reducing command. awk is frequently used for either mapping or reducing.
jeppesen-io over 2 years ago
Also, journalctl. Very nice to have all logs in one place in a semi-structured form.

    journalctl -u myservice -S "5 min ago"
phillipcarter over 2 years ago
Surprised to see that under the section "correlate between different systems" tracing isn't mentioned as an alternative approach. That's what tracing is: logging across different systems and getting that structure all stitched together for you.
2devnull over 2 years ago
> "scroll through the log really fast"

This is a great point and probably underrated.

You may also benefit by scrolling sideways really fast.

https://en.m.wikipedia.org/wiki/Superior_colliculus
edweis over 2 years ago
Best tip I discovered: use emojis. They have colors and they are easy to spot.

For instance, if an API call is made use the phone emoji, when there is a timeout use a clock, when an order is dispatched use a package...

When you have to go through a huge log file it is a life saver.
baud147258 over 2 years ago
A few weeks ago I had a Windows installer that was silently failing when upgrading from an older version (installation from scratch was working without issues). As Windows install logs aren't exactly easy to read, I was stumped, until I took an upgrade log from an older, working build, stripped all date information from both files and compared them, checking all the sections that were different, until I found a line indicating that a colleague had forgotten about a limitation when dealing with MSPs (don't delete components on minor upgrades). I didn't throw any stones, though, as I've made the same mistake twice, one and two years ago...
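The same strip-then-compare trick works on any pair of logs; a rough sketch (the timestamp regex is an assumption, adjust it to the format at hand):

    # strip timestamps so the diff only shows real differences
    sed -E 's/[0-9]{4}-[0-9]{2}-[0-9]{2}[ T][0-9:.]+//g' working-upgrade.log > good.stripped
    sed -E 's/[0-9]{4}-[0-9]{2}-[0-9]{2}[ T][0-9:.]+//g' failing-upgrade.log > bad.stripped
    diff -u good.stripped bad.stripped | less    # or open both in a visual diff tool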
Too over 2 years ago
One of my favorite tricks is to use a visual diff tool.

Copy the good log into the left panel. Copy the bad log into the right panel. It quickly shows which lines are new, which are missing, and which are out of order. Obviously ignore the timestamps ;)
oxff over 2 years ago
Just use structured logs, it's 2022 ffs.
julian_sark over 2 years ago
> cat file | grep -v THING1 | grep -v THING2 | grep -v THING3 | grep -v THING4

Keyboard manufacturers HATE this simple trick:

    grep -vE "THING1|THING2|THING3|THING4" file
spc476 over 2 years ago
One thing we did at my previous job was to add a "trace flag" to each account. Normally, we log nothing about a transaction (other than that it happened), but if the trace flag is set, then a lot of information is logged about the transaction. This trace flag is also propagated throughout our distributed system, so we can trace the action across the network.
kqr over 2 years ago
One thing that's improved my log analysis is learning awk (well, actually Perl, but I think 95% of what I do with Perl I could also do with just awk). Often the most useful way to look at logs is statistically, and awk can quickly let you aggregate things like time-between-statements, counts, rates, state transition probabilities, etc. for arbitrary patterns.
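For example, time-between-occurrences of a pattern is a one-liner (the pattern, the epoch-seconds field, and the file name are all assumptions):

    # print the gap in seconds between successive "timeout" lines,
    # assuming field 1 is a unix timestamp, then summarize the distribution
    awk '/timeout/ { if (prev) print $1 - prev; prev = $1 }' app.log | sort -n | uniq -c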
trashcan over 2 years ago
Also, sometimes log entries stretch across multiple lines, and the other lines won't have the identifier you are searching for. For example, Java stack traces. In that case, if you are stuck parsing unstructured logs, the simplest thing to do is to search the whole file for the timestamp of the line you did find and read the entry in context.
29athrowaway over 2 years ago
Your friends:

- grep (or ripgrep), sort, uniq, head, tail, cut, colrm, tr
- diff
- jq
- bat
- ministat
- less, more, percol
- Facebook Path Picker (fpp)
zug_zug over 2 years ago
I feel like this is a huge anti-pattern. Use a hosted service that does all of this for you, and then you have a whole query language and can build alerts, graphs, etc. based on these results.

It's not super cheap, but it's 10x cheaper than wasting dev time in the terminal. (Sumo Logic and Splunk are the two I can vouch for.)
emmelaich over 2 years ago
I found the histogram technique to be really helpful. Slight mod: I tend to sort in reverse at the end of the pipeline (sort -rn); then | head is often more useful.

It's also good to have histograms by hour or day. I've hacked up scripts to do this but I should really make something better!
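A per-hour variant might look like this (assumes timestamps like "2022-12-15 13:42:07" at the start of each line; the pattern is an example):

    # count ERROR lines per hour; cut keeps "YYYY-MM-DD HH"
    grep ERROR app.log | cut -c1-13 | sort | uniq -c | sort -rn | head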
AtNightWeCode over 2 years ago
Usage of correlation ids and structured logs is pretty much standard. Don&#x27;t go down this path.
terom over 2 years ago
Re timing: logs like nginx access logs have their timestamp from when the request completed, not when the request came in. That's a significant difference for long-duration (~10s+) requests, and it matters when trying to correlate logs or metrics to a request.
badrabbit over 2 years ago
I saw a tip a while back about not needing to keep adding "| grep -v stuff", but instead using "grep -v -e stuff -e stuff2". I remember getting it to work on Linux, but last time I tried it on macOS I didn't have much luck.
danjc over 2 years ago
We've added a log tailing feature into our product UI which also has a basic find/filter. It's been enormously useful for cases where something weird happens, as you can immediately access the last few mins of logs.
andrewgilmartin over 2 years ago
If I had a nickel for every time I have used this pattern:

    ... | perl -ne 'print "$1 $2 ...\n" if /some-regex-with-capture-groups/' | ...
didip over 2 years ago
If you are using Kubernetes, I highly recommend using stern: https://github.com/stern/stern
febed over 2 years ago
No mention of klogg? It’s a lifesaver when opening huge log files
kamma4434 over 2 years ago
Jeez, this stuff is frontpage on HN? Sounds… pretty basic. I’m sure our AI overlords could produce deeper content.
DesiLurker over 2 years ago
In addition to these, one of my favorites is a Perl one-liner that generates time deltas from a regex pattern of interest. I then plot the deltas with gnuplot. It seriously helps to 'see' the events with timing in a chart & allows you to do a quick visual search for problem areas.
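Roughly this shape, I'd guess (the pattern, the leading epoch timestamp, and the file names are assumptions):

    # emit the seconds elapsed between successive matches, then plot them
    perl -ne 'if (/^(\d+).*PATTERN_OF_INTEREST/) { print $1 - $p, "\n" if $p; $p = $1 }' app.log > deltas.dat
    gnuplot -p -e "plot 'deltas.dat' with linespoints title 'seconds between events'"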
alexpetralia over 2 years ago
Has anyone tried passing logs into ChatGPT? I had thought it would be especially effective here.
par over 2 years ago
My tips for analyzing logs:

    cd ~/logs
    cat * | grep <my string>
bombolo over 2 years ago
journalctl supports grep, queries by priority, time, daemon, and custom defined fields.
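For example (the unit name and pattern are assumptions; --grep needs a reasonably recent systemd):

    journalctl -u nginx.service -p warning --since "2022-12-15 12:00" --grep 'upstream timed out'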
the_arun over 2 years ago
Do we still use utilities like grep for searching logs? Or is that for when we cannot stream logs to tools like Splunk & Loggly and use their search services?