
Grepping logs is terrible

94 points, by _5csa, about 10 years ago

24 comments

onion2k, about 10 years ago

*Binary logs are opaque! Just as much as text logs.*

I don't agree with the second assertion there. Text logs are only opaque as far as *the format* is concerned, but not so much as far as the content goes. Using the example in the article:

    127.0.0.1 - - [04/May/2015:16:02:53 +0200] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0"

You can read a lot of information without knowing the format, the application that generated it, or even which file it was in - you know it's something to do with localhost, you know when it happened, you know the protocol, from which you can infer that "304" means Not Modified, and you know it came from a Mozilla agent. That's a lot more information than you could get from a binary log without any tools.

That isn't necessarily an argument against binary logging, but the notion that text log files are opaque in the same way as binary logs isn't really true.
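As an aside on how much structure that line actually carries: a short regex sketch (not from the comment; the field names are illustrative) recovers every field without any knowledge of the producing application:

```python
import re

# The Common Log Format-style line quoted in the comment above
line = '127.0.0.1 - - [04/May/2015:16:02:53 +0200] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0"'

# One regex recovers host, timestamp, request, status, size and user agent
pattern = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

fields = pattern.match(line).groupdict()
print(fields["host"], fields["status"], fields["agent"])
```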
ghshephard, about 10 years ago

If your logs aren't text, and it's a small system, I'm not going to look at them. Therefore they don't exist. That's one reason why people don't like binary logs - they are effectively useless.

On the flip side, if the system is huge - then we can use tools like Splunk.

grep/tail/awk are the first three tools I use on any system - if you create logs that I can't manipulate with those three tools, then you haven't created logs for your system that I can use.
moonshinefe, about 10 years ago

Yes, grepping logs is terrible if "you have 100Gb of logs a day". I'm not sure why the author thinks his use case is anywhere near the norm, or why he's shocked that in most use cases people prefer text files.

I'm also not getting why he doesn't just use scripts to parse the logs and insert them into a database at that point. Why use some ad-hoc binary logging format if you're doing complex queries that SQL would be better suited for anyway, on proven db systems?

Maybe I'm missing something.
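The parse-into-a-database route the comment suggests fits in a few lines of Python with the standard library's sqlite3 module (the log lines and table layout here are hypothetical):

```python
import re
import sqlite3

# Hypothetical access-log lines in Common Log Format
LINES = [
    '127.0.0.1 - - [04/May/2015:16:02:53 +0200] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0"',
    '10.0.0.5 - - [04/May/2015:16:03:10 +0200] "GET /about HTTP/1.1" 200 512 "-" "curl/7.68"',
]

PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" (?P<status>\d{3})'
)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access (host TEXT, time TEXT, request TEXT, status INTEGER)")

for line in LINES:
    m = PATTERN.match(line)
    if m:  # skip unparseable lines rather than aborting the import
        conn.execute(
            "INSERT INTO access VALUES (?, ?, ?, ?)",
            (m["host"], m["time"], m["request"], int(m["status"])),
        )

# The "complex queries" the commenter mentions become plain SQL
rows = conn.execute("SELECT status, COUNT(*) FROM access GROUP BY status").fetchall()
print(rows)
```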
dsr_, about 10 years ago

Change for the sake of change is anti-engineering. It is anti-productive. Your changes must be improvements, and they must not cost more than they save or generate in a reasonable period of time.

Many organizations have a fully functional, well-debugged logging infrastructure. The basic design happened years ago, was implemented years ago, and was expected to be useful basically forever. Growth was planned for. Ongoing expenses were expected to be small.

That's what happens when you build reliable systems on technologies that are as well understood as bricks and mortar. You get multiple independent implementations which are generally interoperable. You get robustness. And you get cost-efficiency, because any changes you decide to make can be incremental.

Where are the rsyslogd and syslog-ng competitors to systemd's journald? Where is the interoperability? Where is the smooth, useful upgrade mechanism?

Short-term solutions are generally non-optimal in the long term. Using AWS, Google Compute and other instant-service cloud mechanisms trades money, security and control for speed of deployment. An efficient, mature company may well wish to trade in the opposite direction: reducing operating costs by planning, understanding growth and making investments instead of paying rent.

Forcing a major incompatible change in basic infrastructure, rather than offering it as an option to people who want to take advantage of it, is an anti-pattern.
blueskin_, about 10 years ago

People don't want it because it's binary, not because you can't grep it.

* you need to use a new proprietary tool to interact with them
* all scripts relating to logs are now broken
* binary logs are easy to corrupt, e.g. if they didn't get closed properly.

> You can have a binary index and text logs too! / You can. But what's the point?

The point is having human-readable logs without having to use a proprietary piece of crap to read them. A binary index would actually be a perfect solution - if you're worried about the extra space readable logs take, just .gz/.bz2 them; on decent hardware, the performance penalty for reading is almost nonexistent.

If you generate 100GB/day, you should be feeding them into Logstash and using Elasticsearch to go through them (or use Splunk if $money > $sense), not keeping them as files. Grepping logs can't do all the stuff the author wants anyway, but existing tools that are compatible with rsyslog can, meaning there is no need for the monstrosity that is systemd.
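The compress-then-read workflow the comment describes stays line-oriented: only the container is binary, not the records. A small Python sketch of the idea (file path and log lines are made up):

```python
import gzip
import os
import tempfile

# Write a small compressed log, then stream it back line by line:
# the records stay plain text; only the container is binary.
path = os.path.join(tempfile.mkdtemp(), "access.log.gz")
with gzip.open(path, "wt") as f:
    f.write('127.0.0.1 - - "GET / HTTP/1.1" 304\n')
    f.write('10.0.0.5 - - "GET /about HTTP/1.1" 200\n')

matches = []
with gzip.open(path, "rt") as f:  # the zgrep-equivalent pass
    for line in f:
        if " 304" in line:
            matches.append(line)

print(len(matches))
```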
datenwolf, about 10 years ago

> Embedded systems don't have the resources!
> ...
> I'd still use a binary log storage, because I find that more efficient to write and parse, but the indexing part is useless in this case.

This is yet again a case of a programmer completely misjudging how an actual implementation will perform in the real world.

When I wrote the logging system for this thing http://optores.com/index.php/products/1-1310nm-mhz-fdml-laser I first fell for the very same misjudgement: "This is running on a small, embedded processor: binary will probably be much more efficient and simpler."

So I actually did first implement a binary logging system. Not only the logging, but also the code to retrieve and display the logs via the front-panel user interface. And the performance was absolutely terrible. The code to manage the binary structure in the round-robin staging area, working in concert with the storage dump, also became an absolute mess; mind you, the whole thing is thread-safe, so logging can cause inter-thread synchronization on a device that puts hard realtime demands on some threads.

Eventually I went back and tried a simple, text-only log dumper with some text pattern matching for log retrieval. Result: the text-based logging code is only about 35% of the binary logging code, and it's about 10 times faster because it doesn't spend all those CPU cycles structuring the binary. Even the text pattern matching is faster than walking the binary structure.

Like so often... premature optimization.
alephnil, about 10 years ago

I guess that much of the resistance against the binary logs of systemd is the unfamiliarity, and to some extent the lack of well-known tools for dealing with them. Sysadmins who have years of experience with traditional Unix tools now suddenly have to start almost from scratch when it comes to everyday tools for examining the system. Not only that: programmers are also most familiar with text-based formats, and libraries for handling the new formats have to become more available in the most popular programming languages and familiar to the programmers who develop tools for analysing systems. Until that happens, sysadmins will feel that they are set back by the introduction of binary logs, even if binary logs are technically superior.
leni536, about 10 years ago

I don't have experience with binary logs. I think the fragility of binary logs is not baseless, though. AFAIK there was (is?) a problem in systemd's journal where a local corruption of the log could cause global unavailability of the logged data.

People like text logs because local corruption stays local. Some lines could be gibberish, but that's all. I'm not suggesting that this couldn't be done with binary logs, but you have to carefully design your binary logging format to keep this property.

Otherwise I agree with the author that we shouldn't be afraid of binary formats in general; we need much more general formats and tools, though (grep and less equivalents).

I'm not fond of "human readable" tree formats like XML or JSON either. bencode could be equally "human readable" as UTF-8 text if one had a less equivalent for bencode.
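The "local corruption stays local" property can be demonstrated in a few lines: a line-oriented reader simply resynchronizes at the next newline (a sketch with simulated data, not from the comment):

```python
# A text log with one corrupted record in the middle: a line-oriented
# reader loses only that record and resynchronizes at the next newline.
raw = (
    b"2015-05-04 16:02:53 GET / 304\n"
    b"\xff\xfe\x00garbage\x81\n"  # simulated local corruption
    b"2015-05-04 16:02:54 GET /about 200\n"
)

good, bad = [], 0
for line in raw.split(b"\n"):
    if not line:
        continue
    try:
        good.append(line.decode("ascii"))
    except UnicodeDecodeError:
        bad += 1  # one gibberish line; everything after it is intact

print(len(good), bad)
```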
tatterdemalion, about 10 years ago

This applies more generally than just to logs. I love Unix, but "everything is text" is not actually great. It's better that Unix utils output arbitrary ASCII than arbitrary binary data, but it's obvious why people don't do serious IPC 'the Unix way.' Imagine if, instead of exchanging JSON, or ProtoBufs, or whatever, your programs all exchanged text you had to regex into some sort of ad-hoc structure. So why do we manage our logs and our pipelines that way? There's no actual reason the terminal couldn't interpret structured data into text for us so that, in the world of intercommunicating processes on the other side of the TTY, everything is well-structured, semantically comprehensible data.
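A toy illustration of the contrast the comment draws, assuming a hypothetical line-per-record JSON log (the field names and example event are invented):

```python
import json
import re

# The same event as an ad-hoc text line and as a structured record
text_line = "May  4 16:02:53 myhost sshd[1234]: Accepted publickey for alice"
json_line = '{"host": "myhost", "unit": "sshd", "pid": 1234, "msg": "Accepted publickey for alice"}'

# Free text: each consumer regexes the line into its own ad-hoc structure
m = re.match(r"(\w+)\s+(\d+) (\S+) (\S+) (\w+)\[(\d+)\]: (.*)", text_line)
pid_from_text = int(m.group(6))

# Structured record: the fields already exist; no guessing about the format
record = json.loads(json_line)
pid_from_json = record["pid"]

print(pid_from_text, pid_from_json)
```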
bigbugbag, about 10 years ago

The title is misleading: I was expecting to discover a better way of dealing with logs in the general case. Instead I got served the author's attempt to generalize his way of doing things, as if his quite specific use case applied to the outside world.

Reading this was a waste of my time.

Being a universal open format, text is a better format than binary, unless you don't care about being able to read your data in the future. There are already enough issues with filesystems and storage media; no need to add more complexity.
halayli, about 10 years ago

Logs should be in text. The last thing you want is to find out that your binary format cannot be decoded due to a bug in the logging or because the file got corrupted. Not to mention that you won't be able to integrate with a lot of log systems like Splunk and friends.

On the other hand, if you have logs, you need to store them in a centralized place, have an aging policy, etc. Grepping is definitely not the answer. Systems like Splunk exist for a reason.
agjmills, about 10 years ago

The greatest thing I've found recently was fluentd and Elasticsearch - we have fluentd on all of our nodes, aggregating logs to a central fluentd server which dumps all of the data into Elasticsearch; we then use Kibana as a graphical frontend to Elasticsearch.

It took a while to get developers to use it, but now it's indispensable - particularly when someone asks me 'what happened to the 1000 emails I sent last month?'

I now know; previously, the data would have been logrotated away.
jeady, about 10 years ago

I think the author is conflating several problems here. There are several ways logs can be used, and efficiency is a scale. For example, if I receive a bug report, I like to be able to locate the textual logs from when the incident occurred and actually just sit and read what was happening at the time. On the other hand, if I'm doing higher-level analysis, such as which features users use most, clearly it's more efficient to have some sort of structured format, because you're interested in the logs in aggregate. The author makes it sound like they're advocating optimizing for the aggregate use case at the expense of the other use cases. I think the declaration that textual logs are terrible is an oversimplification of the considerations in play.

Also, if the author has a 5-node cluster producing 100GB of logs a day, the logs may be too verbose or poorly organized. I work on a system that produces hundreds of GB of logs a day, but with proper organization they're perfectly manageable.

I think a more nuanced solution is to log things that are useful for manual examination in text form, while high-frequency events that are not particularly useful could reasonably be logged elsewhere (e.g. a database, or a binary log that is asynchronously fed into a database).

In conclusion, as is frequently the case with engineering, I think the author oversimplifies the problem and tries to present a one-size-fits-all solution instead of taking a more pragmatic one. Textual logs are useful when meant for human consumption (debugging) and when they can be organized so that the logs of interest at any time are limited in size; some other binary-based format is useful for aggregate, higher-level analysis.
henrik_w, about 10 years ago

One solution to the problem of too much logging data can be what I call "session-based logging" (also known as tracing). You can enable logging on a single session (e.g. a phone call), and for that call you get a lot of logging data, much more than from a typical logging system.

This obviously only works when you are troubleshooting a specific issue, not when you need to investigate something that happened in the past (where logging for the session wasn't enabled). However, it has proven to be an excellent tool for troubleshooting issues in the system.

I used session-based logging both when I worked at Ericsson (the AXE system) and at Symsoft (the Nobill system), and both were excellent. However, I get the feeling that it is not in widespread use (I may be wrong on that, though), so that's why I wrote a description of it: http://henrikwarne.com/2014/01/21/session-based-logging/
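The core of the idea fits in a few lines: a per-session switch gates the verbose output, so only the session under investigation pays the logging cost. A minimal sketch (session IDs and messages are invented, not from the linked systems):

```python
# Minimal sketch of session-based logging: tracing is enabled per session
# (e.g. one phone call), and only traced sessions produce detailed records.

TRACED_SESSIONS = {"call-42"}  # enabled by the troubleshooter at runtime
records = []

def session_log(session_id, message):
    """Record the message only if its session is being traced."""
    if session_id in TRACED_SESSIONS:
        records.append(f"[{session_id}] {message}")

session_log("call-42", "digit analysis started")  # traced -> recorded
session_log("call-99", "digit analysis started")  # not traced -> dropped
print(records)
```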
hxn, about 10 years ago

Text logs let me do all the things I want to do.

Grep them, tail them, copy and paste, search, transform them, look at them in less, open them in any editor. I love to write little bash one-liners that answer questions about logs. I can use these one-liners everywhere, anytime.

I don't have any of the efficiency problems the author talks about.
AceJohnny2, about 10 years ago

The author's use of logs is sophisticated and proactive. Sadly, most Linux installations I've dealt with are lazy and reactive, where logs are kept around "just in case" for future forensics (hah!).
webhat, about 10 years ago

I think binary logging is the wrong term to use. As far as I can tell it's not binary he means, but database logging. Storing things in a database sounds far less scary than binary.

At best it's a NUL-separated database structure where the fields are not compressed, which IS greppable - just use \x00 in your regexp. At worst he might mean BER, which is an ASN.1 data encoding: http://en.wikipedia.org/wiki/X.690#BER_encoding
pdkl95, about 10 years ago

So some people want a log format that is more structured than plain text lines. That is going to require some sort of specialized tool. So if a dependency is allowable (instead of leaving the log in a format that is already readable by ~everything), why can't the specialized tool generate an efficient *index*?

A traditional log with a parallel index would be completely backwards compatible, the query tool would work the same way, and you could even treat the index file as a rebuildable cache, which can be useful. The interface presented by a specialized tool doesn't have to depend on any specific storage method.

Really, this recent fad of trying to remove old formats, in the belief that the old format was somehow preventing any new format from working in parallel, reminds me of JWZ's recommendations[1] on mbox "summary files" over the complexity of an actual database. Sometimes you can get the features you want *without* sacrificing performance or compatibility.

[1] http://www.jwz.org/doc/mailsum.html
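The parallel-index idea can be sketched concretely: the log stays ordinary greppable text, while a separate structure maps a key (here, the HTTP status code, purely as an example) to byte offsets of matching lines, and can be thrown away and rebuilt at any time:

```python
import io

# A plain text log; the index below is a rebuildable cache over it.
LOG = (
    '127.0.0.1 - - [04/May/2015:16:02:53 +0200] "GET / HTTP/1.1" 304 0\n'
    '10.0.0.5 - - [04/May/2015:16:03:10 +0200] "GET /a HTTP/1.1" 200 512\n'
    '10.0.0.6 - - [04/May/2015:16:03:12 +0200] "GET /b HTTP/1.1" 200 77\n'
)

def build_index(log_text):
    """Map status code -> offsets of the lines that carry it."""
    index, offset = {}, 0
    for line in log_text.splitlines(keepends=True):
        status = line.split('"')[2].split()[0]  # field after the quoted request
        index.setdefault(status, []).append(offset)
        offset += len(line)
    return index

def lookup(log_text, index, status):
    """Seek straight to the indexed lines instead of scanning the whole log."""
    f = io.StringIO(log_text)
    lines = []
    for off in index.get(status, []):
        f.seek(off)
        lines.append(f.readline())
    return lines

idx = build_index(LOG)
print([l.split()[0] for l in lookup(LOG, idx, "200")])
```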
regularfry, about 10 years ago

This is all well and good if you want to, and can, spend time up front figuring out how to parse each and every log line format which might appear in syslog, so you can drop it in your structured store.

The alternative is to leave everything unstructured, and understand the formats minimally and lazily. Laziness is a virtue, right?
zimbatm, about 10 years ago

What binary logging solution is the author using, if he's not using the systemd journal?
erikb, about 10 years ago

Look at a first-year computer science student. He will already put prints in his programs, and if he is smart and has a bigger assignment he might already start to write other programs to parse that output. You can't beat that, because it is nearly impossible for a newbie to even know that there might be a problem with text logging and that binary logging might be a solution. In fact, he might not even know that what he does is called logging. But he is already doing it!

So even if binary logging is way better (I can't say; not enough experience), you simply can't beat text logging, because text logging is natural. It just happens.

print("Hello World!")
babuskov, about 10 years ago

If you need to grep logs on a regular basis, you're doing it wrong.

Store important data in the database so that you can query it efficiently.

Keep logs for random searches when something unexpected happens. I log gigabytes per day, but only grep maybe once or twice a year.
616c, about 10 years ago

On a slightly unrelated note, as a largely amateur Linux user: have people made systems that, instead of grepping for info, use machine learning to detect the normal patterns of a log file (what types of events occur, and at what intervals) and report the anomalous output via email or a report to an admin?

I was thinking this would be a cool area of research for me to try programming again, but it seems so daunting I am not sure where to start.
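One common starting point for this kind of system (a simplified sketch, not a description of any particular product): learn the frequency of log "templates" - lines with the variable parts masked out - and flag lines whose template is rare. The example lines and threshold are invented:

```python
import re
from collections import Counter

def template(line):
    """Mask variable parts (numbers) so similar lines share one template."""
    return re.sub(r"\d+", "<N>", line)

# "Training" history of normal log traffic
history = [
    "connection from 10.0.0.1 port 22",
    "connection from 10.0.0.9 port 22",
    "connection from 10.0.0.3 port 22",
    "disk error on /dev/sda1",
]

counts = Counter(template(l) for l in history)

def is_anomalous(line, threshold=2):
    """A line is anomalous if its template was rarely seen before."""
    return counts[template(line)] < threshold

print(is_anomalous("connection from 10.0.0.7 port 22"))  # common pattern
print(is_anomalous("disk error on /dev/sdb2"))           # rare pattern
```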
michipili, about 10 years ago

Of course grepping logs is terrible! Grep is a generic tool; why shouldn't it be outperformed by specialised tools?

http://unix-workstation.blogspot.de/2015/05/of-course-grepping-log-is-terrible.html