
Ask HN: How do you log application events?

221 points, by GaiusCoffee, about 10 years ago
We are currently inserting our logs into an SQL database, with timestamp, logType, userId, userAgent and description columns. It makes it trivial for us to debug any event by just querying the db. However, after three and a half years of continued use, the table is now way too large.

How do you log application events in such a way that extracting information from them is easy, while still keeping the size of the logs manageable?

56 comments

thaumaturgy, about 10 years ago
Ehm, the contrast between my answer and everyone else's here makes me feel surprisingly greybearded, but...

Application logging has been a solved problem for decades now. syslog or direct-to-disk in a reasonable format, let logrotate do the job it's faithfully done for years, let the gzipped old files get picked up by the offsite backups that you're surely running, and use the standard collection of tools for mining text files: grep, cut, tail, etc.

I'm a little weirded out that "my logs are too big" is still a thing, and that the most common answer to this is "glue even more complexity together".
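
As a rough illustration of the direct-to-syslog approach, here is a minimal sketch using Python's standard library; the socket path and facility are assumptions that vary by platform, and nothing here is specific to any commenter's setup:

    import logging
    import logging.handlers

    # Send application events to the local syslog daemon; logrotate (or the
    # distro's syslog rotation) then handles retention and compression.
    handler = logging.handlers.SysLogHandler(
        address="/dev/log",  # assumption: Linux; macOS uses /var/run/syslog
        facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
    )
    handler.setFormatter(logging.Formatter("myapp[%(process)d]: %(levelname)s %(message)s"))

    log = logging.getLogger("myapp")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("user %s logged in from %s", "alice", "10.0.0.5")

From there, grep/zgrep over the rotated files covers most day-to-day debugging.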

thedevopsguy, about 10 years ago
Log analytics is a big topic, so I'll hit the main points. The approach you take to logging depends on the analysis you want to do after the log event has been recorded. The value of logs diminishes rapidly as the events get older. Most places want to keep the logs hot for a period ranging from a day to a week. After that, the logs are compressed using gzip or Google's Snappy compression. Even though they are in compressed form, they should still be searchable.

The most common logging formats I've come across in production environments are:

1. log4j (Java) or NLog (.NET)
2. JSON
3. syslog

Tools that I've used to search, visualize and analyse log data have been:

1. Elasticsearch, Logstash and Kibana (ELK) stack
2. Splunk (commercial)
3. Logscape (commercial)

Changes to the fields representing your data are expensive with the database approach because you are locked in by the schema. The database schema will never fully represent your full understanding of the data. With the tools I've mentioned above, you have the option to extract ad-hoc fields at runtime.

Hope this helps.
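
For the JSON format mentioned above, a minimal sketch of emitting one JSON object per line using only Python's standard library (the field names are illustrative assumptions):

    import json
    import logging
    import time

    class JsonFormatter(logging.Formatter):
        """Render each log record as a single JSON object per line."""
        def format(self, record):
            event = {
                "ts": time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime(record.created)),
                "level": record.levelname,
                "logger": record.name,
                "message": record.getMessage(),
            }
            return json.dumps(event)

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    log = logging.getLogger("app")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("order created")
    # -> {"ts": "...", "level": "INFO", "logger": "app", "message": "order created"}

One-object-per-line output is what tools like Logstash and Splunk can ingest without custom parsing rules.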

neilh23, about 10 years ago
Do you really need to debug events from three and a half years ago? Full logs only need to stick around as long as you're likely to want to debug them. Log rotation is a must (I've seen debug logs nobody reads sitting in the gigabytes ...). Past that, you can cherry-pick and store metadata about the events (e.g. X hits from userAgent Y on this day) with enough information to do trend analysis, although it's generally a good idea to keep backups of the old full logs in case you need to reload them to find that one thing you forgot to add to your metadata ... If you do genuinely need all of the data back that far, you should look at partitioning the data so you're not indexing over millions of rows - how you do that depends on how you intend to use the data.
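
A minimal sketch of the "keep metadata, drop detail" idea, assuming PostgreSQL via psycopg2; the summary table name, the DSN and the 90-day cutoff are assumptions, while the column names follow the question's schema:

    import psycopg2

    # Roll detailed rows older than 90 days into a per-day summary table,
    # then drop the detail rows in the same transaction.
    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        cur.execute("""
            INSERT INTO log_daily_summary (day, logtype, useragent, hits)
            SELECT date_trunc('day', "timestamp"), logtype, useragent, count(*)
            FROM logs
            WHERE "timestamp" < now() - interval '90 days'
            GROUP BY 1, 2, 3
        """)
        cur.execute("""DELETE FROM logs WHERE "timestamp" < now() - interval '90 days'""")
    conn.close()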

MichaelGG, about 10 years ago
Elasticsearch is amazing. It lives up to the hype. It's perfect for rolling over logs, and they have lots of documentation on how to make it work just right.

Just as an example of how awesome Elasticsearch is, you can trivially segment your storage tiers (say, SSD versus HDD) and then easily move older data to other storage with a single command.

They have a log-specific handler called Logstash, and a dashboard system called Kibana (which is sorta neat, but the UI seemed a bit laggy in my brief experience). Apparently some folks use Logstash/Elasticsearch to record millions and millions of events per day, and ES does a great job.

If you want hosted, check out Stackify. I'm totally blown away by the product (no affiliation other than being a new user). You can send log info to them and they'll sort it all out, similar to Splunk, but not ridiculously priced and with no terrible sales teams to deal with. But it gets better - they offer all sorts of ways to define app-specific data and metrics, so you can get KPIs and dashboards by just adding a line or two of code here and there. It's a lot easier than running your own system, and it looks like it can make ops a ton easier.

Another hosted service is SumoLogic. I only used them for logging, but it seemed to work well enough.
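
The "move older data with a single command" remark refers to Elasticsearch's shard-allocation filtering (the hot/warm pattern). A hedged sketch, assuming nodes were started with a box_type attribute ("hot" for SSD, "cold" for HDD), the cluster listens on localhost:9200, and the index name is hypothetical:

    import json
    import urllib.request

    # Retag an older daily index so its shards relocate to the "cold" nodes.
    index = "logs-2015.04.01"  # hypothetical daily index name
    body = json.dumps({"index.routing.allocation.require.box_type": "cold"}).encode()

    req = urllib.request.Request(
        f"http://localhost:9200/{index}/_settings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    print(urllib.request.urlopen(req).read().decode())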

eloycoto, about 10 years ago
Hi,

I used Graphite and now I'm using InfluxDB, and on the other hand Kibana+Logstash+ES.

With statsd and InfluxDB you can measure all the events in a database; it's pretty easy, and there are statsd libraries for many languages. I measure all the events in my products - response timings, database queries, logins, sign-ups, calls - they all go to statsd.

Logs are good for debugging, but if you want to measure all events in your platform, statsd+InfluxDB+Grafana are your best friends, and your managers will be happy with that ;-)

A few weeks ago I gave a talk about this; you can see the slides here, plus a few examples and a Docker deploy:

http://acalustra.com/statsd-talk-at-python-vigo-meetup.html

Regards ;-)
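
A minimal sketch of counting and timing events with the Python statsd client; the server address, prefix, metric names and the handle_request() stand-in are all assumptions:

    import time
    import statsd  # pip install statsd

    def handle_request():
        time.sleep(0.05)  # stand-in for real application work

    # Fire-and-forget UDP metrics; statsd aggregates them and forwards
    # to a backend such as Graphite or InfluxDB.
    metrics = statsd.StatsClient("localhost", 8125, prefix="myapp")

    metrics.incr("signup")               # count an event
    metrics.timing("db.query_ms", 42)    # record a duration in milliseconds

    with metrics.timer("http.request"):  # or time a block of code
        handle_request()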

chupy, about 10 years ago
At the place where I work we use a couple of different tools for logging events:

Logstash + Graylog / Elasticsearch - mostly for monitoring application error logs and easy ad hoc querying and debugging.

statsd + Graphite + Nagios/PagerDuty - general monitoring/alerting and performance stats.

ZeroMQ (in the process of changing to Kafka) + Storm and Redis for real-time event analytics dashboards. We also write it to HDFS and run batch jobs over the data for more in-depth processing.

We also have a legacy SQL server in which we save events/logs, which is still maintained, so maybe this could help you. Just FYI, we analyse more than 500 million records/day and we had to do some optimisations there:

- if the database allows it, partition the table by date
- create different tables for different applications and/or different events
- one table per day, which at the start of the new day gets merged into a monthly table in a separate read-only database
- create daily summary tables which are used for analytics
- if you actually need to query all the data, use a union over the monthly tables or the summary tables
- I know it's a given, but if you have large amounts of data, batch and then use bulk inserts

I suggest you take a couple of steps back and think hard about exactly how you want to access and query the data, and what the best tool for you in the long run is.

Someone, about 10 years ago
Why do you feel the log is way too large?

If log entries take up too much disk space, switching to a different system will not help; you will have to do something with the data. You can either archive old years (export in some way, compress, put in cold storage) or throw them away, either partially or fully (do you need to keep debug logging around forever?). Using partitions can help here, as it makes it faster to drop older data (http://www.postgresql.org/docs/current/interactive/ddl-partitioning.html).

You may also consider compressing some fields inside the database (did you normalize logType and userAgent, or are they strings? Can your database compress descriptions?), but that may affect logging performance (that's a _may_: there's extra work to do, but less data to write).

If, on the other hand, indexes take up too much space or querying gets too slow, consider using a partial index (http://en.m.wikipedia.org/wiki/Partial_index). You won't be able to efficiently query older data, but if you only do that rarely, that may be sufficient.
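
A minimal sketch of the partial-index idea on PostgreSQL via psycopg2; the DSN, index name and cutoff date are assumptions, and the column names follow the question's schema:

    import psycopg2

    # Index only recent rows so the index stays small; queries that filter on
    # the same recent window can still use it, while old rows go unindexed.
    # PostgreSQL requires an immutable predicate, so the cutoff is a literal
    # date - recreate the index periodically to move the window forward.
    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE INDEX IF NOT EXISTS logs_recent_user_idx
            ON logs (userid, "timestamp")
            WHERE "timestamp" > DATE '2015-04-01'
        """)
    conn.close()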

k1w1, about 10 years ago
Here is another solution that hasn't been mentioned yet, but has by far the best price/performance if it matches your use case. Google BigQuery isn't advertised as being for log search, but in practice it works phenomenally well. It provides exceptionally low storage costs, combined with a powerful query language and reasonable query costs. The counter-intuitive part is that the query performance, even on tens or hundreds of gigabytes of data, is amazing - better in practice than many purpose-built inverted-index log search systems.

If you want to use your logs for troubleshooting (e.g. ad-hoc queries to find error messages) or ad-hoc analytics, it is ideal. Hundreds of gigabytes can be searched or analyzed in 5-6 seconds per query.

Fluentd can be used to collect log data and send it to BigQuery.

vindmi, about 10 years ago
Elasticsearch + Logstash + Kibana.

A custom NLog renderer which implements the syslog protocol, and an NLog target which pushes logs to RabbitMQ.

webjunkie, about 10 years ago
I really like Sentry (https://github.com/getsentry/sentry) for exception tracking. It's easy to set up, supports different platforms, and looks great.

therealkay, about 10 years ago
You could also take a look at Graylog (https://www.graylog.org/); it supports structured data in a variety of formats and can send alerts as well.

It's similar in spirit to Elasticsearch + Logstash + Kibana, but more integrated.

Disclaimer: I work on it, so I'm not going to say which is better, just giving another pointer.

bra-ket, about 10 years ago
1) Elasticsearch + Kibana: https://www.elastic.co/products/kibana

2) HBase + Phoenix: http://phoenix.apache.org/

3) OpenTSDB: http://opentsdb.net/

dorfsmay, about 10 years ago
My experience is that:

• open source solutions require a lot of work

• commercial solutions get very expensive very quickly

If you can narrow down how many logs you want to keep, the commercial solutions are amazing, but as you need (or think you need) to keep them longer and longer, they become prohibitively expensive.

The next time I have to tackle this issue, specifically keeping the logs forever, I will give the Hadoop stores (HBase, Impala etc.) a try. Hadoop solutions work really well for very large sets of write-once data, which is what logs are.

Sir_Cmpwn, about 10 years ago
I run services that log to plaintext files, and I use logrotate to periodically gzip and rotate them out for archival.

Just use grep to query recent logs, and zgrep if you have to dig a little.

myrryr, about 10 years ago
We stage stuff out.

After a week, it goes out of cache. After a month, we no longer keep multiple copies around. After 3 months, we gather stats from it and push it to some tar.xz files, which we store. So it's out of the database.

We can still do processing runs over it, and we do... but it is no longer indexed, so they take longer.

After 3 years, the files are deleted.

fscof, about 10 years ago
My company uses pretty basic logging functionality (no third-party services yet), but one thing we've done that's helpful when reading logs is adding a context id to help us track down API calls as they travel through our system - I wrote a quick blog post about it here: https://www.cbinsights.com/blog/error-logging-context-identifiers/
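
A minimal sketch of the context-id idea using Python's logging module; the id format and field name are assumptions, not taken from the linked post:

    import logging
    import uuid
    import contextvars

    # One context id per request, carried across the call chain so every
    # log line for that request can be grepped out later.
    request_id = contextvars.ContextVar("request_id", default="-")

    class ContextFilter(logging.Filter):
        def filter(self, record):
            record.request_id = request_id.get()
            return True

    logging.basicConfig(format="%(asctime)s %(request_id)s %(levelname)s %(message)s")
    log = logging.getLogger("api")
    log.addFilter(ContextFilter())
    log.setLevel(logging.INFO)

    def handle_request(payload):
        request_id.set(uuid.uuid4().hex[:8])  # assign an id as the request enters
        log.info("request received")
        log.info("lookup finished")           # both lines share the same id

    handle_request({"q": "test"})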

ccleve, about 10 years ago
https://logentries.com/ has worked out well for us, at least at the small scale we're using it at now. Pricing is reasonable.

The important feature for us is S3 archiving. They'll keep your logs online for a certain period of time, and then copy the old ones to S3. You don't have to get rid of anything, and you're still able to keep costs under control.

sirtopas, about 10 years ago
We use ELMAH (https://code.google.com/p/elmah/) for logging our ASP.NET/MVC apps.

It works well for us: a nice accessible UI if you need it and a solid database behind it. Also RSS/email alerts if you need them. We've got thousands of entries in there, and even on the old SQL 2005 box we use, it seems to work just fine.

nightTrevors, about 10 years ago
I'm probably the only one doing it outside a bank or hedge fund, but since kdb+ opened up their 32-bit license for free, it's been amazing to work with. Log files and splayed tables are stored neatly on disk, so backing up to AWS nightly is a breeze. It's a great solution for high-tick-rate logging of homogeneous data, especially when that data needs to be highly available in business applications.

lucb1e, about 10 years ago
You didn't specify your location, but in some countries, like the Netherlands, it's not legal to store PI (personally identifiable) data that long. There is no reason to keep access logs for 3+ years. What are you ever going to do with that data?

Like others here said, extract what you want to keep (unique visitors per day or so) and throw the rest out after a few weeks.

michaelmcmillan, about 10 years ago
I use a logging library called Winston (https://github.com/winstonjs/winston). I have it hooked up to Pushbullet with winston-pushbullet (https://github.com/michaelmcmillan/winston-pushbullet) so that when an unhandled exception or error is thrown, I get an instant notification on my Nexus 5 and MacBook.

Winston is a Node/io.js library though, but I guess you could find something equivalent in any other stack. The Pushbullet part is really useful.

Edit: I run a pretty small site, however (http://littlist.no). I don't think I would enable the Pushbullet part if I had several hundred thousand visitors per day.

RBerenguel, about 10 years ago
I just log to a file, rotating/deleting when/if needed.

mkhpalm, about 10 years ago
We generally run them through central syslog servers or directly to a Logstash TCP or UDP input. One way or another, all logs from around the world end up in an Elasticsearch cluster, where we either query for things manually or use Kibana to interact with them. Works pretty well, actually.

buro9, about 10 years ago
> It makes it trivial for us to debug any event by just querying the db. However, after three and a half years of continued use, the table is now way too large.

Why are you keeping all of the logs? Are you doing anything with them?

Are the old logs relevant at all? If your program structure has changed, then anything logged before that point isn't even applicable.

My advice: if what you have is working, but only failed because of the volume of data, apply a retention policy and delete data older than some point in time.

An example: nuke all data older than 1 month for starters, and if you find that you really don't use even that much (perhaps you only need 7 days to provide customer support and debug new releases), then be more aggressive and store less.

youknowjack, about 10 years ago
Back in 2012, we talked about our foundation for this at Indeed:

Blog: http://engineering.indeed.com/blog/2012/11/logrepo-enabling-data-driven-decisions/

Talk: http://engineering.indeed.com/talks/logrepo-enabling-data-driven-decisions/

tl;dr: a human-readable log format that uses a sortable UID and arbitrary types/fields, captured via a log4j syslog-ng adapter and aggregated to a central server for manual access and processing.

YorickPeterse, about 10 years ago
Syslog + Logentries for raw logging (e.g. "User Alice created X"). New Relic APM for performance monitoring, New Relic Insights for statistics (e.g. tracking downloads, page views, API requests, etc.).

troels, about 10 years ago
What kind of log data do you mean exactly? E.g. what's the granularity?

We have web server logs going 30 days back, on disk, managed by logrotate. Then we have error logging in Sentry. For user-level events, we track in Analytics, but we also have our own database-backed event logging for certain events. Currently this is in the same db as everything else, but we have deliberately factored the tables such that there are no key constraints/joins across these tables and the rest of the schema, which means it should be trivial to shard it out into its own db in time.

KaiserPro, about 10 years ago
It depends on what type of data you are logging.

For performance metrics we use Graphite/statsd. This allows us to log hits/access times for many things, all without state-handling code inside the app.

This allows us to get rid of a lot of logs after only a few days, as we're not doing silly things like shipping verbose logs for processing.

However, in your use case this might not be appropriate. As other people have mentioned, truncating the tables and shipping out to cold storage is a good idea if you really need three years of full-resolution data.

OhHeyItsE, about 10 years ago
Well solved via SaaS: Logentries, Loggly, Papertrail, amongst others.

fasfawefaw, about 10 years ago
> We are currently inserting our logs in an sql database, with timestamp, logType, userId, userAgent and description columns.

That's what I would do.

> However, after three and a half years of continued use, the table is now way too large.

Yeah, that's what happens...

There are many ways to handle this issue. The simplest is to start archiving your records (i.e. dumping your old records into archival tables).

Do you have access to a DBA or a data team? They should be able to help you out with this if you have special requirements.

halayli, about 10 years ago
I am biased, but you should look into a logging system like Splunk. You shouldn't be using an RDBMS for your logs; your logs don't have a schema.

With Splunk, you just output your logs in this format:

<timestamp> key1=value key2=value key3=value

Install the Splunk agent on your machines, and Splunk takes care of everything from there. You can search, filter, graph, create alerts, etc.

The Splunk indexer lets you age out your logs, and keeps the newer ones in hot buckets for fast access.
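
A minimal sketch of emitting that key=value line format from Python; the field set is an illustrative assumption, and nothing here is Splunk-specific:

    import sys
    import time

    def log_event(**fields):
        """Write one '<timestamp> key=value ...' line to stdout."""
        ts = time.strftime("%Y-%m-%dT%H:%M:%S%z")
        # Note: values containing spaces would need quoting or escaping.
        pairs = " ".join(f"{k}={v}" for k, v in fields.items())
        sys.stdout.write(f"{ts} {pairs}\n")

    log_event(logType="login", userId=42, userAgent="Mozilla/5.0")
    # -> 2015-04-26T12:00:00+0000 logType=login userId=42 userAgent=Mozilla/5.0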

edsiper2, about 10 years ago
Using a fast, scalable and flexible tool called Fluentd:

http://www.fluentd.org

Here is a good presentation of Fluentd covering its design and general capabilities:

https://www.youtube.com/watch?v=sIVGsQgMHIo

Note: it's worth mentioning that Fluentd has more than 300 plugins to interact with different sources and outputs.

znq, about 10 years ago
Specifically for mobile logging and remote debugging, you might want to check out Bugfender's remote logger: http://bugfender.com/

Disclosure: I'm one of the co-founders. We have a couple of other related tools in the pipeline, but the BF remote logger was the first we built, mostly to solve our own need at Mobile Jazz.

imperialWicket, about 10 years ago
ELK and others have been mentioned and are great tools, but if you want a simpler solution within the SQL realm, PostgreSQL with table partitions works well for that particular problem.

I agree with many comments that this isn't ideal, but setting up weekly/monthly partitions might buy you plenty of time to think through and implement an alternative solution.
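
A hedged sketch of monthly partitions. This uses the declarative PARTITION BY syntax of newer PostgreSQL releases (10+); on the versions current when this thread was written you would emulate it with table inheritance and triggers. The DSN and table names are assumptions; the columns follow the question's schema:

    import psycopg2

    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        # Parent table partitioned by month on the event time.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS logs (
                ts          timestamptz NOT NULL,
                logtype     text,
                userid      bigint,
                useragent   text,
                description text
            ) PARTITION BY RANGE (ts)
        """)
        # One child table per month; retiring a month later is a cheap DROP TABLE.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS logs_2025_01
            PARTITION OF logs
            FOR VALUES FROM ('2025-01-01') TO ('2025-02-01')
        """)
    conn.close()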

jmickey, about 10 years ago
Surprised no one has mentioned Papertrail yet - https://papertrailapp.com/

We use them for all our apps and have not seen any issues so far. It can be a bit tricky to set up, but once the logging works, it's hassle-free from then on. Pricing is also very affordable.

thejosh, about 10 years ago
Rollbar has been pretty fantastic for us.

Also New Relic, if you want to spend the money (or get it through Amazon/Rackspace for free).

abhimskywalker, about 10 years ago
Elasticsearch, Logstash and Kibana (ELK) stack.

This is very convenient for decently complex querying and analysis at great speed.

perbu, about 10 years ago
We push data into shared memory. Then we have clients that can read the memory and present it. This makes it possible to log millions of lines per second at very limited cost.

This has the benefit of making logging more or less asynchronous. You still need to handle the logs coming out of this, of course.

gtrubetskoy, about 10 years ago
If you're only looking to debug with the data, then something like Splunk ($$$) or Elasticsearch should work. However, if this is for some kind of analytical/data-science use, then you'd be better off with a format like Avro and keeping it in Hadoop/Hive.

brandonjlutz, about 10 years ago
In one of my Java projects I use Logback with a MongoDB appender. This lets me structure the logs for easy querying, plus I have access to all stack traces from all servers in one spot.

If you go this route, use a capped collection. I generally don't care about my old logs anyway.
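
A minimal sketch of the capped-collection idea, shown with PyMongo rather than Logback; the database, collection name and size limit are assumptions. MongoDB discards the oldest documents automatically once the size cap is reached:

    from pymongo import MongoClient, DESCENDING

    client = MongoClient("mongodb://localhost:27017")
    db = client["logs"]

    # Fixed-size, insertion-ordered collection: old entries fall off the end.
    if "events" not in db.list_collection_names():
        db.create_collection("events", capped=True, size=512 * 1024 * 1024)  # ~512 MB

    db.events.insert_one({"level": "ERROR", "msg": "payment failed", "userId": 42})

    # Most recent errors first (capped collections preserve insertion order).
    for doc in db.events.find({"level": "ERROR"}).sort("$natural", DESCENDING).limit(10):
        print(doc)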

true_religion, about 10 years ago
I log to Redis and scrape the logs into SQL for long-term storage. Memory is fairly cheap nowadays, so it works out for my app.

If I had a lot of logging to do, though, I'd use Elasticsearch, since that's what I run for my main DB. It handles sharding beautifully.
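
A minimal sketch of that Redis-buffer pattern; the list name and the SQLite destination are assumptions. The application pushes JSON events onto a Redis list, and a periodic job drains them into SQL:

    import json
    import sqlite3
    import redis  # pip install redis

    r = redis.Redis()  # assumption: localhost:6379

    # Producer side: the application appends events as JSON strings.
    r.rpush("log_buffer", json.dumps({"logType": "login", "userId": 42}))

    # Consumer side: a cron job drains the buffer into a relational table.
    db = sqlite3.connect("logs.db")
    db.execute("CREATE TABLE IF NOT EXISTS events "
               "(ts DATETIME DEFAULT CURRENT_TIMESTAMP, payload TEXT)")
    while True:
        raw = r.lpop("log_buffer")
        if raw is None:
            break
        db.execute("INSERT INTO events (payload) VALUES (?)", (raw.decode(),))
    db.commit()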

xyby, about 10 years ago
I track them via the analytics event-tracking API. It is really useful and full of surprises when you look at the stats:

https://news.ycombinator.com/item?id=9444862

matrix, about 10 years ago
Piggybacking on this topic: does anyone successfully use Amazon S3 as the log store for application event logging? The low cost is attractive, but at first glance it seems like the latency is too high for it to work well.

afshinmeh, about 10 years ago
It depends on the priority and importance of the events.

For instance, we use a log file for HTTP access logs, but I store all errors and warnings in MongoDB. However, I clean out the log storage every month.

We use Node.js and MongoDB at www.floatalk.com.

jakozaur, about 10 years ago
Sumo Logic (https://www.sumologic.com/)

Works in the cloud. Easy to set up and very scalable.

Free tier: 500 MB/day, 7-day retention.

Disclosure: I work there.

buf, about 10 years ago
When I'm hacking something together, I log things in... Slack.

As it grows into a seemingly usable feature, I might move it to GA or Mixpanel.

When it gets to be large and stable, then it goes into syslog.

polskibus, about 10 years ago
Could you expand on how you use the log data? How often do you query it, what time periods do you query, and have you considered building a data warehouse for your analytics?

nargella, about 10 years ago
Zabbix to monitor hardware, Logstash/Elasticsearch (Kibana for UI) to monitor service logs, Sentry for application-level logs.

At least, this is what we're moving to at work.

lmm, about 10 years ago
Exactly that, but rotating after three years. If it's from three years ago, it probably doesn't matter any more.

TheSandyWalsh, about 10 years ago
http://www.stacktach.com/

blooberr, about 10 years ago
Fluentd + Elasticsearch.

Very easy to set up.

ninjakeyboard, about 10 years ago
We're using the ELK stack - it's pretty nice.

enedil, about 10 years ago
Text file + grep + awk

jtfairbank, about 10 years ago
Check out segment.io

ratheeshkr, about 10 years ago
Test

SFjulie1, about 10 years ago
Okay, you never log to a DB in the first place.

You never fill a table with non-capped, infinitely growing records (capped = collections with an upper limit).

At best you use rotating collections (like a circular ring buffer). But anyway, if you are successful, the log flow will always grow faster than your number of customers (coupling); thus it grows more than linearly, so the upper limit will grow too.

Tools have algorithmic complexity in retrieving, inserting and deleting. There is no tool that can be log(n) for all cases and still be ACID.

The big-data fraud is about letting businesses handle growing sets of data that produce diminishing returns on OPEX.

In software theory, the more data you have, the more resources you need - a growing function of the size of your data. And that size grows faster than your customer base, and grows over time.

The more customers you have, and the longer you keep them, the more they cost you. In business terms, that is stupid.

Storing ALL your logs is like being a living being that refuses to poo. It is not healthy.

Solutions lie in sampling, or in reducing data after an amount of time, with schemes like round-robin databases.