I have always used GoAccess on my blog (<a href="https://2byt.es" rel="nofollow">https://2byt.es</a>), which gets very modest traffic because I don't post much and don't advertise outside of my few Twitter followers. Privacy has always been a core principle of mine.<p>I've found that over time, crawlers drown out the actual visitors, and I find GoAccess hard to use for pulling out meaningful data when interesting things do happen.<p>Can anyone suggest a way I can do something similar to this without relying on a service I don't host (and without having to write parsers into a SQL or similar DB by hand)?
Hmm, it occurred to me that you can probably get a nice list of robot user agents by querying for all UAs that accessed the robots.txt file. I don't think normal browsers touch that file.<p>Another thing you can do on the cheap, if you want more usable logs, is JSON logging[1] (one object per line). This is trivial to import into PostgreSQL and also trivial to query with tools like jq, as is.<p>[1] Example: <a href="https://stackoverflow.com/questions/25049667/how-to-generate-a-json-log-from-nginx#42564710" rel="nofollow">https://stackoverflow.com/questions/25049667/how-to-generate...</a>
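<p>For nginx, the log format ends up roughly along the lines of [1]; a minimal sketch, where the exact set of fields is up to you (escape=json needs nginx 1.11.8 or newer):
<pre><code># in the http block: one JSON object per request
log_format json_log escape=json
  '{'
    '"time":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":"$status",'
    '"bytes":"$body_bytes_sent",'
    '"referer":"$http_referer",'
    '"user_agent":"$http_user_agent"'
  '}';

access_log /var/log/nginx/access.json.log json_log;
</code></pre>
The robots.txt trick then becomes a one-liner, e.g. listing the user agents that fetched it, most frequent first:
<pre><code>jq -r 'select(.request | contains("/robots.txt")) | .user_agent' access.json.log \
  | sort | uniq -c | sort -rn
</code></pre>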
I've found the same issue. A lot of traffic will get blocked if you use a simple JavaScript integration. The solution is (obviously) to track from the backend and provide a simple dashboard for it. I started building a library [0] written in Go that I could integrate into my website, and by the end of last year it had become a product (in beta right now) called Pirsch [1]. We offer a JS integration to onboard customers more easily, but one of the main reasons we built it is that you can use it from your backend through our API [2]. We plan to add more SDKs and plugins (WordPress, ...) to make the integration easier, but it should be fairly simple already.<p>I would love to hear feedback, as we plan to fully release it soon :)<p>[0] <a href="https://github.com/pirsch-analytics/pirsch" rel="nofollow">https://github.com/pirsch-analytics/pirsch</a><p>[1] <a href="https://pirsch.io/" rel="nofollow">https://pirsch.io/</a><p>[2] <a href="https://docs.pirsch.io/get-started/backend-integration/" rel="nofollow">https://docs.pirsch.io/get-started/backend-integration/</a><p>[Edit]<p>I forgot to mention my website, which I initially created Pirsch for. The article I wrote about the issue and my solution is here: <a href="https://marvinblum.de/blog/server-side-tracking-without-cookies-in-go-OxdzmGZ1Bl" rel="nofollow">https://marvinblum.de/blog/server-side-tracking-without-cook...</a>
I want to second that plug for Athena for ad-hoc analysis. (If you're hosting your own stuff and at the scale where it'd be useful, there's Presto/Hive, which Athena is based on, and/or Trino, the Presto fork maintained by some of its initial developers.)<p>It was useful for me when tweaking spam/bot detection rules a while ago; if I could roughly describe a rule in a query, I could back-test it on old traffic and follow up on questionable-looking results (e.g. what other requests did this IP make around the time of the suspicious ones?). We also used Athena on a project looking into performance, and on network flow logs. The lack of recurring charges for an always-on cluster makes it great for occasional use like that.<p>You can use what the docs call "partition projection" to efficiently limit the date range of logs to look at (<a href="https://docs.aws.amazon.com/athena/latest/ug/partition-projection.html" rel="nofollow">https://docs.aws.amazon.com/athena/latest/ug/partition-proje...</a>), so it was free-ish to experiment with a query on the last couple of days of data before looking further back.<p>More generally, Athena/Presto/Hive support various data sources and formats (including applying regexps to text). Compressed plain-text formats like ALB logs can already be surprisingly cheap to store/scan. If you're producing/exporting data, it's worth looking into how these tools "like" to receive it: you may be able to use a more compact columnar format (Parquet or ORC) or take advantage of partitioning/bucketing (<a href="https://docs.aws.amazon.com/athena/latest/ug/partitions.html" rel="nofollow">https://docs.aws.amazon.com/athena/latest/ug/partitions.html</a>, <a href="https://trino.io/blog/2019/05/29/improved-hive-bucketing.html" rel="nofollow">https://trino.io/blog/2019/05/29/improved-hive-bucketing.htm...</a>) for more efficient querying later.<p>As the blog post notes, usability was... imperfect, especially during initial setup. Error messages sometimes point at one of the first few tokens of the SQL, nowhere near the mistake, and there are lots of knobs to tweak, some controlled by 'magical.dotted.names.in.strings'. The CLIs were sometimes easier than the GUI. But you can get a lot out of it once you've got it working!
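<p>To make the back-testing idea concrete, the kind of query I mean looks roughly like this; table and column names here are hypothetical, and the day filter assumes a projected date partition as in the linked docs:
<pre><code>-- requests per client IP over a two-day window (hypothetical schema)
SELECT client_ip,
       count(*)                    AS requests,
       count(DISTINCT request_url) AS distinct_urls
FROM access_logs
WHERE day BETWEEN '2021/01/01' AND '2021/01/02'
GROUP BY client_ip
ORDER BY requests DESC
LIMIT 50;
</code></pre>
From there it's easy to zoom in on one suspicious IP and pull every request it made around the time in question.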
Interesting, that's quite a large number of people running ad blockers!<p>"Both Google Analytics and Goatcounter agreed that I got ~13k unique visitors across the couple days where it spiked. GoAccess and my own custom Athena queries agreed that it was more like ~33k unique visitors, giving me a rough ratio of 2.5x more visitors than reported by analytics, and meaning that about 60% of my readers are using an adblocker."
We use goaccess against a pretty busy centralized log server and it has worked really well for years. We don't have to worry about JS, and that's always a plus. I personally like how it follows the Unix philosophy.
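<p>Since it's all command line anyway, it composes nicely with whatever ships the logs around; something along these lines (the host, paths, and log format are assumptions, adjust for your own setup):
<pre><code># pull the current log off the central server and build an HTML report
ssh logs.example.com 'cat /var/log/nginx/access.log' \
  | goaccess --log-format=COMBINED -o report.html -
</code></pre>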
With OctoSQL[0], since I wanted to see how people are using it, I literally just set up an HTTP endpoint which received a JSON request on each CLI invocation (you can see the data sent in the code, it's open source) and appended it to an on-disk JSON file.<p>Then I used... OctoSQL to analyze it!<p>Nit: The project may seem dead for a few months, but I'm just in the midst of a rewrite (on a branch) which gets rid of some wrong decisions and makes it easier to embed in existing applications.<p>[0]: <a href="https://github.com/cube2222/octosql" rel="nofollow">https://github.com/cube2222/octosql</a>
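<p>The analysis side then stays in plain SQL over the newline-delimited JSON file; roughly like this, where the field names are made up and the exact invocation depends on the OctoSQL version and how the file is registered as a data source:
<pre><code>-- run via e.g.: octosql "SELECT ... FROM invocations.json"
-- counts CLI invocations per reported version (hypothetical fields)
SELECT version, COUNT(*) AS invocations
FROM invocations.json
GROUP BY version
ORDER BY invocations DESC
</code></pre>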
I do a similar thing for my site, but instead of renting a database cluster in the cloud, I wrote a small Python script that converts nginx log files into a SQLite database. <a href="https://github.com/ruuda/sqlog" rel="nofollow">https://github.com/ruuda/sqlog</a>
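<p>Once the logs are in SQLite, the "what actually happened that day" questions become interactive queries; something like this, where the table and column names are illustrative rather than the script's actual schema:
<pre><code>-- daily hits and rough unique visitors, skipping obvious bots (hypothetical schema)
SELECT date(time)                  AS day,
       count(*)                    AS hits,
       count(DISTINCT remote_addr) AS unique_ips
FROM requests
WHERE user_agent NOT LIKE '%bot%'
GROUP BY day
ORDER BY day;
</code></pre>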
For a ready-made stack similar to what's described in the article, which you can self-host in your own AWS account, have a look at <a href="https://ownstats.cloud" rel="nofollow">https://ownstats.cloud</a>