So I, along with some other folks from the open source community, created a log aggregation tool (a backend tool) named Bark (https://github.com/techrail/bark). Some of you might be rolling your eyes, thinking "yet another log aggregation tool in a pool of dozens upon dozens of existing ones... why does anyone build these things!?" So here are the salient points about it:

1. It consists of an aggregator server and a client, both written in Go; a Java client is in the works.

2. It tries to be very easy to set up and use: binaries and Docker images are available. Supply the URL of a database containing the required schema, start the server, and that's it. The client library is even easier to use (more on that later).

3. It sends the logs to a PostgreSQL server. Since PostgreSQL is far more familiar to most developers than a search engine (like Lucene/ELK) and easier to set up than a large APM solution, it is fairly approachable for most (though of course not all) backend developers.

Now, on paper this looks like a really bad decision, since PostgreSQL is not built for extreme insert rates, huge volumes of textual data, or full-text search. However, the goal here is to be a stepping stone between plaintext logs and enterprise-ready, terabyte-scale software like ELK, New Relic, Splunk, Datadog, etc. Bark does not aim to be an APM at all and is not targeted at installations that produce more than a few GBs of log data every day; for such use cases, other tools are a better fit. But it does try to ease the way into structured remote logging. The server handles roughly 2000-3000 logs per second without an index, and 1500-2000 with a single-column index on the timestamp column and 5 DB connections.

The Go client supports 3 modes of operation (minimal sketches of each follow at the end of this post):

1. stdout logging: the most typical case; you just dump the logs to standard output, or to a file if you prefer. This is built on the `log/slog` package from Go 1.21.

2. Sending logs to a server: you create a Bark client that connects to a Bark server, which accepts REST API calls and saves the entries to the DB. This is useful when multiple services want to store their logs in a central place. You can still do stdout logging on the client side.

3. Embedded server inside the client: this sits between the two options above. The idea is that you still have a monolith and want to send the logs to the DB (you can still dump a copy of the logs to stdout), but you don't want to set up and run another service.

The interesting point is that you can use the embedded-server mode at scale too: just create a client in each of your (multiple) services and have them all send logs directly to the DB.

I hope it helps someone other than me too.
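P.S. Here are the promised sketches of the three modes.

For the stdout mode, the heavy lifting is done by `log/slog` from Go's standard library. Below is a minimal sketch of the kind of structured stdout logging this mode builds on; it uses only the standard library and is not Bark's actual client API:

    package main

    import (
        "log/slog"
        "os"
    )

    func main() {
        // A JSON handler writing structured logs to stdout;
        // pass an *os.File instead of os.Stdout to log to a file.
        logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

        logger.Info("user signed in", "userId", 42, "method", "oauth")
        logger.Error("payment failed", "orderId", "A-1001", "retries", 3)
    }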
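For the client-to-server mode, the transport is plain REST over HTTP. Here is a hand-rolled sketch of what shipping a single entry could look like; the endpoint path (/insertLog) and the JSON field names are my assumptions for illustration, not Bark's documented API (the real client library wraps all of this for you):

    package main

    import (
        "bytes"
        "encoding/json"
        "log"
        "net/http"
        "time"
    )

    // logEntry is a hypothetical payload shape; Bark's actual schema may differ.
    type logEntry struct {
        LogTime time.Time `json:"logTime"`
        Level   string    `json:"level"`
        Service string    `json:"service"`
        Message string    `json:"message"`
    }

    func main() {
        body, err := json.Marshal(logEntry{
            LogTime: time.Now(),
            Level:   "INFO",
            Service: "auth",
            Message: "user signed in",
        })
        if err != nil {
            log.Fatal(err)
        }

        // "/insertLog" is an assumed endpoint, used here only to
        // illustrate the shape of the call.
        resp, err := http.Post("http://localhost:8080/insertLog",
            "application/json", bytes.NewReader(body))
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
    }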
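And for the embedded mode, the net effect is that your own process writes straight to Postgres, with no separate server to run. The sketch below shows that direct write path using the pgx driver; the connection string, table, and column names are assumptions for illustration (Bark ships its own schema, and the embedded mode reuses the server's write logic rather than raw SQL like this):

    package main

    import (
        "context"
        "log"
        "time"

        "github.com/jackc/pgx/v5"
    )

    func main() {
        ctx := context.Background()

        // Hypothetical connection string; point it at the DB that
        // holds the log schema.
        conn, err := pgx.Connect(ctx, "postgres://user:pass@localhost:5432/logs")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close(ctx)

        // "app_log" and its columns are made up for this sketch.
        _, err = conn.Exec(ctx,
            `INSERT INTO app_log (log_time, level, message) VALUES ($1, $2, $3)`,
            time.Now(), "INFO", "user signed in")
        if err != nil {
            log.Fatal(err)
        }
    }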