As others have pointed out in another HN discussion (1), MongoDB's reply is definitely questionable, if only for its tone.

I already replied to metheus on Twitter (2) in a thread where we asked for a way to reproduce their claims. I found their reply and comments very inappropriate, much like the comment here: arrogant and derogatory toward OnGres.

Anyway, I'm writing this to note that OnGres has replied to Mongo's reply, setting an example of how tech discussions should happen: without derogatory or arrogant comments, open to valid criticism (i.e. backed by something more than words and numbers that cannot be reproduced), and with transparency.

Check it out:
https://ongres.com/blog/benchmarking-do-it-with-transparency/

In there you'll see how MongoDB consistently misinterpreted (or misrepresented?) the results. They kept mixing up the benchmarks and repeatedly claimed an experimental driver was used and connection pooling was missing. In fact, OnGres used the official Mongo Lua driver *and the official Java driver* for different benchmarks, and they ran some of the benchmarks both *with and without* connection pooling, publishing both results (see the sketch after the footnotes for what driver-level pool configuration typically looks like).

It's really sad to see Mongo respond to a thorough benchmark like this. It probably has its flaws, but instead of correcting them or publishing the benchmark they ran themselves (the one that magically gets 240x...), they chose to mischaracterize the work of others, spreading FUD and accusing them of cheating and being dishonest.

Hopefully they'll turn around and fix it. All it takes is publishing how they got those amazing numbers so that others can comment on, reproduce, or dispute the benchmark.

(1) https://news.ycombinator.com/item?id=20479670

(2) https://twitter.com/javiermaestro/status/1151849279226556417
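For context only (not a claim about OnGres' exact setup): this is roughly what explicit connection-pool configuration looks like with the official MongoDB Java driver. The setting names come from the 4.x driver API; the host and pool sizes are illustrative placeholders, not the benchmark's values.

```java
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

import java.util.concurrent.TimeUnit;

public class PooledClientSketch {
    public static void main(String[] args) {
        // Illustrative values only; not OnGres' actual benchmark configuration.
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://localhost:27017"))
                .applyToConnectionPoolSettings(pool -> pool
                        .maxSize(50)                        // max connections kept in the pool
                        .minSize(10)                        // connections kept open when idle
                        .maxWaitTime(2, TimeUnit.SECONDS))  // how long a thread waits for a free connection
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            System.out.println("Connected to: " + client.getDatabase("bench").getName());
        }
    }
}
```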
I was at the presentation last Thursday. OnGres has fully open-sourced both their methodology and their results, and they kept a pretty strict divide between the team designing the benchmarks and the team running them.

MongoDB could create a Pull Request/Merge Request against that repository so we could all judge those results ourselves; their current response is only words and a single table showing unlikely results.

That said, I do think the criticism that MongoDB wasn't tuned is valid. However, their response is dishonest:

> with their own heavily tuned PostgreSQL.

According to OnGres, this was explicitly not the case: beyond the established norms, such as giving ~25% of memory to `shared_buffers` etc., none of the tuning normally done for big clusters was applied (a sketch of that baseline follows after the repo links).

https://gitlab.com/ongresinc/benchplatform/
https://gitlab.com/ongresinc/txbenchmark
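For what that "established norms" baseline roughly means in practice, a minimal postgresql.conf sketch is below. The value assumes a hypothetical 64 GB instance; OnGres' actual settings are in the repositories above, not here.

```
# Baseline tuning only: ~25% of RAM for shared_buffers (hypothetical 64 GB machine).
# Illustrative value, not OnGres' actual configuration.
shared_buffers = 16GB
```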