Seeing benchmarks conducted by Phoronix always fills me with emptiness inside. While I really welcome the idea of benchmarking Linux, their methodology always seems lacking to me.

All we get is a bunch of numbers, without any actual investigation of what those numbers are supposed to represent or what might explain the outcome, and sometimes the measurements simply make no sense.

For example, according to these benchmarks, Ubuntu 7.04 reads memory twice as fast as newer versions. There is no way that is a valid result, at least assuming the exact same compiled code was used on every installation. Which brings us to another problem: no information on the tests. All we get is a software name, a version number, and the result numbers. That would be almost fine if these were prepackaged binaries, but with FOSS, different compile-time options and compiler flags can make quite a difference in the results too (there's a toy example at the end of this comment).

About the nonsensical tests: RAM speed should be the same regardless of the OS, so dedicating a full page to RAM speed tests should be pointless. Except it isn't: it actually serves as a nice control for the rest of the tests, and the numbers show there's a problem somewhere. Either the tests or the measurements are significantly off, or something in the 7.04 configuration (or in the others) is flawed enough to cause an almost 50% difference on a test like this.

Also, measuring compile times: they managed to measure the time it takes to compile three pieces of software written in C, using an unspecified compiler with unspecified options.

At the end, no conclusions were drawn, just the results summarized in English instead of plain numbers. The whole thing gives me the feeling that they don't really know what they are testing; they're just running a bunch of programs and reporting the numbers they output.

I'm sorry if this reads like a rant, but I've tried a couple of times to send them emails pointing out the flaws in their methodology, to no avail.
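
To make the compiler-flags point concrete, here's a minimal sketch of the kind of memory-read loop such a benchmark boils down to. This is my own toy example, not whatever code they actually ran, and the exact numbers will depend on the hardware. Compile it with -O0 and then with -O3 and you get wildly different "RAM speed" figures from the exact same machine, which is why the flags matter; at the same time, the same binary should report roughly the same number no matter which Ubuntu release is underneath, which is exactly why it works as a control.

    /* Toy memory-read microbenchmark (my own sketch, not the Phoronix test). */
    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define BUF_SIZE (256UL * 1024 * 1024)   /* 256 MiB working set */

    int main(void)
    {
        unsigned char *buf = malloc(BUF_SIZE);
        if (!buf) return 1;
        memset(buf, 1, BUF_SIZE);            /* fault the pages in first */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        unsigned long sum = 0;
        for (size_t i = 0; i < BUF_SIZE; i++)
            sum += buf[i];                   /* sequential read pass */

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;

        /* Print the checksum too, so the compiler cannot delete the loop. */
        printf("sum=%lu, %.2f MiB/s\n", sum, BUF_SIZE / (1024.0 * 1024.0) / secs);
        free(buf);
        return 0;
    }

Without being told whether something like this was built with -O0 or -O3, or which gcc version was used, a page of "memory speed" numbers tells you almost nothing.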