More than the open access aspect of the issue, it's the reviewing practices that appear to be the real success here.

Papers in the EECS area are typically published in so-called "top tier" conferences. These conferences like to have acceptance rates of 20% or lower. Supposedly this ensures that only the best of the best papers are published in these venues. In practice, the 20%-or-lower criterion ends up accepting a motley crew of papers with all sorts of biases. IMO papers submitted by PC members are favored, certain "hot" subfields tend to be favored, and papers written by well-established research groups are favored over papers from "unknown" groups. I'm sure there are many other biases.

From the authors' perspective, we end up playing all kinds of "positioning" games to try to increase the chance of acceptance. Maybe I really have a technique that I designed to increase performance, but the hot new thing program committees are looking for is reliability. So I'll try to sell my paper as a reliability enhancer with a side benefit of better performance. Or maybe I have a technique that works really well in practice but is just a combination of two previously known ideas. If my paper says so in plain English, there's almost zero chance of acceptance at a top-tier venue. So instead I'll go to great lengths to obfuscate the connection between the prior art and my work and spin it as a brand-new, revolutionary insight that just so happens to be vaguely related to those previously known techniques.

The original point of peer review was to (a) catch unsound experimental practices and methodologies and (b) give authors feedback to help improve the paper. The competitive nature of modern peer review seems to have lost sight of these original goals. Instead it's being used as a sort of ranking system for estimated future impact/novelty based on necessarily limited current information. The review practices in the OP seem to be going back to the original goals.