I spent a while a few years back trying to track down the source(s) of the "superprogrammers are an order of magnitude more effective than mere mortals" belief. From my own experience of working with some great programmers, I believe this to be true, but I wanted to see if there was actual data to back it up. Unfortunately the study that's almost always cited doesn't really say what people think: the oft-quoted 28:1 ratio is for debugging someone else's program, and involves two entirely different scenarios: one group debugging machine code without real-time access to a computer, and the other debugging a higher-level language at the computer! In fact that's what the study was actually testing: whether working at the computer was really faster than old-school batch-job programming. Really. This was a matter of much debate in the late 60s. Proponents of batch jobs argued that allowing a programmer to actually sit at a computer and type their code there would make them much too lazy and sloppy, just letting the computer tell them when they made a mistake, rather than taking the care to get it correct first like Real Programmers™. (Reading old CS papers can be fascinating!)

I'd bet quite a lot of money that most people citing this study have not only never read it, but don't have the faintest clue what it was actually about.

http://dustyvolumes.com/archives/category/papers?s=sackman
I just cite papers I can find online. If you publish in some fancy magazine that is too expensive for my school (or my friends' organizations) to have library access to, too bad for you: I won't cite you.
I think there is also a difference between papers you have skimmed through and papers you have actually read. I notice that a lot of people cite papers in the first category, but when they are asked a question about a specific paper, they have to make a clean breast of it and admit that they only skimmed it and did not read it in any depth.