How (if at all) can you effectively measure the speed with which someone implements new features, learns new things, etc.? I do firmly believe that the quality of developers follows a power-law distribution, but which objective metrics would you look at for this?
For feature implementation speed, function point analysis (http://alvinalexander.com/FunctionPoints/) is a good start. There are tables available that give rough industry-wide figures for hours per function point. Keep in mind, though, that these vary quite a bit with the size of the project, the competence of the team as a whole, unique constraints on the particular software, and the tools in use; we're talking a range of 0.5 to 20 hours per function point, so you need to factor those items in. You also have to be willing to break down the requirements in ways that are a little odd sometimes when dealing with more interactive applications (say, games versus spreadsheets) or ones with long-running processes (daemons versus batch jobs).

Speed of learning is much harder to quantify because people aren't uniform in the experience they already bring to the table. You'd have to administer tests of some sort across a wide variety of topics to cover both what they do and don't already know.

But the talk of "objective metrics" for the "quality of developers" sounds a little leading, and makes the whole question seem a bit off-putting. You might be well served by reading "Zen and the Art of Motorcycle Maintenance", especially the parts where Pirsig describes the difficulty of assessing Quality without already being an expert in the subject.
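A minimal sketch in Python of how those per-function-point figures might be applied; the helper name is made up, and the 0.5-to-20-hour bounds are just the illustrative range above, not a real industry table:

    # Illustrative sketch: bracketing effort from a function point count.
    # The hours-per-FP bounds are the rough range mentioned above; real
    # tables vary with project size, team competence, and tooling.
    def effort_range(function_points, low_hrs_per_fp=0.5, high_hrs_per_fp=20.0):
        """Return (best-case, worst-case) effort in hours."""
        return (function_points * low_hrs_per_fp,
                function_points * high_hrs_per_fp)

    best, worst = effort_range(120)  # e.g. a 120-FP project
    print(f"Estimated effort: {best:.0f} to {worst:.0f} hours")

The wide spread between the two bounds is the point: without knowing the team and the constraints, a function point count alone only brackets the answer.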
I don't think you can. And more importantly, I don't think you should.

So many times I have spent long hours working on a very simple task, but those long hours went into finding the best, most elegant, most efficient solution, one that covers every case and still requires the minimum amount of code.

And once you get there, you get that eureka feeling. You know that teeny-tiny little piece of code can take anything you throw at it.

How in the world would you quantify that? :)
You have to compare them to some metric in your world. In my case, if I make an estimate I have to qualify it as a "Sean" estimate. That means you need to effectively triple it for most of the team. We rate people as fractions of the Sean time continuum; there's a sketch of that scaling below.

Of course that's subjective... and impossible to compare dev A using tech 1 vs. dev B using tech 2.
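A toy illustration of that kind of normalization, assuming a hypothetical pace table (the multipliers are invented for the example; only the "triple it" factor comes from the comment above):

    # Toy sketch: normalizing estimates to one developer's pace.
    # PACE_FACTOR values are invented for illustration.
    PACE_FACTOR = {
        "sean": 1.0,         # the baseline estimator
        "typical_dev": 3.0,  # most of the team runs ~3x Sean time
    }

    def team_estimate(sean_hours, developer):
        """Scale a 'Sean estimate' to another developer's expected hours."""
        return sean_hours * PACE_FACTOR[developer]

    print(team_estimate(8, "typical_dev"))  # an 8-hour Sean task -> 24.0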
I don't think you can. The naive answer would be "how many lines of code does the person produce", but I'm sure we'd all disagree with that.

I'd say as long as the person doesn't make the same mistake over and over, and (maybe) doesn't ask the same question over and over, the person is fine.
Well, whatever your approach, you'd better include some consideration of how much crap you have to clean up after the fact.

Some people "crank it out" but don't consider the bigger picture. (They don't understand, and/or don't take time to consider, the context.)

Even when their solutions are technically correct, or "adequate", for the current definition and state of things, you may find significant problems down the road that someone else, perhaps not such a "speed demon", might have foreseen and avoided or mitigated.