I'm not sure I follow. So you failed to measure software productivity in lines of code, therefore it follows that "There's No Such Thing as Software Productivity"? Don't you think that giving up after n=1 attempts at measuring software productivity might be a <i>tad</i> too fast to draw a generalized claim of impossibility? I might argue the real lesson learned is "Lines of Code are Not a Measure of Productivity in an Isolated, Toy Example".<p>I suspect this sort of thing gets promulgated because it kind of massages our ego, like yes, they can measure <i>other</i> sorts of productivity, but not <i>ours</i>, oh no, we're too complex and intelligent, there's no way to measure the deep sorts of work that we do! Which, yes, OK, we're not exactly bricklayers, but surely, if you had to, you could do better.
You can say this about any discipline. The root of the issue is that productivity for productivity's sake is meaningless, and it makes <i>no sense</i> to measure productivity as a general property when outputs <i>vary</i>.<p>A tire factory has a distinct, singular goal: produce tires. It does this continually. Productivity is meaningful, but <i>only</i> in relation to a target that is typically specified by externalities (e.g. amount of demand).<p>A software company is usually not in the business of producing consumable <i>commodities</i>, so this kind of measurement does not make sense. It can make sense to measure productivity during a period for delivering a particular piece of software within a given time bound, but once it's delivered, productivity becomes meaningless. You always need to understand productivity in relation to some <i>purpose</i>, and I don't know how these knuckleheads, who treat this abstract idea as if it were a concrete measurable essence like mass or volume, got leadership positions.
> I would argue that what good software developers do is remove problems. The opposite, in fact, of production<p>But something is being produced - it is version 2.0 of the software. This is an artifact that is then shipped to users or deployed to a server. Peter’s solution fixed the issue and did not (seemingly) create further maintenance burden, which would have taken attention away from other tasks, i.e. reduced future productivity.<p>I agree that metrics for programmer productivity are often useless (e.g. using lines of code is a bad idea for obvious reasons), but it seems silly to claim that the entire concept of productivity does not apply to the production of software.
Let's say that you have two runners running the same marathon. The first one, Frank, sprints at full speed and eventually tires out and slows down. The second runner, Peter, takes a nap first and then finishes the marathon at the exact same time. Which of these two runners was faster in the race? The answer is: It doesn't matter. And therefore there is no such thing as running speed.<p>There is such a thing as productivity in programming. If you could measure it, it would likely be some combination of peer review and an analysis of the impact that implemented features and fixes had. Some companies actually have programmers rate each other. I don't know how well it works, and I think it can lead to perverse scenarios. But you can come up with metrics that are positively correlated with productivity.
Sure there is. It's a ratio of outputs to inputs. Even in the example, the inputs and outputs are measurable.<p>The only thing this article gets at is that engineers may not know how to calculate their own productivity; but that doesn't mean it's not calculable.
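To make that concrete, here's a minimal sketch of productivity as an output-to-input ratio. The function name and the numbers are invented purely for illustration:

```python
# Productivity as a simple output-to-input ratio.
# What counts as "output" and "effort" is a modeling choice; the
# figures below are made up for illustration only.

def productivity(output: float, effort: float) -> float:
    """Units of output produced per unit of input effort."""
    if effort <= 0:
        raise ValueError("effort must be positive")
    return output / effort

# In the article's example, both developers solved one problem in one
# working day, so this ratio calls them equally productive:
frank = productivity(output=1, effort=1)
peter = productivity(output=1, effort=1)
print(frank == peter)  # True
```

The hard part, of course, is choosing what counts as "output" and "input" — which is exactly where the disagreement in this thread lives.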
I think this is just a semantic quibble based on a narrow interpretation of the word "produce".<p>Because it also means:<p>"cause (a particular result or situation) to happen or exist."
I realized why this post rubs me the wrong way.<p>It complains, but doesn't offer a solution. It simply criticizes and says "all engineers cannot and thusly should not be measured".<p>The ironic thing is, the blog post is implicitly measuring by not explicitly measuring. The measurement is the bug ticket itself and whatever value is attached to it.<p>But to this end, I generally agree. There are qualitative and quantitative measurements. Quantitative is the value of the ticket commonly ascribed by the team (scrum? agile? whatever). Qualitative should come up in review.<p>Qualitative is SO HARD. Top down? Team 360? Mixed? But it must be undertaken and refined by the team at each level of the org. Otherwise you will run into the exact situation described by the blog post and you won't know how to judge left from right, good from bad. Maybe the blog post's example isn't that great; too much information is missing to make a solid judgement. But you still need to decide who to reward via promotions and annual raises, who to reprimand, and who to leave as-is.<p>Still, all systems are terrible, but you must pick one lest it be picked for you.
Isn't this whole debate a repeat of the old talking point about what is value? Labour theory of value, that kind of thing?<p>This is simply never going to end if carried on along the lines of this article, or along the lines of most of the comments here.<p>There's no way to objectively and reasonably put a value on something.<p>All we have is a theory of subjective value, which does a bit of handwaving about utility and works out some ways where we can come to a price, regardless of the fact that the values in peoples' heads are subjective.<p>Thus the distinction between knowledge work and "tangible work" like bricklaying is actually a moot point. Yes, you can measure "productivity" of bricklaying in metres per day, but ultimately you care about value, not amounts of wall.<p>The arguments about one guy definitely being more productive than another are similar. One person values speed, another values maintenance costs downstream. It is subjective what ought to be more important.
In the scenario described in the article, Peter and Paul both achieve the same outcome in the same wall clock time. Obviously they are equivalently productive despite different working styles, by construction.<p>But this doesn't account for the more realistic examples of Prakash, who completely fails to deliver a working solution, or delivers half a solution, and Percy, who gets it done two weeks late. I'm pretty sure you can define a shit_done/time_elapsed productivity metric for those two guys that is worse than that of Peter and Paul.<p>Maybe I am a cynic but I suspect that some people are upvoting this because the framing makes them feel OK about getting paid six figures to work three hours a day...
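A toy version of that metric, with made-up numbers for the four hypothetical developers, might look like:

```python
# A toy done-per-time metric. All figures are invented to illustrate
# the parent comment's point: equal outcomes in equal time score
# equally, while partial deliveries and missed deadlines score worse.

def rate(fraction_done: float, days_elapsed: float) -> float:
    return fraction_done / days_elapsed

peter   = rate(1.0, 1.0)    # delivered, on time
paul    = rate(1.0, 1.0)    # delivered, on time, different working style
prakash = rate(0.5, 1.0)    # delivered half a solution
percy   = rate(1.0, 15.0)   # delivered, but two weeks late

print(peter == paul)                      # True: indistinguishable by construction
print(prakash < peter and percy < peter)  # True: both score worse
```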
That's why bugs and chores have 0 points by default in Pivotal Tracker. That's also why there was a push to force us to say "as a user I..." at the beginning of story descriptions, to make sure each story creates user value. With these guards in place, I don't follow the argument that productivity is not measurable: if a team builds features that solve an expressed user need, and let's even say go through a final user acceptance check, that is a productive process and it is also very measurable!
YUP<p>My first rule of software programming is: Work Hard to Avoid Writing Code<p>— Code is habitat for bugs; more code equals more bugs, and interactions between separate parts of code can harbor even more "interesting" varieties.<p>— Code takes time to run. Any NoOp is faster than even your best hand-tuned assembly module.<p>Obviously, this is not absolute in any sense — it breaks down as soon as you need the system to actually do something, at which point you must write <i>some</i> code. But it should be the minimal amount to get the job done, and nothing more.<p>Obligatory car analogy: While doing some amateur sportscar racing, a coach asked me<p>"What are the things you do that slow the car down?".<p>I thought for a moment and started saying "when I start into a corner, if I do a bit too much...",<p>He interrupted, saying "NO, no, what are the <i>BIG</i> things you do that slow the car down?".<p>"Oh, like braking and turning and lifting off the throttle?".<p>"Right. So what that means is that you should <i>always avoid doing those things</i>. Obviously, you will certainly have to do some of them as soon as you approach the end of the front straight after the start, but make sure you understand your car, the track, and your skills to the point where you do only the absolute minimum."<p>Both the software and sportscar versions are deceptively simple — they take a LOT more thinking than it seems at first glance. And that thinking is totally worth it.
Solving problems is productivity. That also means solving them in a way that the solution doesn't spawn new problems down the road, which is why we should follow best practices. I can't say whether deleting 100 lines is better than adding 1K lines, because before passing that judgement I would need to see what exactly those 100 and 1K lines are. OP is arguing too black and white IMO, and draws the conclusion before I'm sold on the premise.
Impact is the only useful metric.<p>It is staggeringly hard to measure.<p>Output is a weak proxy for impact. But it’s the one that makes intuitive sense to people. Doesn’t make it right or useful. I’m sure you all can envision a parable about your subfield of expertise that showcases how a seemingly light touch has a huge positive impact.
So if you have a developer who writes no code and closes no tickets, who essentially does nothing except come to meetings, are they just as productive as a developer who writes some code and closes some tickets? Obviously there is a difference there. Negative code is still doing something as well. Back in the 90s we used the Personal Software Process from the SEI, and we measured lines added, deleted, and changed (as well as defects removed and added).<p>It becomes clear that simple quantity of code and tickets is not enough, but it's also not nothing. Part of what is missing is quality and an assessment of task complexity. Part of what is missing is the other parts of the job, like design and code reviews.<p>I don't think it's hopeless, and it can at least be used to look into why some people don't seem to produce much at all.
You could always look at things in a more abstract way ;)<p>From one point of view, the <i>users</i> of the software are supposed to enjoy so much of a productivity increase that it's not supposed to matter if the coders are as productive as they could be or not.<p>Give or take a few hundred percent at least.<p>I realize that most people who've been to business school still aren't going to develop the needed acumen to handle a situation like this.<p>Too many times the only training retained is a knee-jerk over-reaction to a fraction of a percent :\<p>Ever see one of these "leaders" have a cow and it was as stupid as it could have possibly been?<p>I'm confident there are still <i>some</i> natural leaders that can thrive without worrying about every ounce of nose to the grindstone for their staff.<p>Some things you just can't fix.
Asking a software engineer to be more productive is akin to asking a mechanical engineer to be more productive. What does that end up looking like? More useless blueprints? It turns out when you ask software engineers to just crank out code, you just get lots of code.
Productivity in economics refers to how many units of output you can generate with a given amount of input; it doesn't take into account quality, usefulness, etc. Complaining "lines of code is not a sensible productivity measure" is too kind to the concept of productivity.<p>Using productivity as a metric leads to the same non sequitur stuff in many, many fields.<p>It works somewhat for industrial output when you're working with commodities. Or it can work in a few more fields as an ancillary measure if you pair it with some other measure of quality, customer satisfaction, outcomes, etc. But usually you don't want to maximize work while getting good outcomes; you just want the good outcomes.
Good productivity is about being able to do all tasks as planned. That includes the management, since its job is to assign tasks. And whoever creates the tasks. And whoever defines the objectives those tasks are supposed to address.
The difficulty of measuring productivity is particularly felt by senior developers. They save time and effort in ways that are non-obvious, which might be measured by dependencies that were not added, design patterns that were rejected, or processes that they pushed back on. Just like with living things, unchecked growth is unhealthy for an organism but the actions required are difficult to measure. One could start an attempt with a counter-factual narrative, but this does not map cleanly to KPIs.
Ok, but ironically the article does explain by a comparison of two alternatives just how to measure productivity—-solving problems most efficiently (least amount of maintainable code).
You can at the very least count the problems that were solved (or deleted), and you can also probably measure the value those solutions have in revenue or another metric.<p>It's still true that measuring lines of code, time spent coding, commits, or anything else is at best a proxy for productivity. It's also true that without any code changes, problems more often than not don't get solved, or at least we can't call the activity software development.
I do appreciate the little coda at the end - nice that the author was self-aware enough to realize where the cruft was and cut it (and courteous to the reader!)
Time spent on "essential complexity" (new features, key fixes, performance) versus time spent on "accidental complexity" (anything and everything else).<p>Easy metric to understand, easy metric to teach; just remember that it applies to teams, not individuals.<p>See: Out of the Tar Pit.
And yet somehow, software developers are doing <i>something</i> that's worth paying them a salary. And it's not always impossible to tell if someone's being held back at a lower salary band than they should be, or was promoted into a salary band that's higher than they're able to keep up with.
Wholeheartedly agree. SWEs are not factory workers. They don't punch in at 6am, put on their uniforms, and make sure they meet their daily "lines of code" quota before clocking out at 10pm. We should not measure software productivity the same way we measure ball-bearing production.<p>Using lines of code to track productivity is absurd (do people really believe it, or is it just a strawman at this point?). I'm reminded of that midwit meme where the junior has very few lines of code written because they don't know the code base well enough, the midwit writes up a whole framework, and the senior engineer has a net-negative lines-of-code contribution.<p>There's also a fundamental difference between creating and maintaining code. Something like 10 guys wrote VisiCalc. Does that mean they were contributing millions of dollars in profit per hour? What about the maintenance to keep it going? Bug fixes? Patches? On-call infra guys? What about the opportunity cost of putting engineers on dead-end projects?<p>My point is that tracking productivity in software dev — maybe all knowledge work, for that matter — is complicated. Maybe that's why there's so much "busywork" (emails, Slack, tickets, meetings, etc.). Everyone wants to look productive, but no one knows what that means.
<p><pre><code> "Put another way, productivity has no applicability as a metric in software.
"How much did we create today?" is not a relevant question to ask. Even if it
could be measured, productivity in software does not approximate business value
in any meaningful way. This is because software development is not an activity
that necessarily produces anything.
</code></pre>
This is ridiculous, irrelevant, and wrong. Of course software development produces things. It produces software.<p><pre><code> Which of these two developers was more "productive" today? The answer is: It doesn't
matter. What matters is that Peter solved the problem, while simultaneously reducing
long term maintenance costs for the team. Frank also solved the problem, but he
increased maintenance costs by producing code, and so (all other things being equal)
his solution is inferior. To call Peter more "productive" is to torture the metaphor
beyond any possible point of utility.
</code></pre>
Ohhhhhhhhh. I get it. The author doesn't know what the word productivity means.<p>Productivity does not mean "increases business value while decreasing maintenance costs [and having no net negative impact in any way]". It doesn't even mean "solving a problem".<p>Productivity just means "to make something", or more specifically the <i>rate</i> at which something is made. That's all. You can make 10x more of something, and it can be garbage quality, but you did make it, and you did make more of it, so your productivity increased.<p>If you produce 10x more grain than you did yesterday, you are more productive. The grain might now be full of heavy metals, pesticides and toxins. But you did in fact produce more grain. If you were trying to measure <i>productivity of usable, healthy, high-quality grain</i>, that is a different measurement than just <i>productivity of grain</i>. You may assume everybody knows what you mean when you say "productive", but you'd be wrong.
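The grain example can be made concrete. A sketch, with hypothetical harvest figures, distinguishing raw productivity from quality-filtered productivity:

```python
# Hypothetical figures: output grew 10x overnight, but most of today's
# grain is contaminated and unusable. Two different "productivity"
# measurements give two different answers.

yesterday = {"tons": 10.0, "usable_tons": 10.0}
today     = {"tons": 100.0, "usable_tons": 5.0}

raw_change    = today["tons"] / yesterday["tons"]                # 10.0
usable_change = today["usable_tons"] / yesterday["usable_tons"]  # 0.5

print(raw_change)     # 10.0 -- "productivity of grain" went up 10x
print(usable_change)  # 0.5  -- "productivity of usable grain" halved
```

Same farm, same day, opposite conclusions — which is the parent's point: the word "productive" is meaningless until you say productive <i>at what</i>.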
Strawman argument. It would have been (semi) interesting if the title had been "Software productivity can't be measured by lines of code" (though that point is already very eloquently stated in <a href="https://folklore.org/Negative_2000_Lines_Of_Code.html" rel="nofollow">https://folklore.org/Negative_2000_Lines_Of_Code.html</a>).
And yet … there are some engineers who manage to solve hard problems to get something new working and debug it, however long it takes, and many others who don’t and/or can’t.
"There is no such thing as software productivity." != "There is no such thing as a simple, objective software productivity metric".