In my ~25 years of professional software development, the single biggest factor in productivity for me has been whether I was involved at the start of a project. Knowing the initial design decisions, and being comfortable changing <i>anything</i>, allows me to be orders of magnitude more productive than when I'm diving into existing code designed by someone else.<p>I saw this perhaps most acutely with a company I sold - for a couple of years I was more productive than on nearly any other large software project I've worked on, because I knew the ins and outs of everything. The developers who bought it and took over are probably better developers than I am, and they are unquestionably excellent coders, yet it took a couple of years for them to get productive at making even medium sized changes. It became incredibly obvious to me how handicapped you are diving into something someone else made, especially if the original designer isn't there anymore.<p>Meshing really well with managers & PMs is probably the next biggest factor in my own experience, but it doesn't come even close to the gap between being there from day 1 vs coming in much later.<p>> Productivity tracking tools and incentive programs will never have as great an impact as a positive culture in the workplace.<p>I'm a fan of <i>choosing</i> to use time management apps and productivity tools to manage my own budgets. But I admit that I hate it when I have to do it for someone else.
The article argues that there is no useful measure that operates at a finer grain than “tasks multiplied by complexity”.<p>I think that complexity is hard to measure and therefore easy to game.<p>At GitLab we only measure tasks completed, the number of changes shipped to production, with the requirement that every change has to add value. This measure has been used throughout R&D <a href="https://about.gitlab.com/handbook/engineering/performance-indicators/#rd-overall-mr-rate" rel="nofollow">https://about.gitlab.com/handbook/engineering/performance-in...</a> to assess productivity for multiple years now with good success <a href="https://about.gitlab.com/blog/2020/08/27/measuring-engineering-productivity-at-gitlab/" rel="nofollow">https://about.gitlab.com/blog/2020/08/27/measuring-engineeri...</a><p>When you tell new engineers about this target, they see a great opportunity to game it: just ship smaller changes. It turns out that smaller changes are quicker to ship, lead to better code and tests, carry a lower risk of cancellation and of problems in production, and produce earlier and better feedback.<p>Inspired by Goodhart's Law, I'll propose the following: a measure that, when it becomes a target, improves productivity. ~Sijbrandij's Law
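For concreteness, here is a minimal sketch of what a "shipped changes per engineer per month" number can look like, in the spirit of the MR rate linked above. The merged_mrs list, the engineer roster, and the field names are invented for illustration; GitLab's actual performance indicator is computed from their own data, not from anything like this script.

    # Hypothetical data and names; illustrates "tasks completed per engineer", not GitLab's real pipeline.
    from collections import defaultdict
    from datetime import date

    # (author, merge date) for every change that shipped to production
    merged_mrs = [
        ("alice", date(2024, 3, 4)),
        ("alice", date(2024, 3, 18)),
        ("bob",   date(2024, 3, 7)),
        ("bob",   date(2024, 4, 2)),
    ]
    engineers = {"alice", "bob", "carol"}   # carol shipped nothing this period

    per_month: dict[tuple[int, int], int] = defaultdict(int)
    for _, merged in merged_mrs:
        per_month[(merged.year, merged.month)] += 1

    for (year, month), count in sorted(per_month.items()):
        rate = count / len(engineers)
        print(f"{year}-{month:02d}: {count} MRs shipped, {rate:.2f} per engineer")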
Productivity as a software developer consists chiefly in not making mistakes. That can lead to the situation where your best developers may appear to do nothing for long stretches. Research and deliberation are desirable. Blind hacking is the least valuable yet most visible activity of inexperienced programmers. All common "objective" measures of productivity such as closed tickets, lines of code, or PRs are seriously flawed.
We can't even come up with an objective way to score software itself. So how in the world are we going to go even deeper and score the process and the people that create it?<p>Sarah and Bob make clocks, but sometimes they make hats, and sometimes they make screws, or hammers, or lamps. And sometimes the things they make get sold to customers, but sometimes other employees take them home, sometimes they make parts for each other to use when making bigger projects, and often they help each other and other employees out on unrelated projects. And sometimes they do repairs too. Oh yeah they also paint portraits that hang up around the office.<p>Try coming up with a measurement for their individual productivity that is easy enough to be useful, hard to game, and cheap enough to make it worth the price.<p>The first step is to figure out how to measure the value of all the stuff they make...
Most developers like solving problems - it gives them a high. Often, without realizing it, they create the very problems they are then eager to solve to get their dose. Solving problems can be quantified, too. Unfortunately, it's hard to quantify the number of problems that never manifested because they were avoided! And avoidance often works against that first incentive. For example, solving one problem 10 times lets you close 10 tickets, make 10 PRs, and contribute a lot more LOC in a short amount of time. But one PR and one ticket that prevent not only those 10 but hundreds or thousands more in the future are counted as "less work." At a recent job, every suggestion to fix a recurring issue was answered with: "We have a bigger fish to fry." Yet we kept wasting time frying tadpoles.
The smallest organizational unit at which productivity can usefully be measured is an agile team of about 7 people. Below that size the effort of quantifying productivity exceeds any possible value of doing so, and incentivizes the wrong behaviors.<p>A good manager can get a reasonable subjective sense of individual productivity but won't be able to quantitatively measure it.
The problem is that we do not have a standard "output unit."<p>> Productivity: the effectiveness of productive effort, especially in industry, as measured in terms of the rate of output per unit of input.<p>We can all agree "lines of code" is a shit metric, and we can't say "# of bugs closed," because each will have variable difficulty and value. Programmers employed by a business are in charge of automating repetitive tasks, not performing them (the classic measure of productivity).<p>I perform UX research on APIs. Here, we standardize the "output unit" and therefore can get a better idea of a developer's productivity. Every developer performs the same task, so we can simply measure time spent.<p>There will never be an ethical solution to measure developer productivity during the workday; this isn't Ford's assembly line.
He mentions increasing salary won't lead to increased productivity ... and that's true, if the same developer remains. But what if we remove that constraint? What if increased salary means a higher quality of developer takes the position? Wouldn't this mean higher productivity?<p>Bit of a cold scenario, but one way to game it out is hypothetically removing the current dev and then hiring someone better at double the pay.<p>Or, less unfair seeming, double the pay by hiring a second dev. That might not double productivity ... depending on the situation it might 1.5x it, or just as easily 4x it.
Maybe when developer productivity measurement becomes standard across the industry, we will realise that tech workers are in fact workers. Cogs in a machine. And not independent individuals imposing their will on the world through sheer force of it, like some Randian hero. Maybe then it will be plainly evident that developers are as alienated as any service worker, and in the end just as disposable in the eyes of the shareholders.<p>Will we then organize with other workers to create better working conditions for everyone, or will there be fewer and fewer developers working with ever more powerful technology, chasing richer-than-ever VCs?
In every organization I've worked in, it was obvious who the high performers were and who the low performers were. It was obvious to everyone. The only blind spot was that people usually seriously misjudged their own performance.<p>The problem, however, is that management is always being pushed to make objective measurements. For example, to fire someone, you first have to put him on an improvement plan with objective measurements. Otherwise, you're wide open to a lawsuit over discrimination, etc. You have to be able to prove to a judge that someone isn't performing, or that you gave raises based on performance.<p>Management also gets pushed into these attempts at objective measurement by the urge to optimize the numbers, the way that works so well for a manufacturing process.
I once talked to a retired hardware engineer, a fellow who made real electronic devices, not software. He told me that, over the whole course of his career, 80% of the projects he worked on never made it to market. In other words, 4/5th of his total "productivity" turned out to be waste. Make of it what you will.
Any decently competent technical leader can tell if a developer is being productive or not. It's stupid to waste time trying to measure something that is virtually unmeasurable.
I don't get orgs that use stats like commits/LoC/PRs as KPIs. Most software engineering time ought to be spent ensuring you're building the right thing, which requires a lot of collaboration, writing design docs, thinking about the problem, etc., because 'building the wrong thing' is probably the default behavior and hard to avoid. Software engineering is only really valuable if you can easily extend and build on it to enable whatever product or service you're selling to change as the business changes. If you're churning out throw-away code you never reuse, you don't realize any of that value and you will lose.<p>I did have the idea of directly tying value to the graph of code that enabled a certain user journey. Sorta like 'CUJ-coverage' instead of test coverage. So if a user spent $20 at checkout, every line of code that was touched to enable that user's journey would be credited with that $20. I think this would be an interesting metric I'd probably respect, but there are still probably a lot of blind spots this methodology doesn't capture.
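A toy sketch of that 'CUJ-coverage' idea, assuming you already have some way to capture which lines ran while serving a journey. The trace format, file names, and record_journey name are hypothetical; a production-safe tracer is the hard part this glosses over.

    # Credit a journey's revenue to every line of code exercised while serving it.
    # All identifiers and data here are invented for illustration.
    from collections import defaultdict

    # (file, line) -> total dollars credited
    credit: dict[tuple[str, int], float] = defaultdict(float)

    def record_journey(trace: set[tuple[str, int]], revenue: float) -> None:
        """Credit every line touched during this journey with its revenue."""
        for line in trace:
            credit[line] += revenue

    # Example: a $20 checkout that executed three lines across two files,
    # plus a browse-only journey that generated no revenue.
    record_journey({("cart.py", 42), ("cart.py", 87), ("payments.py", 10)}, 20.0)
    record_journey({("cart.py", 42), ("search.py", 5)}, 0.0)

    top = sorted(credit.items(), key=lambda kv: kv[1], reverse=True)[:5]
    for (path, line), dollars in top:
        print(f"{path}:{line} credited ${dollars:.2f}")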
Productivity is important, sure. But as with all other professions in which people interact, the interpersonal skills and behavior tend to be more important IMO. Productivity can be massively impacted (positively and negatively) by how well people communicate and get along with each other.
As an individual I often wonder if my contributions are meaningful. The author says, “individual performance is best left for individual contributors to measure in themselves and each other.” How can individuals possibly measure their own performance if it can’t be measured externally?
love the classic 'are story points hours / no / then wtf are they' conversation when PMs intro jira + cousins<p>have never been sure how summing together something that is supposed to have no relationship with time magically provides an estimate of anything<p>also not sure why teams are using the central source of truth for progress as the 'daily todo list making' tool<p>I live in the real world so I estimate in hours
I have never felt my individual productivity go up.<p>It feels like as I progress my individual work stays the same, but helping others eats any efficiency gains I personally make.<p>As if when you are new to a module, you are slow because you don’t know anything, then once you have expertise, you are slow because you know everything and are helping others.<p>Would be interesting to measure this somehow.
The only way I can think of to do it: have two teams or individuals develop the same thing simultaneously and measure the time required to get a result of the same quality. This should be done over a longer term to take code quality into account (poor code quality slows down future development).
I've had a number of projects to either add features or fix a bug in large volumes of truly weird (and sometimes jerkoff) code, COLT and JES3 being particularly flagrant examples. It can take weeks to find where the bad code is, and less than half a dozen lines to fix the problem.<p>In just about any system of productivity metrics, these two episodes would mark me as dismally unproductive:<p>At the bank I was working for, the incidence rate of online banking mainframe reIPLs went from every few days to zero.<p>At a telecommunications provider, data center reIPLs were similarly reduced.
This assumes direct managers want productive developers, which is not my experience. The goal of managers is to increase the number of people they manage and get more money. I have time and again done things fast, only to have blocks put in place to slow things down: no one wants to get the job done easily and go home; where's the money in that? The inability to measure productivity is a direct result of this, IMHO.
One of the most useful programmer metrics that I've found is code churn: (new lines + deleted lines) / total changed lines. Instead of telling you how much work your programmers are doing, this metric tells you what kind of work your programmers are doing. Small numbers mean bug fixing (end of project and maintenance) and large numbers mean new development and features.
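A rough sketch of computing a churn-style ratio from git history, under the assumption that "total changed lines" means the current size of the files a commit touched (other readings of the denominator are possible). The git commands are standard plumbing, but the interpretation is mine, not necessarily the parent's exact formula; renames and binary files are glossed over.

    # Churn ratio for a commit: (added + deleted lines) / size of the touched files.
    # ~1.0 suggests new development (whole new files); small values suggest spot fixes.
    import subprocess

    def numstat(commit: str) -> list[tuple[int, int, str]]:
        """Return (added, deleted, path) for each file changed in a commit."""
        out = subprocess.run(
            ["git", "show", "--numstat", "--pretty=format:", commit],
            capture_output=True, text=True, check=True,
        ).stdout
        rows = []
        for line in out.splitlines():
            if not line.strip():
                continue
            added, deleted, path = line.split("\t")
            if added == "-":          # binary files report "-"; skip them
                continue
            rows.append((int(added), int(deleted), path))
        return rows

    def file_length(commit: str, path: str) -> int:
        """Line count of a file as it exists at the given commit (0 if missing)."""
        out = subprocess.run(
            ["git", "show", f"{commit}:{path}"],
            capture_output=True, text=True,
        ).stdout
        return len(out.splitlines())

    def churn_ratio(commit: str) -> float:
        rows = numstat(commit)
        touched = sum(added + deleted for added, deleted, _ in rows)
        total = sum(file_length(commit, path) for _, _, path in rows) or 1
        return touched / total

    if __name__ == "__main__":
        print(f"HEAD churn ratio: {churn_ratio('HEAD'):.2f}")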
You can measure productivity just fine on any tasks that repeat. How long does it take you to run the right tests, find the implementation for a failing test case, make a merge request, create a patch release, pull up the logs in case of an incident? All these tasks repeat over and over again, and a good developer can do them much more quickly.
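One low-tech way to get at this is simply to time those recurring chores yourself. A small sketch follows, with made-up task names and no claim that this is how anyone formally measures it.

    # Time recurring developer chores and report the median duration per task.
    # Task names and usage are illustrative only.
    import time
    from collections import defaultdict
    from contextlib import contextmanager
    from statistics import median

    durations: dict[str, list[float]] = defaultdict(list)

    @contextmanager
    def timed(task: str):
        """Record wall-clock time for one occurrence of a recurring task."""
        start = time.perf_counter()
        try:
            yield
        finally:
            durations[task].append(time.perf_counter() - start)

    # Example usage around a recurring chore:
    with timed("run affected tests"):
        time.sleep(0.1)   # stand-in for the actual work

    for task, times in durations.items():
        print(f"{task}: median {median(times):.2f}s over {len(times)} runs")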
Sure, it's easy. Count how many lines of code they write per day. Likewise, aeronautical engineering productivity can be measured by counting kilograms of mass added per day.<p>The real underperformers go negative.
Most software projects get managed with a ticketing system that logs the work to be done as individual tickets. Counting the number of tickets a developer closes over a given period shows how much actual work is getting closed off.<p>Measuring closed tickets is an excellent metric if the tasks are written well and assigned based on business priority. When more tickets get closed, more good things are happening with the project, be that bugs getting fixed or features shipped.