Just the other day I was in my logistics class and the professor veered into something that I believe was pure nonsense.<p>He started the lecture by analyzing how many pieces a machine could manufacture per day. Fair enough. He extended the model to measure different ratios of capacity. Makes sense.<p>Then he tried to extend the model to all machines, including humans. His example was: "How do you measure the capacity of a legal team?". I thought it was a trick question, so I answered (paraphrasing) "You can't answer that question the same way you answer it for the machine. You can't give a single metric." He told me I was wrong and that the _right_ measure would be (total number of working hours/day).<p>I was tempted to try to convince him otherwise. The analogy was deeply flawed. He measured the machines in (number of pieces/day) but measured the legal team in (hours/day). So, in analyzing a machine, you take its efficiency into account, but you don't do the same for humans.<p>I believe that is exactly what is going on in the post. Managers/logisticians/economists are very susceptible to this kind of generalization pitfall.<p>Edit: Given that this answer has generated some discussion, I feel the need to expand on it. The legal team was not expected to sell their services "by the hour". In fact, any discussion about how their services were sold was shut down by the professor. From his point of view, the lawyers were machines and he was asking the question "how much can this machine produce?"<p>Yes, other students also suggested taking the number of billable hours/revenue into account, but that's not the answer the professor was looking for.<p>I'm not arguing about whether his answer is <i>technically</i> right; I just feel it holds no real-world meaning. It was a purely academic question that led nowhere, when we could instead have had a debate about how you measure the productivity of a group of human beings. And on top of that, his final answer was definitive and (from his point of view) irrefutable.
One of my most proud accomplishments was reducing the size of a driver from ~3000 lines of code to ~800. The file was 15 years old and had been modified by many people since. There were conditionals that were impossible to hit, features that had been abandoned years ago, duplicate code, and lots of comments that didn’t match the code anymore. After my changes and a few new tests, the driver had full MCDC code coverage and the code actually matched the device specification!
This take is probably going to be controversial here, but I suspect that most metrics don't accomplish anything beyond giving control-freak managers a sense of control or insight.<p>Most complex processes can't be reduced to a handful of simple variables - it's oversimplification at its worst. The best you can do is use metrics as a jumping-off point for where something /might/ be going wrong, and from there start engaging with actual humans (or reading code/logs/some other source of feedback). Too often I've had to deal with management who go straight from metrics to decisions and end up making bad decisions (or wasting everyone's time shuffling paper to generate good-looking metrics).
The management psychology side of this wants a sequel from some veteran who was in the meetings and is now ready to confess:<p>> ... wrote in the number: -2000.<p>> I'm not sure how the managers reacted to that, but I do know that after a couple more weeks, they stopped asking <i>Bill</i> to fill out the form, and he gladly complied.<p><i>Bill</i> — but what about the rest of the team? The devil’s answer: They were expected to keep supplying the number, because line management was forwarding the stats up, having previously “sold” upper management on their value. And to admit error on such a fundamental is career-threatening.
My favorite quote about this topic:<p><pre><code> Measuring programming progress by lines of code is like measuring aircraft building progress by weight. - Bill Gates
</code></pre>
My personal point of view is that: every line of code you write is a liability. Code is not an asset; a solved problem is.<p>Edit: To clarify, I'm definitely not encouraging writing "clever" short code. Always strive to write clear code. You write it once, but it will be read (and potentially changed) many, many times.
I am a staunch code minimalist. Less code is (almost?) always better. The best, fastest, cleanest code is the code that doesn't exist at all. Always aim to write the least code. Less code means less maintenance, and less for the next person to grok when they read it.
I had a manager once tell a co-worker (who is easily one of the best programmers I have ever worked with), "There's no way this can work. There's not enough code."<p>Why we keep promoting these people into positions of management is beyond me.
There are some fascinating stories on the site, including some about Bill Atkinson (the programmer here) that really help to flesh out the sheer absurdity of asking him, in particular, to fill out a sheet like that. I believe he personally wrote something like 2/3 of the original Macintosh ROM.<p>edit: here's one that I find to be an interesting character study:
<a href="https://www.folklore.org/StoryView.py?story=Round_Rects_Are_Everywhere.txt" rel="nofollow">https://www.folklore.org/StoryView.py?story=Round_Rects_Are_...</a>
Years ago I worked for an ISP & managed hosting company as third line support. 1st and 2nd line support were pretty good, and would handle a lot of the cases that came through.<p>Generally by the time it'd reach us, it was something requiring more in-depth troubleshooting.<p>They introduced a metric to measure ticket performance. The rough idea was "the faster it's resolved, the better" (a reasonable measure, if you're also tracking customer satisfaction), combined with "the fewer interactions with the customer, the better", which was an absolutely stupid way to measure performance.<p>About a month after it came out, we were getting chewed out for our "conversion score" being low. Too many interactions with customers, and tickets taking a while to handle. No shit, we're the top tier of support. If it got to us it was <i>bound</i> to take time to resolve, and almost certainly involved a lot of customer interaction.<p>One of the engineers in the team managed to dig up how to pull up a "conversion" rate report for any support engineer, though not the code that generated the figures, and very quickly realised that the way to get a 100% conversion rate was just to resolve and immediately re-open the ticket as soon as you picked it up. We all promptly started doing that, and they stopped chewing us out.<p>If you incentivise the wrong behaviour, you're going to get results you likely don't want.
"It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove." - Antoine de Saint Exupéry
There are still companies trying to impose metrics on software development. It isn't just lines of code, it may also include commits per day. And this isn't even taking into account various "Agile"-related metrics, like story points per sprint and the like.<p>I wonder if we should name the companies who do this, or if it is fighting a losing battle? In the end, some management just wants to look at charts.
It's crazy that even in 2021, I know of teams that are still measuring productivity by LOC _only_. People just never seem to learn; it's been the same lesson since the '80s.
I track my progress on the novel I'm writing by word count. I've had more than a few negative-word-count days, which have invariably been some of my more productive days.
Disclaimer: I work for GM - this is solely my own opinion.<p>Whenever I hear people in the automotive industry boast about the complexity and lines of code in vehicles I weep and shake my head.
One of my tasks as a young developer was to make a shell script faster. The systems engineers (who used this script to set up and configure large-scale network management systems) were complaining that the "menu" took 60 seconds or more between selections.<p>Ok, sure, sounds easy. Then I opened the script... 9000 lines. After reading and understanding what it was supposed to do, I rewrote it in 1500 lines (still basic SunOS unix shell code), with reasonable use of internal data structures for caching, so only the first menu visit took a time hit. Beyond that, it was 1 second for menu selections. To say the systems engineers were pleased would be an understatement.<p>My manager was pleased but also displeased, because he was the author of the 9000 line monstrosity.
Say what you want about Steve Ballmer, but he had the right attitude towards that: <a href="https://www.youtube.com/watch?v=kHI7RTKhlz0" rel="nofollow">https://www.youtube.com/watch?v=kHI7RTKhlz0</a>
I have had code I wrote replaced by something that was 30% smaller, faster, and more stable. It was a good and humbling experience.<p>I contributed an inliner to a language about ten years ago. Inlining is a problem that might seem easy at first, but for me it was like trying to restrain a rabid dog on a leash. I was pretty damn pleased with the end result, and it served the language implementation well until about 2015. Then someone with an actual understanding of the problem worked on it for a week and produced something that I would describe as poetry.
> My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.<p>~Dijkstra (1988) "On the cruelty of really teaching computing science (EWD1036).<p><a href="https://en.wikiquote.org/wiki/Edsger_W._Dijkstra" rel="nofollow">https://en.wikiquote.org/wiki/Edsger_W._Dijkstra</a>
The best code is the code I don't write.<p>As a manager, I would value a developer who spent a week refining a small, high-quality, robust, and performant class more than one who churned out rococo monsters in a short period of time.<p>I tend to write a lot of code, and one of the things that I do, when I refactor, is look for chunks I can consolidate or remove.<p>OO is a good way to do that. It's a shame it's so "out of fashion" these days. The ability to reduce ten classes into ten little declarations of five lines each, because I was able to factor out the 300 lines of common functionality, is a nice feeling.<p>An interesting metric for me is what I see when I run cloc on my codebase: I tend to have about a 50/50 ratio of LoC (Lines of Code) to LoC (Lines of Comments).
Nowadays that should be phrased as: fewer features is better. But management doesn't understand. They always want more features to be delivered (and faster than our competitors!), and so we end up with more code to maintain and a need to hire more developers (which also becomes another excuse to migrate our stuff to microservices!).<p>Companies only want to grow; they don't care anymore about polished products.<p>Of course, I'm generalizing. There are some companies that care about the product, but they are few.
Consider the cost of lines of code in a solution. Double the lines means double the time spent to simply type them in. Double the time later to read them. Probably far more than double to re-understand them. Double the time to explain them. Double the compile time. Execution time is probably around double too.<p>So it's not just twice as good to write shorter code. It's something like 64X as good - one doubling for each of those costs, compounded. By some ways of thinking.
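For what it's worth, here is a minimal sketch of that arithmetic, assuming (as one reading of the comment above) that each of the six listed costs roughly doubles and that the doublings compound rather than merely add:<p><pre><code>
# Back-of-the-envelope: six activities whose cost roughly doubles when the
# code doubles in size, treated as compounding multiplicatively.
activities = ["typing", "reading", "re-understanding",
              "explaining", "compiling", "executing"]

penalty = 1
for _ in activities:
    penalty *= 2          # each activity costs ~2x on the doubled codebase

print(penalty)            # 2**6 == 64 -- one way to arrive at the "64X" figure
</code></pre>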
Sometimes I wish I kept a better count of my deletions, because the 'best' I ever recorded was just under 600 lines and I honestly feel a little regret that other people are managing much bigger deletions.<p>I think the real reason is that as I moved to refactoring (as part of that 600 LOC experience), my deletions per year went up but my deletions per story regressed toward the mean.
Around the turn of the Willennium, I beat a long-standing "Most C LOC approved in a month" record at Old School Big Name Inc. and you are <i>not</i> taking it away from me.<p>No one had come close to that record in a decade and I beat it and you can kiss my keyboard if you think I was lazy about it. Harumph, I say!
If you want to read about flawed metrics, how they affect production, and what to do about it, in the form of an engaging novel, check out The Goal by Eliyahu M. Goldratt. It's a classic, but it gets recommended far too seldom for how good it is.
More than the story itself, the website:<p><a href="https://www.folklore.org/" rel="nofollow">https://www.folklore.org/</a><p>is packed with stories that will make you smile, cry, or feel more enlightened. Or all of the above at the same time.
Related :) <a href="https://twitter.com/tregoning/status/1286329086176976896" rel="nofollow">https://twitter.com/tregoning/status/1286329086176976896</a>
Folklore.org is full of fascinating anecdotes.<p>I'm starving for more stories of this size from CS history. PARC, Bell Labs, wherever! I'm sure there's thousands of fun little stories out there.
Those few days that I managed -1000 lines of real code are among the happiest work-related days I had. It feels so good to find a simple solution that enables you to remove so much.
We should measure edited lines, not the difference between the number of lines after and the number of lines before! It's like working in currency exchange: there is both buying and selling.
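A minimal sketch of that distinction, using Python's difflib on a purely hypothetical before/after pair: the net line count barely moves, while the churn (lines added plus lines removed - the "buying and selling") reflects the editing that actually happened.<p><pre><code>
import difflib

before = ["a", "b", "c", "d"]   # hypothetical file contents, one string per line
after  = ["a", "x", "c"]        # one line rewritten, one line deleted

added = removed = 0
for line in difflib.unified_diff(before, after, lineterm=""):
    if line.startswith("+") and not line.startswith("+++"):
        added += 1
    elif line.startswith("-") and not line.startswith("---"):
        removed += 1

print("net delta:", added - removed)   # -1: looks like almost nothing happened
print("churn:",     added + removed)   #  3: the edits that were actually made
</code></pre>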