We should probably be using the absolute value of lines of code, abs(LOC), as the metric, or something like a least-squares estimate of the moving average of LOC per day.<p>These days my daily average LOC is probably negative, since I tend to rescue floundering projects. I avoid object-oriented (OO) programming whenever possible: I've found that functional one-shot code taking inputs and returning outputs, with no side effects, built mostly from higher-order functions, is one to two orders of magnitude smaller and simpler than OO code.<p>I also have a theory that OO itself is what limits most programs to around a million lines of code: the human mind can't simulate the state of classes with mutable variables beyond that size. Shoot, I have trouble simulating even a handful of classes now, even with full tracing and a debugger.<p>I'd like to see us move past LOC to something like a complexity measurement of the intermediate code or tree form.<p>And on that note, my gut feeling is that none of this even matters. The world is moving toward results-oriented programming, where all that matters is maximizing user satisfaction relative to the cost of development. So acceptance-test-driven development (ATDD) should probably be the highest priority, then behavior-driven tests (BDD), then unit tests (TDD). These take at least as long to write as the code itself; I'd even argue they express the true abstractions of a program, while the code itself is just implementation detail. Maybe we should be using user stories implemented per day as the metric.
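To make the "functional one-shot code with higher-order functions" point concrete, here's a minimal sketch in Python (my choice of language and names, not the original commenter's): a pure inputs-to-outputs pipeline composed from higher-order functions, with no mutable state or side effects.

```python
from functools import reduce

def pipeline(*fns):
    """Compose functions left to right into a single one-shot function."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

# A small example transform: each stage is a pure function of its input.
normalize = pipeline(
    lambda words: (w.strip().lower() for w in words),  # clean each word
    lambda words: (w for w in words if w),             # drop empties
    sorted,                                            # return a new list
)

assert normalize(["  B", "a ", ""]) == ["a", "b"]
```

Because every stage only maps input to output, there is no object state to simulate while reading it; the whole behavior is visible in the composition.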
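As a sketch of what a "complexity measurement of the tree form" might look like (an assumed illustration, not a proposal from the comment): a crude McCabe-style count of decision points taken from the parsed AST rather than from raw line counts, using Python's standard `ast` module.

```python
import ast

# Node types that add a decision point (an illustrative choice of branches).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp, ast.comprehension)

def tree_complexity(source: str) -> int:
    """Cyclomatic-style complexity: 1 plus the number of branch nodes."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

straight = "def f(x):\n    return x + 1\n"
branchy = "def g(x):\n    if x > 0:\n        return x\n    return -x\n"
assert tree_complexity(straight) < tree_complexity(branchy)
```

Unlike LOC, this metric is unchanged by reformatting or renaming; only the branching structure of the tree moves it.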