This reminds me of a talk that Leslie Lamport (creator of LaTeX and a prominent figure in distributed computing) gave recently [1]. I remember him arguing that the hard part of writing code is not figuring out how to compute something, but deciding what that something is in the first place. "Logic errors" are really valid algorithms that compute the wrong thing: they compile, they run, but they don't do what you want them to do.<p>One example he gives is computing the maximum element of a sequence of numbers. It's trivial to implement, but you have to decide what to do with the obvious edge case: the empty sequence. One solution is to return some kind of error or exception; another is to extend what we mean by the largest element of a sequence the way mathematicians typically do. The maximum function can be extended to empty sequences by defining max([]) := -infinity, the same way empty sums are usually defined as 0 and empty products as 1. The alleged benefit of the second approach is that it should lead to simpler code/algorithms, but it also requires more upfront thinking.<p>[1] <a href="https://www.youtube.com/watch?v=tsSDvflzJbc" rel="nofollow">https://www.youtube.com/watch?v=tsSDvflzJbc</a>
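<p>Incidentally, Python's built-in max() supports both conventions: it raises ValueError on an empty sequence by default, but its `default` keyword lets you opt into the mathematician's extension. A small sketch (my own illustration, not from the talk):

```python
import math

def maximum(seq):
    # Adopt the convention max([]) = -infinity, so callers
    # never have to special-case empty input.
    return max(seq, default=-math.inf)

print(maximum([3, 1, 4]))  # 4
print(maximum([]))         # -inf
```

With this definition, identities like max(a + b) == max(max(a), max(b)) hold even when one side is empty, which is exactly the kind of simplification the second approach buys you.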