I've contributed a tiny amount to git (the high-level "git mergetool"), so I can't speak for all of the git developers, but I've spent enough time around them to say that the general feeling is that git's algorithm, which is "do a 3-way merge, and then look at the intervening commits to fix any merge conflicts", is good enough.

You can always try to spend more time using more data, or deducing more semantic information, but past a certain point it becomes what Linus Torvalds has called "mental masturbation".

For example, you could try to create an algorithm that notices that in branch A a function has been renamed, and in branch B a call to that function was introduced, and that, when you merge A and B, automatically renames the call that was added in branch B. That might be closer to "doing the right thing". But does it matter? In practice, a quick trial compile of the sources before you finalize the merge will catch the problem, and that way you don't have to start adding language-specific semantic parsers for C++, Java, etc. So just because something *could* be done to make merges smarter doesn't mean that it *should* be done.

Something similar is going on here. Yes, if you prepend and append identical text, a 3-way merge can get confused. And since git doesn't invoke its extra resolution magic unless the merge fails, the "wrong" result, at least according to the darcs folks, can happen. But the reason git has chosen this approach is that Linus wanted merges to be fast. If you had to examine every single intermediate commit to figure out what might be going on, merges would become much slower, since in real life there will be many, many more intermediate commits that darcs would have to analyze. Given that this situation doesn't come up much in real life (notwithstanding SCM geeks who spend all day dreaming up artificial merge scenarios), it's considered a worthwhile tradeoff.
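
To make the prepend/append confusion concrete, here is a minimal sketch (my own construction, not the example from the article) that drives git's file-level 3-way merge, "git merge-file", from Python. One branch duplicates a small function verbatim; the other branch fixes a line inside what its author thinks of as the original. A line-based merge cannot tell which copy is "the original", so it silently attaches the fix to whichever copy the underlying diff happened to align with the base:

    import os
    import subprocess
    import tempfile

    BASE = """int f(int x)
    {
            return x + 1;
    }
    """

    # "Ours": the function has been duplicated verbatim, with a blank separator.
    OURS = """int f(int x)
    {
            return x + 1;
    }

    int f(int x)
    {
            return x + 1;
    }
    """

    # "Theirs": a one-line fix inside the original f().
    THEIRS = """int f(int x)
    {
            return x + 2;
    }
    """

    def write(dirname, name, text):
        path = os.path.join(dirname, name)
        with open(path, "w") as fh:
            fh.write(text)
        return path

    with tempfile.TemporaryDirectory() as tmp:
        ours = write(tmp, "ours.c", OURS)
        base = write(tmp, "base.c", BASE)
        theirs = write(tmp, "theirs.c", THEIRS)
        # git merge-file folds the base->theirs change into ours; -p prints the
        # result to stdout instead of rewriting ours.c in place.
        result = subprocess.run(["git", "merge-file", "-p", ours, base, theirs],
                                capture_output=True, text=True)
        print(result.stdout)
        # The merge completes without a conflict, and "return x + 2;" lands in
        # exactly one of the two identical copies; which one depends purely on
        # how the diff aligned the duplicated text, not on anyone's intent.

Neither placement is more "correct" from the file contents alone; recovering the author's intent would take exactly the kind of extra history analysis (or language-aware parsing) whose cost is the point of the tradeoff above.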