Then we complain about the sad state of the industry.<p>What about empirically proving that something works instead? Like in serious professions.<p>Imagine going to a doctor and the doctor saying: "The cure to your illness is eating a big warm turd every morning. I know you will object, but turn it into a goal and you'll see its value."<p>Better to use guru status and force people to use an objectionable idea (turn objections into goals) until they are invested in it and hence start to find value in it. This is Machiavellianism 101. Once you get invested you get emotional and you will defend it, no matter if it's shit.<p>As another example, go find the original Scrum papers by Sutherland. Every claim is pulled from his rear end. No serious studies, no metrics, none whatsoever.<p>Luckily these people do not promote brain surgery procedures and techniques.
Paradigm: Death Based Development (DBD). If the tests fail, the software engineer who wrote the code is executed.<p>Step 1: I can't do DBD because I don't want to die.<p>Step 2: I <i>can</i> do DBD if I'm willing to die.<p>Step 3: When I'm willing to accept death as a natural part of the software development cycle, I can do DBD.<p>Nice, I like this.
I love this post, even though it gives even more evidence that the original TCR proposal is unworkable in practice. All the bullet points at the bottom define different conditions that must all be satisfied for TCR to be workable (and many more are omitted), so you end up needing your unit test situation to be the intersection of a huge number of special cases that are almost never true in practice.<p>You need (TCR doesn’t fragment commit history && tests are fast && confidence in test accuracy is high && ...). As you AND more things in there (and there are many more to AND into the list of conditions necessary for TCR), the probability that all of them apply to your working situation drops rapidly, except in some extremely isolated cases. And then one might question whether it’s a good idea to adopt an extreme practice like TCR even when the heroic assumptions are satisfied, because it means mixed development practices across the cases where the assumptions are and aren’t satisfied, and that inconsistency itself costs you, especially in a team setting.<p>But the idea of reversing objections into statements of conditions is brilliant and I plan to practice thinking this way for sure!
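The intersection argument above is easy to make concrete. A toy sketch (my own illustration, not from the thread): if TCR needs N preconditions to hold, and we assume for simplicity that each holds independently with some made-up probability p, the chance that all of them hold at once is p**N, which collapses quickly as conditions get AND-ed in.

```python
# Toy model: probability that all N independent preconditions hold,
# each with probability p. The numbers are illustrative, not measured.
def p_all_conditions(p: float, n: int) -> float:
    return p ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} conditions -> {p_all_conditions(0.8, n):.3f}")
```

Even with a generous 0.8 per condition, ten AND-ed conditions leave only about a 10% chance that the whole conjunction holds for your project.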
Another way of looking at this is: all advice is conditional. It only works for people in certain setups and environments. People rarely specify their conditions, and may not even be aware of the other possibilities.<p>So when offering advice or suggestions for process improvement, always try to add at least a couple of sentences up front describing the context. It'll reduce the number of objections you get.
The stated goals are not tied to any business value. This is putting the cart before the horse. TCR is a means to an end, not an end itself.<p>Dale's essay on "Resistance as a Resource" is very good, but this is not how I would apply it.
Huh, I was bullish on “log-driven programming” (<a href="https://news.ycombinator.com/item?id=18165472" rel="nofollow">https://news.ycombinator.com/item?id=18165472</a>) but this one strikes me as fairly obvious common sense. Is it any more than “try to respond to objections in a positive way”?<p>If someone objects “I can’t use this TCR process because it will mess up version control history” it seems pretty clear that either TCR or version history will have to change.
This is a terrific article. I've used some similar techniques to reduce negativity and reflexive opposition to change (mostly borrowed from sales techniques), but this logical formulation is very clear and easy to use.
As the author admits, this isn't how logic works. Next time, use the Law of Contrapositives, which at least works when the Law of Excluded Middle is assumed.<p>Suppose, for example, I object to cats; I might say, "If cats are in the house, then they will bring me mice as tribute." The contrapositive is, "If I am not being brought mice as tribute, then there are not cats in the house." Plural logic aside, this is pretty close! And now we can turn it into a goal: "If I want mice to exist in my house in non-dead states, then I should avoid cats." Or, flipping it around: "I should get cats if I want fewer mice."<p>I'm not sure anything was gained from this exercise.
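The contrapositive claim above is mechanically checkable. A minimal sketch (my own, assuming classical two-valued logic, i.e. excluded middle holds): enumerate the truth table and confirm that p -> q agrees with ~q -> ~p in every case.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Check p -> q against its contrapositive ~q -> ~p over the full truth table.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)
print("p -> q matches ~q -> ~p for all truth values")
```

In an intuitionistic setting, where excluded middle is not assumed, only one direction (p -> q entails ~q -> ~p) survives, which is why the comment hedges on that law.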