"A colleague of mine used to call C a WYWIWYG language—'what you write is what you get'—wherein each line of code roughly mapped one-to-one, in a self-evident way, to a corresponding handful of assembly instructions. This is a stark contrast to C#, wherein a single line of code can allocate many objects and have an impact on the surrounding code by introducing numerous silent indirections. For this reason alone, understanding what things cost and paying attention to them is admittedly more difficult – and arguably more important – in C# than it was back in the good ole' C days. ILDASM is your friend … as is the disassembler. Yes, good systems programmers regularly look at the assembly code generated by the .NET JIT. Don't assume it generates the code you think it does."
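To make the "single line of code can allocate many objects" point concrete, here is a minimal sketch (the Person type and the threshold capture are hypothetical, purely for illustration) of how one innocent-looking LINQ line hides several heap allocations:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Person { public string Name = ""; public int Age; }

    class Program
    {
        static void Main()
        {
            var people = new List<Person> { new Person { Name = "Ada", Age = 36 } };
            int threshold = 30;

            // One line of source, but it allocates (at least):
            //  - a closure object capturing `threshold`, plus a delegate over it
            //  - the iterator objects returned by Where and Select
            //  - the resulting List<string> and its backing array
            var names = people.Where(p => p.Age > threshold)
                              .Select(p => p.Name)
                              .ToList();

            Console.WriteLine(names.Count);
        }
    }

(The capture-free `p => p.Name` delegate is typically cached by the compiler, so not every lambda costs an allocation on every call; the point is that none of this is visible in the line itself.)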
I love it. Rehighlighting:

"good systems programmers regularly look at the assembly code generated by the .NET JIT."

Let's consider the percentages, and what they imply about Joe's assessment of programmers.

What percentage of systems programmers in any language look at the generated assembly?

Is the percentage for .NET programmers higher or lower than for the average systems programmer?

What percentage of programmers who regularly look at the generated assembly meet the unstated "other requirements" of being a "good systems programmer"?

I can't believe a very high percentage make it through this filter, for .NET or any other language.

And yet I think he's right: how can one possibly be a good programmer without understanding what the computer is actually executing? And how can you understand what the computer is executing without looking at the assembly?
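On actually looking at what the JIT generates: a minimal sketch, assuming .NET 7 or later, where the DOTNET_JitDisasm environment variable prints the machine code the JIT emits for a named method in an ordinary Release build (the variable is a real runtime knob; the program below is hypothetical):

    using System;
    using System.Runtime.CompilerServices;

    class Program
    {
        // NoInlining keeps Sum a distinct compilation unit, so its
        // disassembly shows up under its own name.
        [MethodImpl(MethodImplOptions.NoInlining)]
        static int Sum(int[] xs)
        {
            int total = 0;
            foreach (int x in xs)
                total += x;
            return total;
        }

        static void Main()
        {
            // Call it enough times that tiered compilation promotes it to
            // the fully optimized tier - that is the code worth reading.
            var data = new int[1000];
            int last = 0;
            for (int i = 0; i < 1000; i++)
                last = Sum(data);
            Console.WriteLine(last);
        }
    }

Running it with something like `DOTNET_JitDisasm=Sum dotnet run -c Release` writes the assembly for each compiled tier of Sum to stdout; sharplab.io gives roughly the same view in a browser.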
I have always felt that "premature optimization is the root of all evil" implies that you profile the code after writing it and fix the bottlenecks. Also, as long as there is no impact on code clarity, I don't think writing (prematurely) optimal code is bad.
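One example of the kind of optimization that costs nothing in clarity (a hypothetical helper, just to illustrate the point): presizing a List<T> when the final count is known up front skips the hidden grow-and-copy work without making the code any harder to read:

    using System.Collections.Generic;

    static class Example
    {
        internal static List<int> Squares(int n)
        {
            // new List<int>(n) reads as clearly as new List<int>(), but
            // avoids the repeated backing-array doubling and copying that
            // List<T> does internally as it grows.
            var result = new List<int>(n);
            for (int i = 0; i < n; i++)
                result.Add(i * i);
            return result;
        }
    }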