Being aware of and managing my ignorance.<p>I don't know if I can describe this very well, and I've only recently started explicitly noticing it about myself, but I seem to have a well-developed intuition of how much I am "in the dark" about a particular domain, problem, technology, library, behavior, etc. I seem to sense well (and can back up with explicit arguments if necessary, but it starts with a feeling) when there's too much darkness around me, and then my primary focus must be on learning/understanding/tinkering/iterating to make it go away, rather than on groping around in it. But it's also important to know when to stop, to avoid the danger of depth-first devouring of information that can consume too much time (not unlike "Wikipedia surfing", when you suddenly come to and realize you've been reading for three hours).<p>I never imagined this to be any kind of special ability until I started noticing that some otherwise competent people seem to lack it. So perhaps it's a useful habit, and perhaps it can be cultivated, but I'm not sure how, except by trying to be aware of your ignorance as much as possible.<p>This principle seems to apply in different situations: when designing, when debugging, when writing code that interacts with someone else's code. It always pays to maintain a mental model that includes the gaps, and to estimate how important filling each gap is.<p>For example, when debugging a difficult problem, like an elusive bug or mysterious behavior, I usually make conjectures about where the problem could broadly originate, and try to rule them out one by one. Maybe I can check whether things are already "bad" before or after a particular place in the source code. Maybe I can vary or reduce interaction with other systems to rule out the problem there. I'm half-explicitly, half-intuitively dispelling the "darkness" around my understanding of the problem, forcing it to hide in fewer and fewer places. Suppose one of the plausible conjectures is that the mysterious behavior is caused by a bug in a core library or in the compiler/interpreter of the language. I may need to "dive" there and start reading a lot of unfamiliar code to learn and understand those domains, but I'll postpone that until it's absolutely necessary, and rule out the easier domains first. I'm managing my ignorance explicitly.<p>(Now that I think about it, maybe this is why many people, including myself, often prefer debug prints to working in a debugger - debug prints are good at giving you useful negative information: "it's still OK here, the bug's not here". That kind of information is harder to accumulate when stepping around interactively in a debugger. A tiny sketch of this follows below.)<p>Or imagine that you're thinking about how you will use a standard component - an SQL database, a "NoSQL" database, a network library, S3, a language, anything. Assume you understand the API. How much do you know about the constraints and limitations of that component, and how much <i>should</i> you know? I don't feel the need to throw together a piece of code that uses connect(), accept(), send() etc. before I use sockets pervasively, but if it were my first time writing a network client/server, I probably would (something like the first sketch below). I've never used S3, so merely reading about it and its API wouldn't be enough for me to start something big with it. I'd have to tinker a little first, build some intuition, dispel some darkness. All this seems rather trivial to write out, but I think we fail to act this way surprisingly often.
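To show what I mean by tinkering: here's roughly the throwaway experiment I'd run before building anything serious on sockets for the first time - a minimal echo server and client using Python's standard socket module, just enough to watch accept(), connect(), sendall() and recv() behave end to end:

<pre><code>
# Throwaway tinkering: the smallest thing that exercises a full TCP
# round trip. Not production code - just darkness-dispelling.
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()          # blocks until the client connects
    conn.sendall(conn.recv(1024))   # echo one message back
    conn.close()

threading.Thread(target=echo_once).start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello")
print(cli.recv(1024))               # b'hello' - one patch of darkness gone
cli.close()
srv.close()
</code></pre>

Ten minutes with something like this teaches you things the documentation states but doesn't make you <i>feel</i>: that accept() blocks, that recv() may return less than you asked for, and so on.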
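And to make the earlier point about debug prints concrete, here's that style of bisection in miniature - a toy three-stage pipeline (everything in it is invented for the example) where each print yields negative information that forces the bug to hide in fewer and fewer places:

<pre><code>
# Toy pipeline with a bug in one stage. Each print says "still OK
# here", ruling out everything upstream of it.
def parse(lines):
    return [float(s) for s in lines]

def drop_outliers(xs):
    return [x for x in xs if x < 100]

def mean(xs):
    return sum(xs) / (len(xs) - 1)    # the bug: off-by-one denominator

lines = ["1.0", "2.0", "3.0", "250.0"]
xs = parse(lines)
print("after parse:", xs)             # [1.0, 2.0, 3.0, 250.0] - OK, not parse()
xs = drop_outliers(xs)
print("after drop_outliers:", xs)     # [1.0, 2.0, 3.0] - OK, not here either
print("mean:", mean(xs))              # 3.0 instead of 2.0 - cornered in mean()
</code></pre>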
A classic example of unmanaged ignorance: I've seen people write multi-threaded programs in pure Python, complete with multiple threads, locks, etc., oblivious to the existence of the GIL and to the fact that they were losing rather than gaining performance (a two-minute experiment, like the first sketch below, would have exposed it). I almost did the same thing myself when I was new to Python (I still think Python hides this aspect of its behavior much too well from outsiders and beginners, and I'm a little chagrined about it).<p>Premature optimization usually falls under this principle as well. When I optimize prematurely, it's because I'm not uncomfortable enough with the amount of darkness around the behavior of my system: I don't actually have a good understanding of where the bottlenecks are now or may be in the future, but it doesn't bother me <i>enough</i>; I'm groping in the dark without realizing it. If I do realize it, I step back, look at my system with a critical eye, and do some hard measurements before trying to optimize (the second sketch below shows the crudest version of this). This will usually be a good thing.
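To make the GIL point concrete: in CPython, only one thread executes Python bytecode at a time, so pure-Python CPU-bound work gains nothing from threads. This is the kind of experiment that would have dispelled that particular darkness:

<pre><code>
# CPU-bound work, run sequentially and then on 4 threads.
# Under CPython's GIL the threaded version is no faster, often slower.
import threading, time

def burn():
    n = 0
    for i in range(10_000_000):   # pure-Python loop; holds the GIL
        n += i

t0 = time.perf_counter()
for _ in range(4):
    burn()
print("sequential:", time.perf_counter() - t0)

t0 = time.perf_counter()
threads = [threading.Thread(target=burn) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("4 threads: ", time.perf_counter() - t0)   # about the same, or worse
</code></pre>

(Threads do help in CPython when the work releases the GIL - blocking I/O, many C extensions - which is exactly the sort of constraint worth knowing before committing to a design.)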
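And on hard measurements before optimizing: even the crudest profile settles arguments that intuition only prolongs. A minimal sketch - the workload functions here are placeholders for whatever your system actually does:

<pre><code>
# Profile first, optimize second. fast_part/slow_part are stand-ins.
import cProfile, pstats

def fast_part():
    return [i for i in range(10_000)]            # the part you might "optimize" on a hunch

def slow_part():
    return sum(i * i for i in range(2_000_000))  # the part the profiler actually blames

def workload():
    fast_part()
    slow_part()

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)   # slow_part dominates
</code></pre>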