I have to take issue with how the binary system is explained. Not because it's wrong, but because it comes off as way too intimidating for what it should be.

Binary arithmetic is not useful for the beginner. But binary logic is *extremely* useful... and more importantly, binary logic is something that non-programmers typically have a blind spot for.

Someone once asked me what digital concepts non-programmers should be familiar with. At the top of that list, for me, is the concept of everything being either 0 or 1.

For example, the answer to the question *is 2 greater than 1?* can be represented as a 1, for "true" (and the answer to *is 1 greater than 2?* as a 0, for "false"). There's a quick code sketch of this at the bottom of this comment.

So how do computers solve complex problems? By breaking a complex problem into many, many, *many* yes-or-no questions.

How do you tell if a pixel is reddish? Its R value is greater than a certain threshold (and of course, that comparison is itself a composition of multiple binary operations). How do you tell, in a given photo, whether a man's eyes are closed? Well, given the set of pixels where the eyes are located, calculate whether most of those pixels are reddish rather than white (or brown, or whatever the person's skin tone is). This too is sketched below. And how do you locate the eyes in a JPEG in the first place? Well, that's *more* binary decisions... which leads you to functions and methods and the encapsulation of complex code (another feature of programming that non-programmers should know about).

Once a novice can see how everything, whether it's comparing two numbers, deciding a web-scraping strategy, or designing an AI, can be reduced to yes-no questions, then I feel that programming becomes much more interesting: after all, they now need a way to process those countless yes-no questions in a reasonable amount of time.
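To make the 0-or-1 idea concrete, here's a minimal sketch in Python (my choice of language, purely for illustration). The point is that a comparison literally evaluates to a truth value, and that value is interchangeable with 1 or 0:

    # A comparison is a yes-no question; its answer is one bit of information.
    answer = 2 > 1
    print(answer)       # True
    print(int(answer))  # 1, i.e. "true" represented as a 1

    # In Python, True and False actually behave as 1 and 0 in arithmetic:
    print((2 > 1) + (1 > 2))  # 1 + 0 = 1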
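And here's the pixel example in the same spirit. The RGB tuple format, the function names, and the threshold of 150 are all made up for illustration; a real implementation would tune them (and a real "are the eyes closed?" check is far more involved):

    RED_THRESHOLD = 150  # arbitrary illustrative cutoff

    def is_reddish(pixel):
        # One yes-no question built out of smaller yes-no questions:
        # is R above the threshold, AND does R dominate G and B?
        r, g, b = pixel
        return r > RED_THRESHOLD and r > g and r > b

    def eyes_look_closed(eye_pixels):
        # Count the yes answers; "are most of them reddish?" is
        # itself just one more yes-no question.
        reddish = sum(is_reddish(p) for p in eye_pixels)
        return reddish > len(eye_pixels) / 2

    # Three hypothetical (R, G, B) pixels from an eye region:
    eye_region = [(180, 90, 80), (170, 100, 90), (60, 60, 200)]
    print(eyes_look_closed(eye_region))  # True: 2 of 3 pixels are reddish

The exact heuristic doesn't matter; what matters is that a fuzzy-sounding question ("do his eyes look closed?") bottoms out in a pile of 0-or-1 answers.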