This has been attributed to so many places. I actually spent some time coming up with an algebraic proof that it worked when I was in college. It turns out that it is perfectly clear what is happening if you write both numbers in binary, as another poster has pointed out.<p>For historical purposes, though, the question is: why? I think there is a very simple reason. Before you have place-value number systems, most number systems are derived from tally sticks. This makes some functional sense, and once you think about it in terms of a tally stick, Roman numerals make perfect sense. For example, consider the number XXIV. It represents a position on a tally stick:<p>IIIIVIIIIXIIIIVIIIIXIIIIVIIIIXIIIIVIIIIXIIIIVIIIIL<p>Count to the second X and then forward to the mark just before the next V. IXL works the same way: count backwards one X and then one I from the L. So while our system of numbers today is based on the position of digits, many earlier systems were based on visualized markings on tally sticks; you have numbers without a base system. This divide-and-add approach works very well when you don't have a positional number system: you can take your tally stick, halve the count marked on it, and double the count on the other stick. I cannot think of another way to do multiplication on a tally stick.<p>Interestingly, tally sticks of a slightly different sort were in use as tax receipts in Britain up until the early 19th century, and in 1834 the British government's attempt to get rid of these humble staves leveled both Houses of Parliament. Before you have widespread numeracy, tally sticks are how you track amounts for everything from debts to receipts. It isn't surprising that this would be a widespread method.
Since dividing by two (ignoring the remainder) is just a right shift, and multiplying by two is just a left shift, this means you can do multiplication with just <<, >>>, and + (plus a test of the lowest bit to decide whether to add).<p>Of course, here's another, er, related algorithm. Take your two numbers A and B. Make a variable called C, initially set to 0. Iteratively decrease A by 1 and increase C by B. Do this until A = 0. Then C is your answer. Everyone knows this technique, it's obvious. But it means that you can do multiplication with just + and --.<p>I wonder what other ways you can do multiplication with a limited set of mathematical operators.
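Following up on the shift-and-add idea, here is a rough Ruby sketch (the name shift_multiply and the 673 x 7 example are mine, and Ruby's >> stands in for >>>); strictly speaking you also need a way to read the lowest bit, done here with &:<p><pre><code> # Multiply non-negative integers using only shifts, a bit test, and +.
 def shift_multiply(a, b)
   product = 0
   while a > 0
     product += b if (a & 1) == 1  # lowest bit of a is set: this doubling of b contributes
     a >>= 1                       # halve a, dropping the remainder
     b <<= 1                       # double b
   end
   product
 end

 puts shift_multiply(673, 7)  # => 4711
</code></pre>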
I've heard this referred to as the "Peasant's algorithm," though I don't know why. Interesting fact: In binary, the peasant and traditional multiplication algorithms are actually step-for-step identical.
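To see that concretely, here's a quick Ruby check (13 x 7 is just an arbitrary example of mine): the rows the peasant method keeps are exactly the partial products of binary long multiplication, one for each set bit of the halved number:<p><pre><code> m, n = 13, 7
 kept = []                        # rows the peasant method keeps
 while m >= 1
   kept << n if m.odd?
   m /= 2
   n *= 2
 end

 partials = []                    # partial products of binary long multiplication
 13.to_s(2).reverse.each_char.with_index do |bit, i|
   partials << (7 << i) if bit == "1"  # one partial product per set bit of 13 (0b1101)
 end

 p kept      # => [7, 28, 56]
 p partials  # => [7, 28, 56]
 p kept.sum  # => 91, i.e. 13 * 7
</code></pre>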
Some folks have pointed out the relationship to binary multiplication. Here's a TED talk that discusses the long history of binary math in ancient Africa.
<a href="http://www.ted.com/talks/ron_eglash_on_african_fractals.html" rel="nofollow">http://www.ted.com/talks/ron_eglash_on_african_fractals.html</a>
Spoiler: Boole got the idea from Africa by way of the Middle East!
My dad wrote a short article on this in the 1980s:<p><a href="http://darrow.faithweb.com/peasantmultiply.htm" rel="nofollow">http://darrow.faithweb.com/peasantmultiply.htm</a>
In Ruby:<p><pre><code> # Ethiopian Multiplication
# usage: ruby em.rb 673 7
m, n = ARGV.map(&:to_i)
product = 0
while m >= 1
puts "%4d : %4d %s" % [m, n, m.even? ? "Ignore" : ""]
product += n unless m.even?
m /= 2  # halve; integer division drops the remainder
n *= 2  # double
end
puts "Product: #{product}"
</code></pre>
<a href="https://gist.github.com/3660240" rel="nofollow">https://gist.github.com/3660240</a>
My son was taught this method at school as THE way to do multiplication. Imagine my confusion when he brought homework back and I started doing multiplication the way we were all taught, only to be told "DADDY, you don't know anything!".
Made me think of the Trachtenberg Speed System:
<a href="http://mathforum.org/dr.math/faq/faq.trachten.html" rel="nofollow">http://mathforum.org/dr.math/faq/faq.trachten.html</a>