Strawman. Of course mapping followed by reducing isn't new; nobody serious ever claimed it was (the Dean & Ghemawat paper certainly doesn't). What is new is systems dedicated to performing map-reduce operations over tera/petabytes of data on giant clusters of commodity hardware, if only because data at that scale, and the resources to store it economically, have only recently become widely available. MapReduce, as a term, refers to those systems, not just the act of mapping and reducing.
For the longest time after I read about MapReduce, I wondered what the big fuss was about. MPI has equivalent functionality (Broadcast/Scatter and Reduce) along with many other useful high-level communication functions. It does, however, restrict you to a limited (and unpopular) set of languages for web programming.
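To the point that the primitives themselves are old news: the map/shuffle/reduce pattern is a few lines in any language. A minimal word-count sketch in plain Python (function names are illustrative, not taken from any particular framework):

```python
from collections import defaultdict

# Map phase: emit (word, 1) pairs from each input document.
def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

# Shuffle phase: group intermediate pairs by key.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the counts for each word.
def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts["the"] == 2, counts["fox"] == 1
```

What the dedicated systems (Hadoop, Google's MapReduce) add around these three steps is everything the sketch omits: partitioning across machines, scheduling, data locality, and fault tolerance.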
Good read:<p><i>Right now, this is a distinction pretty much without a difference. If you choose an implementation of MapReduce — like pure Hadoop (say in the Cloudera distribution) or Hadoop-Vertica or Aster Data’s SQL/MapReduce – you’re basically picking an entire technology stack. But those stacks are going to do a whole lot of changing and maturing in the near future – and as they do, it’s likely that projects will interact or even combine in all sorts of interesting ways.</i>
A good point, but while MapReduce is not new, I feel it emphasized clarity and simplicity (at least for the problem of sorting), which is probably why it is easier to market than MPI or a database. I wrote a bit on this point some time ago: <a href="http://www.win-vector.com/blog/2009/01/map-reduce-a-good-idea/" rel="nofollow">http://www.win-vector.com/blog/2009/01/map-reduce-a-good-ide...</a>