(1) From a dialectical point of view, I think Kurzweil makes an impressively compelling argument that the rate of progress has been exponential, <i>historically</i> (e.g. using others' milestones).<p>So it's striking that he doesn't argue for an underlying mechanism, nor for whether it can continue - he doesn't even mention these questions explicitly.<p>The mechanism seems to be similar to <i>standing on the shoulders of giants</i>: once an improved technology is developed, it can be used to search more effectively for other technologies.
The search can then be faster, more efficient, can take place in new domains, with greater accuracy - whatever the nature of the improvement happens to be.<p>But there's another assumption: that there <i>will</i> be more to discover, and at a constant density. Why should the frequency of potential discoveries be constant, such that if you search faster, you'll find faster? To be clear, it <i>does</i> seem to be that way... I'm just wondering if there's an argument as to <i>why</i> it's that way, and why it will continue to be that way as we keep searching. There's the assumption of mediocrity - that we don't occupy a privileged position in the universe - but is there a better argument?
For example, why shouldn't it be that discoveries become exponentially rarer, so that we have to keep searching faster and faster just to maintain a linear rate of progress?<p>EDIT: e.g. consider the primes - infinite, but less frequent as you go.<p>(2) I also like Hofstadter's argument that it's not necessarily true that an intelligence is sufficient to understand itself - e.g. a giraffe can't understand itself. Of course, we can divide-and-conquer, and create hierarchical understandings, such that we can understand one level on its own terms, by assuming the concepts below and ignoring the concepts above. But not all things can be neatly decomposed into modular hierarchies in which each level is easily understood - though we are biased towards seeing those that can, because those are all that we can see. In other words, perhaps we can one day duplicate a human mind... yet not understand it.<p>(3) There's a fascinating thought in these class notes: that people assume stagnation, and don't like to generalize beyond extrapolating single variables. This is very pragmatic, because it's almost impossible to predict with any fidelity. But it has a very interesting effect: people are very confident as they stride the same well-trodden paths, only ever taking one step away from them, so they find big wins only transitively (by hill climbing). This means that just two or three steps off the beaten track there can be miraculous improvements... all you have to do is find them, though that might take an enormous number of attempts.<p>EDIT (4) Re: exponential progress in software (cf. Moore's Law for hardware), there are dimensions of progress that seem to be exponential: the release rate of new software; the productivity gains from using others' modules (SOTSOG, esp. open source); "software is eating the world" as more problems are solved with software; and software being used on more devices, in more places (e.g. mobile devices).
One could argue this is cherry-picking, and that none of these are equivalent to Moore's Law - but exponential improvement only requires some kind of improvement that can itself be built upon. NOTE: I don't have figures, so I don't know for sure whether the above are actually exponential, though "eating the world" seems to be. Also found this: <a href="http://multiverseaccordingtoben.blogspot.com.au/2011/06/is-software-improving-exponentially.html" rel="nofollow">http://multiverseaccordingtoben.blogspot.com.au/2011/06/is-s...</a>
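To make the primes analogy in (1) concrete, here's a quick sketch (my own illustrative code, using naive trial division - not anything from Kurzweil): the primes never run out, but the number of them in a fixed-width window keeps shrinking as you move up the number line, so a searcher scanning at a constant rate discovers them at an ever-slowing pace.

```python
def is_prime(n):
    """Naive trial-division primality test - fine for small illustrative ranges."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def primes_in_window(start, width):
    """Count primes in the half-open window [start, start + width)."""
    return sum(1 for n in range(start, start + width) if is_prime(n))

# Same window width, ever-higher starting points: the count falls
# (roughly in proportion to 1/ln(start), by the prime number theorem).
for start in (10, 10**3, 10**5, 10**7):
    print(start, primes_in_window(start, 1000))
```

The analogous question for technology is whether discoveries thin out like this, stay at constant density, or even get denser as the search space opens up.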