As pointed out by others in this thread, this is basically the doomsday argument all over again.<p>It's a deep subject, and I can't pretend I know much about it, but one thing I'd like to point out is that this kind of reasoning is based on the so-called "self-sampling assumption"[1], and that this assumption depends on a choice of reference class.<p>Even if Richard Gott is right (his estimate says, roughly, that with 95% confidence the number of future human births is within a factor of 39 of the number of past births; see the sketch at the end of this comment), what he means when he talks about "humans" going extinct is "people I can identify myself with" going extinct. In other words: he can only make assessments about the existence of people like him.<p>Humans are not perfect. Far from it. If anything, the existence of exceptionally smart people like Einstein or von Neumann shows that it's possible to imagine a world where everybody is at least as smart as those two. Arguably, that hypothetical future world may very well be outside Gott's reference class. Such a world could result from the birth of a new Homo species and the end of Sapiens. It could mean that machines have replaced us. It could mean lots of things, not all of them necessarily dreadful.<p>My point being: the doomsday argument is not exactly a prediction about the demise of mankind, but rather one about a dramatic change in it. In a way, it's apocalyptic more in the original sense of the word: not the end of times, but a profound change, a new era or whatever. It's a prediction about the end of our reference class. Or, in the Kurzweilian sense, the Singularity.<p>1. <a href="https://en.wikipedia.org/wiki/Self-sampling_assumption" rel="nofollow">https://en.wikipedia.org/wiki/Self-sampling_assumption</a>
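<p>To make Gott's claim concrete, here's a minimal Monte Carlo sketch of the idea (my own illustration, not his actual derivation; the 10^9 cap on total births is an arbitrary choice for the simulation): if your birth rank is a uniform random draw over the total number of births, then about 95% of the time the number of future births lands between 1/39 and 39 times the number of past births.<p><pre><code>import random

trials, hits = 100_000, 0
for _ in range(trials):
    total = random.randint(1, 10**9)      # unknown total number of births ever
    rank = random.randint(1, total)       # your birth rank, uniform (self-sampling)
    future = total - rank                 # births still to come
    if rank / 39 <= future <= 39 * rank:  # Gott's 95% interval
        hits += 1

print(hits / trials)  # prints ~0.95
</code></pre><p>Note that the interval only holds relative to one fixed reference class ("human births" here); redraw the class and the estimate changes, which is exactly the point above.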