To summarize: At this point, humanity is its own greatest extinction risk. If we don't destroy ourselves in the next century, we will almost certainly inherit the stars.<p>For a much deeper treatment of this subject, I recommend <i>Global Catastrophic Risks</i>, edited by Nick Bostrom and Milan Ćirković [1]. The overarching point is straightforward (see the paragraph above), but the details of each threat are interesting in their own right.<p>1. <a href="http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom/dp/0199606501" rel="nofollow">http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom...</a>
A quote: " ... international policymakers must pay serious attention to the reality of species-obliterating risks."<p>Are these people all completely ignorant of evolution and science? No matter what happens in the future, one or another species-obliterating risk is a certainty. Here's why:<p>1. Our species has existed for about 200,000 years.<p>2. On that basis, and given our present knowledge of biology and evolution by natural selection, it's reasonable to assume that, within another 200,000 years, we will have been replaced by another species who either successfully competed with us, or into whom we simply evolved over time.<p>3. Human beings are a note, perhaps a measure, in a natural symphony. We're not the symphony, and we're certainly not the reason the music exists.<p>4. Based on the above estimate, there will be 10,000 more human generations, after which our successors will no longer resemble modern humans, in the same way that our ancestors from 200,000 years ago did not resemble us.<p>5. We need to get over ourselves -- our lives are a gift, not a mandate.<p>6. I plan to enjoy my gift, and not take myself too seriously. How about you?
I see a medium-term, maybe even short-term threat (in the next 10-30 years): not necessarily extinction, but complete irrelevance. People like to jump to the conclusion that artificial super-intelligence will want to eliminate humans. I don't think that is a foregone conclusion at all.<p>However, if (when) super-intelligent artificial general intelligence "arrives", that pretty much makes normal, unaugmented humans the relative equivalent of chimps. It means that our opinions and actions are no longer historically relevant. We will be, relatively speaking, obsolete: cognitively outclassed, running around doing comparatively trivial things. <a href="http://www.youtube.com/watch?v=I_Juh7Xh_70" rel="nofollow">http://www.youtube.com/watch?v=I_Juh7Xh_70</a><p>In order for our opinions and abilities to actually matter relative to the super-doings and super-thoughts of the new AIs, we really _must_ have this magical nano-dust or something that integrates our existing Homo sapiens 1.0 brains with some form of artificial super-intelligence.<p>So that is what I am worried about: will the super-AIs show up before the high-bandwidth nano-BCIs (brain-computer interfaces) do, or before I can afford one?<p>Of course, in the long run there may not be a good reason for AIs to use regular human bodies and brains at all, so those may be phased out for subsequent generations.
The article talks about disasters that could eliminate humanity, but I wonder if humanity is more likely to become extinct in the sense of no longer being "Homo sapiens."<p>For example, through a technological singularity, or even just through accumulated gene therapy over generations.