Σ$0.02: Perhaps a way to contribute to the world would be to figure out how to get rid of Facebook: <a href="https://stallman.org/facebook.html" rel="nofollow">https://stallman.org/facebook.html</a><p>Σ$0.04: Mark Zuckerberg dismisses talk of the threat of AI as fear mongering; however, <a href="https://intelligence.org/" rel="nofollow">https://intelligence.org/</a> (MIRI) argues that advanced AI is a legitimate existential threat. Note that when MIRI speaks of "AI", they are not talking about fancy neural networks. Their point is that humans are nowhere near any kind of upper bound on intelligence, that the space of possible Turing machines and self-modifying algorithms is overwhelmingly large, and that (along with reams of other arguments) humans will eventually be able to produce computer programs capable of destroying civilisation.
N.B.: This is my brief summary of what their stance seems to be, and isn't a direct quotation or paraphrasing of MIRI's official opinion.