A good analogy for AI risk. We'd never visited the Moon before, or any other celestial object. The risk analysis was not "we've never seen life from a foreign celestial object cause problems on Earth, therefore we aren't worried." Nor was it "let's never go to the Moon to be _extra_ safe, it's just not worth it."

The analysis was instead "with various methods we can be reasonably confident the Moon is sterile, but the cost of getting this wrong would be very high, so we're going to be extra careful just in case." They pressed forward while investing in multiple layers of risk mitigation.