>> "The safety checker: Following the model authors' guidelines and code, the Stable Diffusion inference results will now be filtered to exclude NSFW content. Any images classified as NSFW will be returned as blank. To check if the safety module is triggered programmatically, check the nsfw_content_detected flag like so: Potential NSFW content was detected in one or more images. Try again with a different prompt and/or seed."

That's disappointing and appears to conflict with my understanding of the Stability.AI founder's claim that limits like these would not be injected into the code; this is based on an interview he did here:

https://m.youtube.com/watch?v=YQ2QtKcK2dA

If I am correct, this makes me question all the other claims he made, which is unfortunate.

___

Edit: Here is a direct link to the point in the interview I was referring to above:

https://youtu.be/YQ2QtKcK2dA?t=701
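
In case it helps others, here is a minimal sketch of checking that flag (assuming the Hugging Face diffusers StableDiffusionPipeline and the CompVis/stable-diffusion-v1-4 weights; the exact output fields have shifted between diffusers versions):

    from diffusers import StableDiffusionPipeline

    # Load the pipeline (model id is illustrative; a Hugging Face auth token may be required).
    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

    result = pipe("a photograph of an astronaut riding a horse")

    # nsfw_content_detected holds one boolean per generated image;
    # True means the safety checker replaced that image with a blank one.
    for i, flagged in enumerate(result.nsfw_content_detected):
        if flagged:
            print(f"Image {i} was filtered by the safety checker")
        else:
            result.images[i].save(f"output_{i}.png")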