DeepSeek is a censored product and therefore of limited use for prompts that touch on anything considered "controversial" in the eyes of the CCP. However, the censorship seems to be applied to specific prompts rather than being integrated into the model itself, since the answers to such prompts are very similar and generic.

Has anybody already been able to successfully use prompt jailbreaking or other tricks to overcome this? It would be interesting to see what DeepSeek actually knows instead of what it is allowed to respond with.

Censoring a model via selective training data or post-training is much more difficult.

The possible "solutions" applied to this "problem" (in the eyes of the censors) will be of great importance moving forward.

Let's not forget that other government actors also have an interest in altering models.