I've been wondering about this for some time. Theoretically, it should (at least to an extent) be possible for AI/ML to break some cryptographic functions -- or, more precisely, to decrypt some encrypted outputs. Has this actually been shown to be true or false?
Simple answer: no.

A trained AI works (in the simplest terms) by incrementally tweaking coefficients in a mathematical expression so that it best approximates a set of provided data points. It generally does this by measuring how far its predictions are from the right answers on the training examples and then minimizing that distance.

For this to work against cryptography, we would need a formula that already breaks the cipher in a general sense, which the AI could then tune for specific cases. Well-designed ciphers are constructed precisely so that no such smooth, learnable relationship between input and output exists.

What AI might be able to do is surface previously undiscovered relationships in number theory: throw a large pile of cryptographic relationships at a model and hope it uncovers a correlation nobody has noticed. The chances of this are pretty small, but an AI might get lucky.
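To make the "minimizing a distance" point concrete, here is a minimal sketch (my own illustration, not from the answer above, and using SHA-256 purely as a stand-in for any well-designed cryptographic function): a tiny logistic-regression model is trained by gradient descent to predict one output bit of the hash from the input bits. Because the output is computationally indistinguishable from random, test accuracy stays near 50% -- there is no smooth "distance to the right answer" for the optimizer to descend.

import hashlib
import numpy as np

rng = np.random.default_rng(0)

def sample(n, in_bytes=8):
    """Random inputs -> input bits X, and the first output bit of SHA-256 as y."""
    X = np.empty((n, in_bytes * 8))
    y = np.empty(n)
    for i in range(n):
        msg = rng.integers(0, 256, in_bytes, dtype=np.uint8).tobytes()
        X[i] = np.unpackbits(np.frombuffer(msg, dtype=np.uint8))
        y[i] = hashlib.sha256(msg).digest()[0] >> 7   # most significant bit
    return X, y

X_train, y_train = sample(20000)
X_test, y_test = sample(5000)

# Logistic regression trained by plain gradient descent on cross-entropy loss:
# exactly the "tweak coefficients to minimize the distance from the right
# answer" loop described above.
w = np.zeros(X_train.shape[1])
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))     # predicted P(bit = 1)
    grad_w = X_train.T @ (p - y_train) / len(y_train)
    grad_b = np.mean(p - y_train)
    w -= lr * grad_w
    b -= lr * grad_b

pred = (1.0 / (1.0 + np.exp(-(X_test @ w + b)))) > 0.5
print("test accuracy:", np.mean(pred == y_test))     # ~0.5, i.e. no better than guessing

Swapping in a deeper network or a real cipher's ciphertext bits does not change the outcome in any known practical setting; the loss surface simply carries no usable gradient toward the key or plaintext.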