Unless there is end-to-end encryption (which Gmail does not support, partly because the US government requires Google to monitor accounts supposedly belonging to terrorists, etc.), the server always "knows" your email in plain text.<p>It seems that processing user email text through a program that actively "works" on the text (vs. simply encrypting/decrypting/transmitting it) is generally not considered a privacy concern.<p>I feel this is a bit tricky given the evolution of ML/AI. By making "feeding user email into any program" acceptable, the chance of rule-breaking incidents (e.g., using email text to train models) is unnecessarily increased, imo.
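For contrast, here is a toy sketch (my own illustration, not how Gmail or any real mail system works) of what end-to-end means: only the endpoints hold the key, so the relaying server sees nothing but ciphertext. A one-time pad (XOR) stands in for the public-key cryptography real E2EE systems like PGP or Signal actually use.

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each byte with a key of the same length (one-time pad).
    # Toy crypto for illustration only -- never reuse the key.
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared only by sender and recipient

ciphertext = encrypt(message, key)  # this is all the mail server ever relays
assert decrypt(ciphertext, key) == message  # recipient recovers the plaintext
```

The server in this picture can transmit, store, and delete the ciphertext, but it cannot "work" on the text at all, which is exactly the property Gmail-style server-side processing gives up.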