I'm guessing this is a result of these two concepts not being far from each other in vector space? Like a data-driven version of "miserable failure" Google Bombing: <a href="https://en.wikipedia.org/wiki/Google_bombing" rel="nofollow">https://en.wikipedia.org/wiki/Google_bombing</a>
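The vector-space intuition above can be sketched with a toy example: words that co-occur heavily in training text end up with nearby embeddings, so a system picking "closest" tokens can conflate them. The vectors below are made up for illustration; real embeddings (e.g. word2vec or GloVe) have hundreds of dimensions learned from corpus statistics.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy embeddings, NOT real model weights.
emb = {
    "racist": [0.9, 0.2, 0.1],
    "trump":  [0.8, 0.3, 0.2],
    "banana": [0.1, 0.9, 0.4],
}

# If two words sit close together in the space, a nearest-neighbor
# substitution step can swap one for the other.
print(cosine_similarity(emb["racist"], emb["trump"]) >
      cosine_similarity(emb["racist"], emb["banana"]))  # → True
```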
This smells like an LLM trying to correct the output of a speech recognition system. I said the word “racist” repeatedly and got this unedited output. You could see the text changing momentarily after the initial recognition result, and, given that Mamaroneck sounds nothing like either of the other words, I’d bet this thing was trained on news stories:<p>“Racist, Trump, Mamaroneck racist Trump Mamaroneck, racist racist racist racist Trump Mamaroneck”
Silly activism aside, it's concerning: <i>if</i> someone working at Apple managed to slip such a bold change into the OS, couldn't a malicious group do the same?