A very good example of the risks of AI: naive readings of its output end up influencing discourse as "ground truth". The article points out the GIGO quality of what's going on: it's a biased input state, which is going to be merry hell to deal with. And de-biasing social contexts carries its own risks.

Rinse and repeat for racism, emerging Nazi themes, prompts given to people with active psychosis to go and try to kill famous people...