> [A]cademics ran 520 harmful prompts through GPT-4, translating the queries from English into other languages and then translating the responses back again, and found that they were able to bypass its safety guardrails about 79 percent of the time using Zulu, Scots Gaelic, Hmong, or Guarani. The attack is about as successful as other types of jail-breaking methods that are more complex and technical to pull off, the team claimed.
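The quoted attack is essentially a three-step pipeline: translate the English prompt into a low-resource language, query the model with the translated text, then translate the reply back. The sketch below illustrates that flow under assumptions not in the original: a hypothetical `translate()` helper standing in for whatever machine-translation service is used, and the standard OpenAI chat-completions client. It is not the researchers' code.

```python
# Illustrative sketch of the translate-query-translate-back pipeline described
# above. translate() is a hypothetical placeholder, not a real library call.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def translate(text: str, source: str, target: str) -> str:
    """Hypothetical machine-translation helper; plug in any MT service."""
    raise NotImplementedError("supply a translation backend here")


def low_resource_query(prompt_en: str, lang: str = "zu") -> str:
    # 1. Translate the English prompt into a low-resource language (e.g. Zulu, "zu").
    prompt_lr = translate(prompt_en, source="en", target=lang)
    # 2. Send the translated prompt to the model.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt_lr}],
    )
    # 3. Translate the model's response back into English.
    return translate(reply.choices[0].message.content, source=lang, target="en")
```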