
Google Translate “Get well [Swedish firstname]” translates to “fuck you”

94 points by antonoo about 2 years ago

21 comments

kfitch42 about 2 years ago
I don't know much about Swedish, but I am learning Korean, and Google Translate is dangerous in a much more subtle way with Korean. In particular, in Korean you conjugate verbs (and often choose different nouns) based on the relative age and social standing of the speaker and listener. Korean-specific translation tools (e.g. Naver) have a toggle to select whether to use "honorifics" or not, but Google tends to default to the form of speech (banmal) reserved for talking to young children or close friends. If I am using a translation tool, though, I probably don't know the person I am conversing with very well, so the translations tend to come off as very rude.

If I used Google Translate to talk to a shopkeeper, it would be roughly equivalent to saying "Hey, little buddy, how much for this?" as opposed to "Excuse me sir, what is the cost of this item?"

And this is all without considering the weird mistranslations you can get because Korean is much more heavily context-dependent than English. Korean speakers often leave out the subject or object if it can be understood from context (context that the translation tools are likely missing), so Google Translate will insert pronouns (it, him, her...) to make the English flow better, even though they are not based on anything in the original Korean. If it guesses wrong, you can imagine the confusion that ensues.

And then all the homonyms in Korean combined with the heavy context dependence make for some weird translations. I once tried checking my Korean homework with Google Translate, and before I knew it, I was drinking a car.
postsantum about 2 years ago
Fun fact about how this can bite you in the ass:

1) Make an Android app and publish it

2) Write "Get well [firstname]" in the Swedish locale

3) Enjoy your ban, because Google uses Google Translate to look for inappropriate language in app descriptions
ano-ther about 2 years ago
Maybe it's the victim of an "improve this translation" prank.

DeepL gets it right, btw: https://www.deepl.com/translator-mobile#sv/en/krya%20på%20dig%20Björn
gregn610 about 2 years ago
But if you play with the name, the translation is different:

- krya på dig Helga → take care of yourself Helga
- krya på dig Dave → screw you dave
- krya på dig Mary → come on Mary
- krya på dig Linnéa → brace yourself Linnéa
- krya på dig Mohammad → fuck you Mohammad
majikaja about 2 years ago
"End-of-life planning" in Japanese translates to "suicide":

https://www-eranda-jp.translate.goog/column/24550?_x_tr_sl=ja&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=sc
vjerancrnjak about 2 years ago
My first and last name translate to "Faithful Negro" whenever I do translations from any language → English. I always find it funny when dealing with bureaucracy.
borlum about 2 years ago
"As a language model, fuck you Tony"
rootusrootus about 2 years ago
And of course ChatGPT 4.0 gets it exactly right.

What's wrong at Google lately?
cloudking about 2 years ago
This must be the start of the singularity
wheaties about 2 years ago
Why you should never use any translation automation blindly, reason #6153671...
d--b about 2 years ago
As well as "va te faire foutre Bjorn" ("go fuck yourself Bjorn") in French, and "vete a la mierda Bjorn" ("go to hell Bjorn") in Spanish.
hprotagonist about 2 years ago
Tangentially, I'm still sad about the loss of the Easter egg one used to get when attempting to translate "Wenn ist das Nunstück git und Slotermeyer? Ja! Beiherhund das Oder die Flipperwaldt gersput!" into English.
Oras about 2 years ago
Between 2003 and 2005, Google Translate was translating:

شعب يباد (= "a people being exterminated")

to:

"Iraqi People"
dsizzle about 2 years ago
You get different translations depending on the name. Some variations I see: "Screw you", "Get over it", "Brace yourself".
bedane about 2 years ago
French trick:

Input "baiser" → Google translates it to "kiss".

Now add a female first name after "baiser", and "kiss" becomes "fuck".
Wowfunhappy about 2 years ago
DeepL seems to have no trouble with this: https://www.deepl.com/translator-mobile#sv/en/krya%20på%20dig%20Björn
seydor about 2 years ago
Maybe it is implied that you'll get well first.
chaorace about 2 years ago
This reminds me of the recent so-called "Glitch Token" phenomenon[1]. When GPT-3 was presented with reserved tokens it never encountered during training, it reacted in extremely unpredictable ways -- often with a simple "fuck you".

For those unfamiliar with LLM architecture: "tokens" are the smallest unit of lexical information available to the model. Common words often have their own token (e.g. every word in the phrase "The quick brown fox jumped over the lazy dog" has a dedicated token), but this is a coincidence of compression and not how the model understands language (e.g. GPT-3 understands "defenestration" even though it's composed of 4 apparently unrelated tokens: "def", "en", "est", "ration").

The actual mechanism of understanding is in learned associations between tokens. In other words: the model understands the meaning of "def", "en", "est", "ration" because it learns through training that this cluster of tokens has something to do with the literary concept of violently removing a human via window. When a model encounters unexpected arrangements of tokens ("en", "ration", "est", "def"), it behaves much like a human might: it infers the meaning through context or otherwise voices confusion (e.g. "I'm sorry, what's 'enrationestdef'?"). This is distinctly different from what the model does when it encounters a completely alien form of stimulation like the aforementioned "Glitch Tokens".

At the risk of anthropomorphizing, try imagining that you were having a conversation with a fellow human and they uttered the following sentence: "Hey, did you catch the [MODEM NOISES]?" You've probably never before heard a human vocalize a 2400 Hz tone during casual conversation -- much as GPT-3 has never before encountered the token "SolidGoldMagikarp". Not only is the stimulus unintelligible, it exists completely beyond the perceived realm of possible stimuli.

This is pretty analogous to what we'd call "undefined behavior" in more traditional programming. The model still has a strong preference for producing a convincingly human response, yet it doesn't have any pathways set up for categorizing the stimulus, so the model kind of just regurgitates a learned lowest-common-denominator response (insults are common).

This oddly aggressive stock response is interesting, because it's actually the exact same type of behavior that was coded into one of the first chatbots to (tenuously) pass a Turing test. I'm of course referring to the "MGonz" chatbot created in 1989[2]. The MGonz chatbot never truly engaged in conversation -- rather, it continuously piled on invective after invective whilst criticizing the human's intelligence and sex life. People seem predisposed to interpreting aggression as human, even when the underlying language is, at best, barely coherent.

[1]: https://www.youtube.com/watch?v=WO2X3oZEJOA
[2]: https://timharford.com/2022/04/what-an-abusive-chatbot-teaches-us-about-the-art-of-conversation/
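The subword splitting described above can be sketched with a toy greedy longest-match tokenizer. To be clear, this is a hypothetical illustration with a hand-picked vocabulary, not the trained byte-pair-encoding merge tables a real GPT tokenizer uses:

```python
# Toy greedy longest-match subword tokenizer. The vocabulary here is
# hand-picked for illustration; real tokenizers learn theirs from data.
VOCAB = {"def", "en", "est", "ration", "the", "quick", "fox"}

def tokenize(word: str) -> list[str]:
    """Split a word into the longest known subword pieces, left to right."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest candidate substring starting at position i first.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            # No vocabulary entry matches: fall back to a single character.
            tokens.append(word[i])
            i += 1
    return tokens

print(tokenize("defenestration"))  # ['def', 'en', 'est', 'ration']
```

A word the vocabulary has never seen degrades into single-character tokens, which loosely mirrors why rare strings end up as strange token clusters in real models.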
christkv about 2 years ago
Man that cracked me up.
selcuka about 2 years ago
Well, you are joking, but there is research [1] that concluded that some ailments, such as depression, can be cured by, well...

[1] https://www.albany.edu/news/releases/2002/june2002/gallupstudy0602.html