I would want to see some data on tokenization for some real-world examples. "Je voudrais une pizza" actually translates more directly to "I would like a pizza", which is 5 tokens. But I also think there's some danger here of cherry-picking examples. Spanish is a lot more dense than English or French and might tokenize better. (I see "quiero pizza" is 4 tokens, which seems like the right number to me - "quiero" actually contains "I want <present tense>".) You could argue it's 2 or 3 tokens, but 4 seems preferable.<p>As for diacritics in French or Spanish, diacritics are logically part of the character. I can't think of an example where it's actually useful to split the letter into a different token, but I could see it happening without being harmful. I do think it's possible French is just weird and simply needs more tokens. When I think about how I process French, I probably do treat a pathological example like "Je l'ai aimé" as 3 tokens when I speak it out loud. But I can also see why you would tokenize it as 6 tokens, and I'm not sure that's Anglocentrism so much as recognizing a complexity difference between French and English writing.<p>But all this is in contrast to how non-Roman characters are tokenized at the byte level. That just seems bad and is definitely going to make things worse for non-Roman languages. There's no point in having tokens that split characters.
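If anyone wants to check counts like these themselves, here is a minimal sketch using OpenAI's open-source tiktoken library, assuming the GPT-2 encoding (newer encoders such as cl100k_base will give different counts):<p><pre><code># minimal sketch: count tokens for the phrases discussed above (GPT-2 encoding assumed)
import tiktoken

enc = tiktoken.get_encoding("gpt2")
phrases = ["I want a pizza", "I would like a pizza",
           "Je voudrais une pizza", "quiero pizza"]
for phrase in phrases:
    ids = enc.encode(phrase)
    print(f"{phrase!r}: {len(ids)} tokens -> {[enc.decode([i]) for i in ids]}")
</code></pre>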
Slightly offtopic, but:<p>> One of the models listed above called NLLB (No Language Left Behind) has been open sourced by Facebook allowing for translation for 200 languages.<p>It was not. The model's weights are under CC-BY-NC, which certainly motivates commercial entities to not leave those languages behind. /s
What an interesting aspect I haven't considered before. All the AIs will be trained on the available media - most of which is English.<p>I sometimes wonder what it takes to unseat a lingua franca, but it looks like we won't see that soon. English is set to dominate for a long time.
So what I got from this is that GPT was trained on a dataset that is biased toward English content. Is that right?<p>I think even humans have to spend extra energy to speak a language they weren't born with, no matter how fluent they are in it. I don't know about natural multilinguals.
You can use their online tool to see how it tokenizes words: <a href="https://platform.openai.com/tokenizer" rel="nofollow">https://platform.openai.com/tokenizer</a>
"Je voudrais une pizza" is better translated to "I would like a pizza"
"I want a pizza" would be "je veux une pizza"
If you think about this from a "language is computation" perspective, it starts to get even more interesting.<p>For example, what would the real-world performance of ChatGPT be if we had trained it predominantly on German or Korean text?<p>Is English actually the best language/structure for this system?
Actually, it is not true. Hilarious.<p>The author compares different encoders: Facebook's NLLB and GPT-2. Where did the title come from?<p>Another point is that OpenAI changed the encoder for its chat models. Link: <a href="https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb">https://github.com/openai/openai-cookbook/blob/main/examples...</a><p>Now English is less favored in terms of token usage and other languages are much better balanced. E.g. Ukrainian now takes only about twice as many tokens, whereas before it took 6 times as many.
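A quick way to see the difference between the old and new encoders (a sketch with tiktoken; the Ukrainian sentence is just an illustrative example, not taken from the article):<p><pre><code># sketch: compare the GPT-2 encoder with cl100k_base (used by the newer chat models)
import tiktoken

text_en = "I would like a pizza"
text_uk = "Я хочу піцу"  # illustrative Ukrainian example

for name in ["gpt2", "cl100k_base"]:
    enc = tiktoken.get_encoding(name)
    print(name, "->", len(enc.encode(text_en)), "EN tokens,",
          len(enc.encode(text_uk)), "UK tokens")
</code></pre>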
So glad someone took the time to put up some data about it. Since day one, the subpar results for Asian languages have stuck out to me. It's especially true for LLaMA-derived models, where the output is just abysmal. It's my own pet theory that bad tokenization is an important reason why they suck so much in the first place.<p>It's not just broken grammar; it's a surprising lack of creativity that English doesn't suffer from. Prompting ChatGPT in English, running the output through DeepL, and fixing the auto-translation gives vastly better results than prompting ChatGPT to respond in an Asian language directly.
So for Latin-script languages, they tokenize per word, and somehow for Asian languages, it's tokenizing per radical.<p>Of course you'd end up with a lot more tokens. Just tokenize by word regardless of language.
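For what it's worth, the splitting happens at the byte level, below even a single character. A small sketch (GPT-2 encoding assumed) that shows a short Japanese phrase turning into more tokens than characters:<p><pre><code># sketch: under GPT-2's byte-level BPE a CJK character can be split into multiple tokens
import tiktoken

enc = tiktoken.get_encoding("gpt2")
text = "ピザが食べたい"  # "I want to eat pizza" (illustrative Japanese example)
ids = enc.encode(text)
print(len(text), "characters ->", len(ids), "tokens")
# decoding single tokens shows partial characters as replacement marks (�)
print([enc.decode([i]) for i in ids])
</code></pre>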
Setting aside the specific choice of tokenizer for GPT models, I'm curious how much difference in performance is made by the features of the human language used to represent the training data. Like, if you kept the exact same training corpus, could wave a magic wand to translate it into any language, and could create a custom tokenization for each language, would some languages be more amenable than others to GPT-style language modeling?
I'm finding it amazing that the model comes localized, supports obscure languages, and is available at all. Compare this to traditional software. Or even to web software. Does Google come localized into all of these languages, for example?<p>Yes, there is overhead from localization. So what? This overhead was always there for software.
The French example is strange and shows that the tokenizer has an English bias.<p><pre><code> - “I want a pizza” = 4 tokens
- “Je voudrais une pizza” = 7 tokens
</code></pre>
Why is “want” only 1 token in English, but “voudrais” 4 tokens? Following the French example, would “wants” and “wanted” map to 1 or 2 tokens?
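You can check this directly. A sketch with tiktoken, assuming the GPT-2 encoding the article's counts appear to be based on (the chat models now use a different encoder):<p><pre><code># sketch: how different verb forms split under the GPT-2 encoding
import tiktoken

enc = tiktoken.get_encoding("gpt2")
for word in ["want", "wants", "wanted", "voudrais"]:
    # encode with a leading space, as the word would appear mid-sentence
    ids = enc.encode(" " + word)
    print(word, "->", len(ids), "token(s):", [enc.decode([i]) for i in ids])
</code></pre>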
It is not that tokenization is optimized for English, but rather the other way around, perhaps.<p>Take "lámpara" or "pantalones" in Spanish, for example. English speakers were clever enough to shorten those words to "lamp" and "pants" respectively. And they have done this with many words.<p>Translate text into Spanish and you will see the text gets longer, and there is more meaning encoded into the words.<p>"La mesa" treats the table as grammatically feminine, although tables are not lifeforms and have no sex.<p>To me, some languages impose a communication tax. It's taboo to say so because people conflate language and culture and such.