Source code: <a href="https://github.com/PaddlePaddle/LARK/tree/develop/ERNIE" rel="nofollow">https://github.com/PaddlePaddle/LARK/tree/develop/ERNIE</a><p>It seems the primary difference is that ERNIE generates a different set of data for the masked LM task that BERT trains on. Rather than masking individual tokens at random, it does some preprocessing with a tagging tool to identify segments (e.g. phrases and named entities) that are then masked as whole units (my Chinese is rusty, so this may not be totally accurate).<p>I believe the intuition is that BERT treats each token as a relatively distinct unit of meaning since it masks tokens individually, but that assumption doesn't hold well for Chinese, where the tokens are single characters and meaning usually comes from multi-character words. If only one character of a word is masked, the remaining characters often give it away, whereas masking the whole segment forces the model to rely on wider context. I think this could apply to English to a lesser extent too; I'm curious whether anyone has tried something similar.
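<p>To make the difference concrete, here's a toy Python sketch of token-level vs. segment-level masking. This is not ERNIE's actual data pipeline; the segment boundaries and mask probability are made up for illustration, and a real pipeline would get the segments from a word segmenter or entity tagger.<p><pre><code>import random

MASK = "[MASK]"

def token_level_mask(tokens, mask_prob=0.15):
    # BERT-style: each token is masked independently at random.
    return [MASK if random.random() < mask_prob else t for t in tokens]

def segment_level_mask(tokens, segments, mask_prob=0.15):
    # ERNIE-style (sketch): each segment is either masked as a
    # whole unit or left intact, so partial words never leak.
    out = list(tokens)
    for start, end in segments:  # [start, end) spans over `tokens`
        if random.random() < mask_prob:
            for i in range(start, end):
                out[i] = MASK
    return out

# "Harbin is the capital of Heilongjiang", split into characters,
# with hypothetical segment boundaries from a tagger.
chars = list("哈尔滨是黑龙江的省会")
segments = [(0, 3), (3, 4), (4, 7), (7, 8), (8, 10)]  # 哈尔滨 / 是 / 黑龙江 / 的 / 省会

print(token_level_mask(chars))              # may mask just 滨, leaving 哈尔 as a giveaway
print(segment_level_mask(chars, segments))  # masks 哈尔滨 or 黑龙江 as whole units
</code></pre><p>With token-level masking the model can often fill in the blank from the other characters of the same word; with segment-level masking it has to use the rest of the sentence.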