Last week, a vision-language model made the rounds on Twitter and Hacker News (<a href="https://news.ycombinator.com/item?id=40505099">https://news.ycombinator.com/item?id=40505099</a>, which reached the front page). However, the model code and weights were copied from another team's work, MiniCPM-V, without attribution. The authors of the copy have since removed their repo and HuggingFace model (<a href="https://x.com/var_epsilon/status/1797628346156945459" rel="nofollow">https://x.com/var_epsilon/status/1797628346156945459</a>), and the MiniCPM authors have presented very damning evidence (<a href="https://x.com/zhanga6/status/1797293189378068768" rel="nofollow">https://x.com/zhanga6/status/1797293189378068768</a>). Notably, the original model was trained on held-out examples of ancient Chinese script that their university had only recently scanned, and the two models perform identically on them, which should not be possible given the uniqueness of the data. I consider this a pretty clever form of watermarking!<p>There's plenty of paper plagiarism, but this is the first case I've seen of <i>model</i> plagiarism.
Posted 10 hours ago:<p><i>Llama3-V is suspected to have been stolen from the MiniCPM-Llama3-V 2.5 project</i> - <a href="https://news.ycombinator.com/item?id=40559746">https://news.ycombinator.com/item?id=40559746</a> (6 comments)