<i>What that means is not only that they learn what it means to be a dog or a cat,</i><p>NO. Please... they have no idea "what it is like to be a bat"; they have been trained to respond to images that cross a high enough threshold of dog-ness or cat-ness.<p>"What it means to be" is not a phrase anyone in this field should be using.<p><i>Then, when you refer to “Lambda”, “ChatGPT”, “Bard”, or “Claude” then, it’s not the model weights that you are referring to. It’s the dataset.</i><p>This I can get behind: "it's statistics. Whichever method you use, the underlying model most probably resembles the input set it's derived from, because the quality it reflects is in that dataset."<p>If, however, the divergences between them turned out to be interesting, I'd say "it's not the dataset, it's how people intuit fit against the dataset."