"A lightbulb surrounding some plants" is not English. If a wolf pack is surrounding a camp, we understand what that means. If a single wolf is surrounding my camp, does that mean I'm in its stomach? Absurd.<p>"A lightbulb <i>containing</i> some plants" makes sense; "surrounding" does not. A lightbulb is too small to surround anything, which humans (and apparently, current AI) understand. Paradoxically, only primitive language models would actually "understand" the inverted sentences; a proper AI should, like a human, be confused by them, since no human talks like that.<p>The only reason the Huggingface people (in their Winoground paper) got 90% of humans "getting the answer right" with these absurd prompts is humans' ability to guess what an experimenter expects of them. Try it in daily life instead of a structured test and see whether the same people get it right.<p>It's exactly as if, in an IQ-test context, I gave you the sequence "1 1 2 3" and asked for the next number. You'd give the Fibonacci continuation, because you know that's what I expect; never mind that it's a poor assumption, since the full sequence might just as well be "1 1 2 3 1 1 2 3 1 1 2 3", and you don't have enough information to know the real answer. Do we really want AIs that similarly "guess" an answer they know to be unjustified, just because we expect it? Or, in the number-sequence example, AIs that don't grasp basic induction (Goodman's problem)?<p>I'd add that the author, who keeps referring to himself as a scientist, is in fact a psychology professor. In his Twitter bio he states that he wrote one of the "Forbes 7 Must-Read Books in AI", which discredits him as a fraud, since Forbes can be paid to publish absolutely whatever you ask of them (it's not disclosed as sponsored content, and it's quite cheap, trust me).
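<p>The underdetermination in that sequence example can be made concrete. Here is a minimal sketch (the rule names and functions are my own illustration, not anything from the paper): two rules that both reproduce the prefix "1 1 2 3" yet disagree on the next term, so the prefix alone cannot decide between them.

```python
# Two hypotheses that both fit the observed prefix 1 1 2 3
# but predict different continuations.

def fibonacci_next(seq):
    # Rule A: each term is the sum of the previous two.
    return seq[-1] + seq[-2]

def cyclic_next(seq, period=4):
    # Rule B: the sequence repeats with a fixed period.
    return seq[len(seq) % period]

prefix = [1, 1, 2, 3]
print(fibonacci_next(prefix))  # 5 under the Fibonacci hypothesis
print(cyclic_next(prefix))     # 1 under the repeating hypothesis
```

Both functions are consistent with every observed term; only an arbitrary prior over rules makes "5" look like <i>the</i> answer.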