I think ChatGPT failed to interpret the final diagram of the double-slit experiment specifically, rather than the meme as a whole. I wonder what the output would be if the input were just that diagram.

That said, I agree with the overall sentiment: IMHO, LLMs won't ever be able to parse culturally dense, community-specific information like memes, because the goalposts are always moving. The same goes for humor in general, I suppose. I think that's a good thing for humans.