The "decoder" article says:<p>> The researchers defined 50 percent as success on the Turing test, since participants then couldn't distinguish between human and machine better than chance.<p>54% of GPT-4 conversations were judged to be human, so the "decoder" article says the Turing test has been passed--indeed, it seems more human than human. But the paper says:<p>> humans’ pass rate was significantly higher than GPT-4’s (z = 2.42, p = 0.017)<p>The seeming discrepancy arises because they've run a nonstandard test, in which the meaning of that 50% threshold is very hard to interpret (and definitely not what the "decoder" author claims). The canonical version of Turing's test is passed by a machine that can<p>> play the imitation game so well, that an average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning<p>The canonical experiment is thus to give the interrogator two conversations, one with a human and one with a non-human, and ask them to judge which is which. The probability that they judge correctly maps directly to Turing's criterion. If the two conversations were truly indistinguishable, then the interrogator would judge correctly with p = 50%; but that would take infinitely many trials to distinguish, so Turing (arbitrarily, but reasonably) increased the threshold to 70%.<p>That doesn't seem to be the experiment that this paper actually conducted. They don't say it explicitly, but it seems like each interrogator had a single conversation, with a human with p = 1/4. The interrogator wasn't told anything about that prior, leading them to systematically overestimate P(human). If every interrogator had simply always guessed "non-human", then they'd collectively have been right more often.<p>Even if the interrogators had been given that prior, very few would have the mathematical background to make use of it. GPT-4 is impressive, but this test is strictly worse than Turing's, whose result has clear and intuitive meaning.