Weak science.<p>They ran a battery of 12 tests and looked for one with a statistically significant difference at the 5% level.<p>If there were no actual differences, how many tests would you need to run before you'd expect at least one to show significance by chance? If you run 20, you'd expect 5% of them to do so on average, which is 1. With 12 tests, the chance of at least one false positive is roughly 46%, so running 12 and finding 1 is not a significant finding.<p>Then, suspiciously, the only statistically significant cognitive improvement they found was on trial 5 of a verbal learning test, which showed little to no improvement on any of the other trials. That makes no sense.<p>Finally, the brain changes had a p-value of 0.046, which is barely significant even by the weak 5% threshold, and the brain changes weren't correlated with the test-score improvements.<p>Perhaps interesting, but only if replicated.
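<p>The multiple-comparisons arithmetic above can be checked in a few lines (a sketch; the 12-test count and 5% threshold are the figures from the study as described, nothing else is assumed):<p>

```python
# Multiple-comparisons arithmetic: if there are no real effects,
# how often does a battery of tests still produce a "significant" hit?
alpha = 0.05   # significance threshold (5% level)
n_tests = 12   # number of tests in the battery

# Expected number of false positives across the battery
expected_false_positives = n_tests * alpha  # 12 * 0.05 = 0.6

# Probability of at least one false positive (family-wise error rate)
fwer = 1 - (1 - alpha) ** n_tests  # 1 - 0.95^12 ~= 0.46

print(f"Expected false positives: {expected_false_positives:.2f}")
print(f"P(>=1 'significant' result by chance): {fwer:.2f}")
```

<p>So even with zero real effects, nearly half of such 12-test batteries would show at least one "significant" result, which is why a single hit at p &lt; 0.05 carries little weight without a correction for multiple comparisons.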