There is a lot going on in this paper, and I don't have the time to dig deep into it.
The full text is here:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4055506/
The five-year follow-up is here:
http://jama.jamanetwork.com/article.asp ... eid=204643
But my initial impressions are that:
(1) I'm concerned about the serious attrition among the subjects. Some attrition is to be expected when you are doing a 10-year follow-up on a group averaging 73 years old at baseline. Still, it presents serious statistical challenges when you go from roughly 2,800 subjects at baseline to 1,200 at the final follow-up (officially 44% of the original sample).
(2) There is a huge disparity between the large self-reported gains in activities of daily living and the objective findings from the actual cognitive testing. Keep in mind that there is no blinding when you compare an active cognitive-training group to an untrained control group. That creates strong demand characteristics pushing subjects to confirm the researchers' bias in favor of the intervention, a long-proven risk in non-blinded clinical trials.
(3) While a huge sample is always a benefit for attaining statistical significance, that benefit needs to be tempered by the question of clinical significance. If you threw a million subjects into a study, just about anything you tested would come out statistically significant; but as the sample size gets that massive, the effect sizes reaching significance become so small that the ability to declare those findings clinically significant diminishes.
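To make that concrete, here is a toy calculation of my own (not from the paper), assuming a simple two-sample z-test with unit variance in each group. A standardized effect of d = 0.02 — far too small to matter clinically — is nowhere near significant at roughly this study's scale, yet becomes overwhelmingly significant once you approach a million subjects:

```python
import math

def two_sample_z_pvalue(d, n_per_group):
    """Two-sided p-value for a two-sample z-test of standardized
    effect size d, assuming unit variance in both groups."""
    z = d * math.sqrt(n_per_group / 2)
    # two-sided tail probability under the standard normal
    return math.erfc(z / math.sqrt(2))

tiny_effect = 0.02  # Cohen's d, far below any clinical threshold

# At ~2,800 subjects total (1,400 per arm): not significant
print(two_sample_z_pvalue(tiny_effect, 1_400) > 0.05)       # True

# At ~1,000,000 subjects total (500,000 per arm): wildly significant
print(two_sample_z_pvalue(tiny_effect, 500_000) < 0.001)    # True
```

Same effect, same test; only the sample size changed. The p-value collapses purely because z scales with the square root of n, which is exactly why a significant result in a massive trial tells you little about whether the effect is big enough to matter.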
I have not yet digested how the booster sessions were distributed, but it would be my next item for my review of this paper.
Of the three interventions (memory, reasoning, and speed of processing), only the speed training showed benefits I might consider potentially clinically significant. I suspect memory and reasoning are the domains that those seeking the benefits of training most want affected.
I'll try to get back to this paper soon.