Oct 04 2016

Review – Brain Training Games Don’t Work

Yesterday I wrote about the literature on so-called “power poses” – the notion that adopting certain poses makes you feel more confident and powerful, and therefore changes your behavior in ways that may be advantageous. Over the last decade psychologists have built up a literature which they claim supports the conclusion that power poses work.

However, a reanalysis of the data suggests that the evidence is flimsy, and in fact may be entirely an illusion created by p-hacking (essentially, loose research methodology).

The primary proponent of power poses, Amy Cuddy, has already built a career on the idea, topped off with a popular TED talk, and so far is sticking by her conclusion. Meanwhile, one of her coauthors, Dana Carney, has already jumped ship and stated publicly she does not think the power pose effect is real.

Brain Training

Today I am going to tell a very similar story, this time about brain training games. Over the last decade psychologists have built up a literature which they claim supports the conclusion that playing certain “brain games” will make you smarter in general, and may even stave off dementia.

Much of this research was supported by, or used to market, brain training games, most notably Lumosity.

Two schools of thought developed in response to the literature. A recent systematic review, which was conducted to settle the dispute, summarizes:

In 2014, two groups of scientists published open letters on the efficacy of brain-training interventions, or “brain games,” for improving cognition. The first letter, a consensus statement from an international group of more than 70 scientists, claimed that brain games do not provide a scientifically grounded way to improve cognitive functioning or to stave off cognitive decline. Several months later, an international group of 133 scientists and practitioners countered that the literature is replete with demonstrations of the benefits of brain training for a wide variety of cognitive and everyday activities. How could two teams of scientists examine the same literature and come to conflicting “consensus” views about the effectiveness of brain training?

This is what I love about science. Two groups of scientists have opposite opinions. They aren’t going to form different institutions, go to war, or simply argue about it endlessly. They review the actual evidence and let the evidence determine the outcome.

At least, that is what will happen eventually. Some stubborn individuals will stick to their guns. The cynical view, famously attributed to Max Planck, is that science advances one funeral at a time. I think the situation is somewhat better today, probably because science advances so much more quickly.

In any case, there are plenty of examples of different groups of scientists working out their differences with evidence. In this case, a group of researchers did a systematic review of published studies. They concluded:

Based on this examination, we find extensive evidence that brain-training interventions improve performance on the trained tasks, less evidence that such interventions improve performance on closely related tasks, and little evidence that training enhances performance on distantly related tasks or that training improves everyday cognitive performance.

This is remarkably similar to what I have concluded from reading the literature over the last 5 or so years. I most recently wrote:

Essentially, engaging in any cognitive task will make you better at that cognitive task and perhaps closely related tasks. The benefits are modest and probably short lived. There is no reason to think from the evidence that any specific brain-training game can improve general cognitive abilities, or that there is a permanent or even long term benefit to brain function. The claims of companies selling such games, therefore, are overhyped and misleading.

This is not a surprise, given that we are reading the same studies. What is a surprise is the group of scientists who concluded (at least prior to this recent review) that there was evidence for a general benefit from brain training games.

To clarify, what we are talking about here is the alleged phenomenon of transfer – whether skills trained on one task will transfer to untrained tasks. There is no controversy that practice improves skills, that those skills can apply to closely related tasks, and that people can develop some general academic skills, like studying efficiently.

What is in dispute is the notion of “brain training” – that you are somehow improving the general functioning of your brain by playing brain games, that you are getting smarter, rather than just better at performing a specific task.

So how did the other set of scientists come to the, apparently, wrong conclusion? This brings us back to the analogy with the power pose literature.

In both cases it seems that there is a large set of studies of poor methodological quality showing an effect, then progressively fewer studies of higher and higher quality showing a smaller effect or none at all. One set of scientists looks at the large body of weak but positive evidence as compelling, while the other group looks at the negative, high-quality studies as definitive.

This, of course, has been a major point of this blog, and in fact of my entire skeptical career and my promotion of science-based medicine. This pattern is almost universal in the scientific literature, and if you are not familiar with this pattern, and the power of loose methodology to generate false positive results, then you will build your conclusions on sand that will wash away with the first truly rigorous study.

This goes all the way back to N-rays – the spurious form of radiation “discovered” by René Blondlot in 1903, which vanished under properly blinded testing. We are still grappling with the equivalent of N-rays today.

These two recent examples demonstrate, in my opinion, that we need to do a better job training researchers. Scientists need to be more skeptical. These examples also reflect what I hope is a real and increasing trend, that researchers are getting more skeptical.

Specifically, knowledge of p-hacking and how powerful and subtle these effects can be is increasing. We are also learning how important it is to look at the overall pattern in the literature (thanks in part to the work of John Ioannidis).
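This is easier to appreciate with a concrete toy example. The following Python simulation is purely illustrative – my own sketch with made-up parameters (number of studies, outcomes, and subjects), not an analysis of any of the studies discussed here. It shows how one common form of p-hacking – measuring many outcomes and reporting whichever one crosses p < 0.05 – manufactures “significant” findings out of pure noise:

```python
# A minimal simulation of one form of p-hacking: measuring many outcomes
# and reporting only the one that happens to cross p < 0.05.
# There is NO real effect in these data, yet "significant" findings abound.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments = 2000   # simulated studies (illustrative assumption)
n_outcomes = 20        # outcomes measured per study (illustrative assumption)
n_per_group = 30       # subjects per group (illustrative assumption)

false_positives = 0
for _ in range(n_experiments):
    # Treatment and control drawn from the SAME distribution: no true effect.
    treatment = rng.normal(0, 1, size=(n_outcomes, n_per_group))
    control = rng.normal(0, 1, size=(n_outcomes, n_per_group))
    p_values = [ttest_ind(t, c).pvalue for t, c in zip(treatment, control)]
    if min(p_values) < 0.05:   # report only the "best" outcome
        false_positives += 1

print(f"Studies reporting a 'significant' effect: {false_positives / n_experiments:.0%}")
# With 20 independent outcomes, roughly 1 - 0.95**20, about 64%, of these
# null studies will look positive, even though nothing real is going on.
```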

I also think (admitting my clear bias) that the infiltration of pseudoscience, and the work of skeptics to oppose it, have been important here. The homeopathy, ESP, and acupuncture literatures, for example, are clear illustrations of these patterns. In each we have a large number of poorly conducted positive studies that proponents use to claim the effects are real, and in every case attempts at rigorous replication fail. There is a direct correlation between the rigor of a study and the magnitude of any effect, with the best studies being negative.
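The same kind of toy simulation (again, an illustrative sketch with assumed numbers, not real data) shows why a literature filtered by publication bias displays exactly this inverse relationship between rigor and effect size:

```python
# A sketch of why published studies can show an inverse relationship between
# rigor (here, sample size) and effect size even when the true effect is zero:
# small studies are noisy, and only the "positive" ones get published.
# All numbers are illustrative assumptions, not real data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

for n in (10, 30, 100, 500):                 # subjects per group: a rigor proxy
    published_effects = []
    for _ in range(5000):
        a = rng.normal(0, 1, n)              # no true difference between groups
        b = rng.normal(0, 1, n)
        if ttest_ind(a, b).pvalue < 0.05:    # publication filter: positive only
            published_effects.append(abs(a.mean() - b.mean()))
    avg = np.mean(published_effects) if published_effects else 0.0
    print(f"n={n:4d}: mean published 'effect' = {avg:.2f}")
# Small published studies show large apparent effects; the largest, most
# rigorous studies converge toward zero, which is the pattern described above.
```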

This pattern of research screams, “Remember N-rays – this effect is not real!”

Now we can add (probably) two more alleged phenomena to the N-ray scrap heap, power poses and brain training. They are much more plausible than ESP or homeopathy, but plausibility isn’t enough when the empirical data is negative.

Conclusion

Both of these recent episodes, while they are stories of scientists getting it wrong, are also stories of scientists eventually getting it right. In both cases the core lesson is to be skeptical in precisely the way that leading skeptics have been advocating for years.

The lessons we learned from examining the evidence for ESP and homeopathy apply broadly in mainstream science. Scientists studying plausible hypotheses can make the same mistakes that drive belief in ESP.

The science of doing good science has advanced to the point where we have all the pieces in place. We now have formal scientific analysis that confirms what skeptics have observed – the many forms of p-hacking, the tendency for weak methodology to lead to false positive results, the ability to manufacture an entire scientific belief out of nothing but wishful thinking, and how to expose the reality with properly designed studies.

In the end, only highly rigorous studies tell the true story. Yet still today we hear alternative medicine proponents, for example, arguing that we don’t need fancy science to tell us what works. We have true believers denying that science is necessary to know what is real. Yes you do, and yes it is.
