Jun 02 2011

The Decline Effect Revisited

I have written previously about the decline effect – the apparent decrease in the magnitude of a phenomenon as it is scientifically studied. Initially there seems to be a big effect, but the effect size shrinks as further research is done, sometimes shrinking all the way to zero.

This effect was perhaps first observed (or at least named) in psi research. Over the last century psi researchers have come up with a number of research paradigms to demonstrate so-called anomalous cognition. But in every case the initial impressive effect sizes have shrunk to non-existence with repetition. While the standard interpretation of this failure to reproduce results is that the phenomenon is not real, some defenders of psi have tried to argue there is a real metaphysical decline effect at work. It’s not their fault – the phenomenon actually goes away over time.

Writing for Nature News, Jonathan Schooler makes a similar claim about the interpretation of the decline effect in science in general. To be fair, this is not the focus of his article. He is making an argument for registering all scientific studies, so that publication bias cannot distort the patterns of evidence in the literature. I completely agree. But within his article he writes:

Perhaps, just as the act of observation has been suggested to affect quantum measurements, scientific observation could subtly change some scientific effects. Although the laws of reality are usually understood to be immutable, some physicists, including Paul Davies, director of the BEYOND: Center for Fundamental Concepts in Science at Arizona State University in Tempe, have observed that this should be considered an assumption, not a foregone conclusion.

Perhaps he was just including this notion for completeness, because he does also cover more “prosaic” explanations – but seriously, this is dangerously close to quantum woo. Interacting with a system in order to observe it can affect the behavior of that system (a well-known observer effect). At the quantum level, interaction with a particle causes the collapse of the wave function. Observation, however, is not changing the laws of physics or the way subatomic particles behave – it’s part of their behavior. Extrapolating from quantum mechanics to the notion that the laws of physics may not be immutable is simply a false analogy.

What is Schooler really proposing – that doing research on a drug subtly changes its chemical properties over time? Further, even if this were true (an extraordinary claim for which there is no evidence and which does not flow from quantum mechanics), why would the laws of nature always conspire for such changes to decrease the effect being observed? Why wouldn’t treatments, for example, work better over time – at least sometimes?

The main thrust of Schooler’s article points, in my opinion, to what is almost certainly the answer to the decline effect – it is an artifact of the research itself.

There are multiple known or highly plausible effects that would conspire to create the appearance of a decline effect, and Schooler points many of them out. First, when scientists begin to research a new phenomenon they have yet to work out all the kinks. Generally the rigor and quality of research improves over time, with increased knowledge, experience, attention from the community, and resources to perform more elaborate research. What this means is that the effect of bias is likely to be greater with earlier studies and then “decline” over time as rigor increases. Researcher bias is known to exaggerate the phenomenon that the researcher is trying to demonstrate. This alone is enough to explain a decline effect.

But there’s more. Publication bias (the specific effect Schooler wants to eliminate) would also create an apparent decline effect. In order to get a paper published on an entirely new idea, you need a fairly large and impressive effect size (at least this helps the probability of getting published). But once a new concept is published, a follow-up study showing a smaller or even non-existent effect is also interesting, because it is a failure to replicate an already published effect. In other words, the probability of getting a paper published might be biased towards larger initial effect sizes and smaller later effect sizes.

Further, the biases of those replicating research are not necessarily the same as the originators of an idea. Someone proposing a new idea is highly motivated for their pet theory to be backed by impressive evidence. But often those replicating the research are motivated to disprove a competing theory, or perhaps are just interested in exploring a possible new finding but don’t have much of a bias one way or the other. They may also be more confident in their ability to get their replication published regardless of outcome.

There doesn’t need to be anything sinister in this effect, and even without researcher bias there are statistical factors at play. Those research outcomes that by chance are more dramatic are more likely to get published, and then regression to the mean takes over and later research is likely to more accurately reflect average or true effect sizes.
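This statistical mechanism is easy to simulate. Here is a minimal sketch (all numbers are hypothetical, chosen only for illustration): every study measures a fixed true effect plus sampling noise, but only impressive initial results clear the publication bar, while replications are published regardless of outcome. The published record then shows a built-in decline from the inflated initial estimates back toward the true effect.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_EFFECT = 0.2    # hypothetical true effect size
NOISE_SD = 0.15      # hypothetical sampling error per study
PUB_THRESHOLD = 0.4  # hypothetical bar an initial result must clear
N_STUDIES = 10_000

def run_study():
    # each study's observed effect = true effect + random sampling noise
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# initial studies: only "impressive" results get published
initial_published = [d for d in (run_study() for _ in range(N_STUDIES))
                     if d > PUB_THRESHOLD]

# replications: published regardless of outcome
replications = [run_study() for _ in range(N_STUDIES)]

mean_initial = sum(initial_published) / len(initial_published)
mean_replication = sum(replications) / len(replications)

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean published initial: {mean_initial:.2f}")      # inflated by the filter
print(f"mean replication:       {mean_replication:.2f}")  # regresses to the mean
```

No researcher in this toy model is biased at all; the selective filter on initial publications alone produces the apparent decline.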

It is probable that because of all these factors initial effect sizes are exaggerated. These effect sizes then decline toward the real effect size over time, whatever that is. If effect sizes shrink to nothingness, then the phenomenon is discarded as a dead end (unless, of course, you are a psi or CAM researcher). If a significant effect size persists then the phenomenon is considered to be genuine.

In fact this has been standard in research for decades. The decline effect is just describing something researchers already know but perhaps never bothered to measure. Preliminary research tends to show big effect sizes that often do not hold up with later replications and more rigorous protocols. Researcher bias, publication bias, regression to the mean, and the normal progress of study design are enough to explain this.

There is no need to speculate about the mutability of the laws of nature.
