Jul 02 2007
Breakthrough Science
In response to Friday’s post, daedalus2u wrote the following comment:
“I agree with you that the vast majority of advancement in science is incremental and comes slowly bit by bit. That is the type of science that is best evaluated by peer review, what Thomas Kuhn calls “normal science”.
“Breakthrough” type science is not well evaluated by scientific peers.
For example the idea that Helicobacter pylori causes ulcers was not accepted at first. The reason was because everyone “knew” that ulcers were caused by stress and too much acid. A bacteria hypothesis was implausible. Would researchers proposing such a treatment that was counter to “conventional” wisdom receive funding today? Probably not. A non-conventional treatment is considered extraordinary, and so requires extraordinary evidence to be considered worth funding. But attaining extraordinary evidence requires collecting data which requires funding. A catch-22.
My own experience in trying to get my nitric oxide research taken seriously is that potential new treatments can be ignored because people are too busy to evaluate them. Everything is easy to ignore until there is overwhelming evidence. Some things are ignored even then, for example evolution.”
I tend to disagree with this basic conclusion – that in general truly novel or breakthrough notions in science are treated unfairly, are not accurately assessed by peers, or are simply ignored. The example given by daedalus is the one I hear most often, and as Dick Cheney (I am assuming this is a pseudonym) pointed out, it turns out not to be true. My colleague Kimball Atwood did a nice review of the literature and found that the idea that H. pylori is an important cause of gastric ulcers was met with interest – not rejection – and both interest and acceptance increased right along with the evidence. It’s actually a good and well-documented example of how science changes over time in response to new ideas and new evidence.
In fact, although the claim is often made that science as an institution is hostile to truly innovative ideas, no one can come up with really good examples of this – that’s a huge red flag, by the way. “I know this happens, even though I can’t give you any actual examples.”
What about the antioxidant example? Well, as a clinician I lived through the whole antioxidant hype and I have to say I never bought it, and I know many people who didn’t. The idea is very compelling – oxidative stress causes damage, so limit the oxidative stress and reduce the damage. But it was always recognized by serious researchers that this is a simplistic model that looks at only one factor, and that reality is more complex. There was certainly enthusiasm for antioxidants as potential treatments, but this was significantly tempered by the need for empirical evidence. As the clinical evidence started to come back mostly negative, antioxidant enthusiasm rapidly waned and settled down to where it is now. The prevailing opinion is that oxidative stress plays a role in some diseases and may still be a useful target for therapy, but it is not a panacea, it may never pan out to have clinically relevant effects, and there is emerging evidence of a very real downside to antioxidants. This cycle was relatively brief, about a decade and a half, and it tracked pretty closely and rapidly with the evidence.
Another typical example is the germ theory of disease. This did indeed meet with resistance, but it dates from the 19th century – prior to the general adoption of scientific standards within medicine. Bottom line – pre-scientific examples don’t count.
What about relativity, plate tectonics, the big bang, and the meteor wiping out the dinosaurs? In pretty much every case the same pattern unfolds. New claims are met with initial skepticism – but the healthy kind of scientific skepticism that demands evidence prior to acceptance. The ideas are judged based upon their plausibility and internal consistency. Flaws are probed for, contradictory evidence is discussed, and finally scientists point out what observations and experiments would properly test the new ideas. The tests are eventually done, and if the new idea survives them it gains supporters, garners more research, and eventually becomes the conventional wisdom.
This is how science is supposed to work. It also means that every scientific idea that is now the conventional wisdom was at one time a new idea, and was met with initial but appropriate skepticism and challenges for evidence.
Some ideas may have been slow to gain support but eventually won out – and there were always fairly good reasons for the delay. Some ideas, like plate tectonics, were implausible given contemporary knowledge. This meant that the bar for evidence was set higher, but eventually they did meet that bar.
It is also true that individual scientists may cling to their pet ideas – they may hold on to false notions despite the rising tide of evidence. But it is increasingly unlikely for the entire scientific community to behave this way. Like any good free-market system, in the aggregate humans make pretty good decisions. Individually we are quirky; together we wield wisdom.
It is worthwhile to point out that there is a huge post-hoc bias in assessing breakthroughs. We tend to look back at history from the vantage point of those new ideas in science that ended up being true. From that perspective, any resistance or skepticism seems to be going against the tide of history, while early enthusiasts seem prescient.
However, the successes must be placed in the context of the failures as well. For every idea that turns out to be true, there are many that turn out to be false.
When I was a medical student a surgeon told me a story of a resident who examined a patient with abdominal pain in the ER and then wrote in his assessment that he thought the patient had a gallstone ileus – a rather rare condition. It turned out that the patient did have this rare condition, and the resident looked like a clinical genius. When confronted by the attending, however, the resident confessed that he had made the same prediction for every abdominal pain patient he had seen in the previous months and was bound to get lucky eventually. But if you only considered the one case he got right, you could not properly assess his clinical acumen.
Similarly, the scientific community must be judged based upon how it responds to all new scientific claims. It meets them all with initial skepticism, a no-nonsense assessment of probability, and demands for discriminating evidence. The community, in free-market style, then tracks pretty closely with plausibility and evidence. Ideas that turn out to be wrong are weeded out and largely forgotten, and we are left with those that survive the process. It is the weeding out that is the key, however, and this requires skeptically questioning and challenging all new ideas.