Oct 04 2011
In April of 2009 an Italian vascular surgeon by the name of Zamboni published the first paper on chronic cerebrospinal venous insufficiency (CCSVI), in which he proposed that blockages in the veins that drain the brain are strongly associated with multiple sclerosis, and further “play(s) a key role in determining the clinical course of the disease.” The paper sparked a controversy that rages still, one I have been following fairly closely here.
Discussion has now been renewed by a just published meta-analysis of CCSVI trials. The authors of the meta-analysis conclude:
Our findings showed a positive association between chronic cerebrospinal venous insufficiency and multiple sclerosis. However, poor reporting of the success of blinding and marked heterogeneity among the studies included in our review precluded definitive conclusions.
In other words – the data are all over the place, making a meta-analysis all but worthless.
Reporting on this story has been variable, but overall not bad. There are two components to the conclusion, and reports can emphasize one, the other, or both equally. The first is that there is an association between CCSVI and MS in the studies that have been done to date. But the second conclusion negates the first – that there is considerable variability in the quality, success of blinding, and outcome of the studies, making it impossible to draw any definitive conclusion.
In my opinion, a meta-analysis was the wrong tool for reviewing this question. Essentially, a meta-analysis combines the results from various trials, treating them as if they were one large trial. The advantage of this approach is that the resulting combined data set has much more power than the individual studies. There is also the hope that by combining multiple trials error and bias will average out and the true signal will come through.
However, there are several weaknesses to meta-analysis. The first is the obvious fact that a meta-analysis is only as good as the trials it combines, following the old rule of garbage in, garbage out. Combining trials may compensate a bit for outliers, but does not correct for the general quality of the studies.
Second, a meta-analysis works best when the studies being combined are homogeneous, with similar methods and outcome measures. Otherwise it is difficult to know how to quantify different outcomes. And third, a meta-analysis introduces another possible layer of bias through the choice of which studies to include.
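To make the pooling logic concrete, here is a minimal sketch of the standard inverse-variance (fixed-effect) approach, along with Cochran's Q and the I² statistic that quantify the heterogeneity problem described above. The study numbers are entirely made up for illustration and are not taken from the CCSVI literature.

```python
import math

# Hypothetical per-study results: (log odds ratio, variance of that estimate).
# These values are invented to mimic a "mixed" literature: one strongly
# positive study, two near-null studies, one moderately positive study.
studies = [
    (2.3, 0.40),
    (0.1, 0.25),
    (-0.2, 0.30),
    (1.5, 0.50),
]

# Fixed-effect pooling: weight each study by 1/variance, so more precise
# studies count more -- treating the studies as arms of one large trial.
weights = [1.0 / v for _, v in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)

# Cochran's Q measures disagreement among studies beyond what chance allows;
# I^2 expresses that excess disagreement as a share of total variation.
q = sum(w * (e - pooled) ** 2 for (e, _), w in zip(studies, weights))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled log-OR = {pooled:.2f}")
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.0f}%")
```

With these invented inputs the pooled estimate comes out positive even though half the studies are null or negative, and I² lands well above the conventional 50% threshold for substantial heterogeneity, which is precisely the situation in which a single pooled number is misleading.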
It is not surprising, therefore, that a 1997 study found that meta-analysis predicts the outcome of later large definitive clinical trials only 65% of the time, which is not much better than chance.
For messy preliminary data a systematic review is perhaps better than meta-analysis to get an idea about what the state of the research is. In this case, we have one researcher, Zamboni, who has generated the initial positive results. Most of the attempts at replicating this research have either had mixed or negative results. There is little consistency among the various studies.
There also appears to be a correlation between lack of blinding and positive outcomes. Zamboni’s original data was unblinded, and dramatically positive. In a follow-up study he claims to have addressed the concerns, but critics point out that he gives no indication of how assessments were blinded and how successful the blinding was. Attempts to replicate the data with well-blinded protocols have tended to be negative.
It is a good rule of thumb that when a phenomenon tends to disappear when proper blinding protocols are put into place, then the phenomenon is likely not real.
The current meta-analysis can really only reach one conclusion – that the data is preliminary and mixed, with various degrees of blinding and overall quality, and therefore a meta-analysis cannot reach any firm conclusion from the data. The researchers really could have stopped there. Actually performing the meta-analysis was pointless, and generated what the authors acknowledge is likely a spurious outcome. This outcome, however, is likely to add confusion to the reporting of the data.
A better approach to this set of studies is a science-based systematic review, taking into consideration the relationship between the quality of each study (especially the quality of the blinding) and the magnitude of the correlation between CCSVI and multiple sclerosis. The emerging consensus as more studies are being done is that there is no correlation.