May 29 2008
I often rant against the generally poor quality of science journalism. There are a few bright spots every now and then (like the recent article on vaccines in Time magazine) but overall reading science news in the mainstream media is a depressing and frustrating affair. At least it gives me material for this blog, which then serves as a means for (albeit modest) damage control.
Also, being a skeptic, I have to question my own observations. Perhaps I am falling victim to confirmation bias – noticing the bad science journalism that confirms my biases while missing good science journalism or dismissing it as “the exception.” I assume this is happening to some degree, and so I try to keep an open mind about the media. Of course, some objective, systematic information would help me assess the accuracy of my subjective perceptions.
Well, now I have some.
Gary Schwitzer from the University of Minnesota School of Journalism and Mass Communication runs a website called HealthNewsReview.org, the purpose of which is to rate medical news stories in the mainstream media. He has now published an analysis of 500 medical news stories over the past two years. He found that:
In our evaluation of 500 US health news stories over 22 months, between 62%–77% of stories failed to adequately address costs, harms, benefits, the quality of the evidence, and the existence of other options when covering health care products and procedures. This high rate of inadequate reporting raises important questions about the quality of the information US consumers receive from the news media on these health news topics.
About two-thirds of all medical news stories (and these are from the top news outlets) had major flaws. In my opinion this establishes that there is a systematic problem with the quality of medical news journalism in the US. Other countries have similar problems, and I suspect that similarly poor quality is a problem across science news reporting and not isolated to medical news.
Here are the criteria that Schwitzer uses to assess news stories (I just give the categories here; you can read the full descriptions on his website):
Criterion #1 The availability of the treatment/test/product/procedure
Criterion #2 Whether/how costs are mentioned in the story
Criterion #3 If there is evidence of disease mongering in the story
Criterion #4 Does the story seem to grasp the quality of the evidence?
Criterion #5 How harms of the treatment/test/product/procedure are covered in the story
Criterion #6 Does the story establish the true novelty of the approach?
Criterion #7 How the benefits of the treatment/test/product/procedure are framed
Criterion #8 Whether the story appeared to rely solely or largely on a news release
Criterion #9 Is there an independent source and were any possible conflicts of interests of sources disclosed in the article?
Criterion #10 Whether alternative treatment/test/product/procedure options are mentioned
He compiles the results of this analysis into a 1-5 star rating scale. Reading through a number of his analyses of specific articles, he seems to do a thorough job and his criteria work well. I do have some constructive criticism, however. He has apparently fallen into the “evidence-based medicine” trap of failing to adequately consider prior probability or scientific plausibility. For example, his review of an article discussing a recent acupuncture study does criticize the article for not putting the data into more of a context (which is good), but his discussion of context does not specifically address plausibility. My reading of this study comes to a different bottom-line conclusion than the article being reviewed, and I would have rated the article lower for failing to put the study into proper scientific context.
In fact, I would add evaluation of scientific context/plausibility as a separate criterion worthy of its own rating and discussion.
Schwitzer also discusses what he thinks needs to be done to improve the situation – which is great because it is always a good idea to not just complain but to also offer constructive advice for improvement. He writes:
– Reporters and writers have been receptive to the feedback; editors and managers must be reached if change is to occur.
– Time (to research stories), space (in publications and broadcasts), and training of journalists can provide solutions to many of the journalistic shortcomings identified by the project.
I completely agree. Training is perhaps the most important thing – and right now we seem to be going in the opposite direction, toward journalists with less scientific/medical training.
My compliments to Gary Schwitzer for identifying the problem, for his efforts at Health News Review, and for bringing this problem to public attention. We definitely have common cause when it comes to the quality of science reporting. But if current trends continue, I should have no worries in the future about finding topics for this blog.