Archive for the 'Skepticism' Category

Jun 02 2020

Journalism Without Skepticism

A recent interview published in Scientific American is a good case study in what can happen when you have journalism without skepticism.  By skepticism I mean a working knowledge of the discipline of scientific skepticism, which combines our current understanding of the philosophy of science, the nature of pseudoscience, critical thinking, mechanisms of self-deception, deliberate deception, and specific knowledge about individual pseudoscientific and paranormal topics.

The interview was conducted by John Horgan, whom I have trashed in the past for criticizing skepticism while demonstrating an almost complete ignorance of it. The subject of the interview was Leslie Kean, a journalist who has written a book on UFOs and another on life after death. A deep dive into these two issues is beyond the scope of this one article, and they have already been covered at length here and elsewhere. I want to focus on what the interview itself reveals.

Kean appears to take a solid journalistic approach to these issues, but there is a massive hole in her approach. She does not seem to be aware that these questions have already been thoroughly investigated, showing convincingly (in my opinion) that the phenomena are not genuine. She ignores that body of work because she thinks she already understands it, when she doesn’t – so she is missing the skeptical take on these issues. She is dismissive of skeptics as deniers and as closed-minded. She then goes on to make rookie mistakes that any well-informed skeptic could have pointed out to her. The result is a repetition of long-debunked fallacious arguments, but with a patina of serious journalism.

Continue Reading »

No responses yet

Jun 01 2020

Junk Science in the Courtroom

In the last 20 years I have been called to jury duty several times. Every time I was dismissed almost instantly, once I made it known that I am a professional skeptic. Apparently lawyers fear that kind of skepticism on their juries (at least one side always did). The same is true of many of my skeptical colleagues, so I am not an isolated case. My brother once mentioned during the selection process that he had written an article on the fallibility of human memory and eyewitness testimony. His butt had barely hit the seat before he was dismissed.

It is unclear how best to interpret these anecdotes, but what is clear is that justice requires facts and needs to align optimally with reality. Falsehoods and pseudoscience do not generally lead to justice. It is for this reason that courtrooms have elaborate rules of evidence, and generally they work well. Even in our adversarial system, you need to use generally valid arguments, you need to back up your statements with evidence, and there are rules of admissibility. Each side provides a check on the other, as a neutral arbiter presides over the process. It is imperfect (because imperfect people are involved) but at least it has a process.

One area where this process has historically had significant problems, however, is forensic science, and the admissibility of scientific evidence itself. The main problem, as I see it, is that it is based largely on authority, in both a good and a bad way. Each side is allowed to find their own experts, and they can cherry-pick experts whose opinions align with their needs. Often a non-expert jury is then tasked with sorting it out. There are standards governing which expert testimony is admissible, and this has been a controversy unto itself. Here is a good summary:

Prior to 1993, the Frye standard for admitting expert testimony was the prevailing standard for guiding federal and state courts in their consideration as to whether scientific expert testimony should be admitted at trial. Frye v. United States[1]. The Frye standard requires that the proponent of the evidence establish the general acceptance of the underlying scientific principle and the testing procedures. Notably, Frye only applies to new or novel scientific evidence. However, in 1993, following a revision to the Federal Evidence Code by Congress, the Supreme Court of the United States annunciated the new standard in Daubert v. Merrell Dow Pharmaceuticals, Inc.[2] The Daubert inquiry was meant to be flexible and focused on scientific principles and methodology, not conclusions. The Daubert opinion emphasized that the Federal Rules of Evidence governed admissibility and suggested a series of factors a court could consider, but did not establish a test per se. Under Daubert, the admissibility of expert evidence rests squarely within the discretion of the trial court judge. In contrast to Frye, Daubert applies to all expert witness testimony.

The article is about the fact that Florida recently reverted to the Frye standard. This highlights that these questions are sorted out largely through legal precedent, which may differ in every state and at the federal level.

Continue Reading »

No responses yet

Apr 27 2020

Psychological Pitfalls and COVID-19

SARS-CoV-2 is a challenging little bugger, but in my assessment no match for human science and ingenuity. There are already 1,650 listed scientific articles on COVID-19 and 450 ongoing clinical trials. In short, we are sciencing the shit out of this pandemic and we will get through it. But as I have argued previously, perhaps a bigger threat than the virus itself is human psychology. Crises bring out the best and worst in people, and we are seeing both in spades. Also, a crisis exposes the weaknesses in institutions, and those are being highlighted as well.

That’s why, in medicine, we have something called M and M – morbidity and mortality rounds. The goal of these rounds is to review all negative clinical outcomes in whatever setting is being covered and try to figure out what went wrong. But, importantly, such conferences are not about assigning blame, recrimination, or discipline. They are about improving the system. Was a particular negative outcome unavoidable? Was it precipitated by a personal failure, or rather a systemic failure? And if not a failure per se, is there some systematic change we can put in place to minimize these negative outcomes in the future? Should this be handled by education, by some additional checklist or process, or by reconfiguring the workforce?

For some crises, like the pandemic (or a war, for example), we can’t wait until it’s all over to look back and analyze the systemic shortcomings (although we should do that too, to prepare for the next one). We need ongoing analysis and adjustment. That is what a group of psychologists has done with respect to common psychological pitfalls and how they might affect our individual response to the pandemic. I like this review because it is squarely in the tradition of skeptical thinking – it identifies psychological pitfalls so that we can better understand ourselves, and proposes specific adjustments we can make to mitigate them. You can read the full article, but I want to highlight a few of particular interest.

Continue Reading »

No responses yet

Apr 03 2020

A Stupidity Pandemic

As a skeptical science communicator I am constantly walking the line between hope and cynicism. On the one hand, I very much take to heart Carl Sagan’s approach to science – focusing on the absolute wonder of the universe, and celebrating the curiosity and ingenuity of humanity. We have peered into the past, walked on the moon, and decoded many of the secrets of life. Science is a powerful tool that has transformed the world more in the last few centuries than in tens of thousands of years beforehand. And yet, humanity still struggles with the demons of our evolutionary history. We are tribal, superstitious, and capable of surrendering our critical thinking to a charismatic leader.

What this all means is that when we are faced with a challenge, even a crisis, we are capable of meeting it. We can bring the tools of science, philosophy, and politics to bear to solve almost any problem. And yet the extent to which we will fail to do so is a consequence of our own stupidity and lack of critical thinking. There is nothing like a pandemic to reveal all of this – the good and the bad.

On the bright side, there have already been thousands of studies of the novel coronavirus (SARS-CoV-2) and the disease it produces, COVID-19. Researchers are already exploring possible treatments and developing a vaccine. Meanwhile, we have solid measures everyone can use to protect themselves and slow the spread of disease. Where implemented properly and in time, these strategies work. Compare this to just 100 years ago, during the 1918 flu pandemic. That pandemic killed at least 50 million people worldwide – and that magnitude was created largely by the world’s collective failure to properly understand and deal with the virus. They had no treatment, no vaccine, and utterly failed to enact adequate public health measures (to be sure, this was partly because they were fighting a world war and many politicians prioritized the war effort over mitigating the pandemic). Go back a bit further to the Black Death, which killed a third of Europe’s population, and they did not even understand the nature of the pandemic. Their ignorance made them all but helpless before it.

Today, through science we understand exactly what is going on, down to the molecular level. And we have the methods to quickly (relatively speaking) figure out how best to address it. It is still a challenge, because the pandemic is moving quickly, but all we really have to do collectively is not panic and listen to our own experts. But of course, it’s never that simple. Some people will find a way to screw it up, because humanity is a complex mixture of motivations, biases, and emotions.

Continue Reading »

No responses yet

Mar 17 2020

Being Anti-intellectual During a Pandemic

Published by under Skepticism

In the past I have written a defense of elitism and expertise, and articles exploring the phenomenon of anti-intellectualism. For those who reject science this is a core issue – they must attack expertise, reject consensus, and defend populism as their justification for promoting the idea that the consensus of scientific opinion is wrong. They do so with the same tired and rejected arguments they have used for decades, which I guess is in line with their anti-intellectualism.

Recently Michael Egnor, who writes for the anti-science Discovery Institute, and with whom I have tangled before, wrote a stunning defense of anti-intellectualism. He marshaled all the old tropes, which I have already dealt with, but I felt it was especially poignant in the middle of a pandemic. We are actually seeing in real time the consequences of science-denial, of rejecting the advice of experts and basing opinions on your “hunches”, and of approaching reality with a general attitude of anti-expertise populism.

The core of Egnor’s anti-intellectual attack is the notion that – those scientists have been wrong before. First – of course they have. Science is not a crystal ball. It is a set of methods for slowly, painstakingly working out how reality functions. It is full of false hypotheses, dead-ends, mistakes, and occasional brilliance. But mostly it’s careful tedious work, which is then put through the meat-grinder of peer-review. Science is messy, which is why I spend perhaps the majority of my time writing here and on SBM discussing the messiness of science, the pitfalls, the institutional failures, and the changes that many think will help make the institutions of science incrementally better.

Continue Reading »

No responses yet

Mar 16 2020

Perpetual Flying Machine

Published by under Skepticism,Technology

I’m a sucker for perpetual motion machines. I don’t mean that I think they work – they don’t – but they are often intriguing contraptions out of some cyberpunk fantasy. They are also often a bit of a puzzle. How are they supposed to work, and why don’t they? That free energy or perpetual motion machines don’t work is a given, because of the laws of thermodynamics. Energy has to come from somewhere, so for each such claim it’s a fun game to figure out where the energy is actually coming from. This game also helps dispel any notion of continuous or free energy.

A new perpetual motion claim is revealed in an article in the Robb Report. The claim is for an electric plane that will fly mostly on the energy generated by the friction of flight itself. The idea is that the plane will have rechargeable electric batteries that are used for take-off and landing. But while in flight, the batteries will be recharged by vibrations and the flexing of the wings. The inventor, Michal Bonikowski, who calls his project Eather One, hopes this will yield enough energy to keep the plane flying indefinitely.

The problem with this concept, as with all perpetual motion concepts, is the second law of thermodynamics. Every time you convert energy from one form to another, at least a little bit is lost. You can never have 100% efficiency. So the energy you use to propel the plane forward will always be greater than the energy you harvest from pushing through the air. If you design a mechanism (as in the concept art) for harvesting air friction, the extra resistance from that mechanism will slow the plane more than feeding the harvested energy back into propulsion will speed it up. The entire process will be a net negative. You would be better off optimizing aerodynamics.
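To make the bookkeeping concrete, here is a minimal sketch of the energy balance. The efficiency numbers are invented purely for illustration (they are not taken from the Eather One proposal); the point is that with any real-world efficiencies below 100%, the harvest-and-reuse loop always loses energy.

```python
# Minimal sketch of the energy balance for a drag-harvesting aircraft.
# All efficiencies below are hypothetical, chosen only for illustration.

harvester_efficiency = 0.6   # assumed: fraction of the extra drag work recovered as electricity
propulsion_efficiency = 0.9  # assumed: fraction of battery energy converted into useful thrust work

extra_drag_work = 1.0  # joules of additional work the motors must do to overcome the harvester's drag

recovered = extra_drag_work * harvester_efficiency     # energy returned to the battery
returned_thrust = recovered * propulsion_efficiency    # useful thrust work recovered from that energy

net = returned_thrust - extra_drag_work
print(f"Net energy per joule of added drag: {net:+.2f} J")  # -> -0.46 J, a guaranteed loss
```

No choice of sub-100% efficiencies flips the sign – the product of two numbers less than one is always less than one, so the loop can only drain the batteries faster.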

Continue Reading »

No responses yet

Feb 27 2020

Anti-Intellectualism and Rejecting Science

“There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’”
― Isaac Asimov

As science-communicators and skeptics we are trying to understand the phenomenon of rejection of evidence, logic, and the consensus of expert scientific opinion. There is, of course, no one explanation – complex psychological phenomena are likely to be multifactorial. Decades ago the blame was placed mostly on scientific illiteracy, a knowledge deficit problem, and the prescription was science education. Many studies over the last 20 years or so have found a host of factors – including moral purity, religious identity, ideology, political identity, intuitive (as opposed to analytical) thinking style, and a tendency toward conspiratorial thinking. And yes, knowledge deficit also plays a role. These many factors contribute to varying degrees on different issues and with different groups. They are also not independent variables, as they interact with each other.  Religious and political identity, for example, may be partially linked, and may contribute to a desire for moral purity.

Also, all this is just one layer, mostly focused on explaining the motivation for rejecting science. The process of rejection involves motivated reasoning, the Dunning-Kruger effect, and a host of self-reinforcing cognitive biases, such as confirmation bias. Shameless plug – for a full discussion of cognitive biases and related topics, see my book.

So let’s add one more concept into the mix: anti-intellectualism – the generalized mistrust of intellectuals and experts. This leads people to a contrarian position. They may consider themselves skeptics, but they do not primarily hold positions on scientific issues because of the evidence, but mainly because it is contrary to the mainstream or consensus opinion. If those elite experts claim it, then it must be wrong, so I will believe the opposite. This is distinct from conspiracy thinking, although there is a relationship. As an aside, what the evidence here shows is that some people believe in most or all conspiracies because they are conspiracy theorists. Others believe only in some conspiracies opportunistically, because it’s necessary to maintain a position they hold for other reasons. There is therefore bound to be a lot of overlap between anti-intellectualism and holding one or more conspiracies, but they are not the same thing.

Continue Reading »

No responses yet

Feb 04 2020

New York Times Goop Fail

This has to be the worst opinion piece I have read in a major news outlet in a long time. The authors, Elisa Albert and Jennifer Block, leave behind them a killing field of straw men and empty containers of metaphorical “Kool Aid.” Here is the short version – they are defending Gwyneth Paltrow’s Goop and the recent Netflix series Goop Lab with all the tropes of pseudoscience they can muster. They wrap them all up in a narrative of female empowerment, and dismiss out-of-hand all the legitimate criticism of the dangerous advice Goop sells as a conspiracy of the “patriarchy.”

Ironically, and sadly, I would argue that Paltrow, and by extension Albert and Block, are exploiting women, making them more vulnerable, and depriving them of true empowerment – which is knowledge. When you give someone misinformation, you are taking away their ability to have informed consent. This is what con artists do. Alternative medicine is frequently a double-con, in which those who promote it are themselves deceived and are just paying the deception forward.

All the talk about the “patriarchy” is also just another version of a conspiracy theory, in which all legitimate counter arguments and evidence are dismissed as part of the conspiracy (as I am sure some will do with this very blog post). Conspiracy theories work best if they contain a kernel of truth, or if they are built around a legitimate historical grievance, as in this case. All you have to do is wipe away all the nuance, and cherry pick the details that serve your narrative.

Let’s dig in to some of the details of the article. They start with a rather blatant straw man:

The show would surely promote “dangerous pseudoscience,” peddle “snake oil,” and be “undeniably awful for society.”

Six episodes of the show finally dropped late last month, and so far civilization seems to be more or less intact.

Right, so because civilization did not instantly collapse, none of the warnings about the dangers of pseudoscience are valid. But they were just getting started, and this was a mere warm-up. The next paragraph frames the discussion:

Continue Reading »

No responses yet

Nov 19 2019

Scientific Fraud in China

Published by under Skepticism

There is plenty of fraud and corruption in the world, even in the halls of science. No one has a monopoly. But there are some hot spots that deserve specific attention. Recently, significant concerns have been raised about the published research of Xuetao Cao, a Chinese immunologist. This story is newsworthy because Cao is not just any immunologist – he is also the President of Nankai University in Tianjin, China. But more to the point – he is the chairman of research integrity for all Chinese research. When your head of research integrity is exposed for massive scientific fraud, you have a problem.

Here is a thorough treatment of the evidence for fraud, which covers over 50 published papers. The fabrication of data was noticed because much of it involves pictures – western blots, gels, flow cytometry images, and microscopy images. There appear to be two general types of fabrication going on. One type results from sending the same sample through analysis multiple times, but treating the data as if it came from different samples. In that case the resulting images will be strikingly similar in pattern, but not identical. The second type of fabrication is simply to copy and paste images in Photoshop.
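As an aside, this kind of near-duplication is the sort of thing that can be flagged semi-automatically by comparing figure panels against each other. Below is a minimal illustrative sketch using a perceptual hash – to be clear, this is not the method the investigators actually used, and the file names are hypothetical.

```python
# Illustrative only: flag suspiciously similar figure panels with a perceptual hash.
# File names are hypothetical; this is not the investigators' actual workflow.
from itertools import combinations

from PIL import Image   # pip install pillow
import imagehash        # pip install imagehash

panels = ["fig2_panel_a.png", "fig5_panel_c.png", "fig7_panel_b.png"]

# Perceptual hashes are robust to small crops, rescaling, and compression artifacts.
hashes = {name: imagehash.phash(Image.open(name)) for name in panels}

for a, b in combinations(panels, 2):
    distance = hashes[a] - hashes[b]   # Hamming distance between the two hashes
    if distance <= 5:                  # small distance = nearly identical panels
        print(f"Possible duplication: {a} vs {b} (distance {distance})")
```

A flagged pair is not proof of fraud by itself – it is a prompt for a human to look closely at whether two supposedly independent samples really should look that similar.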

Either way, the resulting data fabrication is undeniable once it is noticed. The images are simply too similar (and again, sometimes identical) to be genuine data. Once researchers started poring through Cao’s other papers, the extensive fraud became obvious. When confronted with this revelation online, Cao responded by first standing behind his work, then stating:

Nevertheless, there is no excuse for any lapse in supervision or laboratory leadership and the concerns you raised serve as a fresh reminder to me just how important my role and responsibility are as mentor, supervisor, and lab leader; and how I might have fallen short.

Wow – you see what he just did there? He simultaneously apologized and took responsibility, but only for a failure of supervision. So essentially he is throwing all of the people who work for him under the bus. Either way, however, this is really bad for Cao. Even in the best-case scenario, all the fraud was perpetrated by others under his watch. Keep in mind, he is in charge of research integrity for all of China, but apparently can’t keep an eye on his own lab. There are certainly famous cases where research assistants were the ones perpetrating the fraud. Another immunologist, Jacques Benveniste, claimed to have evidence of immunological activity from high “homeopathic” dilutions. An investigation found his results to be highly unreliable at best, and likely straight-up fraudulent (although they may have been due to really sloppy technique and bias). But it also appears that the positive results all seemed to come from one lab assistant, Elizabeth Davenas – certainly a disturbing pattern.

Perhaps a similar pattern will emerge from Cao’s lab, but it seems unlikely that an overzealous assistant can be responsible for data fabrication in 50 published studies. This is clearly a systemic problem.

Continue Reading »

No responses yet

Nov 18 2019

Peak Intelligence

Published by under Skepticism

There is an interesting article over at The Conversation asking the question – have humans reached peak intelligence? This is something we have discussed previously on the SGU, so I was keen to find out what philosophers think about this question. The core question is this – are there ultimate limits to the ability of humans to think, understand, and hypothesize? If so, are we approaching that limit now? There is also an angle to this that the article did not cover directly – is there a limit to our ability to manage complexity (as opposed to just comprehending reality)?

There are different ways to approach this question. From an evolutionary point of view, our ancestors were likely under selective pressure to solve problems of immediate survival, and not to unravel the deep mysteries of the universe. But I don’t think this is ultimately relevant. This is a hyper-adaptationalist approach. It actually doesn’t matter to the ultimate question, because our hands did not evolve to play the piano either. Abilities that evolve for one purpose may be more generally useful. Clearly humans evolved some general cognitive abilities that go way beyond their immediate narrow evolutionary function.

But the broader point is salient – our cognitive abilities are not necessarily unlimited. What if the universe is simply more complex than our brains can comprehend? Take quantum mechanics, for example. The best thinkers we have, specializing in this question, still cannot solve the mystery of duality and apparent non-locality. We have some ideas, but it is possible that our brains are simply not equipped to imagine the true answer. It may be like a cat trying to understand calculus. If this is true, then what would we expect to happen in the course of scientific development? Would we hit a wall?

As they also discuss in the article, I don’t think so. Rather, if we look at the course of scientific development, our ability to do science is progressing, the technology of science, if you will. But at the same time the difficulty, complexity, and subtlety of the problems are increasing. We are having to work harder and harder for progressively smaller returns. Rather than hitting a wall, I agree that we will likely just wade into the molasses. We will keep pushing deeper and deeper into fundamental theories about how the universe works, but progress will become slower and slower. It may never actually stop, but advances will simply come fewer and farther between.

Continue Reading »

No responses yet

Next »