Archive for the 'Skepticism' Category

Dec 13 2024

Podcast Pseudoscience

A recent BBC article highlights some of the risks of the new age of social media we have crafted for ourselves. The BBC investigated the number one ranked UK podcast, Diary of a CEO with host Steven Bartlett, for the accuracy of the medical claims recently made on the show. While the podcast started out focusing on tips from successful businesspeople, it has recently turned toward unconventional medical opinions, as this has boosted downloads.

“In an analysis of 15 health-related podcast episodes, BBC World Service found each contained an average of 14 harmful health claims that went against extensive scientific evidence.”

These include showcasing an anti-vaccine crank, Dr. Malhotra, who claimed that the “Covid vaccine was a net negative for society”. Meanwhile the WHO estimates that the COVID vaccine saved 14 million lives worldwide. A Lancet study estimates that in the European region alone the vaccine saved 1.4 million lives. These numbers could have been greater were it not for the very type of anti-vaccine misinformation spread by Dr. Malhotra.

Another guest promoted the keto diet as a treatment for cancer. Not only is there no evidence to support this claim, but dietary restrictions during cancer treatment can be dangerous, imperiling the health of cancer patients.

This reminds me of the 2014 study that found that, “For recommendations in The Dr Oz Show, evidence supported 46%, contradicted 15%, and was not found for 39%.” Of course, evidence published in the BMJ does little to counter misinformation spread on extremely popular shows. The BBC article highlights the fact that in the UK podcasts are not covered by the media regulator Ofcom, which has standards of accuracy and fairness for legacy media.

Continue Reading »


Nov 05 2024

A Discussion about Biological Sex

Published under Skepticism

At CSICON this year I gave a talk about topics over which skeptics have disagreed, and continue to disagree, with each other. My core theme was that these are exactly the topics we should be discussing with each other, especially at skeptical conferences. Nothing should be taboo or too controversial. We are an intellectual community dedicated to science and reason, and we have spent decades talking about how to find common ground and resolve differences when it comes to empirical claims about reality. But the fact is we sometimes disagree, and this is a great learning opportunity. It is also humbling, reminding us that the journey toward critical thinking and reason never ends. On several topics self-identified skeptics disagree largely along political lines, which is a pretty sure sign we are not immune to ideology and partisanship.

I spent most of the talk, however, discussing the issue of biological sex in humans, which I perceive as currently the most controversial topic within skepticism. My goal was to explore where we actually disagree. Generally speaking, skeptics don’t disagree about the facts or about the proper role of science in determining what is likely to be true. We tend to disagree for more subtle reasons, although often the reason does come down to a lack of specific expertise on questions that are highly technical. The most important thing is that we actually engage with each other’s arguments and positions, to make sure we truly understand what those who disagree with us are saying, so that we can properly explore premises and logic.

Jerry Coyne, author of the book and blog Why Evolution is True, was also at CSICON and gave a talk essentially taking the opposing position to my own. His position is that biological sex in humans is binary, that this is the only scientific position, and that anything else is simply ideology trumping science. His talk was after mine, so I was very interested in how he would respond to my position. He essentially didn’t – he just gave the talk he was going to give and included a single slide with his “responses” to my talk. Except they weren’t responses at all, just a list of standard talking points that had nothing to do with my talk.

Continue Reading »


Oct 10 2024

Confidently Wrong

How certain are you of anything that you believe? Do you even think about your confidence level, and do you have a process for determining what your confidence level should be or do you just follow your gut feelings?

Thinking about confidence is a form of metacognition – thinking about thinking. It is something, in my opinion, that we should all do more of, and it is a cornerstone of scientific skepticism (and all good science and philosophy). As I like to say, our brains are powerful tools, and they are our most important and all-purpose tool for understanding the universe. So it’s extremely useful to understand how that tool works, including all its strengths, weaknesses, and flaws.

A recent study focuses on one tiny slice of metacognition, but an important one – how we form confidence in our assessment of a situation or a question. More specifically, it highlights the illusion of information adequacy, yet another form of cognitive bias. The experiment divided subjects into three groups – one group was given half of the information about a specific situation (the information that favored one side), while a second group was given the other half. The control group was given all the information. Subjects were then asked to evaluate the situation and rate how confident they were in their conclusions. They were also asked whether they thought other people would come to the same conclusion.

You can probably see this coming – the subjects in the test groups receiving only half the information felt that they had all the necessary information to make a judgement and were highly confident in their assessment. They also felt that other people would come to the same conclusion as they did. And of course, the two test groups came to the conclusion favored by the information they were given.

Continue Reading »


Aug 30 2024

Accusation of Mental Illness as a Political Strategy

I am not the first to say this but it bears repeating – it is wrong to use the accusation of a mental illness as a political strategy. It is unfair, stigmatizing, and dismissive. Thomas Szasz (let me say straight up – I am not a Szaszian) was a psychiatrist who made it his professional mission to make this point. He was concerned especially about oppressive governments diagnosing political dissidents with mental illness and using that as a justification to essentially imprison them.

Szasz had a point (especially back in the 1960s when he started making it) but unfortunately took his point way too far, as often happens. He decided that mental illness, in fact, does not exist, and is 100% political oppression. He took a legitimate criticism of the institution of mental health and abuse by oppressive political figures and systems and turned it into science denial. But that does not negate the legitimate points at the core of his argument – we should be careful not to conflate unpopular political opinions with mental illness, and certainly not use it as a deliberate political strategy.

While the world of mental health care is much better today (at least in developed nations), the strategy of labeling your political opponents as mentally ill continues. I sincerely wish it would stop. For example, in a recent interview on ABC, Senator Tom Cotton was asked about some fresh outrageous thing Trump said, criticism of which Cotton waved away as “Trump Derangement Syndrome”.

Continue Reading »


Aug 22 2024

AI Humor

Published under Skepticism, Technology

It’s been less than two years (November 2022) since ChatGPT launched. In some ways the new large language model (LLM) type of artificial intelligence (AI) applications have been on the steep part of the improvement curve. And yet, they are still LLMs with the same limitations. In the last two years I have frequently used ChatGPT and other AI applications, and often give them tasks just to see how they are doing.

For a quick review, LLMs are AIs trained on vast amounts of information from the internet. They essentially predict the next word chunk in order to build natural-sounding responses to queries. Their responses therefore represent a sort of zeitgeist of the internet, building on what is out there. Responses are necessarily derivative, but can contain unique combinations of information. This has led to a so-far endless debate about how truly creative LLMs can be, or whether they are just stealing and regurgitating content from human creators.
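The "predict the next word chunk" idea can be illustrated with a toy sketch. The following is not how an LLM actually works internally (real models use neural networks trained on enormous corpora, and tokenize text into subword pieces rather than whitespace-separated words); it is just a bigram counter that captures the same generation loop: predict the most likely next token, append it, repeat.

```python
from collections import defaultdict, Counter

def train_bigram(text):
    """Count, for each token, how often each candidate next token follows it."""
    tokens = text.split()
    counts = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, start, max_tokens=10):
    """Greedily emit the most frequent follower, one token at a time."""
    out = [start]
    for _ in range(max_tokens):
        followers = counts.get(out[-1])
        if not followers:
            break  # no observed continuation; stop generating
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# A deliberately tiny "training corpus" for illustration.
corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(generate(model, "the", max_tokens=4))
```

Even this trivial model produces locally fluent output that is entirely derivative of its training text, which is the heart of the regurgitation critique; the open question is whether scaling this predict-next-token idea up by many orders of magnitude produces something qualitatively more creative.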

What I am finding is that LLMs are getting better at doing what they do, but have not broken out of the limitations of this regurgitation model. Here is a good example from the New York Times – an author (Curtis Sittenfeld) wrote a short story based on the same prompt given to ChatGPT, and both stories were published to see if readers could tell the difference. I knew right away which story was the AI’s. The author’s story was interesting and engaging. ChatGPT’s story bored me before the end of the first paragraph. It was soulless and mechanical, reminding me of a bad story written by a high school freshman. It got the job done, and used some tired and predictable literary devices, but failed to engage the reader and lacked any sense of taking the reader on an emotional journey.

This reinforced for me what I suspected from my own interactions – LLMs are getting better at being LLMs, but have not broken out of their fundamental limitations.

Continue Reading »


Jul 19 2024

Deepfake Doctor Endorsements

Published under Skepticism

This kind of abuse of deepfake endorsements was entirely predictable, so it’s not surprising that a recent BMJ study documents the scale of this fraud. The study focused on the UK, detailing instances of deepfakes of celebrity doctors endorsing dubious products. For example, there is this video of Dr. Hilary Jones used to endorse a snake oil product claiming to reduce blood pressure. The video is entirely fake. It’s also interesting that in the video the fake Jones refers only to “this product” – as if the deepfakers made a generic endorsement (à la Krusty the Clown) that could then be attached to any product.

This trend is obviously disturbing, although again entirely expected. This use of deepfakes is deliberate fraud, and should be treated as such. Public figures have a right to their own identity, including their name and likeness. Laws vary by country and by state, but most have some limited protections for the use of someone’s name or likeness. In the US, for example, there is a limited “right of publicity” which restricts the use of someone’s name or likeness for commercial purposes without their permission. This right can also extend beyond death, with the estate holding it. Even imitating a recognizable voice has been the basis of successful lawsuits.

This means that using a deepfake clearly violates the right of publicity – in fact it is the ultimate violation of that right. There are generally three legal remedies for violations – monetary damages, injunctive relief, and punitive damages.

How good are the deepfakes? Good enough, especially if you are viewing a relatively low-res video on social media. And of course they are only getting better. We cannot wait until deepfakes are good enough to fool most people; right now they are of high enough quality to constitute fraud. So what do we do about it?

Continue Reading »


Jun 03 2024

Clickbait and Misinformation

Which is worse – clickbaity headlines for news articles that are factually correct but play up a sensational angle, or straight-up misinformation? It depends on what you mean by “worse”. A new study tries to address this question, with some interesting findings.

Misinformation is an increasingly important topic, one with far reaching implications for society. Our individual lives and our society are increasingly run on information. It is a critical resource, and the ability to evaluate and utilize information may be a determining factor in our quality of life. My favorite example remains Steve Jobs, because he is such a stark case. He was one of the richest people on the planet, with every physical resource at his disposal, and was a titan of an information industry. And yet he died prematurely of a potentially curable disease. He chose to delay mainstream treatment in order to pursue “natural” therapies that were ultimately worthless. We cannot know for sure what would have happened had he not taken this course, but his odds of survival would have been better.

At a societal level the most visible impact of our information ecosystem is on politics and public health. We are facing a rather dramatic decision regarding the next presidential election in the US, and this will ultimately be determined by how people are accessing and evaluating information. This has always been the case in a democracy, but I think most people alive today have not experienced a divergence of narrative and opinion as intense as we have now.

We also just went through the worst pandemic in a century, which brought into focus every issue dealing with misinformation. How do we deal with it in an age of social media? How do we balance making sure people get accurate health information so they can make informed choices against freedom of speech and the value of open debate? There is no one correct answer; we just have to choose our tradeoffs.

Continue Reading »


May 13 2024

Spotting Misinformation

Published under Skepticism

There is an interesting disconnect in our culture recently. About 90% of people claim that they verify information they encounter in the news and on social media, and 96% of Americans say that we need to limit the spread of misinformation online. And yet, the spread of misinformation is rampant. Most people (74%) report that they have seen information online labeled as false. Only about 60% of people report regularly checking information before sharing it. And a relatively small number of users spread a disproportionate amount of misinformation.

Of course, what is considered “misinformation” is often in the eye of the beholder. We tend to silo ourselves in information ecosystems that share our worldview, and define misinformation relative to our chosen outlets. Republicans and Democrats, for example, trust completely different sources of news, with no overlap among their most trusted sources. What’s fake news on Fox is mainstream news on MSNBC, and vice versa. There is not only a difference in what is considered real vs fake news, but also in how the news is curated. Choosing certain stories to amplify over others can greatly distort one’s view of reality.

Misinformation is not new, but the networks by which it is created and shared are changing fairly quickly. If we all agree we need to stem the tide of misinformation, how do we do it? As is often the case with big social or systemic questions like this, we can take a top-down or bottom-up approach. The top-down approach is for social media platforms and news outlets to take responsibility for the quality of the information being spread on their watch. Clear misinformation can be identified and then nipped in the bud. AI algorithms backed up by human evaluators can kill a lot of misinformation, if the platform wants. Platforms can also choose algorithms that favor quality and reliability over sensationalism and maximizing clicks and eyeballs. In addition, government regulations can shift the incentives for platforms and outlets to favor reliability over sensationalism.

Continue Reading »


May 09 2024

Havana Syndrome Revisited

Published under Skepticism

Last month I wrote about Havana Syndrome, the claim that a number of American and Canadian diplomats and military personnel were the targets of some sort of directed energy weapon attack causing symptoms of headache, disorientation, nausea, and sometimes associated with an auditory sensation. The point of the article was to do a plausibility analysis, based on what information I could find. I concluded:

“So far it seems that the objective evidence favors the “mass delusion” hypothesis. This is similar to “sick building syndrome” and other health incidents where a chance cluster of symptoms leads to widespread reporting which is followed by confirmation bias and the background noise of stress and symptoms focusing on the alleged syndrome. This explanation, at least, cannot be ruled out by current evidence.”

But I also thought we could not rule out (“rule out” is a strong position) that some of the initial cases may have been a genuine external attack. Part of my point was to caution skeptics about landing prematurely on a skeptical narrative and then biasing any further analysis toward that narrative. Sometimes information is messy, and there are legitimate points on more than one side. Don’t use bad arguments even to defend legitimate skeptical positions.

Specifically, I wondered if Havana Syndrome was more like some prior mass delusions in which there was a core of genuine cases. The Pokemon seizure panic of the 1990s is a good example – most of the cases were some form of mass delusion, but about 10% were actual photosensitive seizures in susceptible individuals. So, how do we distinguish between 100% mass delusion and mostly mass delusion? Arguments and evidence that some of the cases were not compatible with an external attack, or were clearly some form of hypervigilance, anxiety, or delusion, do not settle that distinction.

Continue Reading »


Apr 16 2024

Evolution and Copy-Paste Errors

Evolution deniers (I know there is a spectrum, but generally speaking) are terrible scientists and logicians. The obvious reason is that they are committing the primary mortal sin of pseudoscience – working backwards from a desired conclusion rather than following evidence and logic wherever they lead. They therefore latch onto arguments that are fatally flawed because they feel those arguments support their position. One could literally write a book using bad creationist arguments to demonstrate every type of poor reasoning and pseudoscience (I should know).

A classic example is an argument mainly promoted as part of so-called “intelligent design”, which is just evolution denial desperately seeking academic respectability (and failing). The argument goes that natural selection cannot increase information, only reduce it. It does not explain the origin of complex information. For example:

[A] big obstacle for evolutionary belief is this: What mechanism could possibly have added all the extra information required to transform a one-celled creature progressively into pelicans, palm trees, and people? Natural selection alone can’t do it—selection involves getting rid of information. A group of creatures might become more adapted to the cold, for example, by the elimination of those which don’t carry the genetic information to make thick fur. But that doesn’t explain the origin of the information to make thick fur.

I am an educator, so I can forgive asking a naive question. Asking it in a public forum in order to defend a specific position is more dodgy, but if it were done in good faith, that could still propel public understanding forward. But evolution deniers continue to ask the same questions over and over, even after they have been definitively answered by countless experts. That demonstrates bad faith. They know the answer. They cannot respond to the answer. So they pretend it doesn’t exist, or when confronted directly, respond with the equivalent of, “Hey, look over there.”

Continue Reading »

