Jan 15 2019

Dunning-Kruger and GMO Opposition

I have written extensively about GMOs (genetically modified organisms) here, and even dedicated a chapter of my book to the topic, because it is the subject with the largest gap between public opinion and the opinion of scientists (51 percentage points). I think it’s clear that this disparity is due to a deliberate propaganda campaign largely funded by the organic lobby, with collaboration from extreme environmental groups like Greenpeace.

This has produced an extreme, if not unique, challenge for science communicators. There are also direct practical implications, as the political fight over GMO regulation and acceptance is well underway. The stakes are high: we face the challenge of feeding a growing population while we are already using too much land, and there really isn’t much more we can press into agriculture. (Even if there are other ways to reduce our land use, that does not mean we should oppose a safe and effective technology that can reduce it further.)

A new study published in Nature Human Behaviour may shed further light on the GMO controversy. The authors explore the relationship between knowledge about genetics and attitudes toward GMOs.

In a nationally representative sample of US adults, we find that as extremity of opposition to and concern about genetically modified foods increases, objective knowledge about science and genetics decreases, but perceived understanding of genetically modified foods increases. Extreme opponents know the least, but think they know the most.

Continue Reading »

Jan 14 2019

Our Memories Work Backwards

One more piece of the memory puzzle seems to be falling into place. The question is – what steps do our brains go through when recalling a memory? Researchers have been focusing on visual memory, because it is the easiest to model and image, and they have found that memories are recalled in the reverse of the process by which they are formed.

A recent study in Nature Communications replicates the overall findings of a previous study published in PNAS. Both studies looked at the visual system and found essentially the same thing.

When we perceive an object, first our brain receives an image from the retina. By the time this image gets to the visual cortex, some basic image processing has already occurred at the subcortical level. Then the cortex puts the image together, sharpens up contrast and lines, interprets size and distance, shadows and movement, etc. The brain then tries to find a match in its catalogue of known things. Once a match is found, that information is communicated back down to the more basic visual layers and the image is adjusted to enhance the match – lines are filled in, extraneous details are suppressed, and assumptions of size and distance are adjusted.

Then the now identified object is sent to even higher brain areas (higher in this network) to afford meaning to the object. If your brain thinks the object has agency, this connects to the emotional centers in order to remember what you feel about the object. Connections are also made to memories about the object. Let’s call these thematic memories. So our brains build the image up from basic details, to complex shapes, then to known objects, and finally to feelings, connections, meaning and memories.

But what about when you recall the object you previously saw? Both of these studies, using visual memories, found that the brain works backwards. First the thematic areas of the brain light up, then progressively more basic areas of visual processing. Media reporting on these studies emphasizes that this is backward from how visual memories are made in the first place. However, this is only sort-of true. Remember – even when perceiving things, information flows from the details up to the themes, but then back down from the themes to the details. Perception and memory formation are bidirectional.

Continue Reading »

Jan 11 2019

Predicting Brexit

This study has a fairly narrow focus, but it does relate to an interesting topic. A new analysis finds that the betting market predicted the Brexit vote an hour before the financial market.

This says something about the efficiency of these respective markets in processing and reacting to information. The authors also conclude that, if the financial markets were optimally efficient, they should have predicted the result of the Brexit vote two hours before they did.

OK, this is more than a bit wonky, but what I really want to discuss is the more basic concept of prediction markets as it relates to crowdsourcing and big data. The idea is that a lot of people in the aggregate may be better at making decisions, or at reflecting emerging trends, than any individual or small group. This gets interesting when you compare this kind of crowdsourcing to individual experts.
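
To illustrate the statistical intuition behind that idea – this is a toy sketch in Python, not the actual Brexit analysis, and the crowd size, noise level, and “true” value are arbitrary assumptions – consider many individuals who each hold a noisy estimate of the same quantity. The aggregate of their estimates is usually closer to the truth than a typical individual is:

```python
import random

# Toy wisdom-of-crowds illustration; all numbers below are made up.
random.seed(42)

TRUE_VALUE = 0.52   # hypothetical "true" probability of an outcome
CROWD_SIZE = 1000   # assumed number of participants
NOISE = 0.15        # assumed spread of individual error

# Each individual's estimate is the truth plus independent noise.
estimates = [random.gauss(TRUE_VALUE, NOISE) for _ in range(CROWD_SIZE)]

crowd_estimate = sum(estimates) / len(estimates)
avg_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / len(estimates)

print(f"Crowd estimate:           {crowd_estimate:.3f} (truth = {TRUE_VALUE})")
print(f"Crowd error:              {abs(crowd_estimate - TRUE_VALUE):.3f}")
print(f"Average individual error: {avg_individual_error:.3f}")
```

When the individual errors are independent, they largely cancel in the average – which is one reason aggregating mechanisms like betting markets can process information quickly, and why comparing them to individual experts is interesting.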

Continue Reading »

Jan 10 2019

Children and Screen Time

Most parents worry about how much time their children spend in front of computer screens, smartphones, and other electronic devices. This is a reasonable worry – it is a fairly dramatic cultural change, and the experience is different from what most of today’s parents had when they were children.

Pediatricians have also been warning about excessive screen time, which has been linked to obesity. But current research and recommendations are getting more nuanced, and pediatric organizations have recently walked back or altered their recommendations.
A recent review published in the BMJ found:

We found moderately strong evidence for associations between screentime and greater obesity/adiposity and higher depressive symptoms; moderate evidence for an association between screentime and higher energy intake, less healthy diet quality and poorer quality of life. There was weak evidence for associations of screentime with behaviour problems, anxiety, hyperactivity and inattention, poorer self-esteem, poorer well-being and poorer psychosocial health, metabolic syndrome, poorer cardiorespiratory fitness, poorer cognitive development and lower educational attainments and poor sleep outcomes. There was no or insufficient evidence for an association of screentime with eating disorders or suicidal ideation, individual cardiovascular risk factors, asthma prevalence or pain. Evidence for threshold effects was weak. We found weak evidence that small amounts of daily screen use is not harmful and may have some benefits.

The evidence is weak and correlational only, which means we cannot conclude that screen time causes obesity, anxiety, or other issues. It may be, for example, that children who are inactive for other reasons are both more likely to be overweight and more likely to engage in sedentary activities, many of which involve screens.
Based on this review, the Royal College of Paediatrics and Child Health said that there is insufficient evidence to conclude that screen time in itself is “toxic.”

Continue Reading »

Jan 08 2019

Misunderstanding Dunning-Kruger

There has apparently been a recent increase in interest in the Dunning-Kruger effect. The Washington Post recently wrote about it, making the obvious political observation (having to do with the current occupant of the White House). It’s great that there is public interest in an important psychological phenomenon, one central to critical thinking. I have discussed DK before, and even dedicated an entire chapter of my book to it.

Unfortunately, the Post misinterprets the DK effect in the way it is most often misinterpreted. They write:

Put simply, incompetent people think they know more than they really do, and they tend to be more boastful about it.

and

Time after time, no matter the subject, the people who did poorly on the tests ranked their competence much higher. On average, test takers who scored as low as the 10th percentile ranked themselves near the 70th percentile. Those least likely to know what they were talking about believed they knew as much as the experts.

The first sentence makes it seem like the DK effect applies only to people who are “incompetent.” This is wrong on two levels. First, the DK effect does not apply only to “incompetent people” but to everyone, with respect to any area of knowledge. To be fair, the author also writes, “it is present in everybody to some extent,” but this does not really capture the reality, and it is undone by the sentences quoted above. Second, the effect applies not just in the range of incompetence, but even at average or moderately above-average competence.
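
To make the second point concrete, here is a purely illustrative Python sketch of the classic Dunning-Kruger pattern. The parameters are my own assumptions, not numbers from the original study or the Post article; the point is only the shape of the curve – self-assessment is compressed toward the upper-middle of the scale, so the lowest performers overestimate the most, but even average and above-average performers misjudge their standing:

```python
# Illustrative only: self-assessed percentile is compressed toward the
# upper-middle of the scale. The slope and offset are assumptions chosen
# to show the qualitative Dunning-Kruger pattern, not fitted to real data.
def perceived_percentile(actual_percentile):
    return 65 + 0.2 * (actual_percentile - 50)

for actual in (10, 30, 50, 70, 90):
    perceived = perceived_percentile(actual)
    print(f"actual {actual:>2}th percentile -> self-rated ~{perceived:.0f}th "
          f"(error {perceived - actual:+.0f})")
```

In this toy model the 10th-percentile performer overestimates by the largest margin, the average performer still overestimates substantially, and the 90th-percentile performer actually underestimates – miscalibration across the whole range, not a special flaw of “incompetent people.”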

Continue Reading »

Jan 07 2019

Crowdfunding Quackery

A recent study in The Lancet highlights a disturbing trend – cancer patients using crowdfunding sites to pay for worthless and misleading fake cancer treatments, like homeopathy. They found that in June of 2018 there were 220 active GoFundMe campaigns for “alternative” treatments for cancer.

In this study, which focused specifically on homeopathy (100% pure snake oil), 38% of the campaigns sought to use homeopathy in addition to conventional treatment, 29% instead of conventional treatment, and 31% after conventional treatment had failed. The authors, Snyder and Caulfield, were appropriately concerned about these trends.

At this point the most common question is, “What’s the harm?” Well, it is extensive and severe – let me elaborate. In 2017 a study looked at cancer patients, their use of alternative treatments, and their survival. It found that, overall, patients who used alternative treatments were 2.5 times as likely to die during the study period. For the most treatable cancers, like breast cancer, the risk of death was almost six times higher. That is a massive increase in the death rate, and it persisted even after controlling for how sick the patients were. The most likely contributor to the increased death rate was delay in conventional treatment.

Continue Reading »

Jan 04 2019

Asimov’s Predictions for 2019

In 1984, science fiction writer Isaac Asimov wrote an article for the Toronto Star making predictions for 2019. I thought that was an odd date to pick, but as the Star explains, 1984 was 35 years after the publication of Orwell’s novel of that name, so they wanted to look 35 years into the future.

I am interested in futurism, which is notoriously difficult, but it is an excellent window onto the attitudes, assumptions, and biases of the people making the predictions. Asimov’s predictions are no exception, but they are particularly interesting coming from a professional futurist, and one with a reputation for being particularly prescient.

What did he get right, and what did he get wrong, and why? He focused on what he considered to be the three biggest issues for the future: “1. Nuclear war. 2. Computerization. 3. Space utilization.” I think this list itself reflects his bias as a science-fiction writer. They are reasonable, but he could have chosen medicine, agriculture, transportation, or other areas.

In any case, on nuclear war he was pessimistic in a way that was typical for the height of the Cold War, before the collapse of the Soviet Union. His view was that if we have a nuclear war, civilization is over, so there is not much more to say about it. Instead he simply wrote:

“Let us, therefore, assume there will be no nuclear war — not necessarily a safe assumption — and carry on from there.”

He spent most of the article focusing on the impact of computers on society, which was a frequent topic of his fiction. He was famously correct in the broad brushstrokes of his earlier visions of the future – computers will get more powerful, more intelligent, and more important to civilization. But he also famously got the details wrong, imagining giant computers running things. He missed the trend toward smaller, ubiquitous, and embedded computers.

Continue Reading »

Jan 03 2019

Magic Can Increase Belief in Pseudoscience

Magicians play a significant role in the skeptical movement. They have, as Liam Neeson famously said, a particular set of skills. They are very adept at deception, using techniques that have been honed through trial and error over centuries. It is a great example of cultural knowledge. Having the ability to deceive others, purely for entertainment and with informed consent, also makes them adept at detecting the use of the same techniques for nefarious purposes. This, essentially, has been James Randi’s entire career.

But at the same time, some stage magicians make skeptics uncomfortable by not being entirely upfront with their audience. Now, I am not suggesting that magicians must tell their audience how their tricks are done, and I completely understand the need to create a mystique as part of the performance. However, I have seen skilled magicians (like Randi or Banachek) perform amazing tricks while being completely candid about the nature of those tricks, without diminishing the entertainment value.

Magicians typically create a narrative by which they “explain” their tricks to the audience. A magician, for example, could say, “I am using sleight of hand.” Or they could say (or strongly imply), “I have true psychic ability.” The Amazing Kreskin falls into this latter category. There are also those like Uri Geller who (sort of) pretend they are not doing magic at all, but have special powers.

In the gray zone are those like Derren Brown. Their narrative is not that they are psychic but that they are using psychological manipulation on their audience – reading microexpressions, influencing their decision-making, or reading body language. This narrative is as much BS as the psychic one, used as part of the magic experience and for misdirection. You can read and influence people to some degree, but these techniques are not reliable enough to support a performance. Typically mentalists use standard sleight of hand and then pretend to use psychological techniques.

Continue Reading »

Dec 21 2018

Radical Political Views Correlate with Poor Metacognition

The usual caveats apply – this is one study in a limited context showing only correlation and using a psychological construct. I also have to be careful because the study confirms what I already believe. Having said all that, it is interesting and is probably telling us something about people with extreme political views, especially when other research is considered.

The study involves individuals with radical political beliefs, as measured by a standard questionnaire. It has already been established that those with more extreme beliefs espouse greater confidence in their knowledge and beliefs. However, it is not clear how much this is due to an overconfidence bias versus a failure of metacognition. In other words – do people with extreme political beliefs simply like to portray themselves to others as confident, or do they genuinely lack insight into the correctness of their own beliefs (a metacognitive failure)? The current study tests the latter.

The researchers, led by Stephen Fleming at University College London, looked at “two independent general population samples (n = 381 and n = 417).” Participants were given a task in which they had to estimate the number of dots in two images and decide which one had more. They also had to say how confident they were in their judgement. Further, if they got the answer wrong, they were shown another image with dots, which should have helped them improve their estimate. They were then asked to restate their confidence.
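
As a rough sketch of that trial logic – my own simplified rendering in Python, not the authors’ code or stimuli, with hypothetical confidence numbers – the quantity of interest is how much confidence drops after being shown disconfirming evidence on trials where the initial judgement was wrong:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    correct: bool             # was the initial dot judgement correct?
    confidence_before: float  # 0-1 confidence at the first judgement
    confidence_after: float   # 0-1 confidence after the extra dot evidence

def mean_confidence_update(trials):
    """Average drop in confidence on error trials after new evidence.

    A well-calibrated participant lowers confidence substantially when shown
    they were likely wrong; a participant with poor metacognition barely moves.
    """
    errors = [t for t in trials if not t.correct]
    if not errors:
        return 0.0
    return sum(t.confidence_before - t.confidence_after for t in errors) / len(errors)

# Hypothetical participants, for illustration only.
calibrated = [Trial(False, 0.80, 0.45), Trial(False, 0.70, 0.40), Trial(True, 0.75, 0.75)]
stubborn   = [Trial(False, 0.90, 0.88), Trial(False, 0.85, 0.84), Trial(True, 0.90, 0.90)]

print("calibrated update:", round(mean_confidence_update(calibrated), 2))  # large drop
print("stubborn update:  ", round(mean_confidence_update(stubborn), 2))    # barely moves
```

The pattern described in the next paragraph – higher confidence even when wrong, and smaller updates after new information – corresponds to the “stubborn” profile in this toy example.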

The study found that those with more radical political views indicated higher confidence in their choices, even when they were wrong, and showed less of a tendency to update their confidence with new information. In other words – you might say they are opinionated and stubborn. This comes as absolutely no surprise if you have ever interacted with someone with extreme political views.

What this study cannot tell us is the direction of cause and effect. One possibility is that those who lack the metacognitive ability to properly assess and correct their own confidence levels tend to fall into more extreme views. Their confidence allows them to more easily brush off dissenting opinions and information, more nuanced and moderate narratives, and the consensus of opinion.

Continue Reading »

Dec 20 2018

The Really Worst Pseudoscience of 2018

This is a continuation of my previous post, but I am not going to simply add to the list. Rather, I am going to discuss how the general phenomenon of pseudoscience has continued to evolve in 2018. There were certainly many candidates for specific pseudosciences I have not yet covered on this list – the raw water nonsense, flat-earthers, anti-GMO propaganda, more alternative medicine and free energy claims, and a continuation of all the pseudosciences from previous years.

It is important to address specific claims, drilling down to individual facts and arguments, but it is also important to step back and look at the cultural and institutional patterns behind those specific claims.

The real story over the last few years is that of fake news. This is actually a multi-headed monster, with completely fake news stories, biased and distorted news, and real news dismissed as fake. What these variations all have in common is the blurring of the lines between valid and invalid, legitimate and fake, fact and opinion, skepticism and denial, and expertise vs elitism.

Distinguishing real from fake has always been a challenge, and there is also the demarcation problem – there is often a fuzzy line between the two, not a clear bright line. Also, experts make mistakes, the consensus of opinion is sometimes wrong, there is bias and fraud in science, corporations often put their thumb on the scale – and people, in general, are flawed, so their institutions are also flawed. For these and other reasons, most of the things you think you know are wrong, or at least incomplete, distorted, misleading, or flawed.

Continue Reading »
