For once I would like to be wrong in my pessimism about a corporation claiming a huge breakthrough over a short time period. This could just be confirmation bias, but there seems to be a rash of companies over-hyping and over-promising on major breakthroughs. Just yesterday I wrote on SBM about an Israeli company that claims it will cure cancer within a year (Umm… No.).
Now today I see a news report of a company CEO claiming they will have fusion energy in a couple of years with commercialization in five years.
“The notion that you hear fusion is another 20 years away, 30 years away, 50 years away—it’s not true,” said Michl Binderbauer, CEO of the company formerly known as Tri Alpha Energy. “We’re talking commercialization coming in the next five years for this technology.”
I think the appropriate reaction to such a claim is extreme skepticism. The reasons are both general and specific. The general reasons I also covered in my SBM post. They include the fact that companies often have an incentive to overhype what they can deliver – primarily to raise funding. If you want someone to invest millions of dollars in your company, it helps if they think you are on the cusp of a breakthrough, and on a timeline that investors like. The “5 years” claim seems to be standard. I guess that is the longest most VC firms are willing to wait to make their huge profits.
In the research world we joke about the “5-10 years” claims for breakthroughs, which is linked to funding cycles. Essentially, researchers are claiming what they will achieve over the next grant cycle.
Psychologists have come to recognize that, because of the complexity of human emotion and behavior, we are often motivated to engage in activity which produces the exact opposite effect that we intend. If you are fearful of losing someone, you may become clingy and possessive, driving them away.
The same is true on a societal level – interventions designed to have one effect may have the opposite effect if we are not careful. A classic example is the “scared straight” approach to public service announcements – it doesn’t work. In fact, it may have the opposite of the intended effect. Warning kids about the dangers of alcohol, for example, may just romanticize alcohol use and suggest that it is more popular or common than it is, creating social pressure to use. This is the main idea behind the social norming approach – tell kids, instead, statistics about how few of their peers are getting drunk regularly, reducing the social pressure to use.
This overall pattern is fairly consistent in the literature (although, of course, researching such questions is complex and the details matter to the outcome). Another recent example is a study which finds that fat shaming obese people does not motivate them to lose weight, which is sometimes the motivation (or at least the justification) of the person doing the fat shaming. Rather, fat shaming leads to more weight gain.
Climate change has certainly been a hot topic over the last year, so where do we stand in terms of public perception? Well – “The Energy Policy Institute at the University of Chicago and The AP-NORC Center conducted a national survey of 1,202 adults in November 2018 to explore Americans’ views on climate change, carbon tax and fuel efficiency standards.” Overall the survey suggests the public is inching toward greater acceptance of man-made climate change, but let’s delve into the numbers.
First, the majority of Americans, 71%, think that global warming is happening. Of those that think it is happening, 60% say it is mostly due to human activity, 12% that it is mostly natural, and 28% that it is evenly mixed. So that means that 62% (88% of 71%) of Americans accept anthropogenic climate change to some degree. That is a solid majority, but not as solid as the science or the consensus of scientists.
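The combined figure above can be checked with quick arithmetic, using the survey percentages already quoted (a minimal sketch; variable names are my own):

```python
# Survey figures quoted above (AP-NORC / EPIC, Nov 2018)
believe_warming = 0.71       # share of Americans who think global warming is happening

# Of those believers, the share attributing it at least partly to human activity:
mostly_human = 0.60
evenly_mixed = 0.28
at_least_partly_human = mostly_human + evenly_mixed  # 0.88

# Share of ALL Americans accepting anthropogenic climate change to some degree
overall = believe_warming * at_least_partly_human
print(round(overall * 100))  # -> 62
```

This is just the product of the two conditional percentages: 88% of the 71% who accept warming works out to about 62% of all respondents.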
Predictably, these numbers vary dramatically according to political affiliation. For Democrats, the percentages that believe global warming is happening and think that it is mostly or partly human-caused are 86/82. For independents the numbers are 70/64, and for Republicans it’s 52/37. So it seems that part of the problem is a knowledge deficit, for even among Democrats the rate of acceptance is lower than for the scientific community. But a larger part of this resistance is ideological.
For those who have recently changed their minds into accepting the reality of climate change (regardless of cause) 76% say it is because of recent extreme weather events, and 57% because of personal observations of local weather. Meanwhile, 63% say it is because of arguments in favor of climate change (they did not ask specifically about scientists or the scientific consensus). So essentially recent converts are relying more on anecdote than the science.
Things get more interesting when we get to questions about possible solutions to climate change. Previous research found that people who deny anthropogenic climate change were motivated primarily by “solution aversion” – they deny the problem because they don’t like the proposed solutions. This fits with much far right rhetoric, that climate change is a conspiracy by which liberals seek to expand the government and take control of the energy industry.
Old ideas die hard. The first extinct hominids found were Neanderthals, and our cultural conception of them was formed in the 19th century, a time rife with parochial attitudes toward “primitive” peoples. The first Neanderthal skeleton also happened to suffer from crippling arthritis, giving it a hunched over posture.
The cultural notion that our closest relatives were brutish and primitive became deeply embedded. Certainly, this idea has been moderated significantly in the last century, but not completely expunged. Meanwhile, paleontologists have discovered more and more evidence that Neanderthals were just a different breed of human. They were fully bipedal, so their gait was modern. They were more robust than Homo sapiens, because they were adapted to Europe’s Ice Age. But robustness should not be confused with brutishness.
Prof Clive Finlayson, director of the Gibraltar Museum, has a recent commentary on BBC’s website in which he punctures this outdated view of our closest cousins. He points out that this biased view of Neanderthals affects not just public perception, but scientific thinking. There have been many assumptions of Homo sapiens superiority, and that Neanderthals were essentially replaced by us through direct competition.
So this also reflects a common misconception about evolution itself, that “survival of the fittest” is always what determines which species endure, and is all about being more advanced and superior. Finlayson points out that we cannot neglect the factor of luck, which may, in fact, often be dominant. Homo sapiens may not have been objectively superior to Neanderthals in any specific way, but were simply better adapted to a changing environment. Neanderthal robustness, an advantage during glacial periods, may have been a hindrance during a warming climate. Sapiens may simply have inherited a better trade-off of features for that period in time.
Psychologists are increasingly using virtual reality (VR) in their psychological experiments. It’s very convenient – they can create whatever environment they want with total control over all visual and auditory variables. It’s also safe, so they can study how people respond in traffic without the risk of subjects getting run over.
The meta-question for this research, however, is whether people respond the same in VR as they do in physical reality. That is a research question unto itself, with implications for all other VR-based research.
Based on my personal experience in VR I would guess that it depends. Current VR technology is of sufficient resolution and fidelity that it successfully tricks the brain – your brain believes that you are in the environment you see. I say “your brain” because you consciously know it is VR and not meat-space, but your brain incorporates the visual and auditory sensory streams into its construction of reality as if they were real. So, consciously you may know the difference, but subconsciously you don’t.
Perhaps the best demonstration of this is the Plank Experience – a fun little VR demonstration in which you walk out onto a plank 40 floors up on a virtual skyscraper. You know you are safe in a room, but your subconscious brain buys the visual construction and your emotions react as if you are about to die.
A recent experiment adds a bit more data to this question. Researchers used VR to test yawning. There is a phenomenon of contagious yawning – mammals will yawn when they see other mammals yawn, about 30-60% of the time. So the researchers put subjects in VR where they were exposed to virtual yawning. Sure enough – the yawns were contagious 38% of the time, in the range of previous research.
One of the “holy grails” of neuroscience is the ability to scan a brain and create a complete detailed map, including all networks and connections. Scientists use several techniques, all with their own drawbacks, and the process is very slow – it can take a year to completely scan a single fly brain. A collaboration of scientists, however, report in Science that they have developed a new technique that can accomplish a detailed scan of an entire fly brain (or a section of mouse cortex) in 2-3 days.
The team has been described as an “Avengers” type collaboration, and it is impressive. Specialists provided the prepared fly brains. Two different types of microscopy were combined (that’s really the new bit), along with a third imaging technique. Finally, computer specialists had to figure out how to combine all the data like puzzle pieces into an image. The result was a complete map of a fly brain in three days, which is an impressive leap forward.
The core innovation of the new technique is to use a combination of expansion microscopy and lattice light-sheet microscopy. Expansion microscopy is pretty much what it sounds like – the brain sample is expanded, retaining the relative positions of neurons and connections, but creating more space to facilitate imaging. Expansion is done chemically, similar to injecting an expanding gel into a specimen. The researchers expanded their samples fourfold to provide optimal results. The potential problem with this technique is that it may introduce artifacts giving spurious results, so anyone using it has to be careful and validate their techniques (by reproducing known outcomes, for example).
This expansion technique was combined with lattice light-sheet microscopy. This is a complicated setup that illuminates the specimen with a thin, high-energy sheet of light, lighting only the part of the specimen that is in focus to the microscope and keeping all the out-of-focus parts dark. Finally, this is all combined with fluorescence microscopy, which tags specific biological structures (such as certain amino acids) with fluorescent molecules. This way only certain cell types or certain connections or structures can be imaged and mapped. Specifically, they used confocal microscopy, which provides better resolution and contrast.
The question at the core of science communication and the skeptical movement is – how do we change opinions about science-related topics? That is the ultimate goal, not just to give information but to inform people, to change the way they think about things, to build information into a useful narrative that helps people understand the world and make optimal (or at least informed) decisions.
I have been using the GMO (genetically modified organism) issue as an example, primarily because the research I am discussing is using it as a topic of study. But also – GMO opposition is the topic about which there is the greatest disparity between public and scientific opinion. A new study also looks at attitudes toward GMOs specifically, with the question of – is a convert from GMO opponent to supporter more persuasive than straightforward GMO support?
The study uses clips from a talk by Mark Lynas, an environmentalist who converted from GMO opponent to supporter. They found:
The respondents each were shown one of three video clips: 1) Lynas explaining the benefits of GM crops; 2) Lynas discussing his prior beliefs and changing his mind about GM crops; and 3) Lynas explaining why his beliefs changed, including the realization that the anti-GM movement he helped to lead was a form of anti-science environmentalism.
The researchers found that both forms of the conversion message (2 and 3) were more influential than the simple advocacy message. There was no difference in impact between the basic conversion message and the more elaborate one.
This makes sense – prior research shows that it is more effective to give someone a replacement explanatory narrative than just to tell them that they are wrong. However, it is very difficult to say how generalizable this effect is.
I have written extensively about GMOs (genetically modified organisms) here, and even dedicated a chapter of my book to the topic, because it is the subject about which the difference between public opinion and the opinion of scientists is greatest (51 percentage points). I think it’s clear that this disparity is due to a deliberate propaganda campaign largely funded by the organic lobby with collaboration from extreme environmental groups, like Greenpeace.
This has produced an extreme, if not a unique, challenge for science communicators. Also – there are direct implications for this, as the political fight over GMO regulation and acceptance is well underway. The stakes are also high as we are facing challenges feeding a growing population while we are already using too much land and there really isn’t more we can press into agriculture. (Even if there are other ways to reduce our land use, that does not mean we should oppose a safe and effective technology that can further reduce it.)
A new study published in Nature may shed further light on the GMO controversy. The authors explore the relationship between knowledge about genetics and attitudes toward GMOs.
In a nationally representative sample of US adults, we find that as extremity of opposition to and concern about genetically modified foods increases, objective knowledge about science and genetics decreases, but perceived understanding of genetically modified foods increases. Extreme opponents know the least, but think they know the most.
One more piece to the memory puzzle seems to be falling into place. The question is – what steps do our brains go through when recalling a memory? Researchers have been focusing on visual memory, because it is easiest to model and image, and they have found that memories are recalled in a reverse of the process by which they are formed.
When we perceive an object, first our brain receives an image from the retina. By the time this image gets to the visual cortex some basic image processing has already occurred at the subcortical level. Then the cortex puts the image together, sharpens up contrast and lines, interprets size and distance, shadows and movement, etc. The brain then tries to find a match in its catalogue of known things. Once a match is found, that information is communicated back down to the more basic visual layers and the image is adjusted to enhance the match – lines are filled in, extraneous details are suppressed, assumptions of size and distance are adjusted.
Then the now identified object is sent to even higher brain areas (higher in this network) to afford meaning to the object. If your brain thinks the object has agency, this connects to the emotional centers in order to remember what you feel about the object. Connections are also made to memories about the object. Let’s call these thematic memories. So our brains build the image up from basic details, to complex shapes, then to known objects, and finally to feelings, connections, meaning and memories.
But what about when you recall the object that you previously saw? Both of these studies, using visual memories, found that the brain works backwards. First the thematic areas of the brain light up, then progressively more basic areas of visual processing. Media reporting on these studies emphasizes that this is backward from how visual memories are made in the first place. However, this is only sort-of true. Remember – even when perceiving things, information goes simultaneously from the details to the themes, but then back down from the themes to the details. Perception and memory formation are bidirectional.
This study has a fairly narrow focus, but it does relate to an interesting topic. A new analysis finds that the betting market predicted the Brexit vote an hour before the financial market.
This says something about the efficiency of these respective markets in processing and reacting to information. The authors also conclude that if the financial markets were optimally efficient they should have predicted the result of the Brexit vote two hours before they did.
OK, this is more than a bit wonky, but what I really want to discuss is the more basic concept of prediction markets as it relates to crowdsourcing and big data. The idea is that a lot of people in the aggregate may be better at either making decisions or reflecting emerging trends than any individual or small group. This gets interesting when you compare crowdsourcing like this to individual experts.