Archive for the 'General Science' Category

Apr 01 2019

Capturing The Most Deadly Day On Earth

Published under General Science

“When I saw that, I knew this wasn’t just any flood deposit,” DePalma said. “We weren’t just near the KT boundary—this whole site is the KT boundary!”

Unless something really unexpected happens, this is likely to be the science news story of the year, and will be on everyone’s short list for the science news of the decade. It may be the paleontological news of the century. It’s easy to get excited when news like this breaks, and maybe I’m overcalling it, but you can decide once you hear the news, if you haven’t already.

A young PhD candidate, Robert DePalma, has found a massive fossil deposit that seems to have been laid down on the actual day the asteroid hit and wiped out 99.9999% of living things and 75% of species on Earth.

The New Yorker tells the whole story in great detail, and the whole thing is worth a read, but here is the quick version. It is now clearly established that 66 million years ago an asteroid several miles wide struck the Earth at 45 thousand miles per hour near what is now the Gulf of Mexico, creating the Chicxulub crater. The impact sent tons of debris into the air, into orbit, and even around the solar system. Hot rock rained back down onto the Earth, setting fire to most of the plant life, poisoning the atmosphere, and blocking out the sun, plunging the Earth into a toxic deep freeze.

It’s hard to imagine anything surviving that day or the following weeks and months, but some life squeaked through and eventually evolved into the modern assemblage of life, including humans.

Continue Reading »


Mar 22 2019

Get Rid of “Statistical Significance”

Published under General Science

A new paper published in Nature, and signed by over 800 researchers, adds to the growing backlash against overreliance on P-values and statistical significance. This one makes a compelling argument for getting rid of the concept of “statistical significance” altogether. I completely agree.

Statistical significance is now the primary way in which scientific results are recorded and reported. The core problem is that it creates a false dichotomy, reducing a more thorough analysis of the results to a single number and encouraging an all-or-nothing interpretation – either an effect is real or it is not.

The primary method for determining significance is the P-value – the probability that the results would deviate as much as they do, or more, from a null result if the null hypothesis were true. This is not the same as the probability that the hypothesis is false, but it is often treated that way. Studies also typically assign a cutoff for “significance” (usually a p-value of 0.05): if the p-value is at or below the cutoff the results are deemed significant; if not, the study is negative.

When you think about it, this makes no sense. Further, the p-value was never intended to be used this way. It is only the human penchant for simplicity that has elevated this one number to the ultimate arbiter of how to interpret the results of a study.
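To see how arbitrary the cutoff is, here is a minimal sketch (in Python, standard library only; the z values are made up for illustration) of two hypothetical studies whose evidence is nearly identical, yet which land on opposite sides of the 0.05 line:

```python
import math

def two_sided_p(z):
    """P(|Z| >= |z|) under the null hypothesis, standard normal model."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Two hypothetical studies with nearly identical evidence:
p_a = two_sided_p(1.97)   # ~0.049 -> declared "significant"
p_b = two_sided_p(1.95)   # ~0.051 -> declared "negative"
# The difference in evidence is trivial; the dichotomy imposed on it is not.
```

The two p-values differ by less than a quarter of a percentage point, but under the usual convention one study "works" and the other "fails."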

The consequence of this simplistic analysis is that the interpretation of study results is often misleading. The authors, for example, looked at 791 articles in 5 journals and found that half of them drew wrong conclusions by overinterpreting the implications of “significance.”

Continue Reading »


Mar 14 2019

Climate Change and the Role of Uncertainty

Published under General Science

As a physician you have to develop a certain comfort level with uncertainty. The simple fact is – we don’t know everything. The human body is extremely complex, and there are over 7 billion people on the planet representing a great deal of variation. Our data is incomplete and largely statistical, and we have to apply that to specific decisions about an individual patient. This means we have to make the best recommendations we can with the information we have, be honest about our level of uncertainty, and convey the range of possible outcomes based on various decisions.

It’s often helpful to think in terms of “clinical pathways” – what are the different possible paths an illness can take, given what we know and what we don’t, and how will our diagnostic and therapeutic interventions alter those possible pathways?

Perhaps because I live this every day, I find it easy to accept the logic of action on climate change. We don’t know exactly what will happen. The climate system is complex, and there are known unknowns. One of the big ones is climate sensitivity – what, precisely, is the relationship between the level of CO2 in the atmosphere and the degree of warming? The lower the climate sensitivity, the better, in terms of how much warming will result from the CO2 we have released and are releasing.
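Climate sensitivity is commonly expressed as degrees of equilibrium warming per doubling of atmospheric CO2, which makes the relationship easy to sketch. This is an illustration only, not a calculation from the post; the 280 ppm pre-industrial baseline and the sensitivity values used here are assumptions for the example:

```python
import math

def equilibrium_warming(co2_ppm, sensitivity_per_doubling, baseline_ppm=280.0):
    """Equilibrium warming in deg C for a given CO2 level, where sensitivity
    is expressed as degrees per doubling of CO2 (logarithmic forcing)."""
    return sensitivity_per_doubling * math.log2(co2_ppm / baseline_ppm)

# A doubling (280 -> 560 ppm) yields exactly the sensitivity value:
print(equilibrium_warming(560, 3.0))   # 3.0
# A lower sensitivity means less warming for the same CO2:
print(equilibrium_warming(560, 1.5))   # 1.5
```

The point of the logarithm is that each doubling adds the same increment of warming, which is why the exact sensitivity number matters so much for projections.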

But there are other variables as well, including human action. We don’t know how stable the Greenland and Antarctic ice sheets really are, for example. There are multiple feedback loops and tipping points, and the potential for cascading effects. So yes – climate models are just that, models. They are not a crystal ball that will tell us what will happen. They are our best guess at what might happen.

Global warming deniers use this uncertainty as an excuse to do nothing (doing nothing always seems to be their goal, regardless of the justification). As a physician, that logic is painful. If I am not sure that my patient has a serious condition, that is not a reason to do nothing, it creates an imperative to do something. The specific intervention is then based largely on a risk vs benefit analysis. And often, as with global warming, acting early is key. You definitely want to find that tumor when it is small and before it has metastasized.

Continue Reading »


Feb 22 2019

New Info on The Cause of the Dinosaur Extinction

Published under General Science

There is little doubt that an asteroid impact, the one that formed the Chicxulub crater in the Gulf of Mexico, was the primary cause of the K-Pg extinction event, the one that saw the end of the non-avian dinosaurs. But there is continued debate about the role of massive volcanic eruptions at about the same time in the Deccan Traps, in what is modern-day India (on almost the exact opposite side of the planet).

Some of this debate can be settled by more precise dating of the three relevant events (mass extinction, asteroid impact, Deccan Traps). A new study adds more precise dating of the volcanic eruptions, shedding some light on the whole question.

The varying hypotheses about how these two events relate to the mass extinction include the notion that the asteroid impact was the main event, and the volcanic eruptions a minor player. In this view the asteroid impact caused the mass extinction, and there would have been no mass extinction without it. At the other end of the spectrum is the belief that gases released from the volcanoes at the Deccan Traps caused global climate change, poisoned the atmosphere, and were the primary driver of the mass extinction. By the time the asteroid hit the show was over, or at most the impact served as a coup de grâce for an extinction event already well underway.

The reality likely falls somewhere between these two extremes. One idea is that the impact and the eruptions were a one-two punch for life on Earth. The gases caused global warming, causing species to adapt to a warmer climate and also causing significant stress, getting the mass extinction under way. But then the asteroid hit, causing global cooling. The warm-adapted animals could not rapidly adapt to the cold, and the minor extinction event became a mass extinction.

Continue Reading »


Feb 18 2019

Warning About Big Data in Science

Published under General Science

At the AAAS meeting this past weekend, Dr. Genevera Allen from Rice University in Houston presented her findings regarding the impact of machine-learning algorithms on the evaluation of scientific data. She argues that their use is contributing to the reproducibility problem.

The core problem, which I have discussed many times before, is that if scientists do not use sufficiently rigorous methods, they will find patterns in their data that are not real. Sometimes this amounts to p-hacking, which results from methods that may seem innocent but tweak the statistical results in order to manufacture significance. This could be something as innocuous as analyzing the data as you go and then stopping the study when you reach statistical significance. Or, similarly, if you initially plan on testing 100 subjects, and the results are not quite significant, you may decide to enroll another 20 subjects in the hope that you will “cross the finish line.”
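The optional-stopping problem is easy to demonstrate by simulation. Here is a minimal sketch (Python, standard library only; the study sizes and peeking rule are assumptions for the example). The null hypothesis is true by construction, so every "significant" result is a false positive – yet stopping the moment p dips below 0.05 yields far more than the nominal 5%:

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the null (standard normal)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
trials, false_positives = 2000, 0
for _ in range(trials):
    total = 0.0
    for n in range(1, 101):             # enroll up to 100 subjects
        total += random.gauss(0, 1)     # null is true: no real effect
        if n >= 10:                     # start "peeking" at n = 10
            if two_sided_p(total / math.sqrt(n)) < 0.05:
                false_positives += 1    # stop early, declare "significance"
                break

rate = false_positives / trials
# An honest fixed-n test would be wrong ~5% of the time;
# with peeking the false positive rate comes out several times higher.
```

The same machinery shows why "add 20 more subjects and re-test" inflates error rates: every extra look is another chance for noise to cross the line.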

Here is another issue that is similar to what Allen is warning about. Let’s say a doctor notices an apparent pattern – my patients with disease X all seem to have red hair. So they review their patient records, and find that indeed there is an increased probability of having red hair if you have disease X. That all seems perfectly cromulent – but it isn’t. Or we could say that such a correlation is preliminary, and needs to be verified.

The reason for this is that the physician may have just noticed a random correlation in their patient population, a statistical fluke. Every data set, such as a patient population, will have many such spurious correlations by chance alone. If you notice such a random correlation, that doesn’t make it a real phenomenon, even if you then count the numbers and do the math. You haven’t tested the hypothesis that the correlation is real, you just confirmed an observation of a chance clumpiness in the data.
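The red-hair scenario can be simulated directly. In this sketch (illustrative numbers only – 100 patients, 50 random binary traits), both the traits and the "disease" label are pure coin flips with no real structure, yet some traits reliably look associated with the disease:

```python
import random

random.seed(0)
n_patients, n_traits = 100, 50

# Purely random binary traits and a random disease label -- no real structure.
traits = [[random.random() < 0.5 for _ in range(n_traits)]
          for _ in range(n_patients)]
disease = [random.random() < 0.5 for _ in range(n_patients)]

flagged = 0
for t in range(n_traits):
    sick = [traits[i][t] for i in range(n_patients) if disease[i]]
    well = [traits[i][t] for i in range(n_patients) if not disease[i]]
    gap = abs(sum(sick) / len(sick) - sum(well) / len(well))
    if gap > 0.15:        # trait frequency differs by more than 15 points
        flagged += 1
# flagged is reliably nonzero: chance alone produces apparent "patterns"
```

Counting up the cases after noticing such a pattern only re-measures the fluke; testing the hypothesis requires a fresh, independent data set.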

Continue Reading »


Feb 07 2019

Can We All Agree the Earth Is Warming Yet?

Published under General Science

The last four years were the four warmest years on record (in the order 2016, 2017, 2015, and 2018). Since 1880 the average surface temperature on Earth has risen by about 1 degree C, 0.79 degrees above the 20th century average. At the same time global ice is decreasing, especially in the Arctic, which is losing 12.8% per decade. Sea level is also rising – in 2014 the average sea level was 2.6 inches above the 1993 average.

These numbers are all clear, but are abstract for most people. If scientists didn’t tell us the planet was warming on average, we wouldn’t see it in our daily lives. This made it easier to engage in politically motivated denial. The science is also complex, which leaves a lot of room for rationalization. You can focus on different types of measurement (surface temperatures vs atmospheric, for example) or on the need to adjust the raw data to account for historical changes in measurements. You could play games with the statistics to manufacture an illusory “pause” or focus on the uncertainty.

To be clear, carefully examining the details is critically important. The problem is when you do so with an agenda other than objectively describing reality. There is enough wiggle room to convince yourself of whatever it is you want to believe.

The needle appears to be moving, however, not because the science has become more solid, but because the effects have become more obvious. Extreme weather events are also significantly increasing, by about 40% since 1950. Just in the past year we had record-breaking fires on the West Coast, record-breaking tornadoes in my home state of CT, and now record-breaking polar-vortex-driven cold in the Midwest. We are seeing more powerful hurricanes, like Hurricanes Florence and Michael. There are record-breaking heat waves around the world, with Death Valley having the warmest month ever recorded on Earth last July.

Continue Reading »


Jan 28 2019

Climate Change Survey

Published under General Science

Climate change has certainly been a hot topic over the last year, so where do we stand in terms of public perception? Well – “The Energy Policy Institute at the University of Chicago and The AP-NORC Center conducted a national survey of 1,202 adults in November 2018 to explore Americans’ views on climate change, carbon tax and fuel efficiency standards.” Overall the survey suggests the public is inching toward greater acceptance of man-made climate change, but let’s delve into the numbers.

First, the majority of Americans, 71%, think that global warming is happening. Of those who think it is happening, 60% say it is mostly due to human activity, 12% that it is mostly natural, and 28% that it is evenly mixed. So that means that 62% (88% of 71%) of Americans accept anthropogenic climate change to some degree. That is a solid majority, but not as solid as the science or the consensus of scientists.
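The 62% figure is just the survey percentages multiplied through; a quick check, using the numbers quoted above:

```python
think_happening = 0.71                     # believe global warming is happening
mostly_human, evenly_mixed = 0.60, 0.28    # shares of that 71%

accept_some_human_cause = think_happening * (mostly_human + evenly_mixed)
print(round(accept_some_human_cause * 100))   # 62
```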

Predictably, these numbers vary dramatically according to political affiliation. For Democrats, the percentages that believe global warming is happening and think that it is mostly or partly human-caused are 86/82. For independents the numbers are 70/64, and for Republicans it’s 52/37. So it seems that part of the problem is a knowledge deficit, for even among Democrats the rate of acceptance is lower than in the scientific community. But a larger part of this resistance is ideological.

For those who have recently changed their minds and now accept the reality of climate change (regardless of cause), 76% say it is because of recent extreme weather events, and 57% because of personal observations of local weather. Meanwhile, 63% say it is because of arguments in favor of climate change (the survey did not ask specifically about scientists or the scientific consensus). So essentially recent converts are relying more on anecdote than on the science.

Things get more interesting when we get to questions about possible solutions to climate change. Previous research found that people who deny anthropogenic climate change were motivated primarily by “solution aversion” – they deny the problem because they don’t like the proposed solutions. This fits with much far right rhetoric, that climate change is a conspiracy by which liberals seek to expand the government and take control of the energy industry.

Continue Reading »


Dec 14 2018

More Evidence Organic Farming is Bad

Published under General Science

I know I have been hitting this topic a lot recently, but I can’t ignore a major study published in Nature. The study is not just about organic farming, but about how we use land and the implications for climate change, specifically carbon sequestration. The core idea is this – when we consider land use and its impact on the climate, we also have to consider the opportunity cost of not using the land in a more useful way. This echoes a previous study by different authors I discussed five months ago, and a review article by still different authors I discussed three months ago.

There certainly does now seem to be a growing consensus that we have to think very carefully about how we use land in order to minimize any negative impact on the environment, and specifically limit carbon in the atmosphere driving climate change.

The new study essentially argues that we need to use land optimally. If land is well suited to growing corn, then we should grow corn. If it is better suited for forestation, then we should allow forests to grow there and not convert it to farmland. Forests sequester a lot more carbon than farmland, and this is a critical component to any overall strategy to mitigate climate change. The authors calculate that land use contributes, “about 20 to 25 per cent of greenhouse gas emissions.”

If we put the various studies I have been discussing together, a compelling picture emerges. First, we need to consider that we are already using all the best farmland to grow crops. Any expansion of our farmland will of necessity use less and less optimal land for farming. This translates to a greater negative impact on the climate. Meanwhile, our food production needs will grow by about 50% by 2050.

This is a strong argument, in my opinion, against biofuels. We need that land to grow food, not fuel – unless we can source biofuels from the ocean or industrial vats without increasing land use.

Continue Reading »


Nov 26 2018

New Pew Survey About GMOs

Published under General Science

The Pew Research Center has recently published a large survey regarding Americans’ attitudes toward food, including genetic modification, food additives, and organics. There are some interesting findings buried in the data that are worth teasing out.

First, some of the top line results. They found that 49% of Americans feel that genetically modified organisms (GMOs) are bad for health, while 44% said they were neutral, and 5% said they were better. So the public is split right down the middle over the health effects of GMOs. The 49% who feel that GMOs are bad for health is up from 39% when they gave the same survey in 2016 – so unfortunately, we have lost ground on this issue.

Breaking these numbers down, we find that women are a little more likely to fear GMOs as a health risk than men, 56% compared to 43%. I suspect this is due primarily to differences in how anti-GMO messages are marketed, and the general marketing of pseudoscience to women (the Goop effect). This is also significant because women are more likely to make food purchasing decisions for their families.

Even more interesting is the relationship between science knowledge and fear of GMOs – among those with a high degree of science knowledge, 38% thought GMOs had health risks, while 52% of those with a low degree of science knowledge thought so. The same pattern is seen through all the subquestions about GMOs. For example, 49% of those with a high degree of science knowledge believe GMOs have the potential to increase the global food supply, while only 20% of those with a low degree of science knowledge believe this.

Continue Reading »


Nov 20 2018

Possible New Branch of Eukaryotes Defined

Scientists report in Nature the identification of two new species of Hemimastigophora, a group of predatory protists. What makes the paper newsworthy is that the authors argue that their genetic analysis suggests Hemimastigophora, currently categorized as a phylum, should instead be its own suprakingdom.

To make sense of this let’s review the basic structure of taxonomy, the system we use to categorize all life. All known life is divided first into three domains, the bacteria, archaea, and eukaryotes. Bacteria and archaea do not have a nucleus, while eukaryotes are larger and have a nucleus which contains most of their DNA.

Eukaryotes are divided into kingdoms, including plants, animals, fungi, protozoa, and chromista (algae with a certain kind of chlorophyll). Kingdoms are then divided into phyla, which are essentially major body plans within that group.

This is a simplified overview, because there is a lot of complexity here, with suprakingdoms, subkingdoms, and further breakdowns. Further, there is a lack of consensus on how to exactly divide up these major groups. Even in the cited paper, the authors say there are 5-8 “suprakingdom level groups” within the domain eukaryotes. The number of kingdoms depends on which scheme you use, and how you interpret the existing evidence.

The reason for uncertainty is that we have not yet done a full genetic analysis on every known group. Further, when we discover new species that lie outside of the existing scheme, we have to rethink how different groups are actually related.

Continue Reading »

