Archive for March, 2019

Mar 28 2019

Robotic Pets

Published under Technology

I warn frequently about the folly of trying to predict the future. Obviously we need to do this to some extent, but we always need to be aware of how difficult it is. It is especially hard to predict how people will use technology, even if the technology itself is inevitable. Until devices are in the hands of actual people out there in the world, who try to incorporate the tech into their daily lives, we just can’t know how it is going to shake out.

So, having said that, I am going to make a prediction about how people are going to incorporate future technology. I think robotic pets are going to be increasingly popular as the technology advances. At least I am going to build what I think is a strong case for this prediction. The risk is that there is something I cannot anticipate that will be a deal-killer. Feel free to try to shoot this down and bring up points, but hear me out first.

From a neurological point of view, I do not see any obstacle to people bonding fully with robotic pets. Neuroscience has clearly established that the human brain has certain algorithms that it uses to assign emotional significance to things, to form emotional attachments, and to respond to emotional signaling. In order for the full suite of emotional responses to be in play, being alive is simply not required. That is not how our brains work.

Our visual systems, for example, sort the world into two categories – things that have agency, and things that do not. Having agency means that our brains infer an object is able to act with its own will and purpose. They infer this from how objects move. If an object moves in a way that cannot be explained simply as passive movement within an inertial frame of reference, then it must be moving on its own. Therefore it has agency.

Mar 26 2019

That’s Not a Witch Hunt

Every time I heard someone use the term “witch hunt” recently, I was reminded of that quote from Inigo Montoya in The Princess Bride – “You keep using that word. I do not think it means what you think it means.” With the recent release of the Mueller report, many news outlets feel obliged to interview people on the street about their opinions. This is an inane practice that provides no useful information, just a cherry-picked sample of random opinions. Every single time I heard the term “witch hunt”, it was used incorrectly.

It’s not just random people who do not understand the term. Because Trump has used the term over 260 times and counting to refer to the Mueller probe, many political commentators have also been using the term – mostly incorrectly. Dana Milbank, for example, wrote in the Washington Post:

Just because Trump says something, however, doesn’t automatically mean it’s wrong. The treatment of Trump by special counsel Robert S. Mueller III and other investigators does have characteristics of a witch hunt. This is because Trump has characteristics of a witch.

So says a leading authority on the history of witchcraft, Thomas J. Rushford, history professor at Northern Virginia Community College in Annandale. In an anthropological sense, Trump “is really quintessentially a witch figure,” the professor tells me, and if what is happening to Trump is a witch hunt, “it is only in a good sense, that is, this is society policing the boundaries that they believe to be ethically and morally right.”

But there is no witch hunt “in a good sense.” This misunderstands the essence of what a witch hunt is. The logic here is that if Trump is analogous to a witch, then the investigation was a witch hunt. Or, on the other side, if Trump is innocent of collusion, then by definition the investigation to determine whether or not he is guilty is a witch hunt. One randomly interviewed person even said that because the probe found no evidence of collusion it was a “failed witch hunt.”

Mar 25 2019

What Good Journalism Looks Like

It’s refreshing to encounter a well-researched piece of excellent journalism that is not afraid to communicate an accurate picture of its subject. The article, by Gary Nunn writing for the Guardian, runs under the headline: “Naturopaths are snake-oil salespeople masquerading as health professionals.”

He begins:

When I began researching and conducting interviews for a feature about naturopaths, I was doggedly determined to keep an open mind. Journalism 101 dictates balance: a fair hearing to both sides. My commitment was to present the unbiased truth; I was about to embark on a learning journey, as journalists often do.

Here’s the thing – many journalists confuse the need to approach a topic with a fair and open mind with the piece itself being “balanced.” However, if the topic itself is asymmetrical, then this leads to a false balance. Rather, the piece should reflect reality, not an arbitrary conclusion that both sides are equal.

Another trap is to justify this false balance by saying – I’ll let the readers (or viewers) decide. This standard makes sense for a news piece, rather than an opinion piece, but is often misapplied. It’s OK to give information without drawing firm conclusions from that information, and let the reader draw their own conclusions. But this approach requires a lot of context. In science journalism, it’s better to let experts give their analysis. Further, this editorial approach is not a justification for false balance. These are independent variables.

Mar 22 2019

Get Rid of “Statistical Significance”

Published under General Science

A new paper published in Nature, and signed by over 800 researchers, adds to the growing backlash against overreliance on P-values and statistical significance. This one makes a compelling argument for getting rid of the concept of “statistical significance” altogether. I completely agree.

Statistical significance is now the primary way in which scientific results are recorded and reported. The primary problem is that it is a false dichotomy, and further it reduces a more thorough analysis of the results to a single number and encourages interpreting the results as all or nothing – either demonstrating an effect is real or not real.

The primary method for determining significance is the P-value – a measure of the probability that the results obtained would deviate as much as they do or more from a null result if the null hypothesis were true. This is not the same as the probability that the hypothesis is false, but it is often treated that way. Also, studies often assign a cutoff for “significance” (usually a p-value of 0.05) and if the p-value is equal to or less than the cutoff the results are significant, if not then the study is negative.

When you think about it, this makes no sense. Further, the p-value was never intended to be used this way. It is only the human penchant for simplicity that has elevated this one number to the ultimate arbiter of how to interpret the results of a study.

The consequence of this simplistic analysis is that interpretations of study results are often misleading. The authors, for example, looked at 791 articles in 5 journals and found that half of them drew wrong conclusions about the results based on overinterpreting the implications of “significance”.

Mar 21 2019

Marcelo Gleiser Talks Science and Philosophy

Published under Logic/Philosophy

Marcelo Gleiser is an astrophysicist and science popularizer. I have not read any of his works previously and was therefore not familiar with him. He recently won the Templeton Prize, of which I am not a fan. The prize is described as follows:

The Templeton Prize honors a living person who has made an exceptional contribution to affirming life’s spiritual dimension, whether through insight, discovery, or practical works.

Many past winners were given the award for trying to align science and religious faith, which to me is a hopeless cause. This usually results in an attempt to use science or philosophy to prove a particular religious belief, an endeavor that always fails. It’s fair to say, then, that I had negative expectations when I saw this headline in Scientific American:

Atheism Is Inconsistent with the Scientific Method, Prize-Winning Physicist Says.

Here we go, I thought, another Templeton Prize winner trying to disprove atheism. But I read the interview with an open mind to see what he actually had to say, reminding myself of the principle of charity. I was pleasantly surprised. I have to say I found nothing I could disagree with.

First, that headline is misleading (I know, shocker). Gleiser is not an atheist, but only because he is an agnostic. He explains that the notion of whether or not a god exists is beyond evidence, and therefore the only scientific opinion one can have is agnosticism. You cannot know, in any scientific way, that God, or any particular god, does not exist.

Mar 19 2019

The Gambler’s Fallacy

One of the core concepts in my book, The Skeptics’ Guide to the Universe, is that humans are inherently good at certain cognitive tasks, and inherently bad at others. Further, our cognitive processes are biased in many ways and we tend to commit common errors in logic and mental short-cuts that are not strictly valid. The human brain appears to be optimized by evolution to quickly and efficiently do the things we need to do to stay alive and procreate, and this has a higher priority than having an accurate perception and understanding of reality. (Having an accurate perception of reality has some priority, just not as much as efficiency, internal consistency, and pragmatism, apparently.)

One of the things humans are not generally good at is statistics, especially when dealing with large numbers. We have a “math module” in our brains that is pretty good at certain things, such as dealing with small numbers, making comparisons, and doing simple operations. However, most people quickly get out of their intuitive comfort zone when dealing with large numbers or complex operations. There is, of course, also a lot of variation here.

We give several examples to illustrate how people generally have poor intuition for statistics and certain kinds of math, and how our understanding of math runs up against our cognitive biases and flawed heuristics. These common examples include the fact that we have a poor intuitive grasp of randomness.

Probability also seems to be a challenge. How many people would you have to have in a room before having a >50% chance that two of them share the same birthday (not year, just day)? The answer is a lot less than most people guess – it’s just 23. We tend to underestimate how probabilities multiply when making multiple comparisons. This is why we are inappropriately amazed at coincidences. They are not as amazing as we naively think. The probability of someone winning the lottery twice is also a lot higher than you might think.
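
The birthday answer is easy to verify directly. Here is a short Python sketch (my own illustration, not from the book) that computes the probability that at least two of n people share a birthday, by first computing the chance that all n birthdays are distinct:

```python
def birthday_collision_probability(n, days=365):
    """Probability that at least two of n people share a birthday,
    computed as 1 minus the probability that all n birthdays differ."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

# 23 is the smallest group size with a better-than-even chance:
print(round(birthday_collision_probability(22), 3))  # ≈ 0.476
print(round(birthday_collision_probability(23), 3))  # ≈ 0.507
```

The multiplication is the key: 23 people form 253 distinct pairs, so many small per-pair chances compound into better-than-even odds.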

Mar 18 2019

Sugary Drinks Linked to Heart Disease

A new study adds confirmation to what we have already been seeing in the data – drinking a lot of sugar-sweetened drinks, like soda, is linked to an increased risk of heart disease and death in men and women. This may seem obvious, but it is worth repeating precisely because it is a pretty straightforward bit of health advice that tends to get lost in the noise of bad health advice.

For example, during my visit a few years ago to Google I noted that the company tries to offer a healthy environment for its workers, providing the space and time to exercise, and a freely available snack room filled with healthful snacks. However, their refrigerator was filled with drinks that were sweetened with “all natural cane sugar” and none with artificial sweetener. This is backwards, falling for recent health fads and the appeal-to-nature fallacy. It doesn’t matter if sugar comes from sugar canes, sugar beets, is raw, natural, non-GMO, organic, or whatever. In the end it is all crystalized sucrose. And it’s really no different than high fructose corn syrup.

What matters is how many calories you are consuming from concentrated simple sugars. We evolved to like the taste of sweetness because simple carbohydrates provide much needed calories and glucose. We evolved in a calorie-limited environment, and so seek out high-calorie food. But we then used technology to hack our love of sweet foods. It didn’t take modern technology either. Native Americans figured out how to get syrup from maple trees, and that innovation is linked to a spike in various diseases, such as tooth decay, obesity, diabetes, and heart disease. Honey is another low-tech source of concentrated sugar.

But nothing beats table sugar or similar sources of concentrated calories and sweetness. We have also become accustomed to certain foods being sweet, such as our beverages. Sugar-sweetened beverages are now a significant source of empty calories and excess carbohydrates. One 12 oz can of Coke or similar soda is 140 calories. If you drink 72 oz per day, which is not an unusual amount, that’s 840 calories – every day. That’s massive. An average daily caloric need is about 2,000 calories, so you are already almost halfway there. Even if you have just one can per day, that’s enough calories to equal 14.6 pounds in one year.
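
The arithmetic is easy to check with a quick Python sketch (my own, using the common rule-of-thumb conversion of roughly 3,500 calories per pound of body fat – an assumption, not a figure stated in the post):

```python
CAL_PER_CAN = 140        # one 12 oz can of regular soda
CAL_PER_POUND = 3500     # rule-of-thumb calories per pound of body fat

# 72 oz per day is six 12 oz cans:
print(6 * CAL_PER_CAN)   # 840 calories per day

# Just one can per day, every day for a year:
print(round(CAL_PER_CAN * 365 / CAL_PER_POUND, 1))  # 14.6 pounds' worth
```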

You could, of course, decrease your food consumption to compensate, but then you are decreasing food with actual nutritional benefit.

Mar 14 2019

Climate Change and the Role of Uncertainty

Published under General Science

As a physician you have to develop a certain comfort level with uncertainty. The simple fact is – we don’t know everything. The human body is extremely complex, and there are over 7 billion people on the planet representing a great deal of variation. Our data is incomplete and largely statistical, and we have to apply that to specific decisions about an individual patient. This means we have to make the best recommendations we can with the information we have, be honest about our level of uncertainty, and convey the range of possible outcomes based on various decisions.

It’s often helpful to think in terms of “clinical pathways” – what are the different possible paths an illness can take, given what we know and what we don’t know, and how will our diagnostic and therapeutic interventions alter those possible pathways?

Perhaps because I live this every day, I find it easy to accept the logic of action on climate change. We don’t know exactly what will happen. The climate system is complex, and there are known unknowns. One of the big ones is climate sensitivity – the precise relationship between the level of CO2 in the atmosphere and the degree of warming. The lower the climate sensitivity the better, in terms of how much warming will result from the CO2 we have released and are still releasing.

But there are other variables as well, including human action. We don’t know how stable the Greenland and Antarctic ice sheets really are, for example. There are multiple feedback loops and tipping points, and the potential for cascading effects. So yes – climate models are just that, models. They are not a crystal ball that will tell us what will happen. They are our best guess at what might happen.

Global warming deniers use this uncertainty as an excuse to do nothing (doing nothing always seems to be their goal, regardless of the justification). As a physician, I find that logic painful. If I am not sure whether my patient has a serious condition, that is not a reason to do nothing – it creates an imperative to do something. The specific intervention is then based largely on a risk vs benefit analysis. And often, as with global warming, acting early is key. You definitely want to find that tumor when it is small and before it has metastasized.

Mar 12 2019

Robots Learning to Walk

Researchers at the USC Viterbi School of Engineering have developed a robotic limb with artificially intelligent control that learns how to walk by trying to walk. This may seem like a small thing, but it represents a fascinating trend in AI and robotics – shifting more and more to a bottom up rather than top down approach to programming.

This recent advance is very incremental, but worth pointing out. The researchers tried to design a limb based on biological principles. Rather than programming the limb with the processes necessary to walk, including dealing with difficult terrain and recovering from a trip, they developed an algorithm that learns how to walk and adapt by trying to do it. Learning algorithms that start from scratch are nothing new, but the researchers claim this is the first time one has been applied to this particular task.

The results were impressive – the robot was able to learn how to walk within minutes. Because the learning is mostly trial and error, different iterations of this algorithm will hit upon different solutions, so different robots might have distinctive gaits.

The first thing I thought of when I read this news item was – what about Boston Dynamics’ BigDog? This is a four-legged robot about the size of a large dog, developed as a pack mule for the military and capable of handling rough terrain. Watch the video – it’s impressive. I tried to find out how much of the BigDog walking algorithm is learned vs programmed, but what I found is that “it’s proprietary.” The consensus seems to be that it is a mix of both – largely hand-developed walking algorithms, perhaps incorporating some learning AI. If true, the USC robotic limb would be the first fully self-learning walking robot algorithm, as the researchers claim.

Mar 11 2019

Another Theory of Everything – Oh My!

Published under Pseudoscience

These are always amusing, but I do admit to a little bit of guilt. My concern is that the individuals involved may be diagnosable, and I wonder whether it is really fair to publicly criticize their “work.” But then I realize I cannot diagnose people from afar, and they placed their work in the public arena, so it’s fair game.

What I am talking about are extreme cranks, and a particular flavor of cranks that believe they have developed what is derogatorily called a “theory of everything.” These are theories that attempt to explain the ultimate nature of reality – of space, time, fundamental forces, and even the meaning of life – but are not truly scientific. Such individuals have always existed in some form, and the internet has given them a new venue to rapidly spread their bizarre claims.

The now iconic example of the extreme theory-of-everything internet crank is “the time cube guy.” He became famous (as an internet meme) for his endlessly scrolling webpage filled with incoherent technobabble, peculiar fonts and formatting, and boasts about how much smarter he was than famous scientists. For many this was their introduction to the world of crankery. Many scientists were already very familiar with the genre, however, having been on the receiving end of the occasional massive tome of self-published nonsense from an author eager for their attention.

A new crank theory of everything is making the rounds, at least within skeptical corners of the internet – Dan Winter, who is pushing his theory – Phase Conjugate Fractality: HOW Gravity is CAUSED. (formatting in the original)
