Jan 08 2016

What Is Bayes Theorem?

I have written a little about Bayes Theorem, a statistical method for analyzing data, mainly on Science-Based Medicine. A recent Scientific American column has some interesting things to say about it as well. I thought a brief overview would be helpful for those who are not sure what it is.

This statistical method is named for Thomas Bayes, who first formulated the basic process, which is this: begin with an estimate of the probability that any claim, belief, or hypothesis is true, then look at any new data and update the probability given the new data.

If this sounds simple and intuitive, that's because it is. Psychologists have found that people innately take a Bayesian approach to knowledge. We tend to update our beliefs incrementally as new information comes in.

Of course, this is only true when we do not have an emotional investment in one conclusion or narrative, in which case we jealously defend our beliefs even in the face of overwhelming new evidence. Absent a significant bias, we are natural Bayesians.

That is really the basic concept of Bayes Theorem. However, there are some statistical nuances when applying Bayes to specific scientific situations. There are two additional wrinkles to a Bayesian analysis that I think are worth pointing out.

The first is that Bayes begins with a prior probability. This is one of the things I really like about Bayes – it expressly considers the probability that a claim is true given everything we know about the universe, and then puts new evidence into the context of that prior probability.

This approach is inherently skeptical. It means I would require more evidence before believing that someone saw Bigfoot than before believing that they saw a deer. When you get into the math even a little, it helps put evidence into even better perspective.

Here is Bayes formula: P(B|E) = P(B) x P(E|B) / P(E). In this formula “P” = probability, “B” = belief, and “E” = evidence. Translated into English, the formula means: the probability of the belief given the new evidence equals the probability of the belief absent the new evidence, times the probability of the evidence given that the belief is true, divided by the overall probability of the evidence.
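
For those who like to see the moving parts, here is the formula as a tiny Python sketch. The function and the numbers are mine, purely for illustration, and are not from any particular study:

    # Bayes formula: P(B|E) = P(B) x P(E|B) / P(E)
    def posterior(p_b, p_e_given_b, p_e):
        """Probability of the belief given the new evidence."""
        return p_b * p_e_given_b / p_e

    # Invented numbers: a 1% prior, evidence that is 80% likely
    # if the belief is true, and 20% likely overall.
    print(posterior(0.01, 0.80, 0.20))  # 0.04, i.e. a 4% posterior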

Don’t worry too much about the math; I know formulas tend to make people’s eyes glaze over. Here is what this means in practice: the implication of new evidence depends heavily on the prior plausibility. This is mathematically saying what skeptics have been saying for years, that a weak study with slightly positive evidence for ESP is not convincing evidence that ESP is real because it changes the very low prior probability by only a little.

This is an important realization because it counters what we often refer to as the frequentist fallacy – the notion that because there is statistically significant evidence for a hypothesis, the hypothesis must be true, no matter how slight the effect and how improbable the hypothesis given everything else we know about reality.

Stated another way, we can ask, what are the odds that the hypothesis is true vs that the new evidence is wrong? That is exactly what Bayes seeks to calculate.

As the Scientific American article points out, physicians are very familiar with this question, because we face it on a regular basis in our jobs. The example the author gives is very revealing: if you have a test that is 99% accurate (by which he means 99% sensitive and specific), and you test for a condition that is present in 1% of the population, what does a positive test mean? You may be surprised to learn that a positive outcome on a 99% sensitive and specific test for this condition only carries a 50% probability that the patient actually has the disease.

This is because of false positives. If even 1 in 100 tests is false positive, but only 1 in 100 people have the disease, then a false positive result is as likely as a true positive, hence a 50% predictive value of a positive test.
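
Here is a minimal sketch of that arithmetic in Python, using the numbers from the example (99% sensitivity, 99% specificity, 1% prevalence):

    # Positive predictive value of a 99% sensitive, 99% specific test
    # for a disease with a 1% prevalence.
    prevalence = 0.01
    sensitivity = 0.99   # P(positive test | disease)
    specificity = 0.99   # P(negative test | no disease)

    true_positives = prevalence * sensitivity                # 0.0099
    false_positives = (1 - prevalence) * (1 - specificity)   # 0.0099
    ppv = true_positives / (true_positives + false_positives)
    print(ppv)  # ~0.5: a positive result means only a 50% chance of disease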

Doctors have to be specifically trained to think in this new way: not how accurate a test is, but what the predictive value is of a positive or negative test, given everything we know about the disease and the patient.

In the same way we can ask – what is the predictive value of a positive outcome in a research study for ESP? Given what we know about the high incidence of false positive outcomes in science, and the extremely low prior probability of rewriting the laws of physics, the answer should now seem obvious.

Bayes also shows mathematically why confirmatory tests are so powerful. In the medical example, a second test of the same accuracy, if it is positive, now carries a 99% chance of being a true positive, because the prior probability has increased from 1% to 50%.
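
The repeat-test logic can be sketched the same way; this is my own illustration, in which the posterior from the first test becomes the prior for the second:

    def update(prior, sensitivity, specificity):
        """Posterior probability of disease after one positive test."""
        tp = prior * sensitivity
        fp = (1 - prior) * (1 - specificity)
        return tp / (tp + fp)

    p = 0.01                   # start at the 1% prevalence
    p = update(p, 0.99, 0.99)  # first positive test -> 0.50
    p = update(p, 0.99, 0.99)  # second positive test -> 0.99
    print(p)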

In science, replication is the key. When a result can be consistently replicated the Bayesian probability that it is a real effect becomes high.

The second aspect of probability that Bayes helps us understand is the importance of considering alternative hypotheses. The conspiracy theorist, for example, is impressed when they find information that supports their conspiracy narrative. What they are failing to consider are two things: what is the predictive value of that fact, and, closely related to that, is that fact also consistent with any alternative explanations?

If the fact in question is consistent with a hundred different interpretations, then it does not much affect the probability of any one of those hundred explanations.

This is where confirmation bias comes in – if you only consider your own hypothesis, then positive tests (correlations, coincidences, etc) can seem very compelling. If you are unconsciously seeking out positive correlations then the illusion of confirmation can be powerful because you won’t be aware of either the negative correlations or all the other possible explanations for the apparent correlations.

Bayes slices through all of this by organizing information into a fairly simple formula and giving us specific (and often counter-intuitive) predictive values.

The primary criticism of a Bayesian approach, and one I hear often and in many contexts, is that we don’t always know the prior probability, and in fact estimates of prior probability may simply reflect our current bias. There is some truth to this. It may take scientific judgement to decide how likely it is that something is true.

In some contexts, like disease frequency, we have a specific answer. We can know with high reliability what the prevalence of a disease is in a specific population – we can put a solid number on prior probability. In other contexts, however, it’s hard to put a number on it. What is the probability that ESP is real?

However, even in these situations Bayes is still very useful. First, we can plug representative or likely numbers into a Bayes calculation and see what happens. For example, we could ask: if we think that there is a 1% chance that ESP is real, then what will the posterior probability be given this new evidence?

In other words, Bayes can still tell us how much a new study changes the probability of a phenomenon being real, whatever we think the prior probability is. In fact, we can calculate how much any given prior probability would change, without having to commit to a specific one.
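
Here is a sketch of that kind of sensitivity analysis. The Bayes factor of 3 (meaning the evidence is three times more likely if the effect is real than if it is not) is an invented stand-in for a weakly positive study:

    # How much does one weakly positive study move different priors?
    # A Bayes factor of 3 is an invented, illustrative number.
    bayes_factor = 3.0

    for prior in [0.000001, 0.0001, 0.01, 0.5]:
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * bayes_factor
        posterior = posterior_odds / (1 + posterior_odds)
        print(f"prior {prior:.6f} -> posterior {posterior:.6f}")

    # A one-in-a-million prior barely moves (to about 3 in a million),
    # while a 50% prior climbs to 75%.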

What we find is that the probability changes much less than we might think given, for example, high levels of statistical significance. Bayes shows that statistical significance is deceptive and tends to inflate our sense of how likely it is that a hypothesis is true.

Conclusion

Bayesian analysis is an important concept for any scientist and skeptic to understand. It is extremely practical, and is already used (whether or not it is explicitly named) in professions that need to deal with probability in a practical way, such as in medicine.

Bayes Theorem makes explicit several skeptical principles, including the need to consider predictive value, the impact of false positives, the need to consider alternative hypotheses, and the need to put statistical significance into its proper context.

In many ways a Bayesian approach to knowledge is a skeptical approach.


57 Responses to “What Is Bayes Theorem?”

  1. edamame on 08 Jan 2016 at 10:47 am

    To risk being pedantic, it is simply a theorem of probability theory, regardless of any applications it might have in statistics. The arguments start when we consider how important the theorem is in practice. After all, it is one among infinite theorems from probability, all of which have the potential to be very useful when thinking rigorously about uncertainty, rational belief updates, etc.

    This theorem has recently been discovered by the skeptical movement, who (like its early advocates such as Jaynes) see it as a panacea of sorts, for instance to guard against things like p-hacking (which it doesn’t: http://datacolada.org/2014/01/13/13-posterior-hacking/). Or as a universal method for being rational. Which is, at best, circular.

    I see it as one useful tool among many. In fact, I see it as more useful than many other tools, in that the equation comes up fairly frequently in practice in useful calculations with real data. So I don’t mean to be a poop in the champagne. 🙂 I’m just wary of the recent flood of people acting like it is the universal acid for irrational thought.

  2. Neil on 08 Jan 2016 at 11:54 am

    “This is because of false positives. If even 1 in 100 tests is false positive, but only 1 in 100 people have the disease, then a false positive result is as likely as a true positive, hence a 50% predictive value of a positive test.”

    I’m just curious what your approach is in medicine with respect to the above. I think the example they give related to the statement above is somewhat misleading. Obviously we aren’t pulling people off the street at random and testing them for diseases (which is the testing condition under which the above is statistically true), they are coming in with a problem(s) that you are testing for, so their symptoms fit into the priors. I know this depends on the test, disease possibilities, and range and severity of symptoms, but wouldn’t it be reasonable to assume with a high degree of confidence that a test that’s 99% accurate that comes back with a positive result for a disease that has symptoms that align with those that the patient possesses is most likely a true positive? Can you think of a scenario with the above testing accuracy that you’d bet a lot on the single test being true given a set of symptoms? I know you would dig deeper and run independent tests in your practice, but I’m just curious as to what your intuitive Bayesian approach would be if that’s all you had to go on with a single test and a diagnosis based on symptoms.

  3. Ori Vandewalle on 08 Jan 2016 at 11:59 am

    I have had similar experiences to edamame. I’ve encountered a number of people who behave as if the act of quantifying their belief as prior probability somehow makes their belief more rational. They will attempt to win arguments by saying, “My prior for conclusion X is low, therefore conclusion X is false and you are wrong.”

    All this is not to say that Bayes’ theorem isn’t true and useful, but it means I am initially wary of people (low prior for respecting!) who identify themselves as Bayesians. We should value the Bayesian approach when useful, but not discount other tools for rational thought, as edamame said.

  4. bend on 08 Jan 2016 at 12:12 pm

    Neil, two thoughts:
    1) You’re right. Very few tests are given entirely indiscriminately. Whatever leads a physician to recommend a test would be accounted for in the frequency in the example. The word “population” in the example can be taken to mean the population of patients who meet the criteria for a given test. So, say, since a particular condition is more common in women over 50 than in the general population, a test may be appropriate. But that condition still has a frequency less than 100% and the test is less than 100% accurate. The 99% accuracy and the 1% prevalence are reasonable numbers.
    2) There’s a general perception that tests are being ordered much more indiscriminately now than they have in the past. The theory is that doctors order tests even with low chances of identifying problems to preempt malpractice litigation in case the extremely unlikely is realized. To the extent that this is true, it certainly increases the frequency of false positives.

  5. bend on 08 Jan 2016 at 12:16 pm

    ” There’s a general perception that tests are being ordered much more indiscriminately now than they have in the past. The theory is that doctors order tests even with low chances of identifying problems to preempt malpractice litigation in case the extremely unlikely is realized.” I wanted to clarify, that I don’t know whether this is the case or not. But I know many who are convinced that it is.
    Also, Steven, thanks for this fantastic explanation of Bayesian analysis. It had just the right amount of detail (long enough to be more than superficial but not too long for a morning read).

  6. blu28 on 08 Jan 2016 at 12:43 pm

    The medical example would apply most strictly to screening tests, without any prior symptoms and drawn from the full population. That is why general screening tests are often much less useful than the lay public supposes.

  7. jayarava on 08 Jan 2016 at 12:50 pm

    Thanks. This is the first explanation of Bayesian probability that I’ve understood. It does seem like a very useful approach. It seems to encapsulate elements of David Hume’s skepticism about miracles – he said that testimony of a miracle should be believed only if the probability that the witness is mistaken or lying is lower than the probability that the miracle actually happened.

    There is a typo at the beginning of para 11: “States another way…” you mean “Stated another way…” I think.

  8. Johnny on 08 Jan 2016 at 1:13 pm

    My impression then is that for Bayes’ theorem to be useful, there has to be a certain amount of knowledge already acquired in a particular field. Otherwise there is no (or little) prior probability to take into the calculation. Would this be correct?

  9. steve12 on 08 Jan 2016 at 1:17 pm

    I must admit: I have great intentions about doing more Bayesian analyses but poor applications. I always go back to my good ole’ frequentist buddies.

    The argument I hear the most against using Bayes is discussed above:
    “estimates of prior probability may simply reflect our current bias.”

    I really think this is overblown. In the course of peer review we fight about and must justify the methods, the statistical tests chosen, the decisions to do this & that to the data, etc.

    Why would the priors be any different? You would have to justify your choice of a prior just like everything else. The way I’ve heard people talk about it, Bayesian approaches mean using any prior you like and the rest of the scientific community has no recourse.

  10. DrNick on 08 Jan 2016 at 1:31 pm

    edamame – “[T]his theorem has recently been discovered by the skeptical movement, who (like its early advocates such as Jaynes) see it as a panacea of sorts… Or as a universal method for being rational.”

    This seems like a straw man to me. You may see some of this in Facebook comments, but I don’t know any serious skeptics with even a basic knowledge of probability theory who see Bayes Theorem as a panacea. As you say, it’s one of a number of useful tools that may be employed when appropriate. I don’t think Dr. Novella would disagree.

    I also think your argument that “After all, it is one among infinite theorems from probability, all of which have the potential to be very useful when thinking rigorously about uncertainty, rational belief updates” is fallacious. Bayes Theorem does not just have the potential to be useful, it has proven itself to be useful in many specific situations, some of which Dr. Novella discusses in his post. Indeed, it has proven itself more useful than the vast majority of actual alternative theories, not to mention your infinite set of hypothetical ones.

  11. Sherrington on 08 Jan 2016 at 1:42 pm

    There is one aspect of Bayesian probability I have never quite understood. Let’s imagine you want to know, for example, if hypnosis can reduce pain. You can look at previous studies to come up with a probability, but for each of those studies you need to have a criterion for what it means for it to “work.” If the control group ranks a stimulus as 8 (on a scale of 1 – 10) and the hypnotized group ranks it as 6, is this an effect? Of course, the way this is traditionally dealt with is null hypothesis testing — but the problems with that approach are among the factors making people consider the Bayesian approach.

  12. Neil on 08 Jan 2016 at 2:03 pm

    bend,

    I’m pretty sure “population” in this example is taken to mean the most general description, not specific to people that constitute a group with symptoms consistent with a particular disease or who think they may have a disease. I’m not sure how you’d go about assigning a population accurately the way you’re suggesting.

  13. Fourier on 08 Jan 2016 at 2:22 pm

    Hi all,

    I think Bayes’ theorem is a vital part of understanding the process of Science. Not because people need to be able to do the mathematics whenever they make a decision, but because people need to understand that a scientist evaluates any claim not just based on the new evidence presented, but also based on the prior. In fact, I often ask a question based on Bayes’ theorem as part of the interview process for our more senior software development roles at my current employer.

    I posted a lecture series on Understanding Science on YouTube, and one of those was on Bayes’ Theorem. It’s very basic (i.e. for non-experts) so don’t expect any pedantically accurate use of terminology or anything, but I think it’s really clear.
    https://www.youtube.com/watch?v=zRtwRKGx1aA

  14. Pete A on 08 Jan 2016 at 3:30 pm

    Fourier, Thanks for the link. I particularly enjoyed your examples of Carl Sagan’s dragon and ‘organic gravity’.

  15. Andreas on 08 Jan 2016 at 4:19 pm

    In the scenario Neil brings up (a person walks into the doctor’s office with some symptoms), how do you modify the prior? Well, the symptoms are just more evidence, call it E1. The outcome of any tests that are run, call that E2. To a first approximation we can treat E1 and E2 as independent given the underlying disease D. Then the overall probability estimate for having D is
    P(D|E1,E2) = P(D) x P(E1|D) x P(E2|D) / P(E1,E2).
    That’s equivalent to saying that the population prior P(D) is replaced by first applying Bayes using only the symptom evidence, and then applying Bayes again using the test result as evidence.
    The doctor probably does the first step intuitively based on all the cases they have seen, though maybe that should be quantified more formally too.

    BTW, it’s important to realize that the denominator P(E) in Bayes is typically obtained by summing the numerator over all possible hypotheses: P(E) = P(D) x P(E|D) + P(not D) x P(E|not D). So you have to consider alternative explanations explicitly, just as Steve said.
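
    A minimal Python sketch of that two-step update; the symptom likelihoods below are invented purely for illustration:

    def update(prior, p_e_given_d, p_e_given_not_d):
        """One Bayesian update: returns P(D | E)."""
        num = prior * p_e_given_d
        return num / (num + (1 - prior) * p_e_given_not_d)

    p = 0.01                   # population prevalence P(D)
    p = update(p, 0.60, 0.05)  # E1: symptoms (invented likelihoods)
    p = update(p, 0.99, 0.01)  # E2: a positive 99%/99% test
    print(p)                   # ~0.92 rather than 0.50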

  16. Charon on 09 Jan 2016 at 12:20 am

    Ah, Bayes’ theorem. The thing that proves as nonsense a pillar of the American justice system. “Innocent until proven guilty.” But if P(guilty)=0 to begin with, no amount of evidence can change the result 🙂

    (I guess “Population average prior until proven guilty” just doesn’t have the same ring…)

  17. BillyJoe7 on 09 Jan 2016 at 1:44 am

    SN: “If even 1 in 100 tests is false positive, but only 1 in 100 people have the disease, then a false positive result is as likely as a true positive”

    This is not actually correct as stated. Obviously, it should read:
    If 1 in 100 tests are false positive, and 99 in 100 tests are true positive but only 1 in 100 people have the disease, then a false positive result is as likely as a true positive.

  18. BillyJoe7 on 09 Jan 2016 at 1:47 am

    Jayarava,

    “There is a typo at the beginning of para 11: “States another way…” you mean “Stated another way…” I think.”

    I’ll see your one typo and raise you three typos. 🙂
    (Not that it matters)

  19. BillyJoe7 on 09 Jan 2016 at 1:54 am

    The Bayesian is always updating his knowledge.
    The Frequentist is always starting from scratch.

  20. BillyJoe7 on 09 Jan 2016 at 2:14 am

    Neil,

    “I know this depends on the test, disease possibilities, and range and severity of symptoms, but wouldn’t it be reasonable to assume with a high degree of confidence that a test that’s 99% accurate that comes back with a positive result for a disease that has symptoms that align with those that the patient possesses is most likely a true positive?”

    (I would say “more likely to be a true positive”).

    There is an interesting article at SBM on this very topic:
    https://www.sciencebasedmedicine.org/lyme-testing/#more-40406

  21. jt512 on 09 Jan 2016 at 3:04 am

    BillyJoe7 wrote:

    The Bayesian is always updating his knowledge.
    The Frequentist is always starting from scratch.

    Well put. Almost Confuseus-worthy. Can we work it into a haiku?

  22. jt512 on 09 Jan 2016 at 3:05 am

    Ugh. Messed up the blockquote.

  23. tmac57 on 09 Jan 2016 at 11:46 am

    The thing that I find frustrating is that when people use (misuse?) Bayes informally (and probably unconsciously), such as in conspiracy theories, ESP, UFO’s, and even in politics, they may be operating from a vast database of bogus ‘facts’, so their priors are firmly rooted in nonsense, rumors, urban legends, lies, propaganda, and BS, then filtered through a faulty reasoning mechanism.
    So for them, it is perfectly obvious to start from a prior that, for example, ESP is real because it is widely accepted that everyone has it, but most have not fully developed their abilities. It is a matter of faith in their community, that this is true.

  24. ccbowers on 09 Jan 2016 at 12:14 pm

    “If 1 in 100 tests are false positive, and 99 in 100 tests are true positive but only 1 in 100 people have the disease, then a false positive result is as likely as a true positive.”

    BJ7 – I’m not sure what you are ‘correcting’ here. The preceding paragraph tells us the specificity and sensitivity of the test as well as the prevalence, so your correction is redundant.

  25. ccbowers on 09 Jan 2016 at 12:15 pm

    “The Bayesian is always updating his knowledge.
    The Frequentist is always starting from scratch.”

    I get the point, and it is a pithy quote, but it is also a strawman. Science has progressed quite well with this “frequentist” statistical approach, and if the quote were really true, it wouldn’t have. Information from scientific inquiry updates our body of knowledge, and the questions we ask and the experiments and observations that are done are how we constantly update. It does make a good point, that maybe we should be incorporating this updating more formally in the statistics, and this change does seem to be taking place, where appropriate.

  26. jt512 on 09 Jan 2016 at 2:20 pm

    @ccbowers:

    Your argument is a straw man too. The question is not how well science has done. It is how well it could do if it employed a better statistical paradigm.

    There has been an inverse relation between the dependence of a scientific field on statistics and the success of the field (a fact by which, as a statistician, I am greatly embarrassed). In fields such as physics and chemistry, which have strong theoretical foundations and rather clean data, statistics plays an auxiliary role, not a central one. These fields have produced consistently valid results.

    In contrast, fields such as medicine and experimental psychology have weaker theoretical foundations and are hence more reliant on empiricism. Moreover, their data are inherently messy. Hence statistics plays a central role. These fields have failed to produce consistently valid results: there is evidence (reported repeatedly on this blog) that 50 to 80 percent of their published findings are false or highly exaggerated. With all the shenanigans that go on in these fields, the blame cannot all be laid on frequentist statistics. But frequentist statistics, which (unlike Bayesian statistics) are inherently prone to misinterpretation, allow a theory to be substantiated without making quantitative (or even qualitative!) predictions, use language that literally implies more “confidence” or “significance” than results deserve, and exaggerate the evidence in favor of a theory, aren’t helping matters.

  27. tmac57 on 09 Jan 2016 at 2:31 pm

    jt512- ” use language that literally implies more “confidence” or “significance” than results deserve, and exaggerate the evidence in favor of a theory, aren’t helping matters.”

    That’s an interesting observation that I had never thought of in that context.

  28. BillyJoe7 on 09 Jan 2016 at 2:51 pm

    ccbowers,

    “BJ7 – I’m not sure what you are ‘correcting’ here. The preceding paragraph tells us the specificity and sensitivity of the test as well as the prevalence, so your correction is redundant”.

    On a topic that is often difficult to understand, I thought it was important to state every relevant fact leading to the conclusion in the summarising sentence.
    You need all three facts…

    – 1 in 100 tests are false positive
    – 99 in 100 tests are true positive
    – 1 in 100 people have the disease

    …to conclude that a false positive result is as likely as a true positive.
    That’s all I meant to say.

  29. ccbowers on 09 Jan 2016 at 4:05 pm

    “Your argument is a straw man too. The question is not how well science has done. It is how well it could do if it employed a better statistical paradigm.”

    Not only did I not say otherwise, I actually agreed with this. (Pardon the double negative.) That is what I was saying in that last sentence. How was that interpreted as a strawman?

    As far as your comparisons between fields of science go, I don’t necessarily disagree, but I think the situation is often exaggerated, and much of the reliance on statistics is due to the nature of study and the data itself (as you seem to state). I hear similar arguments used by people to denigrate an entire field, as if it is a fundamental problem with the scientists or the science, as opposed to it being an inherently difficult topic. The so-called soft sciences are studying human behavior, which is much more unpredictable and complicated than particles are. I’m not saying that there isn’t any problem, but that the problem is often mischaracterized and exaggerated.

    Regardless of the field, we need to get questions, the study design, the statistics, and interpretation right. That is as true in psychology and economics, as it is in chemistry and particle physics.

  30. ccbowers on 09 Jan 2016 at 4:30 pm

    “It is how well it could do if it employed a better statistical paradigm.”

    Now that I think about it, that is an interesting counter-factual to ponder. There is the sentiment, but is there evidence, that we were being held back this whole time? Or is it just becoming more obvious as the low-hanging fruit questions are addressed? I have heard a lot more arguments than evidence. Or is it an argument that we have constrained our questions, and therefore study designs, to fit already common existing statistical approaches?

  31. Pete A on 09 Jan 2016 at 6:14 pm

    BJ7, I see what you mean, and something in Dr Novella’s article doesn’t seem quite right to me. I tend to think that if a test is 99% accurate then the 1% of fails consists of both false positives and false negatives; therefore, if the false positives are 1 in 100, the accuracy of the test must be less than 99%. E.g. if a thermometer product is stated to have no more than 1% error (it is 99% accurate) at 100 degrees then it will read between 99 and 101 degrees: a difference range of 2%, not 1%. Testing a large sample of these thermometers should reveal that 50% read low and 50% read high (NB: there is no such thing as a perfectly accurate thermometer so I haven’t created a false dichotomy).

    jt512, Thanks for the link: some of the Confuseus statements had me in tears of laughter 🙂

  32. Damlowet on 09 Jan 2016 at 6:30 pm

    @BJ7
    In that,
    – 1 in 100 tests are false positive
    – 99 in 100 tests are true positive
    – 1 in 100 people have the disease

    does that include false negatives and true negatives? (I understand that you are not supposed to be able to prove a negative).

    Would there be a portion of subjects tested that the test shows they don’t have said condition, but actually do? Wouldn’t that be a false negative? And would that change the statistics slightly? If 1 in 100 actual positive cases tested showed 99 positive and 1 negative. Would that mean:

    9 in 1000 tests are false positive
    990 in 1000 tests are true positive
    10 in 1000 tests prove positive for the disease
    1 in 1000 tests show negative results for an actual disease.

    Be gentle, this is just sitting on the edge of my comprehension! ;7)

    Damien

  33. Damlowet on 09 Jan 2016 at 6:31 pm

    Pete A, beat me by that much!

    Damien

  34. Pete A on 09 Jan 2016 at 7:35 pm

    ccbowers, There isn’t a “one size fits all” in science, and in my humble opinion, I hope that science will never be reduced down to a simple rule book and checklists that are indistinguishable from a religion.

    Science is three essential and wonderful things: a vast library of accumulated knowledge that is backed by evidence; it is self-correcting in the light of new evidence (an adaptive system); it is a huge toolbox of exquisitely-crafted tools that can be used for testing, measurement, self-correction, solving new problems, creating new and fun things, and perhaps most of all, for creating new exquisite tools that ever expand the awesome capabilities of the toolbox!

    The “one size fits all” approach always boils down to a fallacy of division and/or a fallacy of composition. E.g. A beautifully hand-crafted chair is not the result of Bayes Theorem applied to wood, or to the applied science of carpentry. Bayes Theorem is just one of the plethora of tools in the toolbox, but the only tools that apply to the whole toolbox are the methods for self-correction.

  35. Pete A on 09 Jan 2016 at 7:57 pm

    Damien, A few years ago I read a very interesting article explaining the influence of false positives and false negatives on health screening tests — showing that they are usually counter-productive — but stupidly, I failed to add it to my bookmarks. This article (and its several links) might be worth reading:
    https://en.wikipedia.org/wiki/False_positives_and_false_negatives

  36. BillyJoe7 on 09 Jan 2016 at 8:14 pm

    Damlowet,

    Consider the following:
    We have a diagnostic test that has a sensitivity of 99% and a specificity of 99% for a disease with a prevalence of 1%.

    A sensitivity of 99% means:
    – that the test is positive in 99 out of 100 people who have the disease.
    This is the true positive rate.
    – that the test is negative in 1 out of 100 people who have the disease.
    This is the false negative rate.

    A specificity of 99% means:
    – that the test is negative in 99 out of 100 people who do not have the disease.
    This is the true negative rate.
    – that the test is positive in 1 out of 100 people who do not have the disease.
    This is the false positive rate.

    A sensitivity of 99% and a specificity of 99% for a disease with a prevalence of 1% means that:
    Out of a population of 10,000:

    100 will have the disease (prevalence is 1%)
    – 99 of these will test positive (sensitivity/true positive rate is 99%)
    – 1 of these will test negative (false negative rate is 1%)

    9900 will not have the disease (10,000 – 100)
    – 9801 of these will test negative (specificity/true negative rate is 99%)
    – 99 of these will test positive (false positive rate is 1%)

    Therefore, the total number of positive tests is 198 out of which:
    – 99 have the disease
    – 99 do not have the disease.
    Meaning that if you test positive, you have only a 50% chance of having the disease.

  37. ccbowers on 09 Jan 2016 at 8:30 pm

    “ccbowers, There isn’t a ‘one size fits all’ in science, and in my humble opinion, I hope that science will never be reduced down to a simple rule book and checklists that are indistinguishable from a religion.”

    That is precisely the point. To compare and contrast physics with the social sciences can be interesting, but people’s conclusions often miss the point. Different questions require different approaches, and therefore different tools. What makes them all science is a systematic approach to answering the questions, and the self-corrective process to get better and better answers (among other things). I think the religion comment is a nonsequitur though, as I don’t see how that is a good analogy to where science could go.

  38. Pete A on 09 Jan 2016 at 8:33 pm

    Damien, you’ve jogged my memory: it was explanations by Professor David Colquhoun. Here are just two of them:
    http://www.dcscience.net/2014/03/24/on-the-hazards-of-significance-testing-part-2-the-false-discovery-rate-or-how-not-to-make-a-fool-of-yourself-with-p-values/

    An investigation of the false discovery rate and the misinterpretation of p-values:
    http://rsos.royalsocietypublishing.org/content/1/3/140216

  39. Ian Wardell on 09 Jan 2016 at 8:36 pm

    Steven Novella asserted:

    “[T]hat weak study with slightly positive evidence for ESP is not convincing evidence that ESP is real because it changes the very low prior probability by only a little”.

    This is really tiresome . .

    ESP cannot be assigned a very low probability. Unless we presuppose materialism — a metaphysical position which seems to me to be simply untenable — science leaves out the existence of consciousness in its description of reality. This includes our normal perceptions from our 5 main senses. Yes, we can describe the neural correlates of a conscious experience such as a visual perception. But, even in principle, we cannot derive the experience itself, even from a thorough scientific understanding of the brain. So, if normal perceptions are in principle inexplicable, how on earth can we claim that extrasensory perceptions have a very low prior probability? I suggest only by assuming philosophical materialism. And materialism is simply not compatible with the existence of consciousness. We *know* it is false. Or at least those who can actually *think* know that it is false.

    Indeed not only does ESP not have a very low prior probability, but in fact the converse is true — namely it has a very high prior probability. This is because it has been reported throughout human history and across all cultures. Not to mention personal experiences and the experiences of friends. The reason why it’s not accepted is because of the prevailing western metaphysic (essentially materialism).

  40. Pete A on 09 Jan 2016 at 8:43 pm

    ccbowers wrote: “I think the religion comment is a nonsequitur though, as I don’t see how that is a good analogy to where science could go.” That is exactly where science will go in the USA if the DiscoTute’s Intelligent Design Creationism gains traction in school science lessons.

  41. Pete A on 09 Jan 2016 at 9:12 pm

    Ian Wardell, what you’ve just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul. (Adapted from Billy Madison).

    Your endless bleating about our 5 main senses, both here and throughout your blog, is just one of the many things that makes it abundantly clear that you are totally and utterly lost.

    You have become so hopelessly detached from reality that there is no point whatsoever in attempting to refute your claims using logic, science, and empirical evidence. You can’t even keep your promises/threats to stay away from this blog.

  42. ccbowers on 09 Jan 2016 at 9:27 pm

    Damien,

    Sensitivity, specificity, and prevalence are basic terms in statistics. It may be difficult to wrap your head around them if they are new to you. I will stick a couple links below that cover the topic if you are interested in the basics.

    The basic point is that even very accurate tests can be misleading if you are looking for things with very low prevalence (i.e., what you are looking for is rare). This can be unintuitive unless you have a conceptual understanding of why. For example, if a disease is rare, even a 99% “accurate” test will have more false positives than true positives, because true positives are rare.

    This is why screening for things indiscriminately is a bad idea. To counter this, a test for a disease is used when there is clinical suspicion of the disease (which should raise the prevalence within that subset). Also, there is sometimes a series of tests: the first is a screening test with high sensitivity (it may not have the specificity desired), and this is followed by a more specific confirmatory test.

    This is why we use the terms like “positive predictive value,” PPV, as it tells us the likelihood that a positive test is true positive. This takes the prevalence into account.

    These links discuss the terms conceptually:

    http://ceaccp.oxfordjournals.org/content/8/6/221.full
    https://onlinecourses.science.psu.edu/stat507/book/export/html/71

    This discusses how prevalence impacts PPV:

    http://www.med.uottawa.ca/sim/data/Sensitivity_and_Prevalence_e.htm

  43. BillyJoe7 on 09 Jan 2016 at 9:54 pm

    It seems Ian wants to go back to the pre-scientific era.

    The whole idea of science is to sift the objective facts from unverifiable personal experiences, the reason being that everyone who relied on personal experience had a different view of the world. Science was the solution to this problem. And it doesn’t matter if these personal experiences are similar across cultures and history (which they aren’t), otherwise we’d have to accept alien abduction and the verity of all mystical and revelatory experiences despite their mutual incompatibility.

    And science is necessarily materialistic/physical/natural, otherwise you could always say “a miracle happened here” and stop searching instead of continuing to search for a materialistic/physical/natural explanation. We would still believe in the ether.

    To believe that consciousness is separate from the brain is to believe that, no matter how damaged the brain, consciousness remains fully intact – it just can’t reveal itself through the medium of the damaged brain. But where does consciousness go when we fall asleep? That consciousness should still be there fully intact, if Ian is correct. It just couldn’t reveal itself through a brain that has, unfortunately, gone to sleep. But we all know that this is false, otherwise we’d be aware of it when our brains finally awakened in the morning.

    Anyway, Ian is right about one thing…this is tiresome.

  44. RickK on 09 Jan 2016 at 10:27 pm

    Ian said: “This is because it has been reported throughout human history and across all cultures.”

    When your evidence is stronger than the evidence for fairies, magic spells, demons and ghosts, then maybe you’re onto something.

  45. ccbowers on 10 Jan 2016 at 12:29 am

    “And it doesn’t matter if these personal experiences are similar across cultures and history (which they aren’t)”

    I’d like to elaborate on this point. It “doesn’t matter” how common a belief is from the perspective of what is true or not, but it does reflect on the nature of human beings. Other things that persist to some degree across cultures and time: cognitive biases, logical fallacies, poor intuitions about probabilities, susceptibility to optical illusions, etc. A general lack of awareness of a blind spot is common across people, but that does not mean that it doesn’t exist.

    Without proper explanation, the vast majority of people get the Monty Hall problem wrong, and fail to understand even after explanation. That does not mean that their initial answer (most commonly that switching doesn’t matter) is correct. People are systematically biased in certain ways because of our biology and common experiences, and this is related to our evolutionary history. It is not hard to imagine this attachment to something beyond the material world. Look at Ian: he can’t imagine it not being true. Essentially an argument from credulity.

  46. ccbowers on 10 Jan 2016 at 12:41 am

    A better example of a phenomenon that exists across cultures and times is the one sometimes referred to as the “night hag.” It is the idea that there is a supernatural being that comes at night, sits on victims’ chests, and prevents them from moving. There are at least dozens of different versions of this across cultures all around the world, and it appears to be an attempt to explain a known phenomenon called sleep paralysis.

    By Ian’s line of reasoning, this explanation of a supernatural being sitting on people’s chests is compelling evidence of the explanation being true, and of supernatural beings existing. Instead, the common explanations come from the common experiences of people making sense of the world with human brains.

    https://en.wikipedia.org/wiki/Night_hag

  47. Steve Cross on 10 Jan 2016 at 10:17 am

    Ian Wardell said:

    ESP cannot be assigned a very low probability. Unless we presuppose materialism — a metaphysical position which seems to me to be simply untenable — science leaves out the existence of consciousness in its description of reality.

    Science has NEVER had to presuppose materialism. The further back in history you go, the more likely it is that scientists/experimenters actively assumed/believed that something supernatural existed. Even now, many scientists still believe in some form of religion. Even if they compartmentalize their research to a certain extant, you can’t honestly claim that many researchers are not at least open to the possibility of the supernatural.

    The reason that ESP can and must be assigned a very low probability is simple. Throughout history, uncountable phenomena have initially been assumed to have supernatural causes. To date, EVERY SINGLE ONE that has been explained has been found to be entirely natural. Certainly, there are some things that we don’t fully understand (yet), but based purely on the available evidence, the odds appear to be EXTREMELY HIGH that the actual cause will turn out to be material — regardless of how mysterious it currently appears.

    ESP may be only a subset of all potential supernatural phenomena, but the exact same pattern has always occurred. Many, many people have claimed to have ESP experiences, but every single explained event has turned out to be either delusion or fraud. Even after hundreds of years of trying, no one has ever presented good evidence for ESP of any kind.

    Obviously, there are still things that we don’t fully understand but when comparing the probability of natural versus supernatural causes, thousands (more likely millions) of well understood NATURAL events (once assumed to be supernatural) compared to literally NO verified SUPERNATURAL events is the reason that any rational person assigns a low prior probability to ESP.

    As ccbowers pointed out, your entire belief is simply an argument from credulity, i.e. “I don’t understand it, therefore magic”. Along with the rest of the body of scientific knowledge that you don’t understand, you don’t understand either probability or Bayes Theorem.

  48. Steve Cross on 10 Jan 2016 at 10:44 am

    Wish there was a way to edit comments. Above, “extant” should be “extent”. Apparently, autocorrect “fixed” my fat-fingering.

  49. SteveA on 11 Jan 2016 at 8:22 am

    Ian Wardell: ““[T]hat weak study with slightly positive evidence for ESP is not convincing evidence that ESP is real because it changes the very low prior probability by only a little”.

    This is really tiresome . . ”

    Straight back atcha….

  50. mlegower on 11 Jan 2016 at 11:29 am

    Unfortunately, while Bayesian updating is incredibly intuitive and most people would agree, tons of economics and psych research shows that people are really bad Bayesians (or not Bayesian at all) when it comes to actual decision making and doing the math. I wish I had a good cite for this.

    Also, my understanding is that most serious statisticians view the “Bayesian vs. frequentist” debate as tired and unimportant.

    At its worst, Bayesianism can leave one open to accusations of hubris and closed-mindedness, as “priors” can be viewed as a surreptitious way to drastically limit the possible states of the world. Reporting sensitivity to different priors, as Dr. Novella suggests, inoculates somewhat, but then it is less clear what we learn from the study. The naturalists with one extreme prior have their posterior and the true believers with the other extreme prior have their (quite different) posterior and never the twain shall meet.

  51. jt512 on 11 Jan 2016 at 3:03 pm

    @mlegower:
    Bayesian inference only tells us how our prior should change in light of new evidence. Two people with wildly differing priors will have wildly differing posteriors, and that’s how it should be. If your prior beliefs are deluded, Bayesian inference won’t undelude them for you. On the other hand, if the incoming evidence is unbiased, then, as it accumulates, everybody’s posteriors will converge, regardless of how discrepant their priors were, and that’s how it should be, too.
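
    A minimal simulation of that convergence; the hypotheses, priors, and coin bias are all invented for illustration:

    import random

    # Two hypotheses about a coin: biased toward heads (P(heads) = 0.7) or fair (0.5).
    # Two observers start with very different priors on "biased".
    random.seed(1)
    posteriors = [0.001, 0.9]
    for _ in range(500):
        heads = random.random() < 0.7   # evidence from a genuinely biased coin
        for i, p in enumerate(posteriors):
            likelihood = 0.7 if heads else 0.3           # under "biased"
            num = p * likelihood
            posteriors[i] = num / (num + (1 - p) * 0.5)  # 0.5 under "fair"
    print(posteriors)  # both posteriors end up near 1.0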

  52. Pete A on 11 Jan 2016 at 10:23 pm

    jt512, The tools in the toolbox of science and mathematics are very useful when used appropriately within their intended area of operation (scope) — always RTFM. Misusing the tools will leave the user in a mess similar to trying to open a can of baked beans with a chain saw.

  53. brive1987 on 17 Jan 2016 at 4:46 pm

    Dr Richard Carrier PhD once (in)famously used Bayes to conclude: “With the evidence we have, the probability Jesus existed is somewhere between 1 in 12,500 and 1 in 3”

    Seriously.

  54. jt512 on 17 Jan 2016 at 8:53 pm

    What do you think is wrong with Carrier’s calculations?

  55. brive1987 on 18 Jan 2016 at 6:01 am

    Who said there was a miscalculation? What’s infamous is the meaningless result.

  56. Pete A on 18 Jan 2016 at 1:44 pm

    brive1987, I think Richard Carrier presented a wonderful example of what Gord Pennycook[1] describes as “ontological confusion”. The probability that Jesus existed is an epistemic probability, whereas whether or not Jesus actually existed has an ontic probability of either 1 (existed) or 0 (did not exist). There is no such thing as an ontic probability that is somewhere between 1 and 0: a person cannot partially exist! As I’ve often stated, statistics become increasingly meaningless as the sample size reduces below 30.

    1. “On the reception and detection of pseudo-profound bullshit,” by Gord Pennycook.

  57. kymh on 28 Jan 2016 at 7:39 pm

    For those who remember a bit of probability …

    P(some event) means the probability of some event occurring.

    We have P(person has condition) = 1/100.

    Bayes is about conditional probabilities, i.e. the probability of some event once it is given that some other event has occurred.

    If we look at the test for the condition, this is really saying that if somebody has the condition then the probability of the test detecting the condition for that person is 99/100.

    In other words:

    P(test positive GIVEN person has condition) = 99/100

    Bayes theorem states (for events A and B): P(A GIVEN B) = P(B GIVEN A) x P(A) / P(B).

    We want to know what is the probability, given a positive test, that a person has the condition, i.e. P(person has condition GIVEN test positive). So

    P(person has condition GIVEN test positive)
    = P(test positive GIVEN person has condition) x P(person has condition) / P(test positive)
    = 99/100 x 1/100 / P(test positive)
    = 99/10000 / P(test positive)

    To work out the probability of a positive test we note that it can arise in either of two ways: a real positive or a false positive. Mathematically:

    P(test positive) = P(real positive) + P(false positive)
    = P(person with condition has positive test) + P(person without condition has positive test)
    = P(person has condition) x P(person with condition tests positive)
    + P(person without condition) x P(person without condition tests positive)
    = 1/100 x 99/100 + 99/100 x 1/100
    = 198/10000

    So P(person has condition GIVEN test positive) = (99/10000) / (198/10000) = 1/2.
