Jan 07 2011

Bem’s Psi Research

The Journal of Personality and Social Psychology plans to publish a series of 9 experiments by Daryl Bem that purport to show evidence for precognition. This has sparked a heated discussion among psychologists and other scientists – mainly, is it appropriate to publish such studies in a respected peer-reviewed journal, and what are the true implications of these studies? I actually think the discussion can be very constructive, but it also entails the risk of the encroachment of pseudoscience into genuine science.

Peer-Review

Before I delve into these 9 studies and what I think about them, let me explore one of the key controversies – should these studies be published in a peer-reviewed journal? This question comes up frequently, and there are always two camps: The case in favor of publication states that it is necessary to provoke discussion and exploration. Put the data out there, and then let the community tear it apart.

The other side holds that the peer-reviewed literature holds a special place that should be reserved only for studies that meet certain rigorous criteria, and the entire enterprise is diminished if nonsense is allowed through the gates. Once the debate is over, the controversial paper will forever be part of the scientific record and can be referenced by those looking to build a case for a nonsensical and false idea.

Both points have merit, and I tend to be conflicted. I agree that it is good to get controversial topics out there for discussion. In this case, as we will see, I think the discussion has been particularly fruitful. But there is also significant harm in publishing studies just to provoke discussion. Peer-review is an implied endorsement. Journals can mitigate this by publishing an accompanying editorial (which will be done in this case), and that is a reasonable measure. But seriously flawed studies that are published for this reason still end up as part of the official record and will cause mischief for years, even decades.

John Maddox, then editor-in-chief of Nature magazine, fell prey to this fallacy. He published a highly flawed review that was favorable to homeopathic research. He did it to spark critical discussion. What he found was that mainstream scientists ignored it, and homeopaths used it as propaganda to promote quackery. He later commented that it was his worst editorial decision. One can argue that the cautionary principle is more important in medical and biological journals, but this is a minor point, in my opinion. Even the basic scientific literature can have immense practical implications.

We also have to recognize that we live in the age of mass democratized information. The gatekeepers can no longer control the dissemination of information. So when studies get published in the peer-reviewed literature, that information is now out there and will be used and exploited by every sector. The mass media will use it for sensational headlines. Charlatans will use it to sell their goods. And believers in nonsense will use it as propaganda to promote their beliefs. After the controversy has died down, and has perhaps even been forgotten, the studies will be there as an enduring source of confusion.

Essentially, at this time my position is that individual decisions need to be made regarding specific papers, and that the accompanying editorial is a good way to mitigate the controversy. But even this is not adequate. What I propose is that peer-reviewed journals that wish to publish controversial studies for the sake of discussion should have a special section in the journal for doing so. This section will be outside the bounds of peer-review, and can be used explicitly for the purpose of getting controversial studies out there for scientific discussion. It will be clear that publication in this section is not in any way an endorsement of quality, and articles published there will not be included in the official scientific literature. You can call this section “Discussion and Controversies”, or something similar, and I imagine they would be popular sections for scientific journals. Also, such a section would free up editors from having to make agonizing decisions about publishing controversial papers like Bem’s.

Bem’s Psi Studies

Bem’s approach to these 9 studies, which he has conducted over the last 10 years, is interesting. He took standard social psychology protocols, and then reversed them to see if there was influence back in time. For example, researchers have had subjects practice words, and then later perform memory tests using the practiced words and new words. Not surprisingly, words that were previously practiced are easier to remember than novel words. Bem conducted this study in reverse – he had subjects perform memory tests, and then later had them practice with some of the words. He found that subjects tended to perform better with words they would later practice.

Of course, if this result is real and not due to an artifact of statistics or trial execution, that would imply that the future can influence the past – a reversal of the arrow of cause and effect. This is, by everything we currently know about physics and the way the universe works, impossible. It is, at least, as close to impossible as we can get in science. It is so massively implausible that only the most solid and reproducible evidence should motivate a rational scientist to even entertain the idea that it could be real.

Previously I have argued, along the lines of “extraordinary claims require extraordinary evidence,” that any claims for a new phenomenon (not just psi or paranormal, but anything new to science), in order to be accepted as probably true, should meet several criteria. The studies showing evidence for this new phenomenon should show:

1- A statistically significant effect

2- The effect size should also be significant, meaning that it is well beyond the range of statistical and methodological “noise” that studies in that field are likely to generate. (This differs by field – electrons are more predictable and quantifiable than the subjective experiences of people, for example.)

3- The results should be reproducible. Once you develop a study design, anyone who accurately reproduces the study protocol should get similar results.

The above is a minimum – it’s enough to be taken seriously and to justify further research, but also is no guarantee of being correct. It’s also nice if there are plausible theories to explain the new phenomenon, and if these theories are compatible with existing theories and knowledge about how the world works. Such theories should have implications that go beyond the initial phenomenon, and should be no more complex than is necessary to explain all data.

How do Bem’s results stack up to the above criteria? Not well. It is important to add that in order to be taken seriously, experimental results should meet all three basic criteria simultaneously. Bem’s results only meet the first criterion – statistical significance (which I will discuss more below). The effect sizes are tiny. For example, in the word test described above subjects were correct 53% of the time, when 50% is predicted by chance.

That is a small fluctuation, and for a social psychology study, in my opinion, does not deserve to be taken seriously. Even subtle problems with the execution of the study (and one or more such problems are almost always found when study protocols are investigated first hand) can result in such small effect sizes. Essentially, that is within the experimental noise of social psychology studies.
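To make this concrete, here is a quick sketch in Python (standard library only) of why “statistically significant” and “meaningfully large” are two separate questions. The trial counts below are purely illustrative – they are not Bem’s actual numbers – and the test is just a one-sided binomial test against the 50% chance rate.

    from math import comb

    def one_sided_binomial_p(hits, trials, p0=0.5):
        """Exact probability of at least `hits` successes in `trials` trials
        if the true hit rate is p0 (the null hypothesis of pure chance)."""
        return sum(comb(trials, k) * p0**k * (1 - p0)**(trials - k)
                   for k in range(hits, trials + 1))

    # A constant 53% hit rate becomes "significant" once enough trials pile up,
    # but the absolute deviation from chance never grows beyond 3 points.
    for trials in (100, 400, 1000):
        hits = round(0.53 * trials)
        print(f"{trials:>5} trials at 53%: one-sided p = {one_sided_binomial_p(hits, trials):.3f}")

The p-value can be driven arbitrarily low just by running more trials, while the 3-point deviation stays exactly where it is; whether that deviation is bigger than the methodological noise of the field is a separate judgment entirely.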

You can also look at it this way – there are hundreds of ways to bias such studies and skew the results away from statistical neutrality. When you have effect sizes that are 20-30% or more, such biases should be easy to spot and eliminate from the protocol. But as the effect sizes get smaller and smaller, you get diminishing returns in terms of locating and eliminating all sources of bias. When you get down to only a few percent difference, it is essentially impossible to be confident that every source of subtle bias has been eliminated.

That is the reason for the third criterion, replication. It is less likely that the same sources of bias would be present when different researchers perform the same protocol independently. Therefore, the more a protocol has been replicated with similar results, the smaller an effect size we would take seriously. There is no magic formula either – but we can reasonably accept somewhat smaller effect sizes with replication. Even then, 1-3% is hard to ever take seriously in a psychology experiment. There can still be sources of bias inherent to the protocol, and history has shown these can be subtle and evade detection for years. And, it should be noted, even with large effect sizes, we still wait for replication before granting tentative probability to a new phenomenon.
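A rough way to see why independent replication carries so much weight: the probability that several independent studies would all cross the conventional p < 0.05 threshold by chance alone shrinks multiplicatively. (This little sketch assumes the studies are truly independent, so it says nothing about biases baked into the shared protocol, which is exactly the caveat above.)

    # Chance that k independent studies all yield a false positive at p < 0.05,
    # assuming true independence and a true null hypothesis.
    alpha = 0.05
    for k in range(1, 6):
        print(f"{k} independent positive studies by luck alone: {alpha**k:.2e}")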

Bem’s research so far has failed the replication criterion. There have been three completed attempts to replicate part of Bem’s research – all negative so far. Other studies are ongoing.

So at this time we have a series of studies with tiny effect sizes that have not been replicated, and in fact with negative replication so far. Regardless of the nature of the phenomenon under study, this is not impressive. It is preliminary at best and very far from the kind of evidence needed to conclude that a new phenomenon is probably real and deserves further research. If we add that the new phenomenon is also probably impossible, that puts the research into an even more critical context.

Evidence-Based vs Science-Based

Perhaps the best thing to come out of Bem’s research is an editorial to be printed with the studies – Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi by Eric-Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, & Han van der Maas from the University of Amsterdam. I urge you to read this paper in its entirety, and I am definitely adding this to my filing cabinet of seminal papers. They hit the nail absolutely on the head with their analysis.

Their primary point is this – when research finds positive results for an apparently impossible phenomenon, this is probably not telling us something new about the universe, but rather is probably telling us something very important about the limitations of our research methods.

This is a core bit of skeptical wisdom. It is supreme naivete and hubris to imagine that our research methods are so airtight that a tiny apparent effect in studies involving things as complex as people should result in rewriting major portions of our science textbooks. It is far more likely (and history bears this out) that there is something wrong with the research methods or the analysis of the data.

Wagenmakers and his coauthors explain this in technical detail. They write:

Here we discuss several limitations of Bem’s experiments on psi; in particular, we show that the data analysis was partly exploratory, and that one-sided p-values may overstate the statistical evidence against the null hypothesis. We reanalyze Bem’s data using a default Bayesian t-test and show that the evidence for psi is weak to nonexistent.

This sounds remarkably similar to what we have been saying over at Science-Based Medicine – and is related to the difference between evidence-based medicine and science-based medicine. In fact, this paper, if applied to medicine, would be a perfect SBM editorial. They expand their primary point, writing:

The most important flaws in the Bem experiments, discussed below in detail, are the following: (1) confusion between exploratory and confirmatory studies; (2) insufficient attention to the fact that the probability of the data given the hypothesis does not equal the probability of the hypothesis given the data (i.e., the fallacy of the transposed conditional); (3) application of a test that overstates the evidence against the null hypothesis, an unfortunate tendency that is exacerbated as the number of participants grows large.

The first point is critical, and one you probably have heard me make many times. Preliminary research is preliminary, and most of it turns out to be wrong. But entire fields have been based upon unreliable preliminary research.

The second point is a technical one regarding p-values, which are often misinterpreted as the chance that the phenomenon is real. This is not the case – the p-value is just the probability of seeing data at least as extreme as what was observed, assuming the null hypothesis is true. Relying on p-values tends to favor rejection of the null hypothesis (concluding the phenomenon is real). It is more appropriate, as we have explicitly argued on SBM, to use a Bayesian analysis – what is the probability of a phenomenon being true given this new data. This type of analysis takes into account the prior probability of a phenomenon being true, which means the more unlikely it is, the better the new data has to be in order to significantly change the likelihood of rejecting the null hypothesis. In other words – extraordinary claims require extraordinary evidence.
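To illustrate the Bayesian logic, here is a minimal sketch (again Python, standard library only). This is not the default Bayesian t-test that Wagenmakers and colleagues used – it is a much simpler binomial Bayes factor assuming a uniform prior on the hit rate under the alternative hypothesis, and the trial counts and prior probabilities are hypothetical – but it shows both halves of the argument: a “significant” p-value need not translate into a Bayes factor favoring the claim, and even a favorable Bayes factor barely moves a very low prior probability.

    from math import comb

    def bayes_factor_h1_vs_h0(hits, trials):
        """BF10: how much more probable the data are under H1 (unknown hit rate,
        uniform prior on [0, 1]) than under H0 (hit rate exactly 0.5)."""
        likelihood_h0 = comb(trials, hits) * 0.5**trials
        likelihood_h1 = 1.0 / (trials + 1)   # a uniform prior integrates to 1/(n+1)
        return likelihood_h1 / likelihood_h0

    def posterior_probability(prior, bf10):
        """Update a prior probability for the claim using the Bayes factor."""
        posterior_odds = (prior / (1 - prior)) * bf10
        return posterior_odds / (1 + posterior_odds)

    # A 53% hit rate over 1000 hypothetical trials: the one-sided p-value is
    # about 0.03, yet the Bayes factor is below 1; the data mildly favor the null.
    print(f"BF10 = {bayes_factor_h1_vs_h0(530, 1000):.2f}")

    # Even granting a Bayes factor of 10 in favor of the claim, a prior of one
    # in a million only rises to roughly one in a hundred thousand.
    for prior in (0.5, 1e-6):
        print(f"prior {prior:g} -> posterior {posterior_probability(prior, 10.0):.6f}")

The exact numbers depend on the prior placed on the alternative hypothesis, which is really the point – a Bayesian analysis forces the prior plausibility of the claim into the open rather than leaving it implicit.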

The third point is a technical one about statistical analysis. It is noteworthy that none of the peer-reviewers on the Bem studies were statisticians – a practice that perhaps journals need to correct.

Conclusion

In the final analysis, this new data from Bem is not convincing at all. It shows very small effect sizes, within the range of noise, and has not been replicated. Further, the statistical analysis used was biased in favor of finding significance, even for questionable data.

But examining Bem’s studies is a very useful exercise, and I am glad that the conversation has taken the direction it has. I am particularly happy with the Wagenmakers editorial, which is an endorsement of exactly what we have been arguing for in skeptical circles in general, and at Science-Based Medicine in particular.

It further demonstrates the utility for science and scientists in addressing fringe claims. The lessons to be learned from Bem’s research about the methodological limitations of research and how to interpret results apply to all of science, especially the social sciences and medicine (which also deal with people as the primary subjects of research). We can and should apply these lessons when dealing with acupuncture, EMDR therapy, and homeopathy – and even to mainstream practices within medicine and psychology.

Bem has unwittingly performed a useful set of experiments. By conducting careful research into claims that we can confidently conclude are impossible, he has exposed many aspects of the limitations of such research. He has also sparked a discussion of the purpose and effectiveness of peer-review.


58 Responses to “Bem’s Psi Research”

  1. Kawarthajon on 07 Jan 2011 at 11:01 am

    Steve, I have a question for you about replication. You seem to be saying that research should not be published unless the results are replicated. How will scientists know to replicate a given study, unless the research is published in a peer-reviewed journal? How do they find out about it?

  2. Steven Novella on 07 Jan 2011 at 11:06 am

    I did not exactly say that studies should not be published until they have been replicated. I acknowledged the dilemma, and proposed one solution.

    But – preliminary research can also be presented at meetings as abstracts and posters. Researchers can collaborate prior to publication.

    I also think it’s OK to publish preliminary research – it just needs to be clearly labeled as such. The bigger problem is the abuse of preliminary research as if it were confirmatory. This stems from the fact that in the minds of some, especially the lay public, “peer-reviewed” is one big category. If it’s published in the peer-reviewed literature, then it’s vetted reliable evidence – but this is clearly not true. Perhaps we need to make the distinctions more explicit.

  3. daedalus2u on 07 Jan 2011 at 11:34 am

    I agree with much of what you say. Statistics is not my forte, so I will leave statistical arguments to others. The issue of causality is not established in quantum mechanics. Local realism does seem to be not just “not proven” but actually precluded.

    http://en.wikipedia.org/wiki/Local_realism

    That is, quantum mechanics does not exhibit local realism.

    I agree that there is great danger that woo-masters will latch onto this and use it to defraud people, as they are already doing with other concepts from quantum mechanics and with concepts from every other aspect of science, economics, medicine, religion, and every other activity that humans do.

    I think the way to deal with fraud and misuse is not by trying to restrict the scope of science, but to hold people accountable for their fraud and misuse of everything. Unfortunately so much of human interaction is tied up with fraud, self-delusion and misuse.

    I think editors of journals could go a long way to defusing situations like this by explicitly stating what their criteria for publication are, that the “peer review” process does not guarantee the result is correct or not-flawed or not mistaken or not fraudulent, but that there are no glaring obvious flaws. And that every scientific paper is a communication from one scientist expert-in-the-field to another scientist expert-in-the-field, and non-scientists should not presume they understand it without becoming an expert-in-the-field themselves.

    It is the Dunning–Kruger effect. People who are not an expert-in-the-field don’t have the knowledge base to evaluate their expertise and so they overestimate how expert they are. What is especially unfortunate is that humans evaluate the expertise of others based on things like charisma, projected confidence, perceived sincerity. “The secret of success is sincerity. Once you can fake that you’ve got it made.” Jean Giraudoux

  4. Karl Withakay on 07 Jan 2011 at 11:35 am

    Psychic powers such as mind reading, thought projection, and remote sensing are implausible enough, but the various phenomena that appear to involve sending information back in time are a clear violation of general relativity.

    We are talking about an extraordinary claim in the extreme here, literally on the same order as homeopathy.

    Publication of such results should be approached from the perspective of “Help me figure out where I went wrong” rather than “Look what I’ve demonstrated!”

    It’s interesting that the experiment on predicting random pictures only worked with erotic pictures, which screams to me either random artifact or anomaly hunting, or that some psychic powers are only useful for finding porn. Makers of slot machines that use erotic images on the wheels should beware.

  5. Karl Withakay on 07 Jan 2011 at 12:08 pm

    The other equally implausible option other than a violation of causality would be a Mentat (or Milo from Fringe S3E3, “The Plateau”) like ability to extrapolate the future based on keen observation of all the various subtle input conditions.

    The “random” phenomenon is that the tests were generated by computers, which can only approximate randomness unless keyed off a quantum event.

    Bem is obviously credulous of psychic abilities:

    “What I showed was that unselected subjects could sense the erotic photos,” Dr. Bem said, “but my guess is that if you use more talented people, who are better at this, they could find any of the photos.”

    This is his guess, based on his belief psychic powers are real, and not on any scientific basis whatsoever.

  6. Scott Young on 07 Jan 2011 at 1:11 pm

    A problem with many controversial studies (as well as run-of-the-mill ones) that later are proven to be wrong or not replicated is that the review process failed in the first place. This can happen, obviously, for a number of reasons, including laziness or carelessness, inappropriate skills, or biases of reviewers pre-selected by the authors (facilitated by the journal’s review process). This “psi” study is making it to press because the reviewers did not correctly evaluate how and why the experiments were run, and in which order they were presented. One doesn’t even need to invoke small effects to toss this one…

  7. cwfong on 07 Jan 2011 at 1:29 pm

    Without even reading the material, someone who accepts that we live in an indeterminate universe would know to a virtual certainty that nothing is exactly determinable in advance, not even (or especially) the exact path of an electron, and know (or should know) that time in a choice making world goes only in one direction.
    These various psi concepts are essentially paradoxical. They evoke the exactitude of choice where choice would not exist if such exactitude were possible.

  8. Karl Withakay on 07 Jan 2011 at 2:30 pm

    Without getting into a discussion of free will…

    Some would argue that we live in a macro-deterministic universe and that it is generally only indeterminate on a quantum level, Schrödinger’s Cat notwithstanding.

    If one accepts that the universe is macro-deterministic, that doesn’t mean the future is precisely predictable in practice.

    Even a macro-deterministic universe can still be essentially chaotic and the number of variables involved too numerous (and mostly unseen) to make exactly determinable predictions essentially impossible in practice.

    The fact that the further out you try to predict the motion of the planets, the more inaccurate your prediction is doesn’t mean the motion of the planets isn’t deterministic.

    Also, of course, a determination/prediction of future events becomes part of the present factors leading to future events and should be accounted for in the prediction. That accounting then becomes an additional factor that should be accounted for in any prediction, and so on…

    The problem beyond the impracticality of being able to compile enough data and account for all the variables required to make an exactly accurate prediction of future events (and having the processing power to process the data) is that the observer/predictor is part of the system and affects the system by their actions.

  9. Jim Shaver on 07 Jan 2011 at 2:43 pm

    At the Committee for Skeptical Inquiry, James Alcock wrote a good article describing the many issues and flaws in Bem’s published research paper.

    http://www.csicop.org/specialarticles/show/back_from_the_future

    Bem responded to Alcock’s critique, and Alcock further responded to that response.

    http://www.csicop.org/specialarticles/show/response_to_alcocks_back_from_the_future_comments_on_bem

    http://www.csicop.org/specialarticles/show/response_to_bems_comments

    From Alcock’s second article:

    “For all his sound and fury, this seriously flawed set of experiments is still a seriously flawed set of experiments. If Bem wants the scientific world to pay attention to his claims of psi, he must first produce meaningful data from a well-designed, well-executed and well-analyzed experiment. Neither excuses for careless research, nor angry defences of it, will achieve this; he must simply do it right.”

    Reminds me of Shakespeare. “It is a tale told by an idiot, full of sound and fury, signifying nothing.”

  10. Karl Withakay on 07 Jan 2011 at 2:48 pm

    “Even a macro-deterministic universe can still be essentially chaotic and the number of variables involved too numerous (and mostly unseen) to make exactly determinable predictions essentially impossible in practice.”

    That should be

    “…the number of variables involved too numerous (and mostly unseen) to make exactly determinable predictions possible in practice.”

    or

    “…the number of variables involved too numerous (and mostly unseen), making exactly determinable predictions essentially impossible in practice.

  11. petrucio on 07 Jan 2011 at 2:49 pm

    Lack of statistical understanding is a huge problem, even amongst scientists, and especially amongst the media.

    Statistically significant only means that the effect seen is ‘unlikely’ to be caused by chance (the p-value says how unlikely; conventionally, a 5% chance of ‘luckiness’ is the common cutoff). With thousands of published studies, hundreds of false positives are expected.

    And statistically significant does not mean statistically important. The media frequently reports statistically significant studies saying that something raises the chances of cancer, without mentioning how big that raise is. You can have all the statistical certainty you want, but if that means a 0.5% increase in chance of ABC cancer from drinking XYZ, it makes no sense to go out of your way to stop drinking XYZ. The most important variable is often left out.

    But in this specific Psi study, no matter what the increased factor is, in my opinion, even if it is just 0.5%, it would be hugely important if it was a certain effect.

    But as around 5% of positive studies are actually false positives (maybe more considering publication bias), that’s another reason replication is so important, besides bias removal. Half a dozen studies replicating the same 1% results with a 5% chance of ‘luckiness’ have a 0.0000015625% chance of being all positive. We would need at least that much to start taking seriously such a game-changing hypothesis.

  12. sonic on 07 Jan 2011 at 3:02 pm

    The problem with statistical analysis is that one can often find a test that produces the answer desired.
    This is even true of Bayesian analysis (it’s all in the priors).
    If an effect is small, it becomes difficult to impossible to separate it from the noise.
    This is often used to say the effect doesn’t exist, which is also a mistake.
    Peer reviewed doesn’t mean correct- never has and never will. Since this will always be the case, I think it best to recognize that ‘peer reviewed’ does not mean correct and ‘not peer reviewed’ does not mean wrong.
    Those arguments are fallacious, however.

  13. cwfong on 07 Jan 2011 at 3:09 pm

    “…the number of variables involved too numerous (and mostly unseen), making exactly determinable predictions essentially impossible in practice.”

    There is an implicit assumption here that the future state of anything is nevertheless inevitable. Which, even if the macro-deterministic concept had any real usefulness, would not be the required case.

  14. Karl Withakay on 07 Jan 2011 at 3:47 pm

    “There is an implicit assumption here that the future state of anything is nevertheless inevitable. Which, even if the macro-deterministic concept had any real usefulness, would not be the required case.”

    That is exactly what I am implying with the concept of a macro-deterministic universe; barring the rare Schrödinger’s Cat, the precise future state of non-quantum phenomena is inevitable.

    Given an initial state and a set of input criteria for a closed system, the output is predetermined and inevitable without outside interference. I crank the handle, a marble rolls, and eventually the mouse trap closes. Where do you disagree with this position?

  15. Hubbub on 07 Jan 2011 at 4:23 pm

    Steve,

    Do you think you could get Wagenmakers for an interview?

  16. cwfong on 07 Jan 2011 at 4:47 pm

    You can’t bar the reoccurrence, albeit expected to be rare, of that hypothetical cat. And when you assign inevitability to the universe, you also assign what some would call its final cause.

  17. cwfong on 07 Jan 2011 at 5:02 pm

    IOW, macro determinism is (for some at least) the concept of an adequate determinism, where the future becomes the highly probable but falls short of the immutably designed inevitable.

  18. Karl Withakay on 07 Jan 2011 at 5:20 pm

    I didn’t mean to prohibit Schrödinger’s Cats or exclude their possibility, I meant that in their absence, the precise future state of non-quantum phenomena is inevitable, and that since the macro-universe is rarely affected by Schrödinger’s Cats, it is generally deterministic. Even with the cats, once that cat has either died or lived, the events from that point on are inevitable and deterministic until the next cat comes along.

    I’m also saying those cats are likely to not have much effect in the overall script of the universe post big bang (due to both their rarity and likely minimal impact when they do occur), meaning that if you could snapshot a copy of the universe and let the 2 copies play out, the differences between them over the following 1, 1000, 1000000, etc years, would likely be very minimal, if detectable at all.

    “And when you assign inevitability to the universe, you also assign what some would call its final cause.”

    That assertion sounds like an argument from consequence rather than a logical argument against inevitability in the universe.

  19. Karl Withakay on 07 Jan 2011 at 5:29 pm

    Just to be clear, at no point did I ever intend to imply a design anywhere, just a concept of consequence of state.

  20. Enzo on 07 Jan 2011 at 5:34 pm

    Good point made about the dilemma of publishing questionable papers in peer-reviewed journals.

    I read so many low-quality articles that I wish were never published. It bloats the field and clutters the path to evidence-driven progress. And, as you mention, it becomes available to be indiscriminately referenced.

    Before publication, papers are reviewed by relevant experts in the field. I just don’t see the use of published rubbish for legitimate scientists. The only “discussion” is typically to ridicule the paper. A lay person probably doesn’t even have access to a peer reviewed journal.

    If an editor thinks a topic might be interesting but lacks plausibility, then they can just cover the topic as a news item in the journal and link to the researcher’s webpage or something along those lines. There should be no referenceable paper put out on the subject.

  21. cwfong on 07 Jan 2011 at 6:30 pm

    Generally deterministic, or adequate, is of course what I was suggesting as logically antithetical to the psi hypotheses.

    BUT, as to those cats or no cats models, I disagree that “if you could snapshot a copy of the universe and let the 2 copies play out, the differences between them over the following 1, 1000, 1000000, etc years, would likely be very minimal, if detectable at all.”
    On the basis that causation is not lineal but comes at you from all directions, with exponential effect, the differences over time would arguably approach the maximal.

  22. tmac57 on 07 Jan 2011 at 9:39 pm

    You can call this section “Discussion and Controversies”, or something similar, and I imagine they would be popular sections for scientific journals.

    How about “Arts and Entertainment” or “People” just to give readers a feel for how seriously they should take such offerings?

  23. sonic on 07 Jan 2011 at 10:05 pm

    Schrödinger’s equation is deterministic and one quantum interpretation that is deterministic is the ‘many worlds’ interpretation. (everything that could happen does).
    Probabilities only enter into the situation when a ‘collapse’ hypothesis is forwarded.
    How many universes make up the multi-verse and how different they are is a subject of speculation (as currently there is no way to contact the other universes to find-out).

  24. cwfong on 07 Jan 2011 at 11:32 pm

    sonic, I’m going to have to differ with you here (unless in some way this is satirical or sardonic). Everything that could happen does also means that every accident that could happen happens, or that indeterminacy happens which is paradoxically predestined.

  25. cwfong on 08 Jan 2011 at 12:18 am

    And in the end everything that could happen does because everything that couldn’t happen doesn’t.

  26. BillyJoe7 on 08 Jan 2011 at 1:50 am

    cwfong,

    I know you have me on ignore (well, sort of – I’m referred to as twimc or something) and I know we (well, under your alias, artfulD) have had this discussion before and that you will know that my view is basically the same as Karl’s, but let me just answer this question since Karl hasn’t as yet (opportunistic I know, but kettle black and all that ;) ):

    BUT, as to those cats or no cats models, I disagree that “if you could snapshot a copy of the universe and let the 2 copies play out, the differences between them over the following 1, 1000, 1000000, etc years, would likely be very minimal, if detectable at all.”
    On the basis that causation is not lineal but comes at you from all directions, with exponential effect, the differences over time would arguably approach the maximal.

    This is “The Butterfly Effect”, no?
    If I decided to base my every decision on the tick of a Geiger counter, I guess my future will have become completely indeterminate (in theory as well as in practice). Also, and more fundamentally, the random decay of radioactive atoms could produce an element of indeterminacy at the macroscopic level by interacting in probabilistic time with other atoms. That this effect is not likely to be of much significance over even long time periods is attested to by the regularity we actually find in nature. By your assessment, I guess the planet Earth should have long ago veered off into the void between the galaxies. No?

  27. sonic on 08 Jan 2011 at 11:19 pm

    cwfong-
    No, I mean what I say-
    http://plato.stanford.edu/entries/qm-manyworlds/
    (Everything that could happen is described by the Schrödinger equation)
    While it seems this may be an extreme reaction to the notion of ‘randomness’, we must realize that the ‘collapse of the wave function’ is discontinuous mathematically speaking.
    Given the desire to make the universe explicable through mathematics, it might make sense to go this way.
    (Personally I prefer the Born interpretation, but it doesn’t bother me that the universe includes chance and freedom- in fact it seems the universe I’m living in does include those features.)

  28. sonic on 08 Jan 2011 at 11:57 pm

    An interesting take on this subject–

    http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/

    So the notion that ‘published in peer reviewed’ means ‘true’ is clearly wrong.
    I will remember this when someone brings out the ‘published in peer-reviewed’ argument.

  29. cwfong on 09 Jan 2011 at 12:13 am

    sonic,
    “Everything that could happen” is not deterministic, unless you’ve found that the only things that could ever happen would be predetermined. And I don’t get that at all from Schrödinger, he of the probability wave and the indeterminacy principle.

    What I meant to point out as well was that the phrase you used was tautological, as are mathematical truths in general. Except this isn’t even that. You might as well say, it is what it is.
    Or as some say in Australia, shite happens.

    And yet by your own reckoning you live in a universe where ongoing choice is the determinant factor.

  30. cwfong on 09 Jan 2011 at 1:14 pm

    sonic,
    And then there’s this from Wikipedia, which applies to some earlier commentary as well:
    “According to some,[citation needed] quantum mechanics is more strongly ordered than Classical Mechanics, because while Classical Mechanics is chaotic, quantum mechanics is not. For example, the classical problem of three bodies under a force such as gravity is not integrable, while the quantum mechanical three body problem is tractable and integrable, using the Faddeev Equations. This does not mean that quantum mechanics describes the world as more deterministic, unless one already considers the wave function to be the true reality. Even so, this does not get rid of the probabilities, because we can’t do anything without using classical descriptions, but it assigns the probabilities to the classical approximation, rather than to the quantum reality.
    Asserting that quantum mechanics is deterministic by treating the wave function itself as reality implies a single wave function for the entire universe, starting at the origin of the universe. Such a “wave function of everything” would carry the probabilities of not just the world we know, but every other possible world that could have evolved. For example, large voids in the distributions of galaxies are believed by many cosmologists to have originated in quantum fluctuations during the big bang. (See cosmic inflation and primordial fluctuations.) If so, the “wave function of everything” would carry the possibility that the region where our Milky Way galaxy is located could have been a void and the Earth never existed at all. (See large-scale structure of the cosmos.)”

  31. BillyJoe7 on 09 Jan 2011 at 8:22 pm

    sonic,

    “While it seems this [many worlds interpretation of Quantum Mechanics] may be an extreme reaction to the notion of ‘randomness’…”

    As the article says, the MWI interpretation is an attempt to remove randomness and “action at a distance”. It is probably the most popular interpretation amongst quantum physicists.

    ——————————–

    cwfong,

    ““Everything that could happen” is not deterministic…”

    If everything that could happen does happen (the MWI), surely we have excluded randomness. The “many worlds” evolve deterministically. Otherwise point to where randomness enters the picture.

    “Everything that could happen does [happen] also means that every accident that could happen happens, or that indeterminacy happens which is paradoxically predestined.”

    I’m not sure we can apply the Gödelian nightmare here. When we say “everything that could happen does happen” we mean that every possible outcome of the probability wave actually exists in some parallel world. Whether it is true or not, it is a successful way to overcome randomness (and “action at a distance”). It seems bizarre to pedantically interpret that phrase as meaning that the very thing that you’re excluding with this interpretation is necessarily included.

  32. BillyJoe7 on 09 Jan 2011 at 8:25 pm

    cwfong,

    “Or as some say in Australia, shite happens.”

    The origin of this phrase, according to Wikipedia, is unknown but was first put into print by an American in 1983. It was obviously in general use long before this time as the author was writing about slang. Nevertheless it is a popular phrase in Australia although I have never used the term myself.

  33. cwfong on 09 Jan 2011 at 8:31 pm

    http://dilbert.com/strips/comic/2011-01-09/

  34. BillyJoe7 on 09 Jan 2011 at 8:44 pm

    sonic,

    “…it doesn’t bother me that the universe includes chance and freedom- in fact it seems the universe I’m living in does include those features”

    It does “seem” as if the universe includes freedom but that is just an illusion produced by our brain. Free will, in fact, could not work as an evolutionary “good trick” because it does not produce optimal outcomes. The mechanism that evolved to solve problems posed by the environment is one that depends on inputs from the environment producing appropriate reactions in the brain and hence appropriate outputs by the body controlled by that brain. There is not even any basis for free will. What does it consist of? It can’t be based on those inputs, otherwise it’s not free. If the “free will” agent is aware of the inputs and the appropriate response but then chooses randomly whether to follow the appropriate response or to veto it, how is that freewill? It is “free” in the sense of being random, but how is it “will” in the sense of determining what to do? You might as well toss a coin.

    Maybe that’s what you meant though.

  35. BillyJoe7 on 09 Jan 2011 at 8:51 pm

    cwfong,
    http://dilbert.com/strips/comic/2011-01-09/

    Well I have to disagree with Dilbert.

    :D

    Clarity is not what makes people angry. It is obfuscation.
    (Not guilty of that yet in this thread but I’m sure I can hold my breath :D )

  36. sonic on 09 Jan 2011 at 10:05 pm

    BillyJoe7-
    It is possible to set-up an experiment in which one has a choice as to whether an electron will act as a wave (demonstrate interference) or as a particle (no interference).
    The outcome of these experiments depends on the set-up of the experiment and the choice made by the experimenter.
    As of today, there is no known mathematics that accurately predicts the choice of the experimenter. Perhaps this is unfortunate- after all, if there were then we could solve the equation for all future choices made by experimenters and would then be able to know all future scientific results. As we don’t, I would suggest that the notion that these choices are not free is not based on evidence, but rather a statement of premise.
    But it is not logical to prove a statement by restating a premise.

    I don’t know if ‘free-will’ evolved or not, but I am sure that optimal outcomes are not required by current evolutionary theory. Quite the contrary.

    The basis of free will is of course experience- a short read for you–

    http://nobelprize.org/nobel_prizes/physics/laureates/1954/born-speech.html

    cwfong-
    Read the article I linked to.

  37. cwfong on 09 Jan 2011 at 11:17 pm

    sonic, I read the article about Born and the 1954 Nobel prize.

    Notably this quote from his speech:
    “A philosophy in which the notions of chance and freedom are fundamental seems to me preferable to the almost inhuman determinism of the previous epoch – but that is no scientific argument.”

    I’d say it’s close to both. It’s a testable hypothesis that chance allows the evolution of the choice making function, and the null hypothesis that an optional choice mechanism would evolve without prior possibility of chance and prove the universe itself was an inhuman God.

  38. BillyJoe7 on 10 Jan 2011 at 2:19 am

    sonic,

    “I don’t know if ‘free-will’ evolved or not, but I am sure that optimal outcomes are not required by current evolutionary theory. Quite the contrary.”

    Of course I meant optimising the outcome or improving the outcome towards the optimal.

    ” I would suggest that the notion that these choices are not free is not based on evidence, but rather a statement of premise.
    But it is not logical to prove a statement by restating a premise.”

    I was providing logical arguments against freewill.

    Firstly, if freewill is free, it cannot be based on inputs into the brain from the environment (ie information). Otherwise it’s not free (if you have a different concept of free I would be happy to consider it). But, if freewill is free of any inputs from the environment (free of information), then all we have left to base the will on is something analogous to a coin flip. But, how can a coin flip be the basis for will? Again, if you have a different concept of will, I will be happy to consider it.

    Secondly, I was providing an argument based on evolution. Brains evolved as a survival mechanism. The idea being that, in order for organisms to survive changes in the environment (which includes especially the behaviour of competitors), brains evolved to take inputs from the environment, process them, and provide an appropriate output. This ensures that the output is matched to the input (that the organism flees predators; that the organism seeks prey). If freewill is simply a random veto of the appropriate output provided by the brain, the chance that survival is improved every time freewill is invoked freewill is about 50%. Brains have surely required better odds than that to have survived (random vetoing of the brain’s decision to flee a predator is not likely to be conducive to long term survival).

  39. BillyJoe7 on 10 Jan 2011 at 2:22 am

    ….strike out the second mention of freewill in the second last sentence

  40. davidsmith on 10 Jan 2011 at 12:49 pm

    Steve said,

    that would imply that the future can influence the past – a reversal of the arrow of cause and effect. This is, by everything we currently know about physics and the way the universe works, impossible.

    Not quite. Some physicists have been theorising and experimenting with time-reverse effects, see – http://discovermagazine.com/2010/apr/01-back-from-the-future/article_view?searchterm=Tollaksen&b_start:int=0

    The effect sizes are tiny.

    Cohen’s d values are reported in Table 7 of Bem’s paper. They range from 0.09 to 0.42. The largest effect size was found in the Facilitation of Recall II experiment. It’s worth noting that a Cohen’s d value of between 0.1 and 0.3 is considered a ‘small’ effect whereas one about 0.5 is ‘medium’. So the effect sizes range from small to approaching medium sized. I wouldn’t call them ‘tiny’.

    For example, in the word test described above subjects were correct 53% of the time, when 50% is predicted by chance.

    This is actually the result for “Experiment 1” of Bem’s paper, not the facilitation of recall experiment (of which there were two).

    Bem’s research so far has failed the replication criterion. There have been three completed attempts to replicate part of Bem’s research – all negative so far.

    Actually, there has been one successful replication:

    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1715954

  41. Steven Novella on 10 Jan 2011 at 2:18 pm

    David,

    There is no consensus interpretation of the quantum effects being discussed – they do not necessarily involve reverse causality. Further, it is questionable if such quantum experiments have any implications for the behavior of macroscopic objects, like people.

    In short, they do not change at all the fact that the kind of reverse causality Bem is talking about is probably impossible.

    I maintain that the effect sizes are small and “tiny” is appropriate. You have to put this into the context of what is being measured – bottom line, they are all small enough to be in the noise.

    I was not aware of the fourth paper, I will take a close look at it. But at first glance, looks like barely significant results (0.02) and “tiny” effect size of 53.3% again. Also, the “boredom” result was not significant, and it does not look like they adjusted the stats for multiple analysis.

  42. cwfong on 10 Jan 2011 at 2:23 pm

    The present anticipates the future. The future can’t anticipate the present. Causation is sequential, and the web is not reversible sequentially.

  43. cwfong on 10 Jan 2011 at 2:52 pm

    Speaking of anticipating the future, consider the work to be done here:
    Special Interest Group in Anticipatory Systems
    BISC (Berkeley Initiative in Soft Computing)
    University of California, Berkeley (USA)
    http://www.anticipation.info/

  44. sonic on 10 Jan 2011 at 3:44 pm

    Regarding the possibility of backwards causation–
    http://arxiv.org/pdf/quant-ph/0610241v1

    Even Feynman uses the notion-
    “Feynman, and earlier Stueckelberg, proposed an interpretation of the positron as an electron moving backward in time…”
    http://en.wikipedia.org/wiki/Retrocausality

    Then there is the ‘transactional interpretation’-
    http://en.wikipedia.org/wiki/Transactional_interpretation

    If it is true that an electron can effect thought, then it is true that a positron could as well…

  45. cwfong on 10 Jan 2011 at 4:35 pm

    sonic,
    It’s not ‘backwards causation,’ it’s the theoretical ability for a change at that level to reverse itself. But the previous event has not been sequentially eliminated. The electron has not seen the future and returned to tell us what it saw.
    .
    Certain laws of physics have been said to be symmetric under time reversal. But some physicists have found that the symmetry is theoretical and in “real time” the symmetry is not exact, and the concept of time symmetry fails. (See Physicist David Albert for some commentary on this subject.)

    And then when you get into electrons effecting thought, you really have a problem, because thoughts in particular have no meaning without their sequential structure. Reverse the sequence of any electrons involved and thought self destructs.

  46. sonic on 10 Jan 2011 at 4:44 pm

    BillyJoe7-
    Freewill would not, by definition, be determined by inputs from the environment. This is not to say that this ‘will’ could not be informed by the environment. One could act on the input or not. Certainly this is a normal experience. Of course one could choose to ignore the environmental inputs – again a common experience.
    So I would disagree that ‘free’ implies ‘free of any inputs’.
    Your argument based on evolution is based on a definition of freedom that is quite different from what I mean.
    As an aside- I am not convinced that anything about what people do is based on some ‘appropriate output provided by the brain’.

  47. BillyJoe7 on 11 Jan 2011 at 9:54 pm

    sonic,

    Sorry, you’ve probably given up on this thread by now but I was unable to access the blog yesterday (it simply refused to load!)

    “Freewill would not, by definition, be determined by inputs from the environment. This is not to say that this ‘will’ could not be informed by the environment. ”

    Okay, I can accept that. But…

    “One could act on the input or not. Certainly this is a normal experience. Of course one could choose to ignore the environmental inputs – again a common experience.”

    Do you mean like tossing a coin – heads I act, tails I don’t? If so, I suggest that is a strange kind of freewill (with the emphasis on the will part). If not, then I suggest decisions must be based on something – and, let me suggest, that that something is the processing the brain has done with the input from the environment, together with inputs from other centres in the brain (eg memory, emotional centres). Let me suggest that there is nothing other than the brain processing inputs and producing outputs. No overlooker scrutinising the brain’s output and making secondary decisions (?deciding whether or not to veto the brain’s decisions)

    “Your argument based on evolution is based on a definition of freedom that is quite different from what I mean.”

    So what is your definition?

    “As an aside- I am not convinced that anything about what people do is based on some ‘appropriate output provided by the brain’.”

    I am saying that the brain processes the input from the environment together with inputs from specialised centres within the brain and then produces an output in the form of thoughts, and/or speech and/or motor action.
    I’m saying that freewill makes no sense from the evolutionary point of view in the form of either a coin flip or something independent of the brain which scrutinises the brain’s output.
    I’m saying there is no role for freewill to play.

  48. SimonW on 12 Jan 2011 at 7:22 am

    “It is noteworthy that none of the peer-reviewers on the Bem studies were statisticians”

    Daft question, but on a phenomenon like this I’d have thought this was essential. What did qualify the peer reviewers?

    I think any significant paper making marginal statistical claims (so many drug studies) should have review by a statistician. Probably not needed where the significance of result is very obvious.

    Even with a statistician occasionally a freaky result of a study will happen – chance is like that. I’m guessing that if the failed reproduction studies are included then the significance disappears already?

  49. davidsmith on 12 Jan 2011 at 8:17 am

    Steve said,

    There is no consensus interpretation of the quantum effects being discussed – they do not necessarily involve reverse causality.

    I agree. The very fact that there exists a variety of interpretations of that particular series of physics experiments should caution us against asserting that reverse-causality is impossible. Also consider the fact that the AAAS held a symposium on retrocausation in 2006 (see here for a brief news report – http://legacy.signonsandiego.com/news/science/20060622-9999-lz1c22cause.html )

    Further, it is questionable if such quantum experiments have any implications for the behavior of macroscopic objects, like people.

    I agree. However, I got the impression that, not long ago, scientists were pooh-poohing the possibility of relatively long lasting coherent states in anything larger than subatomic particles. Yet now we have papers arguing the case for quantum coherence in the photosynthetic apparatus. I guess that time will tell whether quantum effects could theoretically occur during activity of the brain/mind and be functional at the same time. Every time I read a popular science article about QM, the message is – we have so much yet to learn.

    I maintain that the effect sizes are small and “tiny” is appropriate. You have to put this into the context of what is being measured – bottom line, they are all small enough to be in the noise.

    The forward priming effect size found in Bem’s Experiment 3 had a Cohen’s d value of 0.45, the ‘retro-causal’ effect size found in Experiment 9 was 0.42 and the mean ‘retro-causal’ effect size reported for the high stimulus seekers across all experiments was 0.43. So, the article actually reports some psi effect sizes that are about as large as those reported for a standard, undoubtedly real psychological effect (forward priming).

    But at first glance, looks like barely significant results (0.02) and “tiny” effect size of 53.3% again.

    Significant nevertheless. That would, by usual standards, qualify it as a replication. As for the ‘tiny’ effect size, the paper does not report SD’s so we can’t estimate Cohen’s d for comparison but the effect appears to be approximately the same as that reported in Bem’s paper. This is what you would expect given that the paper is an attempt at replication. One should also take on board your previous point that the more a protocol has been replicated with similar results, the smaller an effect size we should take seriously.

    Also, the “boredom” result was not significant,

    This was likely a power issue since the original Bem study used 200 participants and also failed to reach significance over all sessions.

    it does not look like they adjusted the stats for multiple analysis.

    I’m not sure that controlling for familywise error rate would be necessary here since this was a confirmatory study and the comparisons between the ‘high arousal’ and ‘boring’ stimuli don’t need to be jointly accurate.

    Ideally, that study would have used a larger sample.

  50. Steven Novella on 12 Jan 2011 at 9:06 am

    Physicists are extending the range of conditions in which quantum effects are observable – but so far they have to use extreme experimental conditions (like close to absolute zero, for example) in order to tease out certain kinds of quantum effects in medium-sized molecules. I think the record right now is the tiny tuning fork experiment which showed superposition.

    But this is still so far from macroscopic objects and normal temperatures and environmental interactions that they do not imply that we can extrapolate up to people. The de Broglie wavelength and decoherence, I think, have something to say about this.

    Bottom line – these quantum effects do not add uncertainty to the statement that information cannot travel back in time to reverse causality.

    One weak replication, and three negative replications, do not add up to a replicable study protocol. If Bem is correct then the results should consistently appear in a standardized experiment – that is what we mean by replicated. One out of three is not consistent. And further we need to dig down to the quality control of attempted replications – what is the relationship between the quality of the study and the size of any effect.

    We are still left with extremely thin evidence for a highly implausible claim. This adds up to – almost certainly wrong.

  51. davidsmith on 12 Jan 2011 at 9:54 am

    Steve said,

    Physicists are extending the range of conditions in which quantum effects are observable – but so far they have to use extreme experimental conditions (like close to absolute zero, for example) in order to tease out certain kinds of quantum effects in medium-sized molecules.

    Are you aware of the research described here? – http://www.wired.com/wiredscience/2010/02/quantum-photosynthesis/#

    Here’s a quote from the article:

    *******************

    “Two years ago, researchers led by then-University of California at Berkeley chemist Greg Engel found coherence in the antenna proteins of green sulfur bacteria. But their observations were made at temperatures below minus 300 degrees Fahrenheit, useful for slowing ultrafast quantum activities but leaving open the question of whether coherence operates in everyday conditions.

    The Nature findings, made at room temperature in common marine algae, show that it does.”

    *********************

    Talk of ‘quantum biology’ no less.

    Bottom line – these quantum effects do not add uncertainty to the statement that information cannot travel back in time to reverse causality.

    Bottom line – articles like the one above should warn us about accepting the type of claims you are making at face value!

    One weak replication, and three negative replications, do not add up to a replicable study protocol. If Bem is correct then the results should consistently appear in a standardized experiment

    Unfortunately, all of the replication attempts I’m aware of (including the one I gave) were not direct replications. For example, the failed replication attempt of the Facilitation of Recall experiment was an online study – obviously not controlling for participants attention to the task. Richard Wiseman, Chris French and Stuart Richie (student at Edinburgh Uni) are attempting a direct replication of the FoR experiment which should be interesting either way!

  52. sonic on 12 Jan 2011 at 3:15 pm

    BillyJoe7-
    First off I hope you are high and dry and that all is well. Some excitement down under- no? (I had trouble loading the other day too…)

    On topic- I would suggest “Consciousness Explained” by Dennett is a good book on this.
    But we must understand that Dennett starts with a premise (materialism) and makes the best case he possibly can from there. I believe he does an admirable job.
    The fact that he concludes he is a fictional character is not surprising if you have considered these issues. It might give one pause in accepting the premise, however.
    Of course I guess that would all depend on previous inputs…

  53. BillyJoe7 on 13 Jan 2011 at 7:19 pm

    sonic,

    “I hope you are high and dry and that all is well.”

    The floods are in the north-east (Queensland) and we live in south-east (Victoria) so we are quite safe, thanks. In any case, the entire adjoining township of Lilydale would have to be submerged before we in Mooroolbark would see any effect.

    Apparently it is part of the La Nina effect which commenced in May 2010 and is expected to last till May 2011, after which the El Nino effect will again predominate and bring back the droughts and fires. AGW predicts that these extreme weather events are likely to increase with time.

    We have actually just emerged out of a 10 year drought which saw the worst fires in the state’s history in Sept 2009 in which 180 people died, 3,500 houses destroyed, and 1 million acres of bushland burned. Since then, the state has been subjected to episodes of minor flooding from time to time but nothing life threatening as in Queensland at the moment. It’s unheard of for our water tanks to be full in mid Summer but that is what we have at the moment.

    …oops, Sorry for the derail

  54. BillyJoe7 on 13 Jan 2011 at 7:28 pm

    sonic,

    “On topic- I would suggest “Consciousness Explained” by Dennett is a good book on this.”

    I read it some years ago.

    “But we must understand that Dennett starts with a premise (materialism) and makes the best case he possibly can from there. I believe he does an admirable job.”

    Likewise.
    Dennett is a philosopher and he likes his philosophy to be based in science. Hence the materialist perspective.

    “The fact that he concludes he is a fictional character is not surprising if you have considered these issues. It might give one pause in accepting the premise, however.”

    Not when you realise how real illusions can seem. The alternative to the illusion of self and freewill is dualism, and there is as yet no evidence for a mind separate from and controlling the brain.

  55. BillyJoe7 on 13 Jan 2011 at 11:21 pm

    Apparent “backwards in time causation” demonstrated.

    An apparent “backwards in time causation” is demonstrated in the delayed choice double slit experiment:
    (This is a necessarily simplified description)

    In the original double slit experiment, sensors are placed at the slits to detect the presence of the photon. If the sensors are switched off, you don’t know which slit the photon passed through and, therefore, you see the usual interference pattern. If the sensors are then switched on, you do know which slit the photon passed through and, therefore, you see a scatter pattern on the screen.

    In the delayed choice double slit experiment, the detectors are switched on – so that you see a scatter pattern on the screen. Then a device that is capable of erasing the information obtained by these detectors is placed between the double slit and the screen. If the device is switched off, nothing really changes and therefore the scatter pattern remains. If the device is switched on, the information obtained by the detectors is erased and therefore we again see the interference pattern on the screen. So far, no problem.

    Finally, and here is where we see the apparent “backwards in time causation”: If you decide to switch the device on but delay that decision until after the photon has already passed the double slit, you will still get the interference pattern!

    In other words, it seems as if your decision to turn on the device acts backwards in time to determine what the photon does at the slits.

  56. sonic on 14 Jan 2011 at 2:32 am

    BillyJoe7-
    Good to hear all well.
    I’m still dizzy from the ‘delayed choice quantum eraser’.
    I mean- WARNING- I’m still dizzy…

  57. BillyJoe7 on 14 Jan 2011 at 11:33 pm

    I was trying to make it easy to understand.
    You should see Wikipedia’s description!

    http://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser

  58. banyan on 06 Apr 2011 at 9:37 am

    So I realize this article is old now, and probably no one will see this comment, but today’s xkcd is directly on point, so I had to tack it on here:

    http://www.xkcd.com/882/

    Oh, I mean it’s related to the whole misinterpreting p value thing, it has nothing to do with the above flame war in the comments.
