May 09 2008

Brainwave Entrainment – A Response from Transparent Corp.

Earlier this week I wrote about the marketing of devices for brainwave entrainment for therapeutic use, concluding that these devices and the claims made for them are pseudoscientific. In response to my blog post I received the following e-mail:

Dear Dr. Novella,

I am the director of research at Transparent Corporation, which is the developer of Neuro-Programmer, and was disappointed to read your blog entitled “Brainwave Entrainment and Pseudoscience”.

I fully acknowledge that peer reviewed research on brainwave entrainment is hard to locate, and it is one of the biggest hindrances to the field. The greatest barrier to finding this research is the lack of consistency in terminology used to describe brainwave entrainment. In fact, the term “brainwave entrainment” appears to have been invented by those in the industry, rather than those who have published on the subject. In the last year, I wrote an article entitled “A comprehensive review of the psychological effects of brainwave entrainment” which has been accepted in Alternative Therapies in Health and Medicine and I’ve been told will be published this summer. I’ve attached a copy of the article I submitted, but in deference to the journal, I would like to ask you not to distribute this article. This is the first review article that will show such a comprehensive review of peer reviewed research on the effects of brainwave entrainment on psychological outcomes. I found 21 studies that met our basic criteria by using a long list of search terms. Many of these terms, such as photic stimulation or auditory stimulation, are general terms that can include brainwave entrainment, so I had to search through thousands of studies that were not relevant to my subject of interest. You can see the procedure I used in my methods and figure 1. You will note that I did not use PubMed, as I was told by the librarian at Tufts University that Ovid searches are more extensive than PubMed, and include those from PubMed. A number of the articles are from the Journal of Neurotherapy which can be found in the PsycINFO database.

I acknowledge that there is a huge range in the quality of these studies. While I realize that a controlled study, or even better a double blind controlled study is the gold standard, it is only easy to do with pharmaceuticals. A couple of the popular methods of control include comparing entrainment to music only or just wearing headphones or LED glasses, but the user will almost certainly be able to tell that they are not in the subject group. Another method is to compare 2 separate entrainment frequencies, but each will have its own effects, and thus is less than ideal. On the other hand, I agree that the field should continue to work towards establishing better controls.

In 2006, there was a Brainwave Entrainment Conference at Stanford, and there I realized how little most researchers in the field knew about the extent of studies in the field. Thus finding funding and support for their research has been difficult. While the field has been around for over 100 yrs (the first report of brainwave entrainment was in the late 1800’s), it still has a long way to go. My hope in writing my review article was to adequately describe the current state of the field, so that scientists might gain an interest in the further pursuit of research, and to hopefully increase their prospects for gaining funding to do this important work. Only with adequate funding can we have large scale studies.

Given my research and experience with the field, I do believe that entrainment does have potential to affect a broad range of applications. While most positive findings have been found for various aspects of cognitive functioning, several studies have shown that entrainment can help relieve pain and headaches. There are numerous uses by the industry for other purposes as well, such as relaxation, meditation, sleep and depression. While the peer reviewed research may not yet support those claims (I didn’t look at meditation in my review article), given the infinite number of potential protocols, it may be that studies using better protocols have just not been either scientifically tested, or have not been subjected to peer review. From what I have seen by reports of users and non-peer reviewed research, I am optimistic that brainwave entrainment has the capacity to be effective for a broad range of applications. However, only time and more research will tell.

The studies found in my review article do show long term positive consequences of entrainment with repeated use. We believe that the effects are achieved a lot like learning. Repeated exposure to a stimulus eventually changes the brain to think or work differently. As you know, showing evidence of rewiring of the brain in humans is not possible, but the use of psychological testing before and after do suggest that changes are happening. Studies to determine how long the effects last after cessation of use remain to be done, but the study by Budzynski et al. does show that the positive effects of entrainment extended a school quarter beyond entrainment.

With regards to our website, I have discussed with my boss your concerns about some of the marketing language. Our immediate priority will be changing the website and to take out anything that could be misconstrued. We are also going to highlight the articles that are from peer reviewed sources.

I hope you will read the attached article, and consider updating your blog or adding an addendum to it. We have a very good reputation in the brainwave entrainment community as our intention and primary goal is to provide effective technological solutions to mental health with affordable tools.

Let me know if there are any questions that I can answer for you.


Tina L. Huang, Ph.D.
Director of Research
Transparent Corporation

I appreciate Tina taking the time to read my blog and respond, and she certainly sounds very sincere and straight-forward. However, I disagree with her interpretation of the research, and I think her analysis is very revealing of common errors pervasive on the edges of mainstream medicine.

Reviewing the Literature

Tina is correct in that my searching for articles on brainwave entrainment was not thorough. I did search beyond these terms, and as stated also searched on the names of the authors referenced in the Neuroprogrammer website, but did not do an exhaustive search. That is why I am always careful to say “I did not find any published research” rather than “there is no published research.” As Tina indicated, thorough reviews of the literature are tedious and take many hours of work – which is simply not possible for me to do for a daily blog. I therefore rely upon published reviews – taking advantage of the work of others, which is necessary in any scientific field. I also use more basic searches as an indicator for what is out there. The absence of published studies in listed peer-reviewed journals that deal directly with the claims being made is usually a pretty good indicator.

But now, thanks to Tina, I do have a review article to show me more thoroughly what research has been done. Because the article has not yet been published, and at Tina’s request, I will not reprint the article here. But I did read it and follow up on some of the references, and so will incorporate my impressions into my responses to Tina’s specific points.

Marketing Hype

Tina Wrote: “With regards to our website, I have discussed with my boss your concerns about some of the marketing language. Our immediate priority will be changing the website and to take out anything that could be misconstrued.”

In my previous blog entry I acknowledged that some of the hype on the Neuroprogrammer website was standard marketing fare – but the standards for medical devices, or anything with health claims, need to be higher than those for dishwashing detergent. I am glad she acknowledges there are problems with the website, and I realize that within the same company the development team (whether scientists, engineers, or programmers) is often not on the same page as the marketing team, who may take it upon themselves to go way beyond the evidence or what the product will do. I have heard developers express such annoyance at their company’s marketers. But this is still no excuse for the company. It is their responsibility to make sure they are not making health claims for their product that are not supported by adequate evidence. This was, and remains, my primary criticism of this specific company.

Clinical Research

Tina wrote: “While I realize that a controlled study, or even better a double blind controlled study is the gold standard, it is only easy to do with pharmaceuticals.”

Too often I have heard this as an excuse for the lack of properly designed clinical trials – whether it’s homeopathy, acupuncture, or therapeutic touch. This is simply not an adequate defense for the lack of reliable clinical evidence. Even if this were true (and I think it isn’t) it does not justify making claims that have not been adequately demonstrated.

Double-blind placebo controlled studies are not limited to pharmaceuticals. This is an absurd double-standard. It may be more difficult, and require some creativity – but well designed studies for non-pharmacological treatments are possible. For example, with brain wave entrainment a control group could be subjected to interventions that require the same amount of time and attention and similar tasks but that are not designed to produce any entrainment. This would be like doing sham acupuncture. Further, whatever mechanism is being used to assess the outcome could be blinded to which treatment group the subjects were in, and the subjects themselves would not have to know if they were getting the treatment or the sham treatment. Many studies also incorporate methods to assess the success of blinding. The simplest way to do this is to give subjects a questionnaire in which they are asked – do you believe you received the treatment or the sham treatment and why, or do you not know?

Tina further wrote: “The studies found in my review article do show long term positive consequences of entrainment with repeated use.”

Here is where we simply disagree, primarily because Tina is being too generous in her interpretation of the literature. I reviewed the studies she lists in her review article, and they all suffer from serious flaws or limitations.

1 – The primary limitation of all the studies is that the number of subjects was very small, ranging from just a few to 40-50. Such small sample sizes make the results uninterpretable, or preliminary at best.

2 – Inadequate control group and blinding. This is critical for these types of interventions because there is a huge effort and attention component to cognitive performance. There is basically no way to know from these studies if we are seeing a non-specific effect from being studied, or a specific effect from brain wave entrainment.

3 – The studies generally did not adequately control for a learning effect when doing serial assessments. For example, if you give a study subject a test of some cognitive ability, then give them an intervention, then give them the test again there will be a trend toward better performance because there is a learning effect for the test itself. The second or third time the subject does the test they will do better even without any intervention. This should be controlled for by giving subjects the cognitive test several times to establish a baseline, then introducing the intervention and doing follow up testing.

4 – Like any review of published studies, there is no way to completely account for the file-drawer effect. The results of the studies in Tina’s review were generally mixed – some positive and some negative – but she was impressed by the preponderance of positive studies. However, the file-drawer effect (the tendency to publish positive studies and not negative studies) could explain this preponderance. This is partly why reviews and meta-analyses are not a substitute for large, well-designed, definitive clinical trials – which are lacking for any of the therapeutic claims made for brain wave entrainment.
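Points 1 and 4 compound each other, and the combination is easy to demonstrate with a quick Monte Carlo sketch (these are hypothetical simulated studies, not data from any of the papers in Tina’s review): even when the true effect is exactly zero, a steady trickle of small studies will cross the p < 0.05 threshold by chance, and if those are preferentially the ones that get published, the resulting literature shows a preponderance of positive findings with inflated effect sizes.

```python
import random
import statistics

def simulate_study(rng, n=10, true_effect=0.0):
    # Two arms drawn from normal distributions; true_effect is the real
    # difference in means (0.0 = the intervention does nothing at all).
    control = [rng.gauss(0.0, 1.0) for _ in range(n)]
    treated = [rng.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    pooled_sd = ((statistics.variance(control) + statistics.variance(treated)) / 2) ** 0.5
    t = diff / (pooled_sd * (2 / n) ** 0.5)
    return diff, abs(t) > 2.101  # two-sided critical t for p < 0.05, df = 18

rng = random.Random(42)
results = [simulate_study(rng) for _ in range(5000)]
# The "file drawer": only the statistically significant studies get published.
published = [diff for diff, significant in results if significant]

print(f"'Positive' studies despite a true effect of zero: {len(published) / 5000:.1%}")
print(f"Mean |effect| in the published subset: "
      f"{statistics.mean(abs(d) for d in published):.2f}")
```

About 5% of the null studies come out “positive” by construction, and the published subset systematically overstates the effect – which is why a review that counts positive small studies cannot substitute for one large, well-designed trial.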


In my opinion, Tina’s optimism is not warranted. The plausibility of therapeutic or performance claims for brain wave entrainment is very low. There is no established mechanism for a specific effect for entrainment, and the explanations given by Tina and the Neuroprogrammer website are vague and unconvincing. They really don’t provide a mechanism at all – they just make vague statements about learning or training the brain, or they discuss the mechanisms of entrainment but not how that can relate to any improved performance.

Without a plausible mechanism the threshold for clinical evidence to show that there is actually an effect is high. But so far the clinical evidence does not cross even a low bar for acceptance. Research is preliminary and flawed at best. In my opinion, all of the evidence is perfectly compatible with the null hypothesis – that there is no specific effect here (I keep saying “specific” effect because there is likely a non-specific effect from doing any mental activity). With low plausibility and weak evidence, the best scientific conclusion we can reach at this time is that brain wave entrainment is probably not useful for any therapeutic purpose.

But – entrainment (unlike homeopathy, for example) is not magic. It is not so implausible that there is no possibility of a specific effect (although I admit I would be surprised if this turns out to be the case), and so I am willing to be convinced if the evidence warrants. What proponents should do – rather than marketing devices with outrageous clinical claims and hyperbole – is do better research. They need to think more about what the mechanism of benefit could be and then investigate those hypotheses. They also need to design and carry out large, well-designed clinical trials. They need to prove that there is a real effect from brain wave entrainment.

If our regulatory laws worked as they should, such evidence would come before these devices are marketed to the public with health claims.

21 thoughts on “Brainwave Entrainment – A Response from Transparent Corp.”

  1. orDover says:

    Her meta-analysis is being published in the Alternative Therapies in Health and Medicine journal. That alone is enough to set off a hundred red flags. When she is published in an actual neuroscience journal that is grounded in science based medicine, instead of the world of woo, then maybe entrainment will deserve a second look.

  2. mattdick says:

    So there is weak evidence that there is a long-term effect from entrainment.

    Okay, fine, but is there *any* evidence that any effects are positive? What if we agree there is an effect, but find out later that the effect is to make us stupider?

  3. nowoo says:


    I think you meant “but did NOT do an exhaustive search” instead of “but did do an exhaustive search”.

  4. jonathan says:

    As a researcher in behavioral sciences and psychosocial interventions, I am tired of excuses like “While I realize that a controlled study, or even better a double blind controlled study is the gold standard, it is only easy to do with pharmaceuticals.” In short, bulls**t. Yes, it is difficult. Good, well designed, and thought out research is difficult in ALL subjects. It is, however, not impossible. In the social and behavioral sciences, RCTs are the gold standard, and they are done ALL THE TIME, both in the field and in the lab. There is no excuse for a poorly designed and controlled study. If you are going to use data from poorly conceived studies, then you had better be careful about the claims that you make – you may generate hypotheses to be examined, but you are certainly not testing them in a meaningful way.

  5. dylan says:

    This reminded me of something. I knew I had come across something like this in a mainstream journal.

    Although it only had an n of 13, there were *significant* results. The procedure seems to be ok as well; TMS is pretty easy to sham.

    I think a point to make is that research of this type is generally done in an academic rather than a large-scale clinical setting (such as clinical drug trials involving large cohorts), which (rightly so in my opinion) reflects the infancy and tentative results of this field.

    The consequence of this is that in these types of trials ‘n’ cannot be large, due to certain constraints.

    I think it is perhaps unfair of Dr Novella to demand large trial sizes in such preliminary studies; at this point in time large studies would not be a good idea for several reasons (cost being one, but also the procedures for such experiments need to be refined before being taken to the next level). Likewise it is misleading for commercial ventures to use such studies as the basis for sales.

    The bottom line is that studies like the one I have cited should be recognised for what they are: preliminaries that can be built upon in the future if they yield promising results, not reduced to the status of ‘testimonials’ on a marketing website.

  6. dylan – I did not demand large trial sizes for preliminary studies. I said that these studies – because of their poor design and small sizes – should be considered preliminary. The point is – they should not be used to support health claims for products being marketed to the public.

    I will add that these claims have been around for years. If they were plausible and preliminary studies were promising, then they would attract research money and the larger trials would be done.

  7. dylan says:

    @ steve

    Apologies, after reading a bit more closely, I see you essentially said what I repeated; that small n means preliminary & that preliminary studies should not be used as marketing tools. A consensus needs to be reached by the scientific community before any of this stuff is peddled to us!

    Although I think it important to highlight the differences in this field compared to, say, pharmaceuticals. For pharmaceuticals there is a very good hierarchical framework for evaluating the benefits (and risks) of new treatments.

    The evaluation of using techniques to “enhance” brain function (if that can be a term in the loosest possible sense) falls into the relatively new field of cognitive neuroscience. There is as yet no framework in place for the large scale evaluation of any possible benefits.

    I’m not sure if there is a case to make for more difficult assessment of cognitive ability than of pharmaceutical effectiveness. Shams can (and should) be used. There are established tests for cognitive ability and the usual stats can be employed.

    It seems to me that these companies have put their eggs in one basket with this one, on the basis of tentative results only…

  8. wertys says:

    We need a shorthand term for companies which go right off on a marketing binge on the basis of speculative or unsound preliminary research. So many companies quote a couple of poorly done research trials and hype their products out of all proportion – or, even worse, generalize from preclinical to clinical environments with no evidence of a carryover effect – that we as skeptics need a quick and easy name to refer to them by. Anybody got any ideas?

  9. decius says:

    If better designed studies turned out negative, they could still re-market the device for penis enlargement.

  10. I predict stiff resistance to Decius’ suggestion.

  11. wertys says:

    Would marketing them without proper research make them hardened criminals?

  12. Yes, but the evidence won’t stand up in court.

  13. Tina L. Huang, Ph.D. says:

    Dear Dr. Novella and readers of his blog,

    I would like to respond to several points you’ve made:

    Placebo controlled studies:
    I want to clarify that for some methods of brainwave entrainment, placebo controlled trials are possible and have been done. For most methods, however, we do not believe there is a true sham or placebo entrainment frequency. The best that can be done is to compare against another frequency which is expected to produce different results, and again, this has been done. But with regards to creating a true sham or placebo entrainment protocol, that will have no effects on the subject, one has not yet been discovered.

    Inadequate control and blinding:
    Twelve of the studies in my paper were controlled, most of them with the same background music the subjects were exposed to.

    While my study is the most comprehensive to date, it did not include all peer reviewed studies on brainwave entrainment. We limited our review to psychological and clinical outcomes.

    Number of subjects per study:
    With regards to the number of subjects in a study, a small sample does not by itself render a study invalid. 19 of the 21 studies that I included in my paper had at least one statistically significant finding, many of them with many more. And in my paper, I defined statistical significance as P < 0.05, which is the standard in the literature. The 2 studies that did not have any statistically significant findings did not use protocols that were expected to affect the outcomes they were examining.

    The learning effect in cognitive studies:
    This is something that I am always concerned about when I do cognitive studies, but when there are controls, the subjects and their controls are both susceptible to the learning effect. Out of the 9 cognitive studies in our paper, only 2 lacked controls.

    Lack of a known mechanism:
    Lack of a known mechanism does not mean that a therapy is ineffective. Many therapies such as pharmaceuticals were developed and used long before a mechanism was truly understood. And I disagree that there is no plausible mechanism. There is a standard hypothesis in the field for how brain entrainment works, and the entrainment effect has been seen consistently in the literature for alpha waves.

    Large scale clinical trials:
    We all agree that large scale controlled clinical trials are the gold standard, and it is our hope that one day we will have the funding and resources needed to conduct such trials.

    For more information, my paper will be published in the summer in Alternative Therapies in Health and Medicine. Below is a subset of studies on brainwave entrainment:

    Kliempt, P., Ruta, D., Ogston, S., Landeck, A. and Martay, K., 1999. Hemispheric-synchronisation during anaesthesia: a double-blind randomised trial using audiotapes for intra-operative nociception control. Anaesthesia. 54, 769-773.

    Williams, J. H., 2001. Frequency-specific effects of flicker on recognition memory. Neuroscience. 104, 283-286.

    Padmanabhan, R., Hildreth, A. J. and Laws, D., 2005. A prospective, randomised, controlled study examining binaural beat audio and pre-operative anxiety in patients undergoing general anaesthesia for day case surgery. Anaesthesia. 60, 874-877.

    Nomura, T., Higuchi, K., Yu, H., Sasaki, S., Kimura, S., Itoh, H., Taniguchi, M., Arakawa, T. and Kawai, K., 2006. Slow-wave photic stimulation relieves patient discomfort during esophagogastroduodenoscopy. J Gastroenterol Hepatol. 21, 54-58.

    Anderson, D. J., 1989. The treatment of migraine with variable frequency photo-stimulation. Headache. 29, 154-155.

    Patrick, G. J., 1996. Improved neuronal regulation in ADHD: An application of fifteen sessions of photic-driven EEG neurotherapy. Journal of Neurotherapy. 1, 27-36.

    Sincerely,
    Tina L. Huang, Ph.D.
    Director of Research
    Transparent Corporation

  14. Tina –

    I understand that some of the studies were controlled, some were blinded, and some had statistically significant results. What we do not have is one study that has all of these features plus a sufficiently large number of subjects to be considered reliable. What we are seeing is the same distribution and quality of studies that we see with homeopathy or acupuncture.

    Regarding statistical significance – this is not a substitute for large numbers. Small studies can achieve statistical significance – but that does not mean that they are as powerful or the results as reliable as a larger study with significance. In small studies, just a couple of anomalous outcomes can alter the results.
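    To make that concrete, here is a toy numerical illustration (the data are invented for the example, not taken from any of the reviewed studies). A small two-arm study with 8 subjects per group can clear the conventional p < 0.05 bar, yet moving just two subjects’ scores erases the result entirely:

```python
import statistics

def t_stat(a, b):
    # Pooled two-sample t statistic (equal group sizes assumed).
    diff = statistics.mean(b) - statistics.mean(a)
    pooled_sd = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
    return diff / (pooled_sd * (2 / len(a)) ** 0.5)

CRITICAL_T = 2.145  # two-sided critical value for p < 0.05 with df = 14

control   = [0, 1, 0, 1, 0, 1, 0, 1]
treated   = [1, 2, 1, 2, 1, 2, 1, 2]    # looks like a clear benefit
anomalous = [1, -1, 1, -1, 1, 2, 1, 2]  # same study with two aberrant subjects

print(t_stat(control, treated) > CRITICAL_T)    # significant
print(t_stat(control, anomalous) > CRITICAL_T)  # no longer significant
```

    Both versions involve only 16 data points; in a large multi-center trial with hundreds of subjects, two aberrant scores would barely move the estimate. That is the sense in which statistical significance in a small study is less reliable than the same significance in a large one.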

    Also – you did not address my point that the file drawer effect is sufficient to explain the preponderance of positive studies in this mixed literature.

    The learning effect is partly accounted for by proper controls and blinding – but that means the uncontrolled studies are worthless on this score. And controls alone are not adequate, because the learning effect still creates a trend toward improved outcomes that can bias the results.

    I never said that the lack of a mechanism means it does not work – that’s a common straw man. I said that the lack of a mechanism means we need more rigorous criteria of evidence before we accept that there is a real effect.

    The mechanisms you refer to are for the entrainment effect – but NOT for any clinical benefit from it. That is a huge difference. I am not aware of any plausible mechanism for a sustained improvement in cognitive function from entrainment, and I find such a claim highly implausible.

  15. thuang says:

    Dear Dr. Novella,

    Regarding statistical significance: Two studies with the same p value or confidence interval, regardless of their size, have the same chance of showing a true or false outcome due to chance. A larger study does not make a finding more statistically significant. (Feel free to consult a biostatistician if you disagree.) All studies can be influenced by anomalous outcomes. If an investigator finds a few outlying data points, it’s important for them to try to account for them and take this into consideration when doing their analysis. One advantage of taking into account lots of small scale studies (vs 1 large scale study) is that if multiple small studies show consistent findings, the results are more generalizable to multiple subjects, environments and research methods.

    Regarding the “file drawer effect”: This is called “publication bias” among scientists and is something all scientists across all fields are taught about. While there are methods used to estimate whether there is publication bias for meta-analytic studies, these methods can only be used for studies that are examining the same outcome. Otherwise the people that have the most power to change this are the editors of journals, and those in leadership positions in science. There is no reason to suspect that publication bias with regards to brainwave entrainment is any different than any other scientific field.

    Regarding the learning effect with cognitive studies: My postdoc was at Johns Hopkins School of Public Health in the Department of Mental Health where I studied and published in the field of epidemiology of Alzheimer’s disease. We were taught that the best way to control for the learning effect is to make sure that cases and controls were subject to the same testing conditions, that is to ensure that biases are equal across cases and controls. If you have any evidence to suggest that more controlling needs to be done, please send me the evidence. If you are correct, I’ll be sure to alert my colleagues in the field.

    Uncontrolled trials are not worthless. The fact that they are uncontrolled needs to be taken into account while evaluating the study, and if findings show interesting outcomes then they should be followed by controlled trials. If the entire field of science believed that they were worthless, they would not be published in peer reviewed journals.

    With regards to mechanism: You should know that there are real limitations with regards to determining mechanism at the level you require in a living, breathing human being. But I can at this point hypothesize that many of the same mechanisms involved in learning would be relevant here. The experience of monks and meditators has taught us that states of mind can be learned through repeated practice. If alpha stimulation can induce a subject to emit more alpha, then I could imagine that repeated exposure would allow a subject to express alpha more easily on their own. In fact, in the study by Patrick (1996), he did EEG-driven photic stimulation at 12-14Hz over 15 sessions with children with ADHD, and then gradually withdrew the stimulus when the subjects were able to produce it on their own. He showed significant improvements on a number of different cognitive tests relevant to their ADD symptoms. I believe I’ve presented a plausible mechanism. Whether it’s correct or not can only be determined with the help of hundreds of neuroscientists across multiple specialties, and scientists to develop newer and better techniques and equipment, animal models, cell cultures, etc., depending on the level of detail you are looking for.


    Tina L. Huang, Ph.D.
    Director of Research
    Transparent Corporation

  16. Tina,

    Statistical significance is not the only measure of a study. The problem with small studies is that small errors or anomalies with one center can affect their outcome. Large multi-center trials are better able to average out such errors.

    Your point on publication bias is a non-sequitur. Publication bias would account for an excess of positive studies, and it is partly why reviews of lots of small studies are not as reliable as large definitive trials.

    Your point about the learning effect is a false dichotomy. You can account for it both by having adequately blinded and controlled studies and by achieving a proper baseline prior to the intervention.

    Your comment on uncontrolled trials is a straw man. I never said they had no utility. They can be used to generate hypotheses or see if later studies are warranted – but not as reliable tests of hypotheses. That’s the point.

    You still have not presented a mechanism – how does altering brain waves affect cognitive function? This is not a trivial question.

    If we were discussing whether or not brainwave entrainment deserves further study – you might have a point. Go ahead – study it. My problem is that this is a product being sold today to the public with definitive health claims. Those claims are not justified by current research. I will add that a similar quality of research is available for homeopathy and other claims that we know are false. This argues for demanding better research.

  17. Dudeman says:

    I just wanted to add to this albeit late. I think this warrants more research. I’ve been to all these sites proclaiming all sorts of ridiculous things. Even on transparent forums there are some wild claims. I’d like more information because it does work to relax you.

    I’ve been playing around with an auditory brain entrainment product that requires stereo headphones. And it does cause a very real effect. When I entrain for 30 mins I feel like I do after a good night’s sleep, refreshed and alert. I don’t believe a lot of the claims out there for these products. But as a tool for stress relief and relaxation it does well. I invite you to try it yourself. Are you going to shoot sparks out, solve unsolvable riddles and other wild claims? Nope. But it does create a strong, very real effect. I use it to regain focus or just as a conscious nap when I feel burned out. It reminds me a lot of self-hypnosis or something of that nature.

    I don’t suffer from any medical or psychological disorders so I can’t comment on that. I admit there is a lot of hocus-pocus about entrainment, but I know from firsthand experience it can be helpful.

  18. luisrz87 says:

    Hi Dr. Novella, I just signed up to your blog and have a couple questions for you. I understand how unjustified such claims by the Neuro Programmer company are and think you are doing a good job by alerting the public on what has yet to be proven scientifically.

    I was wondering, though, what your opinion on meditation is. I have been looking into it for about half a year now and it was very hard for me at first. I later came across a product similar to this one called Holosync, purchased it risk free, and decided to keep it for one reason: it facilitates meditation significantly. Not only does it facilitate meditation but it induces sleep and relieves stress. If this product is actually able to induce sleep or the transition from one state of mind to another, don’t you think it has some positive effect on the mind when dealing with people who suffer from sleeping problems? Don’t you think it could be considered therapeutic?

    I think one good thing can lead to another. Just like better sleep can lead to better attention, or a better mood. A better state of mind. We cannot limit ourselves, we have to look for the potential benefit.

    You asked Tina, “how does altering brain waves affect cognitive function?” Could you tell me how it does not affect cognitive function? Sorry if this sounds like a rhetorical question, but it’s not. Humans have found ways of bettering their health through natural remedies long before they understood the underlying mechanisms. Why? Because it made a difference. Only later, as a result of optimism and interest in the medical field, did we learn about the underlying mechanisms.

    Holosync definitely helps me relax and I can “feel” the stimulus; this is probably the reason it is so hard for there to be double blind studies in this field, because it is most likely that you will notice the stimulus when present and not at all when not present.

    In my opinion, just as it is important for you to declare what is probably not true because it is yet to be proven, it is important for you to let us know what AVS could potentially do for us, because it is this that will encourage others to undertake the desired research. I think every field, despite its popularity at the present time, deserves a chance.

  19. Apyrase says:

    I know I’m totally late here, but why not use monospeaker headphones for a control group and stereo headphones for the treatment group?

  20. peterwicks says:

    I’m curious to know why Steven Novella regards claims about sustained improvement in cognitive function from brainwave entrainment as “highly implausible”. Even if it only had the relaxation/refreshment effect described by Dudeman (and which I have also experienced), isn’t that in itself, if practised regularly, likely to lead to a sustained improvement in cognitive function? Personally I find the null hypothesis less plausible than at least some weak alternative hypotheses, even if the essential claims seem unlikely to turn out to be warranted.

    On a somewhat pedantic point, Novella was also somewhat inconsistent in his responses to Dr Huang: first he described uncontrolled studies as “worthless”, then he denied having said they had “no utility”. What does “worthless” mean if not “having no utility”? Or did you mean “worthless in the context of testing a hypothesis?”

Leave a Reply