Feb 20 2014

Reality Testing and Metacognitive Failure

Imagine coming home to your spouse and finding someone who looks and acts exactly like your spouse, but you have the strong feeling that they are an imposter. They don’t “feel” like your spouse. Something is clearly wrong. In this situation most people conclude that their spouse is, in fact, an imposter. In some cases this has even led to the murder of the “imposter” spouse.

This is a neurological syndrome known as Capgras delusion – a sense of hypofamiliarity, that someone well known to you is unfamiliar. There is also the opposite of this – hyperfamiliarity, the sense that a stranger is familiar to you, known as Fregoli delusion. Sufferers often feel that they are being stalked by someone known to them but in disguise.

Psychologists and neuroscientists are trying to establish the wiring or “neuroanatomical correlates” that underlie such phenomena. What are the circuits in our brains that result in these thought processes? A recent article by philosopher Philip Gerrans explores these issues in detail, but with appropriate caution. We are dealing with complex concepts and some fuzzy definitions. But there are some clear mental phenomena that reveal, at least to an extent, how our minds work.

The “reality testing” model discussed by Gerrans reflects the overall hierarchical organization of the brain. There are circuits that subconsciously create beliefs, impressions, or hypotheses. We also have “reality testing” circuits, specifically the right dorsolateral prefrontal circuitry, that examine these beliefs to see if they are internally consistent and also consistent with our existing model of reality. Delusions, such as Capgras and Fregoli, result from a “metacognitive failure” of these reality testing circuits.

Gerrans and others argue that dreams are a normal state we all experience in which our reality-testing circuitry is either off or hypofunctioning. This is why our dreaming selves accept dream events that are clearly internally inconsistent or at odds with our model of reality. When we wake up, if we remember our dream, we are often struck by how “bizarre” our dream was and marvel at how our dreaming self accepted the clearly unreal dream.

The question Gerrans explores is whether or not pathological delusional states are neuroanatomically similar to the dreaming state. Both, he argues, may result from a failure of reality testing. Part of the problem of exploring this hypothesis is that “reality testing” is a broadly defined concept. What, exactly, is the process? It seems to be a higher level inference about what is likely to be real based upon logic, internal consistency, and existing knowledge.

Here is my own synthesis of what we currently know about how our brains work with respect to belief and reality testing:

There are multiple identified processes, acting mostly subconsciously, that “present” tentative beliefs or conclusions to our conscious awareness. These processes include our sensory perceptions, which are highly constructed and are not objectively reliable. Our brains not only construct our perceptions but give them meaning. We don’t just see shapes, we see objects that have a reality and a purpose. We also see people, who have emotional content, including familiarity. Locations also are imbued with a sense of familiarity or unfamiliarity.

Our memories are also highly constructed and malleable. We update our memories with new information every time we recall them. They become part of our dynamic internal model of reality. 

There are also a host of biases and needs pushing our model of reality and our construction of events in a direction that is emotionally comforting and satisfying to us.

Further, we have a set of heuristics or inherent logic by which we, by default, attempt to make sense of the world. This includes an inherent (flawed) sense of probability. There are also inherent tendencies, such as the tendency to see patterns, to detect agency in others and in our environment, and to weave compelling narratives.

All of these things are combined together to give us an impression of reality, of what is going on. But our brains also have circuitry which will then filter out or test these impressions to see if they make sense. The net effect – what you ultimately believe about reality – is a complex interaction of all of these moving parts. If the end result is congruous with existing and desired beliefs, then we are content. If it is incongruous, this results in what psychologists call “cognitive dissonance.” We then marshal our reality-testing circuitry to resolve the conflict, usually through motivated reasoning and rationalization, and once the cognitive dissonance is resolved we are rewarded with a shot of dopamine to our reward circuitry.

The end result is about the balance of all of these circuits. If your biases and emotional motivations are relatively minor, and your metacognitive reality testing is relatively robust, then you will tend to come to a reality-based rational conclusion. (This is still dependent on your factual knowledge, culture, etc., but at least the process will likely be rational.)

If, however, you have impaired reality testing, you are more likely to accept whatever notion your subconscious processing results in, even if it makes no sense. Psychologists refer to a pathological lack of reality testing as a delusional disorder – persistent beliefs that are at clear odds with external reality.

However, even in those with intact reality testing, the motivation to accept a belief that is at odds with reality may overcome such testing. We call this “motivated reasoning” – the twisting of our reality testing process to confirm desired beliefs, or reject unwanted beliefs, rather than objective reality testing.

There may also be a problem with the subconscious circuitry that is making the “first pass” construction of reality, and these flaws or errors may overwhelm our reality testing. That is what Capgras and Fregoli are – errors in the detection of familiarity that present a highly incongruous picture of reality to our right dorsolateral prefrontal circuitry, which then struggles to deal with the incongruity. Often the reality-testing circuitry notices something is wrong, and confabulates a solution – my spouse is an imposter, or I am being followed by someone in disguise.

Confabulation (making stuff up) is a key component of how our brains resolve incongruities or cognitive dissonance. When the incongruity is great, the tolerance for confabulation goes up. When our reality testing is impaired, confabulation is also unhinged. Confabulation is especially prominent in memory disorders. If, for example, a person with a severe short-term memory deficit meets someone who acts as if, or perhaps specifically states that, they are known to the person, but for whom they have no memory, they may simply confabulate prior interaction and knowledge in order to resolve the incongruity between current events and their memory deficit.

Even healthy and neurotypical individuals, however, will easily confabulate in order to resolve apparent incongruities between their memories and new information, or the memories of others. Some individuals are also highly emotionally invested in a belief or a particular narrative, and this can overwhelm their reality testing. This may include a penchant for conspiracy thinking, or a deeply held ideological, cultural, or personal belief.

We frequently see confabulation in patients who have had damage to a part of their brain resulting in impaired reality construction, causing dramatic incongruity. For example, a patient with a right hemisphere stroke may lose the very concept of the left side of their body and reality. When confronted with their left arm they will often state that the arm belongs to the examiner, or even that there is another patient in the bed with them. This is clearly at odds with reality, but they are certain the arm is not theirs because the circuitry that would generate the sense of ownership and control is damaged.


We still have a great deal to learn about brain circuitry and its complex interactions, but I think we are getting close to a reasonable working model of how our brains think. Subconscious processes create a hypothesized construction of reality out of our sensory input, memories, biases, heuristics, narratives, and emotional needs. Reality testing circuitry examines these hypotheses to see if they are internally consistent, compatible with our current models of reality, and if they serve our emotional needs. Our brains then struggle to come up with a resolution – a conclusion about what is probably true (or what we want to be true) based upon all of these simultaneous processes.
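The loop described above can be caricatured in code. Below is a deliberately loose toy sketch in Python of the belief pipeline, not a model of any actual neural circuitry: every function name, feature label, and the `rigor` threshold are invented here purely for illustration. The point is only to make the flow concrete: subconscious processes propose a construction of reality, a testing stage scores it against an existing world model, and incongruity is resolved by acceptance, revision, or confabulation.

```python
# Illustrative toy model only - all names and thresholds are invented.

def propose_belief(percept, biases):
    """Subconscious 'first pass': the percept plus bias-driven distortion."""
    belief = dict(percept)
    belief.update(biases)  # biases can overwrite what was actually perceived
    return belief

def incongruity(belief, world_model):
    """Count features of the belief that clash with the existing model."""
    return sum(1 for k, v in belief.items()
               if k in world_model and world_model[k] != v)

def reality_test(belief, world_model, rigor=1):
    """Accept, revise, or confabulate, depending on testing 'rigor'.

    rigor=0 stands in for impaired reality testing (dreaming, delusion):
    anything is accepted as-is. Higher rigor forces revision toward the
    existing world model when the clash is small enough to revise away.
    """
    clash = incongruity(belief, world_model)
    if clash == 0 or rigor == 0:
        return belief, "accepted"
    if clash <= rigor:
        # Revise the belief toward the existing world model.
        revised = {k: world_model.get(k, v) for k, v in belief.items()}
        return revised, "revised"
    # Too much incongruity to revise away: keep the anomalous belief but
    # invent a story that explains it ("my spouse is an imposter!").
    belief["explanation"] = "confabulated"
    return belief, "confabulated"
```

In this caricature, a Capgras-like input (face recognized, familiarity signal absent) gets revised away when testing is rigorous, accepted uncritically when `rigor=0`, and confabulated over when the clash is too large to revise; obviously the real circuitry is nothing this tidy.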

Understanding this process is helpful because it does shift the balance toward metacognition. Subconscious emotional processes have less of a hold on our thinking when we understand them. You can make a mental effort to think harder – meaning engaging your reality testing circuitry more robustly, rather than going with the flow of your subconscious processing. You can also impose an objective reference to external facts, and a formal process of logic onto your reality testing – you can, in other words, get better at reality testing.

This seems like a worthwhile endeavor.

33 thoughts on “Reality Testing and Metacognitive Failure”

  1. Fascinating article, thanks! Regarding confabulation and the resolution of cognitive dissonance, Freud’s idea of defense mechanisms really hits the mark here. Freud didn’t get a lot correct about his musings on the mind, but on this topic he was definitely on to something we are still unraveling (as your article clearly explains). For some modern thoughts on this, a good place to start would be:


  2. Bronze Dog says:

    Stuff like this should serve to make a person aware of their human cognitive shortcomings. We’ve got so many things that can fail in insidious ways.

    I think I might have experienced a bit of the “lack of ownership” when I took a moment to look at and move my mouth in front of a mirror after getting some dental fillings. Big chunks of my mouth were numb, and it kind of felt like I was simultaneously seeing my normal face and my face with anonymous lumps of skin on top… kind of like naked, flat tribbles. I was consciously thinking of the phenomenon and wanted to test it, though, so that might have had some influence on my perception.

  3. Ekko says:

    Very interesting. The metacognition aspect made me think of meditation and various mindfulness based therapies actually as they often involve becoming more aware of and shining a light on various thought processes and their effects on our outlook.

  4. Hoss says:

    Great article, it really cleared up a few misconceptions that I had. I’m very grateful for the information, especially since I can pass it along to others. Thank you

  5. @Steven Novella

    “We still have a great deal to learn about brain circuitry and its complex interactions, but I think we are getting close to a reasonable working model of how our brains think.”

    This sentence is possibly right up there with the best of the silliest things you’ve ever written…

    I gather from your article that the brain is some sort of “computer” which has sort of things like “circuits” (chips) in them. And the “reality testing chip” has some connections broken, or something.

    Or maybe there is nothing wrong with the reality testing chip. Maybe the chip that keeps one’s paranoid delusions in check, got broken. So the reality checking is working fine, but its results keep getting ignored.

    Seriously, I can invent an endless collection of story lines that fit all the observations, especially considering I’m able to invent any chips I want and assign them any values or abilities I want. This “approach” in cognitive science is stupid and bankrupt. If any of this neurobabble is of any benefit whatsoever, what medical value (or any value you can think of) has it provided so far? After all, we’ve had over 50 years of research in this field.

  6. Bill Openthalt says:

    Will Nitschke —

    Robert Benchley said:

    “Drawing on my fine command of the English language, I said nothing.”

    I suggest you try this approach to commenting.

    Or you might try and regale the hoi polloi with your ideas on how the mind works (with cites, if you have them). If that’s too much to ask, at least spare us your empty, knee-jerk criticism.

  7. hardnose says:

    “I think we are getting close to a reasonable working model of how our brains think.”

    No, we aren’t.

    Cognitive scientists have been saying that for at least 60 years. It’s a mirage.

    Similar to the physicists who were always on the verge of discovering the ultimate particle.

  8. steve12 says:

    Isn’t it great when self-appointed experts like Nitschke and Hardnose, who know nothing of the state of the cognitive psychology or cognitive neuroscience literature, pipe up to tell us that “we’re doing it wrong”?

    While there is a very, very long way to go, we’ve made huge jumps in understanding how the brain works in the past 20 years.

    Steve gives a very good lay explanation re: some of the systems we know exist from converging lines of research, and how they interact. He’s not just ‘supposing’ about what might make sense like you guys – he’s summarizing a lot of literature (much of it not named, and that you obviously don’t recognize) in a way that is accessible. It’s called blogging. I think it’s going to be big.

    Now if you’ll excuse me, I’m going to run off to CERN to tell them how stupid they are because they haven’t resolved QM and gravity. I mean, I have no idea where any of their work is, and wouldn’t understand most of it anyway, but why should that stop me?

  9. Davdoodles says:

    “Seriously, I can invent an endless collection of story lines that fit all the observations, especially considering I’m able to invent any chips I want and assign them any values or abilities I want.”

    I suspect this is literally true, of you.

  10. etatro says:

    Is the dorsolateral PFC also involved in attention? I just read Michael Graziano’s book, Consciousness and the Social Brain. What you describe here seems to fit a little bit with his attention schema theory of consciousness. In the case of hypofamiliarity of an individual, I think that the brain is attending to the person present, is aware that it is attending to the person, but the connection linking this object it’s attending to with the memory of that object (person) is broken, hence the unfamiliarity. If this occurs in one’s own house, and the brain is attending to and aware of the surroundings (and attending to information in the brain about the surroundings), the presence of an unfamiliar person would be incredibly frightening. I wonder also whether emotions involved with the person (love, anger, etc.) would be triggered in the amygdala, but the part of the brain that pays attention would just not be aware of the emotion. There was an episode of Futurama about just this thing! 🙂

  11. @Steven Novella

    At some stage it would be interesting if you attempted a defense of this sort of work, which seems to be little more than pseudo science, particularly since you identify as a ‘sceptic’. Why does a ‘sceptic’ act as a cheerleader instead of a critic? It would make a worthwhile post would it not?

    BTW, I received academic training in psychology and specialised in my undergraduate years in the cognitive sciences, epistemology, and history and philosophy of science. Although these days I work in the field of software engineering, as the pay in those fields was below my expectations. 25 years ago I made the same points in academia. I argued in some detail that this field was a failure and specified the reasons why students were being taught nonsense. The result was simply for the academics concerned to take offense. (Admittedly, I was not as diplomatic or charming then as I am today.) Another quarter century has passed and progress in the field has been exactly zero, which is exactly as I anticipated.

  12. CKava says:

    Will, you took undergraduate courses in “the cognitive sciences, epistemology and history and philosophy of science” a quarter of a century ago; that certainly doesn’t qualify you as an expert nor make you well informed enough to pass judgement on a research field. Rather, it sounds painfully like you formed an opinion as an undergraduate that “students were being taught nonsense” (not an unusual sentiment from undergraduates in any topic) and have since decided to defend your initial view, even congratulating yourself in this thread that all of the advances in the field are just as much “nonsense” as your undergraduate self declared.

    There has been plenty of progress in cognitive research over the past 25 years. Anyone familiar with the field would be aware of that. You might criticise specific areas or claims, but to argue that there has been no progress betrays a fundamental bias and lack of awareness of the field. There are plenty of good critiques published by researchers in the field about specific issues and problems, but your blanket criticism just seems like self-congratulatory rhetoric.

  13. Aardwark says:

    Will and Hardnose are, of course, perfectly entitled to their opinions about the general state of progress in neuroscience, or about its very principles and methods, but I wish to use this little exchange to underscore an important and frequently present problem in communicating about science.

    The problem is, in brief, this – whenever someone, like Dr Novella here, attempts to present a complex topic in relatively simple and (deliberately) overly general terms in order to map the general approach to a field – in this case, using terms like ‘neural circuitry’ – it is very easy to point out that those are not the strict terms used in actual research, i.e. in forming and testing hypotheses. Or, to put it more simply: a science communicator uses a proxy narrative for a complex topic, and regularly gets accused that what he says is ‘just a narrative’, while whatever it is that the narrative was used as a proxy for gets willfully ignored.

    The true criterion in assessing such narratives is, of course, how they connect to research data and proposed operational explanations thereof. In neuroscience, as in any other area, some concepts are supported better than others, but the overall information available clearly defends the field against allegations that it is ‘pseudoscientific’ – for pseudoscientific ideas (such as various mystical theories of the mind that disregard the clear fact that the mind is ultimately, in one way or another – though not necessarily in a ‘reductionistic’ manner – based on physical reality of the brain) are, by definition, not supported by any reliable data.

  14. @Aardwark

    The problem fundamentally is that this field has unlimited degrees of freedom in its postulates. It’s not about the failure to communicate, it’s about failure. But people are addicted to storytelling, and so long as there are stories to tell, the gullible will be entertained by them. Steve Novella included. (And the reason why, I suspect, he has highlighted this story and not some other – there are thousands to choose from – is that it fits into his pet theories about what distinguishes the stupid, i.e., Believers, from the clever, Activist Sceptics like him.)

    You must also distinguish neuroscience from cognitive science (actually psychology) that focuses on theory of mind, as they are distinct fields. Neuroscience is a valid research field, the sort of neurobabble that appears to impress Steve and his faithful gatekeepers at times, is not.

    Finally, it’s your decision, but I would admonish people such as yourself (Steve is also guilty of this) against borrowing from the jargon of post-modernism, with your talk of ‘narratives’ and all that. This is another academic field rich in nonsense. Classical scepticism has its own rich source materials. There is no need to use the jargon of post-modernism, particularly since post-modernism borrowed many ideas from classical scepticism before going off on its own wild tangent. But as I said, it’s your call.

  15. BillyJoe7 says:


    “Or maybe there is nothing wrong with the reality testing chip. Maybe the chip that keeps one’s paranoid delusions in check, got broken. So the reality checking is working fine, but its results keep getting ignored”

    Apt self-characterisation (;

    “And the reason why, I suspect, he has highlighted this story…is that it fits into his pet theories about what distinguishes the stupid, i.e., Believers, from the clever, Activist Sceptics like him”

    Well, at least you’ve laid your motivation bare.

  16. Will – the synthesis above is derived from neuroscience and examinations of neuroanatomical correlates, not just psychology. Some of it is derived from the examination of patients with stroke or other lesions, some from fMRI studies, and others from psychological studies. This is where, in my opinion, the evidence converges. (Some of the clinical examples are from my own patients that I have examined.)

    I also acknowledge that this is just the current dominant working model. It will certainly be modified by new evidence as the field goes forward.

    I also notice that your criticisms are based on even broader brushstrokes than my summary. Care to give a specific evidence-based criticism of any particular point that I make? I’m all ears.

    I think the other commenters have nailed it – your criticism is based on your own ignorance of the field and on confusing being a contrarian with being a skeptic.

    Regarding borrowing the language of post-modernism, this is just poisoning the well. I have been openly critical of the over-application of post-modernist ideas to science. This does not mean we have to purge anything that gives a hint of post-modernist language from our thought. That people construct narratives (sometimes referred to as “storytelling”), I think, is an entirely uncontroversial notion. The critical thinking point is not to become a slave to the narrative, but rather make the narrative subordinate to facts and logic. This requires metacognitive effort.

  17. Also – I think Will has it backwards. I am a skeptic because I understand the profound effect that biases, heuristics, errors in perception and memory, and other cognitive flaws have on our thinking, and the need for science and critical thinking as tools to counteract them. I didn’t start out as a skeptic, just a science enthusiast, but I accepted almost anything as legitimate science. I learned the hard way, over years, that we have to discriminate between science and pseudoscience.

    I have also acknowledged many times that the skeptical outlook (while fundamentally correct) does become its own narrative, and we have to resist the temptation to follow it in a knee-jerk or blind fashion. We have to be skeptical of our own skepticism. Science and critical thinking are basically an endless process of self-reflection, self-correction and metacognition.

  18. @Steven Novella

    Before you request answers from me would you care to answer my original question? It seems you respond only to what you could respond to, and ignored the actually important arguments you couldn’t address. Let me remind you by repeating it: what medical value (or any value you can think of) has it [cognitive science] provided so far?

    (Although I am *primarily* critical of the type of postulations of mind that you viewed so favourably in the article you wrote above.)

    And I’m not interested in debating points where we agree. That’s hardly interesting.

    Finally, my observation about the tendency to adopt post modernism lingo is nothing more than a minor remark. If people who identify as ‘sceptics’ want to appear silly, that’s their call. But it’s not a particularly important observation; it hardly deserved a response when there were valid criticisms I raised that you could have addressed, but sidestepped.

  19. I don’t believe you have leveled any valid criticisms – only knee-jerk contrarianism.

    So your premise is that the field of cognitive science has provided no value. That’s a pretty bold claim. It’s also a common denialist tactic. Creationists, for example, often challenge what concrete value evolutionary theory has provided. That is not the ultimate test of the validity of a scientific theory. Rather, it is its ability to make predictions about future experiments and observations.

    But, here are some examples off the top of my head.

    Cognitive behavioral therapy (CBT) has proven effective in treating a range of disorders, such as anxiety, PTSD, depression.

    Richard Wiseman published an excellent book, 59 Seconds, in which he gives many examples of how the psychology literature provides practical solutions to everyday problems. He is extremely evidence-based.

    In my field of clinical neurology, understanding how the different parts of the brain interact to produce thoughts and behavior is critical to interpreting the effects that specific brain lesions have, and therefore to diagnosing many neurological problems. There are also many psychological disorders that present with neurological symptoms, and understanding their nature helps us diagnostically. For example, we can reliably differentiate pseudodementia secondary to depression from organic dementia.

  20. Aardwark says:


    Thank you for the admonishment, and sorry if the word I used (‘narrative’) sounded to you like post-modernist jargon. In fact, my intention was quite un-post-modernistic: to point out (as others have done, here and elsewhere) that there are objective criteria we apply when we assess our hypotheses. These criteria are not infallible, of course, but what matters is that they exist. By bringing this to the fore, I really made a thoroughly anti-post modernist statement.

    (In fact, I look at positivism on the one hand and post-modernism on the other like a sort of passage between Scylla and Charybdis; but let us not digress in that direction right now.)

    I would also like to respectfully remind you of what you stated in your first comment above:

    “Seriously, I can invent an endless collection of story lines that fit all the observations, especially considering I’m able to invent any chips I want and assign them any values or abilities I want.”

    If that was meant to dismiss any objective reality to neuroscience, then forgive me, but I cannot fail to see this as a suspiciously ‘post-modernist’ stance. To start with, why is ‘endless collection of story lines’ less of a ‘post-modernist lingo’ than ‘narrative’? Not to mention that I used the term ‘narrative’ only to highlight the crucial distinction between that which is merely a narrative and that which is a narrative that actually narrates something about reality. Again, a very un-post-modernist point to make, so I plead not guilty.

    I also do know the difference between theories of brain and theories of mind. I just happen to be of the opinion that the latter are, all evidence considered, obviously (cave reductionism) not independent of the former, although the exact extent and nature of their relation(s) may as yet elude us. I’d say they elude us in spite of (and there we appear to disagree) all the progress in our understanding that has been, and still is, being achieved.

  21. willrodgers says:

    Great article, I rather like the idea that we are starting to bridge the gap between the Danny Kahnemanns and Ray Kurzweils of the world. More please!

  22. steve12 says:

    I had an undergraduate RA just like Will. Instead of coming into the lab to learn, he came to teach. And just like Will, he had little clue of what he was talking about, but that didn’t stop him from lecturing highly successful researchers on all of their shortcomings. I sort of envy the self-confidence, in a way.

    Will, you should really look up the Illusion of Explanatory Depth. I’m sure this doesn’t stop at the brain for you (let me guess – you’re an expert in everything, right?) so it might help you in other areas of your life as well.

  23. steve12 says:

    “Great article, I rather like the idea that we are starting to bridge the gap between the Danny Kahnemanns and Ray Kurzweils of the world.”

    Unfortunately the former is a genius and the latter a false prophet.

    I don’t want to give Kurzweil too much shit – he’s a genius at what he does. But his ideas about the brain are all vague adaptations of old cog sci ideas, and there’s no reason to think the singularity is coming anytime soon.

  24. hardnose says:

    ” the skeptical outlook (while fundamentally correct) does become its own narrative, and we have to resist the temptation to follow it in a knee-jerk or blind fashion”

    Good point.

  25. @Steven Novella

    Cognitive behavioral therapy has nothing to do with theory of mind. It attempts ‘practical’ behaviour modification and is certainly superior to psychotherapy and other approaches. But it has no relation to cognitive theories of mind – and I would go so far as to suggest its philosophical approach rejects it. That was what my primary criticism was directed at, and this is what I explicitly stated in my last post to you.

    (One of my favourite cocktail party stories revolves around the practice of behavioural therapy, but I won’t punish you in this post by relating it.)

    You wrote:

    “We still have a great deal to learn about brain circuitry and its complex interactions, but I think we are getting close to a reasonable working model of how our brains think.”

    What is the evidence for the above claim?

    (And by the way, your other non-examples rapidly dissolved into mist.)

    There are certain rhetorical games you are playing here, and they fail to impress. I’ll mention here your previous attempt to draw me into a debate about the internal consistency of the claims made in your article. But that was not my criticism. Middle Earth is no doubt highly internally consistent. But what I asked was what is the basis for believing it is true? Similarly, because cognitive science is an interdisciplinary field, you attempted the strategy of grabbing a strand I did not criticise and which you did not discuss in your article, and claimed I objected to that. (Two research fields aren’t equivalent because they both happen to use the word ‘cognition’ in their title.)

    Finally, I’m guessing that calling me a contrarian or a denialist you mean I am sceptical for the sake of being sceptical. That strikes me as a pompous way of declaring that my mother has low morals. In classical scepticism all knowledge is tentative, but the degree of uncertainty varies greatly from subject matter to subject matter. I’m not sceptical of the academic field of information technology. But I am sceptical of the academic field that engages in the type of story telling you praised. Have a think about why that might be the case.

  26. @Aardwark

    I’m not being critical of neuroscience as a research field. Nor am I being critical of psychology in general. And I took courses in neuropsychology and found nothing controversial in its subject matter either.

    If as a sceptic I’m critical of psychotherapy, one shouldn’t assume I’m therefore critical of medicine. Juvenile insults notwithstanding, they are different topics of discussion. Or to phrase this another way, if I’m being critical of eugenics, that doesn’t imply I’m rejecting evolutionary theory.

    So what am I objecting to? I’m objecting to the idea that you sit down, draw boxes, connect them with arrows, and label everything. This becomes your ‘theory of mind’ (or ‘theory of brain’ – there is no difference for the purposes of this discussion). You then appeal or interpret whatever experimental data you want, to make it consistent with your diagram. Of course, that’s how we were taught to do it 25 years ago at a cutting edge university (and why I lost my patience with my lecturer). These days such academics dress up the boxes and the arrows and make it all sound *very* sophisticated. So now the activity doesn’t appear at face value to be as dumb as it actually is. But it’s the same nonsense.

  27. steve12 says:

    “This becomes your ‘theory of mind’ “.

    Jesus Christ, you don’t even know that ‘theory of mind’ has its own specific (and commonly known) meaning, but you’re lecturing us…

    Episodic vs. semantic memory systems started off as boxes connected by arrows (built from elegant experiments, btw, decades ago). Now these memory subsystems have been confirmed and much of their neural substrate identified. This has, in part, led to the identification of a higher level of the hierarchy of brain networks – anticorrelated default vs. task networks.

    This is but one example of many where successful cognitive psychology models have yielded real insight into how the brain works.

    Why don’t you tell me – with critiques of the actual literature – what’s wrong with this work?
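    [Editor’s note: the “boxes connected by arrows” steve12 describes are, formally, labeled directed graphs, and such a diagram does encode testable predictions. A minimal sketch in Python – the stages and connections here are an illustrative toy loosely inspired by the episodic/semantic distinction, not any published model:]

    ```python
    # A "box-and-arrow" cognitive model is just a labeled directed graph:
    # nodes are hypothesized processing stages, edges are hypothesized
    # information flow. Toy structure for illustration only.
    model = {
        "sensory input":   ["working memory"],
        "working memory":  ["episodic memory", "semantic memory"],
        "episodic memory": ["retrieval"],
        "semantic memory": ["retrieval"],
        "retrieval":       [],
    }

    def downstream(model, node):
        """All stages reachable from `node` -- i.e., stages that damage
        at `node` is predicted to disrupt."""
        seen, stack = set(), list(model[node])
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(model[n])
        return seen

    # One prediction the diagram encodes: damage to episodic memory
    # should spare semantic memory (no path from one to the other).
    print("semantic memory" in downstream(model, "episodic memory"))  # False
    ```

    [The point is that the diagram is not decoration: each missing arrow is a falsifiable claim about dissociation, which is what lesion studies test.]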

  28. spinkham says:

    “Understanding this process is helpful because it does shift the balance toward metacognition.”

    Is there evidence for this? Most studies I’ve seen so far seem to indicate that awareness of biases of various types does little for our ability to avoid those biases.

  29. Psychbot709 says:

    This is like Diogenes of Sinope tossing a chicken before Plato and stating “here is Plato’s man!” Cynicism is not the same as skepticism, although I can see that cynicism may have social value in motivating empiricists to shore up their definitions and methods. However, this nihilistic approach to an empirical science, which seems to be the basis of Will’s argument, is a philosophically unarguable tautology – i.e., “cognitive science has no value because it has not demonstrated any value to me.” There is no way to win or lose with that one, and like Diogenes of Sinope, the individual remains the supreme authority and adjudicator of value. Well done!

  30. I’m late to the party, but I meant to respond to Will’s claim earlier:

    “I’m objecting to the idea that you sit down, draw boxes, connect them with arrows, and label everything. This becomes your ‘theory of mind’ (or ‘theory of brain’ – there is no difference for the purposes of this discussion). You then appeal to or interpret whatever experimental data you want, to make it consistent with your diagram.”

    Will, did you mean a diagram such as this?:

    [diagram of cortical visual processing areas and their connections, not reproduced here]

    This diagram is derived ENTIRELY from empirical data (anatomical, electrophysiological, and lesion studies), which supports the existence of at least 32 separate visual processing areas in the cortex and their gross connectivity. This is from van Essen et al (1992), which you’ll notice is from 22 years ago.

    So if you want to blame your naivety on your professors from 25 years ago, that’s fantastic, but don’t then pretend in the same breath that you aren’t naive about this field of research. If you would like to correct some of this naivety, try listening quietly and politely to people who do know what they are talking about (and this is not necessarily me).

  31. steve12 says:

    Other JohnMc:

    I don’t think that’s what Will’s referring to, to be fair to him. I think he’s referring to traditional cognitive psychology, like the Baddeley & Hitch model. Of course, you’re still right: these box models were based on experiments, usually with reaction time (RT) as the dependent variable. It’s funny – some of the most elegant experiments in psychology’s history were cog sci experiments done pre-1990s. And these experiments still serve as the (often unattributed) basis for cognitive neuroscience experiments today.

    Really, this reflects Will’s misunderstanding of what models are for in science, which he also showed in the Krauthammer/AGW post. Models are always wrong. You construct them based on the data available, and they generate predictions that can be tested to scrap, change, or improve the model. They’re not invented de novo or passed off as a complete explanation!
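    [Editor’s note: the construct–predict–test–revise cycle steve12 describes can be made concrete with a toy model comparison. A minimal sketch with synthetic data – the “models” here are arbitrary illustrative functions, not drawn from any real study:]

    ```python
    import random

    random.seed(0)

    # Synthetic "experiment": reaction time grows with set size, plus noise.
    data = [(n, 300 + 50 * n + random.gauss(0, 5)) for n in range(1, 9)]

    def sse(model, data):
        """Sum of squared prediction errors: how wrong the model is."""
        return sum((rt - model(n)) ** 2 for n, rt in data)

    # Two candidate models, each making concrete quantitative predictions.
    flat   = lambda n: 500           # RT independent of set size
    linear = lambda n: 300 + 50 * n  # RT grows linearly with set size

    # Test predictions against the data and keep the model that errs less,
    # knowing it too is provisional and will be revised in turn.
    best = min([flat, linear], key=lambda m: sse(m, data))
    print(best is linear)  # True
    ```

    [Neither candidate is “true”; the one with smaller prediction error survives until better data or a better candidate displaces it, which is all steve12 is claiming for models in science.]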

  32. Oh I see, Will was criticizing models in cognitive science and cognitive psychology, just not cognitive neuropsychology or cognitive neuropsychophysiology….ha, I confuse these myself sometimes. Thanks Steve!

  33. steve12 says:

    I thought that’s what he meant. Then again, I doubt he really has any good sense of these distinctions, so who knows…
