Mar 23 2010

The Global Workspace – Consciousness Explained?

As neuroscientists continue to build a more accurate and sophisticated model of the human brain, finding the neurological correlate of conscious awareness remains a tough nut to crack. The difficulty stems partly from the fact that consciousness is likely not localized in any one specific brain region.

But as our technology advances and we are able to look at brain function in real time and in greater detail, researchers are starting to zero in on the hardwiring that produces consciousness.

In this context, consciousness is operationally defined as being aware of sensory stimulation, as opposed to just being awake. We are not conscious of everything we see and hear, nor of all of the information processing occurring in our own brains. We are aware of only a small subset of input and processing, which is woven together into a continuous and seamless narrative that we experience.

The New Scientist has a good review of this topic – in which they discuss the work of Bernard Baars, who in 1987 proposed the “global workspace theory.” Essentially, he hypothesized that conscious awareness stems from a discrete network of neurons that are widely distributed throughout the cortex. This network receives input from the various sensory regions of the brain and puts it all together – filtering out any contradictory or unnecessary information to create one unified picture of reality in a continuous stream that we experience.

According to this model, sensory input that is filtered out of the global workspace remains subconscious, as does any processing that occurs in other parts of the brain but is filtered or not presented to the global workspace.

Baars also proposes that the global workspace can explain the dichotomy between the slow serial functioning of the conscious brain and the fast parallel processing of the brain as a whole. He writes in his book on the topic:

The difference is, of course, that most psychologists work with the limited capacity component of the nervous system, which is associated with consciousness and voluntary control, while neuroscientists work with the “wetware” of the nervous system, enormous in size and complexity, and unconscious in its detailed functioning. But what is the meaning of this dichotomy? How does a serial, slow, and relatively awkward level of functioning emerge from a system that is enormous in size, relatively fast-acting, efficient, and parallel? That is the key question.

The global workspace seemed like a reasonable hypothesis from the point of view of explaining existing observations and data, but there wasn’t a way to really test it, and so it remained in science limbo. Until recently, that is, when new tools enabled neuroscientists to look at brain function and search for the potential correlates of the global workspace.

A team of researchers led by Stanislas Dehaene of the French National Institute of Health and Medical Research, beginning in 2005, looked at a phenomenon known as inattention blindness. (For a fun demonstration of this, take a look at Richard Wiseman’s Color Changing Card Trick.) Basically, inattention (or inattentional) blindness occurs when we fail to notice something which is right in front of us. The information simply does not become part of our stream of consciousness. This seemed like a good opportunity to test the global workspace theory.

In the study they presented subjects with two streams of four letters. In some cases the subjects had to answer a question after the first stream, which distracted them and caused them to miss the second stream of letters.  In other cases they perceived both streams of letters.

In both cases, for the first 270 milliseconds the streams of letters resulted in the same neuronal activity (as measured by a 128-lead EEG). In the case when the subjects perceived the second stream, this initial activity was followed by a synchronized burst of activity in parts of the brain (frontal and parietal lobes) thought to be part of the global workspace. In cases where the subjects did not consciously perceive the letters there was no such activity – the neurons quieted down after 270 milliseconds.

What this could mean is that the initial 270 milliseconds of activity represents the subconscious processing in the visual and visual association cortex, while the next phase of activity is conscious awareness of that stimulus by the global workspace. This experiment has been replicated with implanted electrodes as well.
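
To make the logic of that comparison concrete, here is a minimal sketch (in Python with NumPy) of how one might contrast an early time window with a late one for consciously perceived versus missed trials. This is purely illustrative – the array shapes, channel grouping, and exact windows are assumptions for the sake of the example, not code or data from Dehaene’s study.

    import numpy as np

    # Hypothetical epoched EEG data, shaped (trials, channels, timepoints) at a 1000 Hz
    # sampling rate, time-locked to the onset of the second letter stream. The arrays,
    # channel grouping, and windows below are illustrative assumptions only.
    seen = np.random.randn(40, 128, 600)    # trials where the second stream was consciously perceived
    missed = np.random.randn(40, 128, 600)  # trials where it was not perceived

    frontoparietal = list(range(32))        # stand-in indices for frontal/parietal electrodes

    def mean_window_amplitude(epochs, channels, start_ms, stop_ms, sfreq=1000):
        """Mean absolute amplitude over trials, selected channels, and a time window."""
        start, stop = int(start_ms * sfreq / 1000), int(stop_ms * sfreq / 1000)
        return np.abs(epochs[:, channels, start:stop]).mean()

    # Early window (~0-270 ms): expected to look similar whether or not the letters were seen.
    early_seen = mean_window_amplitude(seen, frontoparietal, 0, 270)
    early_missed = mean_window_amplitude(missed, frontoparietal, 0, 270)

    # Late window (~300-500 ms): the global workspace account predicts a frontoparietal
    # "ignition" only on consciously perceived trials.
    late_seen = mean_window_amplitude(seen, frontoparietal, 300, 500)
    late_missed = mean_window_amplitude(missed, frontoparietal, 300, 500)

    print(f"early window: seen={early_seen:.3f}  missed={early_missed:.3f}")
    print(f"late window:  seen={late_seen:.3f}  missed={late_missed:.3f}")

With real recordings, the prediction is that the early-window values look roughly the same in both conditions, while the late-window “ignition” appears only on the consciously perceived trials.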

So it seems there is about a 300 millisecond delay from perception to conscious awareness, and those stimuli we are consciously aware of result in activation of a distributed network of neurons, while those we are not aware of do not result in such activation. So far so good for the global workspace.

However, just being awake should result from some baseline level of activity in any consciousness network. In fact, this is what researchers have found – the default mode network (DMN) represents the baseline activity in the regions thought to be part of the global workspace.

Steven Laureys (yes, the same Steven Laureys from facilitated communication infamy) hypothesized that if the DMN is part of the brain function that causes consciousness, then we would expect the level of activity in the DMN to be greatest among healthy controls and those who are locked in (conscious but paralyzed), decreased in those in a minimally conscious state, and decreased further in those in a vegetative state. That is in fact what he found when he studied 14 patients with disorders of consciousness and 14 healthy controls.

This is a fascinating area of research which seems to be progressing nicely. Although the results should be considered preliminary, it is not surprising that researchers are finding that brain activity correlates with levels of consciousness and awareness. The global workspace and the brain regions now associated with it are a good candidate for the neural substrate of consciousness.

The obvious clinical application is an improved ability to diagnose, by direct functional brain scanning, which unresponsive patients are conscious and which have impaired consciousness, and to what degree. This could also lead to an important criterion for prognosis – who is likely to recover and who will not.

But the basic neuroscience advance is interesting in its own right. Understanding why certain brain processes are conscious and others are not will take us a long way to building a model of overall brain function.


161 Responses to “The Global Workspace – Consciousness Explained?”

  1. johnc on 23 Mar 2010 at 11:58 am

    Fantastic post, this is kind of stuff I’m bookmarking. I wasn’t aware of the global workspace theory.

    Finding the neurological correlate of consciousness is one step to creating a digital one, methinks. I don’t think it’ll be long after this part of the puzzle is cracked that we see some incredible advances in ‘artificial’ intelligence.

  2. daedalus2u on 23 Mar 2010 at 2:03 pm

    My suspicion is that consciousness is actually an illusion, a result of the HAAD mentioned in the earlier post. Certainly there is not a “thing” that is called “consciousness” that persists unchanged over an individual’s lifetime.

    If consciousness is an illusion, then we might be able to create that illusion in AI, but that may just be another example of HAAD. We already have many individuals who attribute agency to mechanical things, and many who do not. I am not sure that there is any way to actually test for consciousness.

  3. petrossa on 23 Mar 2010 at 2:44 pm

    This gives rise to the thesis that ‘our’ consciousness is just along for the ride. Although ‘we’ can plan and act accordingly, when it comes to real-time environmental interaction it’s our other consciousness which calls the shots.

    This has far reaching consequences for the premise of ‘free will’. Who has the free will? Which consciousness do we hold accountable? Or do we just hold accountable the one which can make itself heard, even though in reality that consciousness actually hasn’t a clue why its body did what it did and has to concoct an explanation itself.

    Excerpt from a little thought I wrote
    http://www.neowin.net/forum/blog/316/entry-3140-free-will-does-it-exist/

  4. bluskool on 23 Mar 2010 at 3:43 pm

    Have there been any studies looking at DMN in subjects who are sleeping? If so, is the level of activity decreased like it is in patients in a PVS or a minimally conscious state?

  5. artfulD on 23 Mar 2010 at 3:43 pm

    “We are not conscious of everything we see and hear, nor of all of the information processing occurring in our own brains. We are aware of only a small subset of input and processing, which is woven together into a continuous and seamless narrative that we experience.”

    Shouldn’t that last be, “we are ‘conscious’ of only a small subset of input, etc.” – because we can be aware yet not consciously so, yet can’t be conscious without being aware of that consciousness.

    Otherwise an unusually good post.

  6. Rob Hebert on 23 Mar 2010 at 4:35 pm

    Great stuff. This is the kind of research that gets me very excited about future developments.

  7. bluskool on 23 Mar 2010 at 8:53 pm

    My suspicion is that consciousness is actually an illusion, a result of the HAAD mentioned in the earlier post.

    How could consciousness be an illusion? In order for an illusion to happen, there has to be something to experience the illusion. If our conscious is an illusion, what is perceiving that illusion?

  8. tmac57 on 23 Mar 2010 at 9:54 pm

    How does memory play a part in all of this? I am reminded of cases of patients with brain damage that have very brief, short term memory, and forget everything that just happened to them within a few moments. They were conscious enough to meet someone ‘new’ to them, and have a conversation, but could literally turn their backs on them and forget that they just met (loss of consciousness?). Their long term memories enable them to function as a conscious person, but, do they have defective ‘workspaces’, or are they just unable to move ‘new files’ from their workspace into long term memory because of some other deficit?

  9. Yoinkel Finkelblatt on 24 Mar 2010 at 1:36 am

    This is pretty far from explaining consciousness, at least pretty far from describing the mechanism for the generation of mental phenomena — i.e. the hard problem. That our experience can be correlated perfectly to a neurological substrate is useful from a clinical standpoint, but we can’t take for granted that this is a complete accounting of a mechanism. I’m not entirely sure that there isn’t something more fundamental that we are missing in our accounting of consciousness, an electromagnetic or field component. To be fair, we have no evidence or outstanding need in the literature to call for such evidence, but our understanding of real time in vivo functionality of neural microcircuitry is far from complete and I think we need a deeper humility about this topic all around. The brain is the most complex thing we have ever encountered in the universe, we need to be clear that we don’t even begin to understand all the rules of the game it is playing.

  10. BillyJoe7 on 24 Mar 2010 at 5:37 am

    “Their long term memories enable them to function as a conscious person, but, do they have defective ‘workspaces’, or are they just unable to move ‘new files’ from their workspace into long term memory because of some other deficit?”

    My understanding is that it is the latter.

  11. BillyJoe7 on 24 Mar 2010 at 5:47 am

    bluskool,

    “How could consciousness be an illusion? In order for an illusion to happen, there has to be something to experience the illusion. If our conscious is an illusion, what is perceiving that illusion?”

    “Consciousness” is an illusion like the “self” is an illusion.
    This pretty well has to be the case if we are going to stick with the scientific/materialist assumption.
    “Consciousness” and “self” arise out of the function of the brain. To speak of something or someone doing the experiencing is to invoke dualism – mind and spirit, or mind and soul – and this is a philosophy without an evidential base.
    To speak of “my brain” is an understandable error. “You” don’t have a brain, there is just a brain that produces the illusion of “you”.

    So, are you a materialist or a dualist?

  12. bluskool on 24 Mar 2010 at 8:13 am

    Materialist.
    You only said that it has to be an illusion for materialism to be true, but I disagree. You said that to speak of something or someone experiencing is to invoke dualism. This is not so. The brain is something. The sum total of your genes, hormones, neurons, etc. and their interactions with their environment is someone.
    Yes, the brain produces consciousness, but it isn’t an illusion – it actually is something that happens. The conscious experiences illusions (magic tricks, 3d movies, etc…), so I don’t see how it could be an illusion. How can an illusion experience an illusion?

  13. davidsmith on 24 Mar 2010 at 1:18 pm

    So called neuroscientific theories of conscious experience always get so far and then, suddenly, erm, well, distributed brain activity, recurrent loops or whatever neurodynamic description happens to be flavour of the month creates an experience.

    How exactly does this happen?

    Can someone follow me through the steps from neurodynamics to conscious experience in easy to understand language? I’ll put a bet down that we’ll only get so far as a description of neurodynamics every time. Does neuroscience simply ignore the philosophical problems associated with a physicalist approach to conscious experience, or does it carry on with the hope that such problems might just magically go away?

  14. M. Davies on 24 Mar 2010 at 1:38 pm

    @davidsmith

    Some (vulgar) physicalist approaches simply say ‘neurodynamics is consciousness’ or ‘where there is neurodynamics of sort X, there is consciousness’.

    However, I am not sure that approach resolves the problem of subjective experience, that is, if you ask someone ‘are you aware of your experience’ and you get the answer ‘yes I am, I know what it is like to experience things’, how do you know whether that is a report of internal states or whether it is a mechanism which works as follows: ‘when prompted about your self-awareness, assert that you have it, whether you do or not’. To put it another way, how do you differentiate an entity which ‘actually sees red’ from one which checks stimulus input and says ‘it appears I am seeing red’.

    I am more inclined to think that BillyJoe7 is on to something and that ‘consciousness’ is a placeholder for all sorts of things, and semantically might lead us astray. Also, the invoking of consciousness has more to do with ethical issues than epistemological ones (I say extremely provisionally).

  15. M. Davies on 24 Mar 2010 at 1:42 pm

    (to clarify, not all physicalist approaches are vulgar, i.e. unsophisticated or superficial; but the one I provided above certainly is)

  16. davidsmith on 24 Mar 2010 at 2:08 pm

    “However, I am not sure that approach resolves the problem of subjective experience, that is, if you ask someone ‘are you aware of your experience’ and you get the answer ‘yes I am, I know what it is like to experience things’, how do you know whether that is a report of internal states or whether it is a mechanism which works as follows: ‘when prompted about your self-awareness, assert that you have it, whether you do or not’. To put it another way, how do you differentiate an entity which ‘actually sees red’ from one which checks stimulus input and says ‘it appears I am seeing red’.”

    That appears to be more of a description of the observational problems associated with assuming that a third person perspective is distinct from first person subjective experience, rather than a problem of subjective experience itself. Either way, why is that relevant? Just refer to your own subjective experience and compare that with how physical explanations are constructed. I fail to see how the latter could ever explain the former, no matter how sophisticated the neurodynamic description.

    With regards to type-identity theories, they seem like just-so stories. The two sides of the equation refer to totally different properties; one is physical description, the other is subjective experience. It’s certainly ‘simple’ in terms of philosophical content but it doesn’t make any sense whatsoever.

  17. M. Davies on 24 Mar 2010 at 2:18 pm

    Um, I’m agreeing with you?

  18. davidsmith on 24 Mar 2010 at 2:36 pm

    Oh right! When I was reading your reply I wasn’t sure. In the bit I quoted, I thought you were saying that it’s the job of neuroscience to provide a description of the differences between neurodynamic processes associated with conscious experience and those that aren’t, without the need to provide an account of how neurodynamics produce conscious experience in the first place. Sorry if I misunderstood.

  19. BillyJoe7 on 24 Mar 2010 at 4:45 pm

    bluskool

    “The brain is something.”

    Agreed.

    “The sum total of your genes, hormones, neurons, etc. and their interactions with their environment is someone.”

    In common parlance, what you said above no one would argue with. However, strictly, in a scientific/materialist sense it is false. Here is the translation in materialist terms:
    “The sum total of the body’s genes, hormones, neurons, etc. and their interactions with their environment produces, through its brain, an illusion called “you”.”

    “Yes, the brain produces consciousness, but it isn’t an illusion – it actually is something that happens.”

    Illusions are not nothing.
    If you look at the checkerboard illusion:
    http://www.123opticalillusions.com/pages/opticalillusions40.php
    The squares A and B do look different – almost as different as black and white – but in actual fact they are identical. This is an illusion, but would you say this doesn’t exist? The illusion of “self” is as convincing, even more so.

    “How can an illusion experience an illusion?”

    It doesn’t. The brain of that body we referred to above produces that illusion, just like the juxtaposition of pixels in that checkerboard produces the illusion that those squares are different.

  20. Charles W on 24 Mar 2010 at 8:29 pm

    “how do you differentiate an entity which ‘actually sees red’ from one which checks stimulus input and says ‘it appears I am seeing red’.”

    This is the topic of Chapter II of Rorty’s “Philosophy and the Mirror of Nature”. The latter entity he calls an “Antipodean”, an entity which knows all about its own neurology; as opposed to “Terrans” – ie, us – who don’t. I’m on reading number one and don’t anticipate being competent to elaborate until reading number 3 or 4, if ever; so if interested, you’re on your own for now.

    In the meantime, something to mull over: literally “seeing red” presumably amounts to nothing more than excitation of specific neurons in specific modes. But our “experience” of “seeing red” is a “mental image” of something that “looks red”. So, why do we need that mental image, and who/what is “viewing” it? Translated into Dennett’s lingo, why do we create a virtual Cartesian theater and a virtual entity (the self?) sitting in it “watching the show”?

    I have some vague notions of answers but I’d be interested in others’ takes on that question.

    Also, for those interested in current concepts of consciousness, try Susan Blackmore’s “Conversations on Consciousness”, a collection of interviews with twenty leading researchers on consciousness. Not much depth, but a good overview for those – like me – newly interested in the topic.

  21. M. Davies on 24 Mar 2010 at 8:47 pm

    Ha! I was thinking of Rorty’s PMN when I wrote that comment. It’s well worth any time you devote to it, so enjoy.

    I find ‘The Nature of Consciousness: Philosophical Debates’ by Block, Flanagan, and Güzeldere to be an excellent anthology for the experienced reader. It’s from 1997 but it has the major works in it.

  22. BillyJoe7 on 24 Mar 2010 at 11:18 pm

    Charles:

    ” Translated into Dennett’s lingo, why do we create a virtual Cartesian theater and a virtual entity (the self?) sitting in it “watching the show”?”

    “We” don’t.
    The brain does.
    Why does the brain create the illusion of a self sitting in a theatre?

    Presumably so that the genes which are the blueprint for the brain that creates this illusion have a better chance of surviving into the next generation. At least that was the reason throughout most of evolutionary history.

    Presumably, genes which produce a brain which produce a “self” do better than those that produce a zombie brain.
    Maybe, put in this way, it’s not so hard to see why.

  23. Charles W on 25 Mar 2010 at 10:19 am

    BJ7 -

    My question was specifically with respect to the next step: the optical neuronal “data” are there and the brain is ready to “process” that data. The processing has evolved to be such that our experience is of watching a movie of the outside world. Why?

    For example, take a simple scenario. A person is facing a white wall with a small black circle painted on it. The objective is to touch the circle with a finger. In principle, the brain could simply take the optical neuronal data, do the appropriate geometrical “calculations” to locate the circle relative to the body, determine the appropriate motor neuronal excitations to move the finger, and execute them. But in addition there is a mental image of the wall and the circle as viewed by a virtual self. Why? Ie, what evolutionary benefits accrue from that extra processing?

    The “vague notion” to which I alluded is that since everything necessary to create the virtual Cartesian theater is in place, even a subtle benefit from doing so might be sufficient evolutionary motivation. For example, in principle, a person with the relevant skills could “consciously” do the geometric computations; but no one does. In a sense, we just start moving the arm-hand-finger complement, “watch” what happens, and correct. So, perhaps the visualization helps in the predictive aspects of the resulting servomechanism.

    I was soliciting ideas among those lines.

  24. Charles W on 25 Mar 2010 at 10:30 am

    “along” those lines.

  25. M. Davies on 25 Mar 2010 at 10:45 am

    @BillyJoe7

    Presumably, genes which produce a brain which produce a “self” do better than those that produce a zombie brain.

    How do you propose to distinguish a zombie brain from a brain which produces a self? A person with a zombie brain produces the same reports as a person with a ‘self-brain’, and would presumably be identical to third-person observation, including observations of neural imaging. Like, if I told you that I had a zombie brain, that I have no ‘qualitative experience’*, could you convince me or demonstrate otherwise?

    *only the lazy will go for the easy joke here

  26. Steven Novella on 25 Mar 2010 at 11:16 am

    davidsmith – I think your latter option is close to the truth – neuroscientists carry on hoping the hard problem will simply fade away. I think this is a good approach, and is working so far.

    The philosophical issues are interesting and meaningful, but I think ultimately stem from limitations of language and our conceptual grasp. Specifically, the brain is trying to understand itself, but it is limited by what it can do.

    Here is a partial analogy that may be illuminating – people with damage to one side of their brain resulting in neglect cannot think about the opposite side of the world. There is nothing you can do to explain this to them – they can never understand their new limitations (until their brains recover a bit).

    We cannot know what cognitive limitations are inherent to being human, unless and until we have something else to compare it to.

    That aside – neuroscience is progressing just fine at solving the so-called “easy problems” of neuroscience. The fact that we cannot philosophically solve the hard problem does not seem to impair progress on the easy problems. Which leads to the possibility that the hard problem is not a problem at all (as Dennett contends) and it will simply fade away when the easy problems are solved.

    Further, with regard to the “why are we all not just zombies” question, there may be a reason, but there does not have to be. Self awareness can be an emergent property – it’s just what happens when you have a complex nervous system that needs to pay attention and be motivated to take certain actions.

  27. M. Davies on 25 Mar 2010 at 11:49 am

    neuroscientists carry on hoping the hard problem will simply fade away. I think this is a good approach, and is working so far.

    The fact that neuroscientists can do neuroscience without having to deal with the hard problem does not mean that the problem is solved or dissolved, simply that it doesn’t fall under their purview. As you say, neuroscientists can work on ‘easy’ problems (no pejorative intended) quite successfully without having to deal with the hard problem. It’s also possible to be an engineer or physicist without answering ‘what causes mass’; that doesn’t mean that question is dissolved.

    How do you think neuroscience could answer this problem:

    If you ask someone ‘are you aware of your experience’ and you get the answer ‘yes I am, I know what it is like to experience things’, how do you know whether that is a report of internal states or whether it is a mechanism which works as follows: ‘when prompted about your self-awareness, assert that you have it, whether you do or not’. To put it another way, how do you differentiate an entity which ‘actually sees red’ from one which checks stimulus input and says ‘according to my inputs I am seeing red’.

    Can neuroscience tell me whether I am a zombie or not? I think I agree with Dennett but also think we interpret his claims differently.

  28. Steven Novella on 25 Mar 2010 at 12:00 pm

    I agree, and I have stated before, that the only reason I know you are not a zombie is because I am not a zombie, and it is reasonable to assume that I am not unique.

    If we create human-level AI that is indistinguishable from human-level intelligence, we will still not know empirically that the AI is self-aware in the way we are self-aware. We can infer this if the computer brain functions in a way similar to a human brain, but that’s it.

    I like your analogy about mass. I have written about dualism in the past, specifically those that use the hard problem to argue that therefore the brain does not fully cause consciousness. This is similar to saying that because engineers do not have a theory of mass and energy, that internal combustion is not entirely responsible for the propulsion of a car – that some magic is at work.

    In other words – it works both ways. Solving the easy problems may not make the hard problem vanish, but the hard problem does not invalidate the solutions to the easy problems.

    I do think, however, that the hard problem may end up being of no practical consequence.

  29. Charles W on 25 Mar 2010 at 2:42 pm

    Prof Novella -

    I really appreciate your comments re “the hard problem” (in particular, the role of language limitations in making it “hard”) and the emergent property view. Based on my reading about consciousness, I’ve come to similar conclusions, and it’s nice to have reassurance that I might be on roughly the right track.

    Especially intriguing was your phrase “motivated to take certain actions”. Having no problem with the possibility (IMO, probability) of being to some degree a “zombie”, I have been trying to look at the problem from the perspective of viewing us as (very complex) stimulus-response systems (which comes naturally since I’m a systems engineer). So, I’m curious how much I should read into that phrase.

    Thus far, I’ve found the systems view a helpful perspective in trying to get a grip on a variety of issues. For example, I would answer M Davies’ “problem” as follows. I don’t understand the implied distinction between the causes for responses to questions like “Do you have self-awareness?” I infer that the inner-state-reporting cause is supposed to be human-like, the programmed-response cause zombie-like, but since I view us humans as being “programmed” by our culture to respond affirmatively, I see no difference. (See note below.) Just look at how hard it seems to be for even the professionally trained to even entertain the possibility that self-awareness is an illusion.

    Similarly when we are “seeing red”. As I described earlier, in my view our brains essentially do “check stimulus input” and – if asked whether we are seeing red – cause an affirmative response. The additional phenomenal effect that red appears in the virtual Cartesian theater seems a separate issue.

    All, I should emphasize, IM-unprofessional-O; I may be entirely wrong and almost certainly am at least some wrong.
    =================
    [Note]

    There is no reason, of course, to give any credence to my claiming this. But some – especially Rorty fans – might be interested in this passage on p. 374 of “Rorty and His Critics”:

    “Would there still be snow if nobody ever talked about it? Sure. Why? Because according to the norms we invoke when we use “snow”, we are supposed to answer this question affirmatively. (If you think that glib and ethnocentric answer not good enough, it is because you are still in the grip of the scheme-content distinction.)”

    BTW, I consider the Ramberg essay and Rorty’s response in this book must-reading for Rorty fans. In his response, Rorty abandons some of his signature positions.

  30. bluskool on 25 Mar 2010 at 3:27 pm

    “The brain is something.”
    Agreed.
    “The sum total of your genes, hormones, neurons, etc. and their interactions with their environment is someone.”
    In common parlance, what you said above no one would argue with. However, strictly, in a scientific/materialist sense it is false. Here is the translation in materialist terms:
    “The sum total of the body’s genes, hormones, neurons, etc. and their interactions with their environment produces, through its brain, an illusion called “you”.”
    “Yes, the brain produces consciousness, but it isn’t an illusion – it actually is something that happens.”
    Illusions are not nothing.
    If you look at the checkerboard illusion:
    http://www.123opticalillusions.com/pages/opticalillusions40.php
    The squares A and B do look different – almost as different as black and white – but in actual fact they are identical. This is an illusion, but would you say this doesn’t exist? The illusion of “self” is as convincing, even more so.
    “How can an illusion experience an illusion?”
    It doesn’t. The brain of that body we referred to above produces that illusion, just like the juxtaposition of pixels in that checkerboard produces the illusion that those squares are different.

    I think we are just arguing over semantics here. I would say that you are your brain, not an illusion created by your brain. Really we mean the same thing I think.

  31. Charles W on 25 Mar 2010 at 3:50 pm

    Test (recently submitted comments seem to be disappearing down a rat hole)

  32. BillyJoe7 on 25 Mar 2010 at 5:02 pm

    M. Davies:

    “A person with a zombie brain produces the same reports as a person with a ’self-brain’,and would presumably be identical to third-person observation, including observations of neural imaging.”

    I don’t think your presumption is a reasonable one?

    “How do you propose to distinguish a zombie brain from a brain which produces a self?”

    Your underlying presumption in asking this question is that it is possible for a zombie brain to do the same work as a brain that produces a self. If that is so, why has evolution gone to the expense of producing a self when a self provides no survival advantage?

    “Like, if I told you that I had a zombie brain, that I have no ‘qualitative experience’*, could you convince me or demonstrate otherwise?”

    I am presuming a “theory of mind” or an “intentional stance”.
    It is based on the fact that there is someone that it is like to be me so I’m presuming that there is something that it is like to be you. Can I prove it? No. But I think it is the more reasonable presumption.
    I know that a brain that produces a self can do all the things that I can do, but I don’t know that a zombie brain could do all those things. Presumably you are in the same position.

  33. M. Davies on 25 Mar 2010 at 6:28 pm

    @BillyJoe7

    I don’t think your presumption is a reasonable one?

    It’s the standard definition of a philosophical zombie. If the question is ‘does entity X have subjective experience’ and your response is ‘well anything which is like my brain must also have subjective experience’ then that is question-begging. It asserts as fact that which it has yet to prove via argument or evidence.

    Sure, it’s a reasonable presumption, but skepticism isn’t founded on ‘my presumptions let me get by in the world, so that’s enough’.

    I’m not sure who I am disagreeing with here. If you say that zombie brains don’t exist, and cite people’s reports of a sense of self as proof, then you are going around in circles, because zombies do all the things everyone else does (including a report of having a self), they simply lack ‘conscious experience’. It’s not ‘like anything’ to be a philosophical zombie.

    As for my personal stance on the issue, it is a mistake to say ‘well, we could have had zombie brains but evolution made us otherwise’ because there is no ‘otherwise’, but in the opposite direction from BillyJoe7. Dennett’s response to the philosophical zombie isn’t that zombies don’t exist and we actually do have qualia after all; it’s that everything in the definition of a philosophical zombie accounts for consciousness. The dissolution of the hard problem of consciousness doesn’t dissolve the problem and retain consciousness, it dissolves ‘consciousness’ as a meaningful entity as well. I’m fine with that, too! However, it no longer means that we can say ‘well, entity X has consciousness, entity Y does not’, but have to be more discrete in what we are talking about. ‘Entity Y reports the colour red; entity Z demonstrates activity in the parietal lobe; entity M has c-fibers firing, entity L appears to exercise the faculty of memory’ and these might apply to people in a coma, to my computer, to your parrot, and so forth. This is why I said earlier that ‘consciousness’ usually points to ethical rather than epistemological issues – consciousness is often and historically cited as a marker for giving something ethical consideration or treating it as a moral agent.

  34. Charles W on 25 Mar 2010 at 7:06 pm

    Third try …

    Prof Novella -

    I really appreciate your comments re “the hard problem” (in particular, the role of language limitations in making it “hard”) and the emergent property view. Based on my reading about consciousness, I’ve come to similar conclusions, and it’s nice to have reassurance that I might be on roughly the right track.

    Especially intriguing was your phrase “motivated to take certain actions”. Having no problem with the possibility (IMO, probability) of being to some degree a “zombie”, I have been trying to look at the problem from the perspective of viewing us as (very complex) stimulus-response systems (which comes naturally since I’m a systems engineer). So, I’m curious how much I should read into that phrase.

  35. Charles W on 25 Mar 2010 at 7:07 pm

    and to continue …

    Thus far, I’ve found the systems view a helpful perspective in trying to get a grip on a variety of issues. For example, I would answer M Davies’ “problem” as follows. I don’t understand the implied distinction between the causes for responses to questions like “Do you have self-awareness?” I infer that the inner-state-reporting cause is supposed to be human-like, the programmed-response cause zombie-like, but since I view us humans as being “programmed” by our culture to respond affirmatively, I see no difference. (See note below.) Just look at how hard it seems to be for even the professionally trained to even entertain the possibility that self-awareness is an illusion.

    Similarly when we are “seeing red”. As I described earlier, in my view our brains essentially do “check stimulus input” and – if asked whether we are seeing red – cause an affirmative response. The additional phenomenal effect that red appears in the virtual Cartesian theater seems a separate issue.

    All, I should emphasize, IM-unprofessional-O; I may be entirely wrong and almost certainly am at least some wrong.
    =================
    [Note]

    There is no reason, of course, to give any credence to my claiming this. But some – especially Rorty fans – might be interested in this passage on p. 374 of “Rorty and His Critics”:

    “Would there still be snow if nobody ever talked about it? Sure. Why? Because according to the norms we invoke when we use “snow”, we are supposed to answer this question affirmatively. (If you think that glib and ethnocentric answer not good enough, it is because you are still in the grip of the scheme-content distinction.)”

  36. Charles W on 25 Mar 2010 at 7:08 pm

    and to continue …

    Thus far, I’ve found the systems view a helpful perspective in trying to get a grip on a variety of issues. For example, I would answer M Davies’ “problem” as follows. I don’t understand the implied distinction between the causes for responses to questions like “Do you have self-awareness?” I infer that the inner-state-reporting cause is supposed to be human-like, the programmed-response cause zombie-like, but since I view us humans as being “programmed” by our culture to respond affirmatively, I see no difference. (See note below.) Just look at how hard it seems to be for even the professionally trained to even entertain the possibility that self-awareness is an illusion.

    Similarly when we are “seeing red”. As I described earlier, in my view our brains essentially do “check stimulus input” and – if asked whether we are seeing red – cause an affirmative response. The additional phenomenal effect that red appears in the virtual Cartesian theater seems a separate issue.

    All, I should emphasize, IM-unprofessional-O; I may be entirely wrong and almost certainly am at least some wrong.

  37. Charles W on 25 Mar 2010 at 7:09 pm

    and the note …

    [Note]

    There is no reason, of course, to give any credence to my claiming this. But some – especially Rorty fans – might be interested in this passage on p. 374 of “Rorty and His Critics”:

    “Would there still be snow if nobody ever talked about it? Sure. Why? Because according to the norms we invoke when we use “snow”, we are supposed to answer this question affirmatively. (If you think that glib and ethnocentric answer not good enough, it is because you are still in the grip of the scheme-content distinction.)”

  38. BillyJoe7 on 25 Mar 2010 at 11:44 pm

    bluskool.

    “I think we are just arguing over semantics here. I would say that you are your brain, not an illusion created by your brain. Really we mean the same thing I think.”

    Not really.
    If you create a puppet, is the puppet you? No.
    Similarly, if the brain creates you, you are not your brain.

    If “you” think “you” are the one making decisions, how do you explain the 300 millisecond delay between the decision being made and “you” becoming aware of it (see the article).
    In fact, the brain makes the decision and then it lets the self know about it, making the self feel like it made the decision.

    The feeling of the self being in control of the brain is actually an illusion produced by the brain.

  39. BillyJoe7 on 26 Mar 2010 at 5:27 am

    M. Davies,

    “zombies do all the things everyone else does (including a report of having a self), they simply lack ‘conscious experience’. It’s not ‘like anything’ to be a philosophical zombie.”

    Yes, I understand the concept of a p-zombie.
    But P-zombies are philosophical thought experiments.
    They are defined in such a way as to be unfalsifiable. The p-zombie hypothesis also does not make predictions that can be tested. In other words, p-zombies cannot be considered a scientific concept.

    “If the question is ‘does entity X have subjective experience’ and your response is ‘well anything which is like my brain must also have subjective experience’ then that is question-begging.”

    But that’s not what I said. What I said was:
    I know I am not a P-zombie. I look about me and see everyone behaving and reacting more or less like I do and, extrapolating from my own experience, I make the reasonable assumption that there is someone at home inside those other bodies.
    So, this is my assumption which I am saying is more reasonable than the assumption that some are actually p-zombies.
    That was not my evidence.
    I did hint at some evidence though…

    “Sure, it’s a reasonable presumption, but skepticism isn’t founded on ‘my presumptions let me get by in the world, so that’s enough”.

    The evidence is from evolution.
    Throughout evolutionary history, there were in general vastly more progeny entering the next generation than there was food to support them. The struggle for existence was, in fact, caused by the scarcity of food. Evolution, in general, favoured obtaining the largest amount of food for the least energy expenditure. The energy cost of a “self” is extraordinarily high. If the same could be achieved without it, there would be no advantage for brains to produce selves.

    “Dennett’s response…”

    I don’t recognise Dennett at all in your summation of his ideas.
    But the discussion is already complicated enough so I will leave that be for the moment.

  40. bluskool on 26 Mar 2010 at 9:17 am

    Not really.
    If you create a puppet, is the puppet you? No.
    Similarly, if the brain creates you, you are not your brain.

    If you create a puppet and go out into the world and talk through the puppet, why not just save a step and go out into the world and just talk yourself?
    If it aids your understanding to say that the brain creates the self rather than the brain is the self, by all means say that. I personally don’t see where it really makes any difference. But don’t pretend that your way of conceptualizing is more “scientific.”

    If “you” think “you” are the one making decisions, how do you explain the 300 millisecond delay between the decision being made and “you” becoming aware of it (see the article).
    In fact, the brain makes the decision and then it lets the self know about it, making the self feel like it made the decision.
    The feeling of the self being in control of the brain is actually an illusion produced by the brain.

    Okay, I see the problem here. You are confusing two different concepts – free will and consciousness. Although they are related, they are not the same thing. I never said anything about free will, but since you bring it up I would say yes, free will is an illusion. More specifically, contra-causal free will is an illusion.
    It does appear that we make uncaused choices, but we really don’t, so it makes sense to say that free will is an illusion. However, it is not the case that it appears we are conscious, but really aren’t. So saying that consciousness is an illusion makes no sense to me.

  41. davidsmith on 26 Mar 2010 at 9:35 am

    “I like your analogy about mass. I have written about dualism in the past, specifically those that use the hard problem to argue that therefore the brain does not fully cause consciousness. This is similar to saying that because engineers do not have a theory of mass and energy, that internal combustion is not entirely responsible for the propulsion of a car – that some magic is at work.”

    I don’t think the analogy of mass is similar to the situation posed by the hard problem. The hard problem is a statement about the incompatibility of physical explanation with conscious experience. Nobody would claim such incompatibility between an explanation of propulsion in terms of internal combustion. The latter is clearly and demonstrably defined physically in terms of quantitative relationships. Conscious experience on the other hand is not, which is the whole point of the hard problem. Any comparison to the hard problem that is based on a relationship between a physically defined phenomenon and a physical explanation of it is inappropriate.

  42. davidsmith on 26 Mar 2010 at 9:38 am

    I said,

    “The hard problem is a statement about the incompatibility of physical explanation with conscious experience. Nobody would claim such incompatibility between an explanation of propulsion in terms of internal combustion. The latter is clearly and demonstrably defined physically in terms of quantitative relationships.”

    I made a mistake. I meant to say,

    “The hard problem is a statement about the incompatibility of physical explanation with conscious experience. Nobody would claim such incompatibility between an explanation of propulsion in terms of internal combustion. Propulsion is clearly and demonstrably defined physically in terms of quantitative relationships.”

  43. M. Davies on 26 Mar 2010 at 10:56 am

    @Charles W
    in my view our brains essentially do “check stimulus input” and – if asked whether we are seeing red – cause an affirmative response. The additional phenomenal effect that red appears in the virtual Cartesian theater seems a separate issue.

    We’re probably more on board than off, but I think if you say there is an ‘additional phenomenal effect’ then you remain in the realm of the hard problem, and to assert a ‘virtual Cartesian theater’ (how is a virtual one different than a regular C-theater? Artificial butter on the popcorn maybe) seems to me to be a kind of dualism.

    BillyJoe7
    P-zombies are philosophical thought experiments.
    They are defined in such a way as to be unfalsifiable. The p-zombie hypothesis also does not make predictions that can be tested. In other words, p-zombies cannot be considered a scientific concept.

    And the point of the thought experiment is to test your intuitions, not to assert factual claims about the world, so I don’t see the problem. If p-zombies are defined so as to be unfalsifiable, such that they can’t make predictions that can be tested, the same goes for subjective experience. In other words, subjective experience (consciousness), by this logic, cannot be considered a scientific concept. See my next point.

    I know I am not a P-zombie. I look about me and see everyone behaving and reacting more or less like I do and, extrapolating from my own experience, I make the reasonable assumption that there is someone at home inside those other bodies. So, this is my assumption which I am saying is more reasonable than the assumption that some are actually p-zombies.

    You claim that they (and you) have property X (consciousness). I bet I can in theory explain everything about them without invoking property X. What can you explain, or hope to explain, about them thanks to property X, besides asserting their possession of property X? What if I treat people as very sophisticated functional automata, with the ability to generate self-reports? Suppose I told you I found a p-zombie (or a few million of them), who consented to have us study her. Could you prove that she was not a p-zombie?

    As for the evolution example, do I understand your argument?
    Are you saying the following:
    (1) Using less energy has an evolutionary advantage.
    (2) Brains which do not produce selves use less energy, and thus, would appear to have the evolutionary advantage.
    (3) However, since brains with selves exist, this shows that they had an evolutionary advantage.

    If this is your argument, it commits petitio principii, it assumes the existence of a ‘sense of self’ in step 3, it doesn’t demonstrate it.

  44. M. Davies on 26 Mar 2010 at 11:03 am

    @davidsmith
    I don’t think the analogy of mass is similar to the situation posed by the hard problem.

    SN’s analogy or mine? I think they are addressing different things.

    In response to SN’s analogy, I would say yes, the successful propulsion of a car and a detailed description of internal combustion doesn’t say why things have mass to begin with, just like functional neuroscience explanations can tell us all sorts of things about the brain’s function and self-reports but they don’t say whether and why people have consciousness to begin with.

  45. Charles W on 26 Mar 2010 at 1:53 pm

    M. Davies -

    My comment at 25 Mar 2010 at 10:19 am is the best I can do at explaining what I mean by the term “virtual CT”. And there I was addressing why, not how. But I suspect that ideas about the former would help in addressing the latter.

  46. BillyJoe7 on 27 Mar 2010 at 1:40 am

    bluskool,

    “If it aids your understanding to say that the brain creates the self rather than the brain is the self, by all means say that. ”

    To say that the brain creates a self is not the same as saying the brain IS the self. The brain is more than just a self. Likewise you are more than just your puppet. You may do all your speaking and interacting with the world through your puppet, but you will always be more than your puppet.

    ” I never said anything about free will, but since you bring it up I would say yes, free will is an illusion.”

    I wasn’t specifically talking about free will either but I agree that free will is an illusion.

    “It does appear that we make uncaused choices, but we really don’t, so it makes sense to say that free will is an illusion.”

    It appears we agree on a lot. :)

    ” However, it is not the case that it appears we are conscious, but really aren’t. So saying that consciousness is an illusion makes no sense to me.”

    In the checkerboard illusion, the bits marked A and B do exist, it’s just that they seem to be something they are not. They seem to be different colours but, in reality, they are identical in colour. The illusion is not that they exist (they do) but that they are different colours (they aren’t).

    Similarly with “self”:
    The self exists, but it seems to be something that it is not.
    The self seems to control the brain.
    In reality, the brain produces and controls the “self”.
    The illusion, then, is not that the self exists (it does), the illusion is that the self is in control of the brain (it isn’t).

    And similarly with “consciousness”:
    Consciousness exists, but it seems to be something that it is not. Consciousness seems to be part of what enables the self to control the brain. In reality, consciousness is what the brain produces in order to produce the self.

  47. BillyJoe7 on 27 Mar 2010 at 2:49 am

    M. Davies,

    “And the point of the thought experiment is to test your intuitions, not to assert factual claims about the world, so I don’t see the problem.”

    But the conclusion that p-zombies do not exist is not mere intuition.
    That conclusion is based on a number of scientific facts (see below).
    It is, of course, not a 100% water-tight conclusion, but it is not mere intuition.

    “If p-zombies are defined so as to be unfalsifiable, such that they can’t make predictions that can be tested, the same goes for subjective experience.”

    A conscious brain does not fall into the same category as a p-zombie. I know that I am conscious. So there is at least one example of a conscious brain. That is a fact, not an hypothesis. True, I can only infer from that that you are conscious but you could not say there is no fact underlying that inference.

    “You claim that they (and you) have property X (consciousness). I bet I can in theory explain everything about them without invoking property X. What can you explain, or hope to explain, about them thanks to property X, besides asserting their possession of property X? What if I treat people as very sophisticated functional automata, with the ability to generate self-reports? Suppose I told you I found a p-zombie (or a few million of them), who consented to have us study her. Could you prove that she was not a p-zombie?”

    The burden of proof is yours though.
    I already have one example of a brain that is conscious (and so do you). I have no examples of a p-zombie (and neither do you). So I know what conscious brains are capable of (and so do you). I have no idea what a p-zombies could be capable of (and neither do you).
    In other words, your claims about what p-zombies are capable of are pure speculation.

    “As for the evolution example, do I understand your argument? … If this is your argument, it commits petitio principii, it assumes the existence of a ’sense of self’ in step 3″

    I would put it more like this:
    There are three facts upon which my inference is based.

    FACT: there is at least one instance of a conscious brain.
    FACT: the facts of evolution tell us that, in order to maximise the chances of survival, life needs to be as parsimonious as possible in energy terms.
    FACT: the production of a conscious brain is extremely costly in energy terms and would have no survival value if there was no benefit compared with a p-zombie.

    INFERENCE: every life form that is capable of doing all the sorts of things that I am capable of doing is conscious.

    That is not 100% proof, but it is not pure speculation either.
    That promotes the conscious brain to the status of a scientific concept and relegates a p-zombie to the status of a philosophical thought experiment.

    regards,
    BillyJoe

  48. bluskool on 27 Mar 2010 at 10:47 am

    To say that the brain creates a self is not the same as saying the brain IS the self. The brain is more than just a self. Likewise you are more than just your puppet. You may do all your speaking and interacting with the world through your puppet, but you will always be more than your puppet.

    When I say the brain IS the self, that doesn’t mean that the brain is only the self. Like if someone says Daniel is tall, that doesn’t mean that I am only tall.
    When you say the brain creates the self, that makes it sound like the self is separate from the brain, which it isn’t. That is why I wouldn’t use that language. It sounds dualistic.

    I wasn’t specifically talking about free will either but I agree that free will is an illusion.

    Yes, you didn’t specifically say it, but you were talking about control. Free will entails the idea that you are in control of your actions.

    And similarly with “consciousness”:
    Consciousness exists, but it seems to be something that it is not. Consciousness seems to be part of what enables the self to control the brain. In reality, consciousness is what the brain produces in order to produce the self.

    That’s what I meant when I said we are arguing over semantics. I basically agree with what you are saying, but I wouldn’t phrase it as “consciousness is an illusion.” I would say “uncaused choices,” “free will” or “control” is an illusion.

  49. Charles W on 27 Mar 2010 at 11:33 am

    “We’re probably more on board than off”

    Correct. I belatedly read your comment at 25 Mar 2010 at 6:28 pm (I don’t find debates about zombies terribly informative and tend to ignore them) and agree with your “personal stance” (except for the “ethical” part, a perspective with which I am unfamiliar). I think a big problem in consciousness discussions is absence of a mutually agreed-upon vocabulary, so it’s almost as hard to establish agreement as disagreement.

    Since you know Rorty, I assume you know Sellars, so perhaps you can answer a question. The assumption that people “know” they are conscious strikes me as the sort of “incorrigible” first-person knowledge that he was disputing with the Myth of the Given. But my grasp of his thesis is currently at best tenuous – am I at all on the right track?

  50. Charles W on 27 Mar 2010 at 11:39 am

    To clarify, what I meant to say was “the certainty that one is conscious strikes me …”.

  51. M. Davies on 27 Mar 2010 at 11:45 am

    @BillyJoe7

    A conscious brain does not fall into the same category as a p-zombie.

    Only if you are a dualist. For me there is nothing but p-zombies.

    I know that I am conscious. So there is at least one example of a conscious brain. That is a fact, not an hypothesis.

    I see. Okay, I accept this fact tentatively, that there is one conscious brain. Yours. Prove to me how your brain differs from all the brains that exist.

    True, I can only infer from that that you are conscious but you could not say there is no fact underlying that inference.

    People can assert all sorts of things which seem reasonable, that doesn’t mean they are correct.

    The burden of proof [on showing that something, fully explained, also possesses the property X] is yours though.

    How so? You are making the positive claim, that property X exists. Your proof is ‘I know I have it and infer that other people do too’. I, however, can account for all phenomena without invoking this extra property.

    I already have one example of a brain that is conscious (and so do you).

    Nope, I’m a p-zombie. Prove me wrong.

    I have no idea what a p-zombie could be capable of (and neither do you).

    Sure I do – they are capable of everything every person is. Apparently some people like you think they also have property X, which they call ‘consciousness’, but I am not sure what this means, or what explanatory value it has beyond current descriptions of neural activity and behavior.

    I would put it more like this:
    There are three facts upon which my inference is based.

    FACT: there is at least one instance of a conscious brain.
    FACT: the facts of evolution tell us that, in order to maximise the chances of survival, life needs to be as parsimonious as possible in energy terms.
    FACT: the production of a conscious brain is extremely costly in energy terms and would have no survival value if there was no benefit compared with a p-zombie.

    I take issue with your third FACT, that the production of a conscious brain is extremely costly in energy terms. How do you know this? Can you compare a ‘conscious brain’ to a brain which doesn’t produce consciousness and compare their energy consumption? Tell me a situation where a conscious brain would have survival value compared to a brain which is functionally equivalent (monitors states, responds to pain, rewards, and so forth) but doesn’t have subjective experience. I don’t see how your restatement of your argument is different from mine, and thus, when it comes to your inference:

    INFERENCE: every life form that is capable of doing all the sorts of things that I am capable of doing is conscious.

    You call it an inference, I still call it question begging.

  52. M. Davies on 27 Mar 2010 at 11:46 am

    Whoops, sorry for the ruined bold tag.

  53. M. Davies on 27 Mar 2010 at 11:54 am

    @Charles W

    Sure, of course, my point about your CT wasn’t an aggressive one, just trying to be wary of the language you were using and what it might imply.

    I don’t know Sellars enough (I know enough to say Rorty is good, and then shrug my shoulders) but that seems to me a straightforward enough account. The SEP entry makes me think that Sellars argues something like this: the ‘I’ we utter when we say ‘I think’ is possible only upon lived experience and is not a fundamental property of the brain – more like a linguistic fiction which helps us orient ourselves in the world (see BillyJoe7’s ‘reasonable inferences’ about the world, which I agree with insofar as they have pragmatic utility but not as scientific proofs).

  54. Charles W on 27 Mar 2010 at 1:02 pm

    M. Davies -

    No aggression assumed. I actually agree that “virtual CT” is a poor choice.

    As I understand it, the CT that Dennett disputes is envisioned as a specific location in the brain where “it all comes together” and is “viewed” by the “self”, a homunculus-like entity. I am addressing the fact that we – as complete physical entities – have the sense of being actors on a stage moving in a set among other actors, all of which we “see” as we go through our roles. And my question is why (and how) the brain creates the visual illusion of the set and the other actors. As I suggested, doing so doesn’t seem necessary in order for us to avoid bumping into things, to converse with the other actors, etc.

    Following along BJ7’s line of thought, perhaps the main benefit of doing so is to create the illusion of consciousness as a reinforcement of the illusion of self. The latter illusion does seem to have clear evolutionary benefits.

    Yes, Sellars took the “linguistic turn”. According to Rorty’s discussion of him (Ch 4, PMN), he even considered awareness (in the full sense in which we use that word) to be subsequent to language ability. This seems somewhat nonsensical if one thinks of language ability in terms of parts of speech, diagramming sentences, etc. But if you consider a child learning a language, it seems more reasonable. A child learns language in the stimulus-response mode I alluded to in an earlier comment. And from that perspective, learning a language is simply learning how to respond to certain stimuli with verbal responses sanctioned by the simple “society” comprising immediate family, et al. And since that procedure continues into later life, one learns to respond to the stimulus of being asked “Are you conscious?” affirmatively – because that’s the response sanctioned by our society.

    At least that’s my take on the issue.

  55. Charles W on 27 Mar 2010 at 2:12 pm

    Re free will …

    From the “Closer to Truth” interview series on PBS, I got two insights that helped me pretty much resolve that issue for myself.

    From Searle re free will, “you can’t live without it” (by which – in the context of the interview – he meant you can avoid neither the sense that you have it nor acting accordingly). And from Dennett, the observation that the key issue in free will vs determinism is predictability. Putting those together with my prior inference from physicalism that we can’t have free will, I concluded:

    1. Notwithstanding the illusion of free will, we don’t really have it.

    2. But to function, we must act as if we do.

    3. Since it is not (and probably never will be) possible to predict one’s future, accepting #1 need not be a personal (as opposed to a societal) concern since no practical consequences need result.

    4. Unnecessary personal concern about #1 (eg, “But that means we’re only robots!”) is indicative of philosophical immaturity. Unfortunately for one so inclined, it – and any unpleasant consequences – are unavoidable (determinism, you know). On the other hand, denial of #1 usually works fine (again, for individuals – not so much for society).

  56. cwfong on 27 Mar 2010 at 3:38 pm

    Notwithstanding the illusion of freewill, we may or may not really have it.

    And whether we have it or not, we are destined either way to act as if we do. (The conundrum of incompatibility – inevitable effects from undetermined and undeterminable causes.)

    We have no choice except to act on either the fact or the illusion that we do.

  57. BillyJoe7 on 27 Mar 2010 at 5:53 pm

    M. Davies:

    “For me there is nothing but p-zombies.”
    You really truly believe that you are a p-zombie?

    “I accept this fact tentatively, that there is [at least] one conscious brain. Yours.”
    I don’t need you to accept that fact, tentatively or otherwise.
    For me (BillyJoe), it is a fact that there is at least one conscious brain (BillyJoe’s). For you (M. Davies), it is a fact that there is at least one conscious brain (M. Davies’).

    “Prove to me how your brain differs from all the brains that exist.”
    I am inferring that all brains are conscious like mine and you want me to prove how my brain is different? That, my friend, is your job. ;)

    “People can assert all sorts of things which seem reasonable, that doesn’t mean they are correct.”
    I tell you that I know this as a fact: that there is something that it is like to be me; that I am conscious. Can you look yourself in the mirror and tell yourself truthfully that you are not?

    “You are making the positive claim, that property X exists. Your proof is ‘I know I have it and infer that other people do too’. I, however, can account for all phenomena without invoking this extra property.”
    How? By using your conscious brain?
    When I say “I have a conscious brain” I am making a factual statement. I invite you to say to yourself “I have a conscious brain” and then say truthfully that that statement is false.

    “Nope, I’m a p-zombie. Prove me wrong.”
    I don’t need to. For my premise I only need one example. I already have that example. You are part of my inference, not my facts.

    “Sure I do – they [p-zombies] are capable of everything every person is.”
    That is just a flat out assertion, unsupported by fact.

    “Apparently some people like you think they also have property X, which they call ‘consciousness’, but I am not sure what this means…”
    Well, you can lie to me because (I agree) I can never prove that you don’t know what having consciousness means. But can you also lie to yourself? Can you say truthfully to yourself “I am not conscious”, “there is no one at home here”, “there is nothing that it is like to be me”?

    “or what explanatory value it has beyond current descriptions of neural activity and behavior”
    When you know it as a fact – that there is at least one example of a conscious brain – you don’t need explanatory power. Only hypotheses require explanatory power. Self evident facts do not.

    “I take issue with your third FACT, the production of a conscious brain is extremely costly in energy terms. How do you know this?”
    You think consciousness comes free? Consciousness represents information and information does not come free.
    If you are quibbling about the word “extremely”, I can drop it. In terms of evolution, being costly is sufficient.

    “Can you compare a ‘conscious brain’ to a brain which doesn’t produce consciousness and compare their energy consumption?”
    I don’t need to (see above).

    “Tell me a situation where a conscious brain would have survival value compared to a brain which is functionally equivalent (monitors states, responds to pain, rewards, and so forth) but doesn’t have subjective experience.”
    Again you are making a flat out assertion without any evidence whatsoever that a p-zombie could function equivalently to a conscious brain.
    And you are forgetting that the three facts I listed have to be taken together to support the inference that all brains are conscious: There is at least one conscious brain + producing a conscious brain is costly + with no survival value such a costly activity would not survive evolution.

    “You call it an inference, I still call it question begging.”
    I have at least offered arguments.
    It seems to me that all you have offered is flat out unsupported assertions.

    regards,
    BillyJoe

  58. BillyJoe7 on 27 Mar 2010 at 5:56 pm

    bluskool,

    “I basically agree with what you are saying, but I wouldn’t phrase it as “consciousness is an illusion.” I would say “uncaused choices,” “free will” or “control” is an illusion.”

    I agree that we seem to be on more or less the same wavelength. :)

  59. BillyJoe7 on 27 Mar 2010 at 6:24 pm

    Charles W,

    Regarding free will:

    I think it is important to know what we mean by “the illusion of consciousness” and “the illusion of self”.
    The checkerboard illusion is illustrative. We are not saying that the bits marked A and B do not exist. We are saying that they are not what they seem: A and B *seem* to be different colours but in *reality* they are the same colour.
    Similarly, we are not saying that consciousness/self does not exist. We are saying that it is not what it seems. Consciousness/self *seems* to control the brain but in *reality* the brain controls consciousness/self.

    The corollary is that free will is an illusion.

    It is essentially an evidence-backed materialist (ie scientific) explanation and a refutation of dualism.

  60. M. Davies on 27 Mar 2010 at 6:25 pm

    @BillyJoe7

    Your tone suggests you are getting defensive or something, I don’t know where that is coming from?

    A minor point:

    There is at least one conscious brain + producing a conscious brain is costly + with no survival value such a costly activity would not survive evolution.

    Evolution seeks optima, not maxima, and there are plenty of things organisms have that, if lost, would increase their energy efficiency. I still don’t see how ‘consciousness’ draws any more energy than a homologous functional/responding mechanism would (or how one would even know to begin with).

    A major point I would prefer to hear an answer to:

    Can you say truthfully to yourself “I am not conscious”, “there is no one at home here”, “there is nothing that it is like to be me”?

    When I check to see if I am conscious, what am I checking for? Be specific, not just synonyms.

  61. M. Davies on 27 Mar 2010 at 6:31 pm

    @Charles W

    Okay, I think I follow, but consider the following, anyway:
    How can we be deceived about our phenomenal awareness? The idea that consciousness is a useful illusion seems problematic, because it is like saying ‘I experience red, well wait, I think I experience red, but it is just an illusion, I experience red but don’t really experience it’. That is, the experience of X is the experience of X, how could it be an illusion at all? Sorry if this is not clear.

    And since that procedure [learning how to respond to stimuli] continues into later life, one learns to respond to the stimulus of being asked “Are you conscious?” affirmatively – because that’s the response sanctioned by our society.

    Hmm, I think it is a bit different, we don’t just say ‘I am conscious’ when asked because otherwise people will frown at us and we don’t want their condemnation…is that what you mean? I don’t think that’s Sellars’ argument.

  62. M. Davies on 27 Mar 2010 at 6:41 pm

    Seriously, how is invoking ‘consciousness’ any different than invoking élan vital or aether? What do you need to use it to explain? There’s all sorts of discrete mechanisms, capacities for self-reference, state monitoring, et cetera, in all kinds of entities, biological and otherwise, why lump all those processes together and call it consciousness except in casual conversation?

  63. BillyJoe7 on 27 Mar 2010 at 6:42 pm

    M. Davies,

    “Your tone suggests you are getting defensive or something, I don’t know where that is coming from?”
    I could have left smilies everywhere, but I thought that might spoil the effect :)

    When I check to see if I am conscious, what am I checking for? Be specific, not just synonyms.
    Oh, you know, ask yourself: “Is there anyone home?”, “Is there anything that it is like to be M.Davies?”
    I don’t seem to need to ask myself these questions but, when I do, my answers are an unequivocal: “Yes, there is someone home; there is something that it is like to be BillyJoe”.
    What answers do you get? If you are a p-zombie your answer would be “No, there is no one home; there is nothing that it is like to be M. Davies”. If you are not a p-zombie you could lie and say the same thing and I would not be able to tell the difference. But you would, wouldn’t you?

  64. M. Davies on 27 Mar 2010 at 6:47 pm

    What answers do you get? If you are a p-zombie your answer would be “No, there is no one home; there is nothing that it is like to be M. Davies”.

    I dunno. The p-zombie might say ‘what do you mean by consciousness’ and the interlocutor could respond ‘do you know who you are’ and the p-zombie could answer in the affirmative (I’m Bob, he says). Or the interlocutor might say ‘do you have an internal monologue, that is to say, can you stimulate part of your apparatus to provide a list of things which have occurred to your person in the last 24 hours without uttering it through your vocal cords’ and the p-zombie would say ‘oh, of course, I do that all the time, we call it ‘remembering’. Or the interlocutor might say ‘check whether you can identify the smell of strawberries as distinct from other smells’ and the p-zombie says ‘sure I can!’

  65. BillyJoe7 on 27 Mar 2010 at 7:17 pm

    M. Davies,

    “‘I experience red, well wait, I think I experience red, but it is just an illusion, I experience red but don’t really experience it’.

    Just an illusion?
    The bits marked ‘A’ and ‘B’ on that checkerboard are experienced as different colours. That experience is real. But the fact is they are the same colour. But, and I will repeat this for emphasis, the experience is real.

    You experience red. That experience is also real (as real as ‘A’ and ‘B’ being coloured differently). But the fact is, all there is is light being partially absorbed and partially reflected by the rose, the reflected part entering your eye and interacting with chemicals in the retina, which results in the setting up and propagation of electrical impulses in nerves that conduct them to certain specific centres in the brain. The fact is that there is nothing that is red except your experience. The rose is not red, it just reflects and absorbs different wavelengths of light. The centres in the brain are not red, there are just patterns of neural activity. But you certainly do experience red, that is not in doubt.

    “Seriously, how is invoking ‘consciousness’ any different than invoking élan vital or aether?”

    Invoking “consciousness” IS like invoking “élan vital/spirit/soul”, which is the dualist position. That is exactly why we materialists/scientists do not invoke it. That is why we materialists/scientists invoke the “illusion of consciousness”.

    Similarly, there is no “red”, but there is the “experience of red” (and, of course, the “specific neural patterns” that underlie it).

  66. BillyJoe7 on 27 Mar 2010 at 7:35 pm

    M. Davies,

    “I dunno. The p-zombie might say ‘what do you mean by consciousness’ and the interlocutor could respond ‘do you know who you are’ and the p-zombie could answer in the affirmative (I’m Bob, he says). Or the interlocutor might say ‘do you have an internal monologue, that is to say, can you stimulate part of your apparatus to provide a list of things which have occurred to your person in the last 24 hours without uttering it through your vocal cords’ and the p-zombie would say ‘oh, of course, I do that all the time, we call it ‘remembering’. Or the interlocutor might say ‘check whether you can identify the smell of strawberries as distinct from other smells’ and the p-zombie says ’sure I can!’”

    So your p-zombie passes the Turing test.
    Congratulations!
    Unfortunately he is a hypothetical p-zombie. :(
    Our robots cannot yet do so. In the future? Who knows?
    For the present, however, I will return to this question:

    “Is there anyone home?”, “Is there anything that it is like to be M. Davies?”
    And all I am asking is that you answer yourself truthfully.

  67. M. Davies on 27 Mar 2010 at 10:50 pm

    @BillyJoe7

    But you certainly do experience red, that is not in doubt.

    I have a stimulus signal that cannot be conflated with other signals, yes, correct. So?

    we materialists/scientists do not invoke consciousness. That is why we materialists/scientists invoke the “illusion of consciousness”.

    The point of the ‘red’ example is that you can’t have an illusion of subjective experience. You say yourself: But you certainly do experience red, that is not in doubt. ! So you certainly experience red, that is no illusion, but your experience of ‘self’, of ‘consciousness’, is an illusion?

    So your p-zombie passes the Turing test.
    Congratulations!
    Unfortunately he is a hypothetical p-zombie. :(
    Our robots cannot yet do so. In the future? Who knows?

    From your point of view, how will you ever know if robots are conscious? What if they already are? How will you find out?

    To say consciousness is an illusion is to say none of us are conscious, we just think we are. Is this what you are saying?

    “Is there anyone home?”, “Is there anything that it is like to be M. Davies?”
    And all I am asking is that you answer yourself truthfully.

    The implication is that I am arguing in bad faith. I am not. I don’t find those questions very good scientific or philosophical questions. “Is there anyone home?” Really? That is a good question? Could you define your terms for me?

    My previous comment was truncated. Here it is for the sake of inclusion:

    And the interlocutor might say ‘well, I lump all those abilities and functions (and thousands others) together and call them consciousness.’
    And the p-zombie says ‘why bother?’
    And the interlocutor says ‘when you have all of those, well most of them, or some of them or more than those, we’re not sure yet, then we can say “it is like something to be you”‘
    And the p-zombie still says ‘why bother? What’s the point of saying that? Is it a scientific object, or like a literary or rhetorical thing? Nothing against the latter (poetry stimulates my receptors, so I seek more of it), but it sounds more like a folkloric conception or poetic device than anything scientific.’

  68. M. Davies on 27 Mar 2010 at 11:10 pm

    Just so it doesn’t get lost in the shuffle:

    To say consciousness is an illusion is to say none of us are conscious, we just think we are. Is this what you are saying?

  69. Charles W on 28 Mar 2010 at 12:43 am

    “How can we be deceived about our phenomenal awareness? The idea that consciousness is a useful illusion seems problematic …”

    A case of do as I say, not as I do. I’m trying to avoid using “consciousness” if at all possible since it is not well-defined and seems to add more noise than signal, and also trying to use “phenomena”. But since this vocabulary is relatively new to me, old habits often triumph. Ie, you are right – I should have said something like:

    “… And my question is why (and how) the brain creates our phenomenal experience of the set and the other actors. As I suggested, doing so doesn’t seem necessary in order for us to avoid bumping into things, to converse with the other actors, etc.

    Following along BJ7’s line of thought, perhaps one benefit of doing so is to reinforce the illusion of self, an illusion that does seem to have clear evolutionary benefits.”

    This still may not make sense, but it’s closer to what I had in mind.

  70. Charles W on 28 Mar 2010 at 1:06 am

    BJ7 -

    To be honest, I’m not even clear myself on what I mean by “illusion of self”. I’m pretty comfortable just viewing a person as an organism that, while relatively complex, essentially responds to stimuli as do simpler organisms. Because one of the abilities of that complex organism is to “model” its environment, including the organism itself, it has in that sense “self”-awareness. What I have in mind by “illusion of self” may be ascribing anything more exotic to the notion of “self” than that.

    In any event, I’ll try to be more careful about using that term in the future. Thanks.

  71. Charles W on 28 Mar 2010 at 1:49 am

    Forgot to address this:

    “we don’t just say ‘I am conscious’ when asked because otherwise people will frown at us and we don’t want their condemnation…is that what you mean? ”

    No, I mean (as I said earlier) that we are “programmed” by the culture to respond to certain questions essentially automatically. In response to “Are you conscious?”, only a few people with specialized interests (philosophers, psychologists, etc) would think carefully about the answer; I would guess that everyone else would say something like “Of course, what a ridiculous question.”

    “I don’t think that’s Sellars’ argument.”

    Didn’t intend to suggest that either your version or mine was. I’m not (yet – but perhaps soon) competent to opine on whether he would have agreed with my version.

  72. BillyJoe7 on 28 Mar 2010 at 7:37 am

    M. Davies,

    “The point of the ‘red’ example is that you can’t have an illusion of subjective experience. You say yourself: But you certainly do experience red, that is not in doubt. ! So you certainly experience red, that is no illusion, but your experience of ’self’, of ‘consciousness’, is an illusion?”

    You need to distinguish between “red” and the “experience of red”.

    “Red” is an illusion.
    In fact, there is no “red”. Otherwise where is “red”? The rose is not red, it just reflects certain wavelengths of light. There is no red in the brain, just patterns of neural activity. So there is no red. If you see something that is not there, that is an illusion.

    However, the “experience of red” is real.
    You really do experience red.

    “From your point of view, how will you ever know if robots are conscious? What if they already are? How will you find out?”

    I will never know for sure. Same as I will never know for sure that you are not a p-zombie. I simply infer you are conscious on the basis of the above argument. If robots start behaving exactly like humans behave, I might need to infer consciousness for robots as well.

    “To say consciousness is an illusion is to say none of us are conscious, we just think we are. Is this what you are saying?”

    I have already covered this. The brain produces consciousness on the way to producing a self (consciousness is a necessary precondition for a self). The consciousness and self are real. The illusion is that the conscious self is in control.

    ” I don’t find those questions very good scientific or philosophical questions. “Is there anyone home?” Really? That is a good question? Could you define your terms for me?”

    All I am suggesting is that you acknowledge to yourself that “there is something that it is like to be M. Davies”.
    (You don’t need to acknowledge it to me because I don’t need that for my argument. There IS something that it is like to be BillyJoe, and I know that conclusively, and that is all I need for my argument.)

    “To say consciousness is an illusion is to say none of us are conscious, we just think we are. Is this what you are saying?”

    It works like this: The subconscious brain makes a decision -> the subconscious brain passes the decision onto the conscious self -> 300 msec later the conscious self becomes aware of the decision -> the conscious self is unaware that the decision was already made 300msec ago -> therefore the conscious self believes it made the decision -> therefore the conscious self believes it is in control.
    So, the *fact* is that the brain is in control and the *illusion* is that the conscious self is in control.

  73. BillyJoe7 on 28 Mar 2010 at 7:49 am

    Just to extend the following:

    “So, the *fact* is that the brain is in control and the *illusion* is that the conscious self is in control.”

    Of course this is only true for the brain in relation to the self. But even the brain is not in control in the grand scheme of things. The brain is merely a cause and effect engine. Completely deterministic (unless there is leakage from the quantum level, in which case the brain would be largely deterministic but interrupted now and then by the equivalent of the flip of a coin).

  74. cwfong on 28 Mar 2010 at 1:10 pm

    In a deterministic world, gentlemen, all coin flips or their equivalents have been predetermined.

  75. M. Davies on 28 Mar 2010 at 1:40 pm

    Can’t write much now, but could you answer this question, perhaps you missed it:

    To say consciousness is an illusion is to say none of us are conscious, we just think we are. Is this what you are saying?

  76. Watcher on 28 Mar 2010 at 3:44 pm

    I think he addressed it in the very last part of his second-to-last post.

  77. BillyJoe7 on 28 Mar 2010 at 3:58 pm

    cwfong

    “In a deterministic world, gentlemen, all coin flips or their equivalents have been predetermined.”

    All analogies are incomplete. ;)

  78. cwfong on 28 Mar 2010 at 4:06 pm

    The same brain that makes the decision then becomes conscious of the implementation process, which includes the opportunity to second guess the decision before the irrevocable step of taking an action. The consciousness at this point represents the form of conceptualization that the brain needs to employ while examining what can amount to a visual picture of possible consequences. I believe Damasio speaks to this process in his book, The Feeling of What Happens.

  79. M. Davies on 28 Mar 2010 at 7:52 pm

    @Watcher

    You are right, he gave an answer, my mistake, I apologize for the repetition. I am not really satisfied with his answer; we are probably at an impasse.

    He said
    I have already covered this. The brain produces consciousness on the way to producing a self (consciousness is a necessary precondition for a self). The consciousness and self are real.

    Which is simply repeating his point; he assumes the existence of ‘consciousness’ which has taken no better formulation than ‘I know I got it, you do too if you are honest with yourself buddy’. He still has not vouched for the explanatory value of ‘consciousness’. You might as well say ‘all moving objects have a motive force, if you don’t see it you are being dishonest, I know it’s there, that’s all I need for my argument.’

    The subconscious brain makes a decision -> the subconscious brain passes the decision onto the conscious self -> 300 msec later the conscious self becomes aware of the decision -> the conscious self is unaware that the decision was already made 300msec ago -> therefore the conscious self believes it made the decision -> therefore the conscious self believes it is in control.

    See, I don’t get how this demonstrates that there is something called consciousness beyond function and stimulus response. cwfong gives me another example of some neural operation. It describes something my computer does all the time. Does my computer possess consciousness? Is it ‘like something’ to be my computer?

    This ‘conscious self’ that people are talking about – it becomes aware of things that are brought to it, is this correct? The subconscious brain makes a decision and passes it on to the conscious self which appraises it and vetoes it if necessary? Sounds to me like you are reintroducing a homunculus into the process, something (‘the self’) which witnesses the theater of the mind.