Jan 08 2010

Ray Tallis on Consciousness

Raymond Tallis is an author and polymath; a physician, atheist, and philosopher. He has criticized post-modernism head-on, so he must be all right.

And yet he takes what I consider to be a very curious position toward consciousness. As he writes in the New Scientist: You won’t find consciousness in the brain. From reading this article it seems that Tallis is a dualist in the style of Chalmers – a philosopher who argues that we cannot fully explain consciousness as brain activity, but that what is missing is something naturalistic – we just don’t know what it is yet.

Tallis has also written another article arguing that Darwinian mechanisms cannot explain the evolution of consciousness. Curiously, he does not really lay out an alternative, leading me to speculate what he thinks the alternative might be.

The Evolution of Consciousness

While Tallis is clearly a sophisticated thinker who does not appear to have an agenda (and therefore deserves to be taken seriously), he constructs what I feel is a very flawed argument against the evolution of consciousness.

His primary point seems to be that consciousness is not necessary and would not provide any unique survival advantage, and therefore purely Darwinian mechanisms would not select for it. He writes:

Even if we were able to explain how matter in organisms manages to go mental, it is not at all clear what advantage that would confer. Why should consciousness of the material world around their vehicles (the organisms) make certain (material) replicators better able to replicate? Given that, as we noted, qualia do not correspond to anything in the physical world, this seems problematic. There may be ways round this awkward fact but not round the even more awkward fact that, long before self-awareness, memory, foresight, powers of conscious deliberation emerge to give an advantage over those creatures that lack those things, there is a more promising alternative to consciousness at every step of the way: more efficient unconscious mechanisms, which seem equally or more likely to be thrown up by spontaneous variation.

One error in Tallis’s reasoning is the unstated assumption that evolution will always take the most advantageous path to survival. There may be more efficient methods of survival than consciousness, but so what? One might as well ask why birds fly, when it is such a waste of energy and there are more efficient ways of obtaining food and evading predators.

Life through evolution does not find the solution to problems, but many solutions. Life is also constrained by its own history – so once a species heads down a certain path, its descendants are constrained by the evolutionary choices that have been made.

Consider, for example, that many forms of life on earth have very limited (if any, depending on your view) consciousness. The entire invertebrate world, including clams, sea stars, worms, etc., lacks a sophisticated central nervous system and does just fine without anything approaching human consciousness.

In fact Tallis’s point that there are more likely solutions than consciousness conforms nicely to the natural world – evolution seems to have solved the problem of survival much more often without resorting to consciousness. Humans are the exception, not the rule.

His arguments ultimately reflect a naive view of evolution. They are excessively adaptationist, for example. Not everything that evolves was specifically selected for in all of its aspects. There are many epiphenomena – properties of life that arise as a side consequence. That is because life is messy.

Tallis also fails to consider possible advantages for even primitive consciousness, or how it may emerge out of neural functions that themselves provide useful functions. M.E. Tson goes over this issue in an interesting article. But I will give my take.

The most primitive roots of consciousness may have been in the affinity and aversion to various stimuli in the environment – the ultimate roots of emotion. This could be as simple as a bacterium moving toward food and away from toxins.

As behavior became more complex, so did the systems of aversion and affinity, allowing for pleasure and pain, which in turn allow for a reward system. Once you have a chemical system that rewards certain behaviors and discourages others, you have a foothold into the evolution of complex psychological motivations and emotions. But these have to be experienced by the organism in some way – the foreshadowing of consciousness.
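To make this concrete, here is a minimal, purely illustrative sketch (in Python; the “approach/avoid” actions, the reward rule, and the learning rate are invented for the example) of just how little machinery a bare reward system needs in order to shape behavior:

```python
import random

# Purely illustrative: a trivial agent whose behavior is shaped by a scalar
# reward signal. The actions and the reward rule are invented for the example.
class TinyAgent:
    def __init__(self, actions, learning_rate=0.1):
        self.values = {a: 0.0 for a in actions}  # learned value of each action
        self.lr = learning_rate

    def choose(self, explore=0.1):
        # Mostly pick the best-valued action so far, occasionally explore.
        if random.random() < explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Nudge the chosen action's value toward the reward it just produced.
        self.values[action] += self.lr * (reward - self.values[action])

agent = TinyAgent(["approach", "avoid"])
for _ in range(200):
    action = agent.choose()
    reward = 1.0 if action == "approach" else 0.0  # "food" rewards approaching
    agent.learn(action, reward)

print(agent.values)  # "approach" ends up valued far above "avoid"
```

Nothing in this toy agent experiences anything, of course; the point is only how little machinery reward-shaped behavior requires as a starting point.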

Another factor that could lead to consciousness is the need to filter all the information coming into the organism. With a certain amount of sophistication of visual, auditory, sensory, and chemical sensing systems, the organism’s programmed responses can be easily overwhelmed. The world is complex, and not every shadow is a predator. There can also be multiple competing stimuli – should an organism go after food or avoid a predator?

It is easy to imagine that the same neural system that collects all this information input would also develop a system to filter out the most useful information from the less useful, or even distracting, information – to prioritize inputs. This is a functional equivalent of attention, which is a component of consciousness.
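Here is a similarly hedged sketch of that idea – the stimuli, the salience formula, and the capacity limit are all invented for illustration – showing how simply ranking competing inputs and acting on only the top few already behaves like a crude, functional stand-in for attention:

```python
from typing import NamedTuple

class Stimulus(NamedTuple):
    label: str
    intensity: float   # how strong the incoming signal is
    relevance: float   # how much it matters to current needs (hunger, threat, etc.)

def attend(stimuli, capacity=2):
    # Rank inputs by a crude salience score and keep only the top few.
    # Real nervous systems presumably do something far messier than this.
    ranked = sorted(stimuli, key=lambda s: s.intensity * s.relevance, reverse=True)
    return ranked[:capacity]

inputs = [
    Stimulus("shadow overhead", intensity=0.9, relevance=0.95),  # possible predator
    Stimulus("food odor", intensity=0.6, relevance=0.7),
    Stimulus("background noise", intensity=0.4, relevance=0.05),
]

for s in attend(inputs):
    print("acting on:", s.label)  # the distracting background noise never makes the cut
```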

Life does not have to evolve down such a pathway, nor does this even have to be the most likely pathway. It may, in fact, be very unlikely. It just needs to be possible.

The Brain and Consciousness

Tallis begins this article by acknowledging that consciousness does in fact correlate with brain activity – there is no consciousness without brain activity. He also acknowledges that most neuroscientists are content with the notion that the brain causes consciousness and is a sufficient explanation for it.

He therefore departs from the self-serving and patently false propaganda of intelligent design dualists who would have you believe that neuroscientists are abandoning materialism in droves (right, just like they are abandoning evolution), or who will pretend that consciousness does not correlate closely with brain activity (just search my blog for Michael Egnor to read my dismantling of these arguments).

Here is where Tallis departs from the mainstream of neuroscience:

It is about the deep philosophical confusion embedded in the assumption that if you can correlate neural activity with consciousness, then you have demonstrated they are one and the same thing, and that a physical science such as neurophysiology is able to show what consciousness truly is.

Here he commits a bit of a straw man in saying that the position of neuroscience is that brain activity and consciousness are “one and the same thing.” I prefer the summary that the mind is what the brain does. But understanding consciousness cannot be reduced to neurons firing any more than an appreciation for a masterwork painting can be reduced to the chemical structure of paint or wavelengths of light. There is a higher order of complexity to art, just as there is to consciousness.

But this subtle straw man opens the door for Tallis to exploit, unintentionally, vagueness in the language to create the impression of contradictions where none exist (ironic for someone who has been such a foe of post-modernism). For example, he then writes:

Many neurosceptics have argued that neural activity is nothing like experience, and that the least one might expect if A and B are the same is that they be indistinguishable from each other. Countering that objection by claiming that, say, activity in the occipital cortex and the sensation of light are two aspects of the same thing does not hold up because the existence of “aspects” depends on the prior existence of consciousness and cannot be used to explain the relationship between neural activity and consciousness.

I find this paragraph to be an incoherent linguistic mess. You can see how the straw man of saying that brain function and consciousness are the exact same thing leads to his curious rejection of the claim that the brain explains consciousness. He then introduces another straw man – that the brain and consciousness are aspects of some third thing.

The core problem of understanding here is that language is inadequate to capture the nuance of concepts needed to wrap one’s brain (pun intentional) around the concept of consciousness and its relationship to the brain. The brain is an object. Consciousness is a brain phenomenon – a dynamic manifestation of brain function.

He extends this point when he writes:

If it were identical, then we would be left with the insuperable problem of explaining how intracranial nerve impulses, which are material events, could “reach out” to extracranial objects in order to be “of” or “about” them. Straightforward physical causation explains how light from an object brings about events in the occipital cortex. No such explanation is available as to how those neural events are “about” the physical object. Biophysical science explains how the light gets in but not how the gaze looks out.

Again, I find this little more than word play, originating from the false premise that the neuroscience position is that consciousness is identical to the brain. And what does he mean – exactly, operationally – by “aboutness”? Does he mean the abstract concept? How an object is represented in the brain? These all have neural correlates too.

He next makes a point that I have not encountered before, so he gets some points for originality. But I think he should have consulted a neuroscientist before making this point, for he does not acknowledge what seems to me to be the obvious answer. He writes:

My sensory field is a many-layered whole that also maintains its multiplicity. There is nothing in the convergence or coherence of neural pathways that gives us this “merging without mushing”, this ability to see things as both whole and separate.

He is saying that neuronal activity cannot explain how we have experience of multiple independent things at the same time, without those information streams becoming mushed together. But in fact our understanding of brain function accords nicely with the experience Tallis describes.

Our brains are massively parallel in their organization. Individual neurons can make thousands of connections to other neurons. Networks of neurons are discrete, and can store and convey discrete sensations, thoughts, memories, etc. And yet they are meshed with numerous other networks of neurons with other discrete sensations. This setup is perfect for allowing meshing without mushing.

But also – there is mushing in that memories do merge together. We get information mixed up all the time, because the discreteness of memories in the brain is not perfect. But this probably goes along with the fact that our brains are excellent at pattern recognition – one network of neurons overlaps or connects in some way with another network, and so one thought reminds us of another – we make connections, we see patterns and associations – we mesh, with some mushiness.

This objection of Tallis is simply not valid. Nor is his next:

“A synapse, being a physical structure, does not have anything other than its present state. It does not, as you and I do, reach temporally upstream from the effects of experience to the experience that brought about the effects. In other words, the sense of the past cannot exist in a physical system.”

Tallis is just overthinking the issue. What is the distinction between a “sense of the past” and storage of information about the past? Storage of information is a present physical state, but the information is about the past.

In fact neuroscientists have discovered neurons in the brain that “time stamp” events. This is where a little more knowledge of the latest in neuroscience would have helped Tallis immensely. Understanding time is just another function of the brain.

In fact Tallis next makes a very telling statement:

This is consistent with the fact that the physics of time does not allow for tenses: Einstein called the distinction between past, present and future a “stubbornly persistent illusion”.

First, he shows how he is overthinking this issue, trying to understand the brain’s understanding of time as a basic feature of physics. But if we take Einstein’s quote at face value – that the distinction between past, present, and future is an illusion – then it accords nicely with the standard materialist neuroscientific view of consciousness: that it is analogous to an illusion our brains construct for our conscious selves to experience. That would include a sense of time.

This is absolutely not to say that reality is an illusion. Reality exists. But we have an internal model of reality in our brains – a very dynamic model that is part of our internal processing or self-reflection. That model is a constructed “illusion” – it has a very functional and adaptive relationship to external reality, but it is not a simple reflection of it. What we call “optical illusions” are just one manifestation of the ways in which our internal model of reality is an imperfect representation of external reality.

Tallis’s final points are these:

There are also problems with notions of the self, with the initiation of action, and with free will. Some neurophilosophers deal with these by denying their existence, but an account of consciousness that cannot find a basis for voluntary activity or the sense of self should conclude not that these things are unreal but that neuroscience provides at the very least an incomplete explanation of consciousness.

I don’t think these three things can be conflated. The notion of self is again a function of the brain – there are parts of the brain, networks, that produce the sense of self as part of our model of reality. A distinct but related function is to place our sense of self inside our physical bodies, and to make it separate from the rest of the universe. These are clearly identified brain functions – functions we can localize, and turn off with interesting results.

Initiation of action also localizes, and there are disorders (such as Parkinson’s disease) that interfere with the ability to initiate actions. There are parts of the brain that generate activity – keep the neurons firing – and provide for the initiation of specific thoughts or actions. Rather than thinking about initiation as firing up neurons from nothing, it is more accurate to imagine neurons firing throughout the brain all the time (at least while awake), with this activity following different patterns depending upon external stimulation and the internal conversation.

Free will is a more difficult concept to deal with. There are certainly those who believe free will does not exist because the brain is a deterministic (if very complex) machine. A meaningful discussion of this topic is beyond the scope of this blog post. I will just say that I think the discussion of free will falls victim to semantics as well.

What is clear is that people can make choices. Sure, those choices do not occur as a result of some non-material external will. They are just another function of brain activity.

The bottom line is that free will does not present a problem for the neuroscientific view of consciousness. The extent to which we can say that it exists is also the extent to which we can say it is a brain function.

Conclusion

In my opinion Tallis does not put forward one valid argument against a purely materialistic neuroscience view of consciousness – that consciousness is brain function. His evolutionary arguments misrepresent evolutionary theory. His neuroscientific arguments are simply false, and do not reflect the state of the science. And his philosophical arguments are failed semantic gambits that are ultimately incoherent.

But I am curious as to what Tallis thinks consciousness is, if it is not brain function and its existence cannot be explained by Darwinian evolution. I acknowledge he has written a great deal that I have not read – I do not claim to have exhaustively searched for an answer. But he is certainly being coy in these two articles, which is an interesting omission.

I am especially curious as Tallis seems to be an intellectual with whom I likely agree about a great deal. I’ll have to do some more digging.


43 Responses to “Ray Tallis on Consciousness”

  1. eean on 08 Jan 2010 at 8:40 am

    If your argument relies on finding a problem with one of the most proven facts of biology (evolution), then clearly you’re doing something wrong. The fact is that we are here, we do have consciousness (as do dolphins likely via some convergent evolution), therefore evolution certainly can produce consciousness.

    And even if we accept that it is a valid argument to even begin to make… clearly being smart has its advantages. Just look at the way dolphins hunt – you have to feel sorry for the sharks. I doubt Tallis would deny that being more clever than your prey and your predators confers evolutionary advantages.

    So then Tallis has to argue that it’s possible to be clever without consciousness or that there’s some evolutionarily ‘easier’ way to be clever. Or something. I’m not sure, like I said his whole argument doesn’t make any sense to me.

  2. eean on 08 Jan 2010 at 9:08 am

    Reading his evolution article more closely… his only alternative to how a system could become complex without consciousness is a computer. Which is of course far less parallel and complex than the human mind at least, and more importantly, is intelligently designed.

    He agrees that there is a gradient of less to more complex consciousness. But sort of makes up this argument that even the simplest consciousness would’ve been such a hassle to deal with that it would be a disadvantage. Huh?

  3. Steven Novella on 08 Jan 2010 at 9:35 am

    He seems to be saying that consciousness is an advantage, but does not become an advantage until late in the game, and why select for it early on when there are easier ways?

    My point is – much of the time other paths are taken (like the path that led to clams). But even if we grant his premise that non-conscious evolutionary solutions are more likely, that does not make them exclusive – and one tiny branch led to a central nervous system, which worked out pretty well in the long run.

  4. CW on 08 Jan 2010 at 10:12 am

    Thank you for this post. It’s going to be great to noodle this around in my head throughout the day.

  5. M. Davies on 08 Jan 2010 at 11:39 am

    The Evolution of Consciousness

    I read Tallis as arguing against adaptationism so I am not sure where you differ from him. He is saying that we cannot account for consciousness in crude terms of adaptation because, as you say, evolution seems to have solved the problem of survival much more often without resorting to consciousness. In other words, if consciousness does not confer advantages in terms of selection, then why, Tallis asks non-rhetorically, does it emerge? To put it another way, what aspects of consciousness confer selective advantage and what aspects are spandrels, for example? Evolutionary accounts may be necessary but not sufficient to explain the emergence of consciousness.

    You say, “Once you have a chemical system that rewards certain behaviors and discourages others, you have a foothold into the evolution of complex psychological motivations and emotions. But these have to be experienced by the organism in some way.” Why? Why does the organism have to have phenomenal experience of these? I could make a robot that has a reward system – must the robot have experience of its processes? As for your point about attention, perhaps it is useful for an organism to process and filter inputs, but it is not necessary for this to be a conscious process.

    The Brain and Consciousness

    I find this paragraph to be an incoherent linguistic mess.

    Let me give it a go. Given: (1) observable neural activity, and (2) reports of phenomenal experience, some people would say that (1) and (2) are aspects of the same thing, where ‘that thing’ is observable neural activity. In a similar fashion, in the case of water, H2O and ‘this wet thing on my hand’ are phenomenal aspects of the same thing: water. However, unlike the case of water, where there are two ways of framing it in experience, we have the problem of why there is ‘experience’ in the first place. (1) and (2) are not identical, and we cannot infer (2) from (1) alone – can we know that there is such a thing as phenomenal experience from reading neurology alone, or is it possible to conclude from neural activity that we are stimuli-processing machines through and through (i.e. philosophical zombies)?

    ***

    I think your response to Tallis’ arguments has some connection to the Hard Problem of consciousness. You can say ‘well I side with Dennett’ but that doesn’t solve the problem, it just means you now face Dennett’s critics (and there are a sufficient number of them, and their arguments are non-trivial).

  6. Skeptico on 08 Jan 2010 at 11:58 am

    A very good analysis Steven, as usual.

    I looked at Tallis’s New Scientist article, and quickly found the piece I fully expected would be there:

    Consciousness, on the other hand, is all about phenomenal appearances/qualia. As science moves from appearances/qualia and toward quantities that do not themselves have the kinds of manifestation that make up our experiences, an account of consciousness in terms of nerve impulses must be a contradiction in terms. There is nothing in physical science that can explain why a physical object such as a brain should ascribe appearances/qualia to material objects that do not intrinsically have them.

    First, this is just an argument from ignorance. Second, it suffers the same problems that I always note in discussions about the so-called “hard problem” and “qualia”: his definition of “qualia” appears to be that they are non-physical, which means that his conclusion is circular. Only by defining qualia as non-physical can he claim that the physical brain can’t produce qualia. But who says qualia are non-physical? It’s just an assumption that is the same as the conclusion he is trying to reach.

  7. Eternally Learning on 08 Jan 2010 at 12:07 pm

    I’m no expert on neurological matters and am an amateur at philosophy, but I think Steve’s point here is that Tallis seems to be creating one half of an argument from personal incredulity. From my perspective, I cannot even say for certain what the scientific consensus is on this matter, but I’ve no reason to doubt Steve when he states that there has been no reason to assume that something unknown is going on here and that consciousness does reside in the material. For my part, it speaks volumes for that assertion that physical changes to the brain create predictable changes to consciousness. Again, not being an expert, I see nothing compelling in his arguments that differs from, “I don’t understand,” or apparent misconceptions about what actual neuroscientists do and don’t understand. To me it smacks of the tone ID proponents use in their arguments:

    - We don’t understand
    - Other experts don’t understand
    - Those that claim to understand are arrogant and jumping to conclusions
    - “Something” else must be going on

    That alone, of course, doesn’t mean that he or they are wrong, but it’s certainly raising a lot of red flags for me.

    Thanks for writing about this Steve. Really interesting stuff!

  8. Steven Novella on 08 Jan 2010 at 12:27 pm

    Davies – regarding the evolutionary argument: The flaw is this – Tallis argues (and you repeat) that because consciousness is not the optimal solution to survival, it therefore confers no survival advantage.

    But survival advantage is not abstract or absolute – it exists in the context of the organism and the environment. And a solution that is suboptimal may be selected for because it provides an immediate survival advantage, and by chance and historical contingency happens to be the advantage that one evolutionary line hit upon.

    Nature is replete with sub-optimal solutions.

    I think it is hyperadaptationalist to argue that only optimal solutions are adaptive, or that selective pressures will always favor optimal solutions. That is the flaw.

    I agree this comes down to the hard problem, and I absolutely agree with Dennett that it is a non-problem – in terms of why. Again – it is hyperadaptationalist to say that consciousness itself must have a purpose and an advantage, when in fact it can simply be what emerges, sometimes, from an evolving central nervous system that provides for the ability to respond to stimuli and process it in increasingly complex ways, and to filter and selectively attend to information (both internal and external).

  9. M. Davies on 08 Jan 2010 at 1:06 pm

    Tallis argues (and you repeat) that because consciousness is not the optimal solution to survival, that it therefore confers no survival advantage.

    Of course I am repeating his arguments! I was explicitly rephrasing and restating what he said since I didn’t see how you two differed. As I read it, Tallis is not arguing that only optimal adaptations exist. He is saying that ‘consciousness is adaptive’ is true post hoc, but that is unsatisfactory and is thus not an adequate explanation.

    ***

    I would very much like to hear your thoughts on the point I had made, however, which was: can we know that there is such a thing as phenomenal experience from reading neurology alone, or is it possible to conclude from neural activity that we are stimuli-processing machines through and through (i.e. philosophical zombies)?

  10. Steven Novella on 08 Jan 2010 at 1:39 pm

    Davies – but again, my point is that consciousness itself does not have to be adaptive (it may be, but even if we set that aside) – the neurological functions that lead to consciousness can themselves be adaptive, with consciousness being an emergent epiphenomenon. Gould argues that evolution is full of these, and I find his arguments persuasive.

    Regarding the second question, I think it is hyperreductionist. We could not look at the brain and conclude it is conscious, because we do not yet know what it is about SOME brain function that results in consciousness and what is different about conscious brain from subconscious brain.

    We only know the brain is conscious because we have consciousness, and arguably from the higher-order studies of psychology.

  11. superdave on 08 Jan 2010 at 1:43 pm

    If he doesn’t think foresight and memory are helpful for long-term survival, I would love to play him in a game of chess.

  12. superdave on 08 Jan 2010 at 1:47 pm

    In Musicophilia, Oliver Sacks writes about a man with severe short-term memory loss. The man would often write in a journal phrases like "I am awake now" or "now I am awake". Clearly his inability to make new memories was affecting his own sense of consciousness.

  13. canadia on 08 Jan 2010 at 2:17 pm

    I think the basic assumption that consciousness is not advantageous is patently flawed, even in its early stages. Evidence from the animal kingdom clearly indicates that consciousness is present to some degree in many advanced creatures such as dolphins, elephants, whales, and monkeys.

    Consciousness gives them two advantages. First, they become capable of executing complex hunting strategies that rely on consciousness of self and other, formulating an idea of the future and then calculating actions needed to get there. Anyone who has watched dolphins splitting schools of smelt or hyenas surrounding a gazelle or monkeys attacking a neighboring troop can see the benefits of consciousness below the human level. Consciousness allows individual organisms to function as meta-organisms, which clearly confers enormous advantages.

    The second advantage of consciousness is that consciousness allows culture, especially in late stage (humans, dolphins). Culture is hugely advantageous because it serves like an extra set of DNA with faster mutability and more accurate transmission of traits than real DNA.

    To say that consciousness in any way lies outside evolution or nature is to turn a blind eye to the long chain of increasing neurological complexity leading from the smallest nematode to ourselves.

    In my opinion, philosophy is art, not science. Philosophers tend to be artists, who in turn are poorly equipped for the skepticism of the scientific approach.

  14. Steven Novella on 08 Jan 2010 at 2:21 pm

    to further clarify (I am really trying to hone in on his argument) – Tallis seems to be saying that:

    - standard evolutionary theory cannot account for consciousness, therefore there is something wrong with our concept of consciousness and its origins.

    But he could, rather, conclude that there is something wrong with our standard theory of evolution. What is true, in my opinion, is that there is something wrong with HIS account of evolution, and I have pointed these out. Therefore Tallis is pointing out a non-problem – the only problem is with his understanding of evolutionary theory.

    It should be noted that Tallis gives neither an alternative to his understanding of evolution nor to the neuroscience account of consciousness (in either article).

  15. canadia on 08 Jan 2010 at 2:39 pm

    This feels to me like the kind of tingly awestruck appreciation for natural phenomena that philosophy creates in people on occasion. For some reason, to many people, thinking that something can be explained makes it not special. Clearly Tallis feels that consciousness is special (too special to be explicable), and so he works backward from there.

    “Why should consciousness of the material world around their vehicles (the organisms) make certain (material) replicators better able to replicate?”

    I can’t understand how this is even a serious question. Forget about consciousness. The more relevant information an organism has access to, the better equipped it is to respond to its environment. A slug with no sensors at all is worse off than a slug with two eyes and some tactile nerves. Similarly, a dolphin who cannot conceive of itself and its pod mates cannot consequently conceive of even the simplest cooperative hunting strategy, and is thus worse off than one with some level of consciousness.

    What Tallis doesn’t understand is that he is missing the real specialness. He’s basically arguing that we don’t, or maybe even can’t, understand consciousness. If he accepted what is known, he could start contemplating some really interesting questions, like whether we can make a self-aware machine or whether we could re-design existing organisms to be more intelligent. I recall a study that suggested that the actual number of neurons required for consciousness was startlingly small; small enough to theoretically be contained within an insect brain. Self-aware bees are far more philosophically interesting and useful to think about than basking in the mystery of an inexplicable process.

  16. [...] Chopra’s going to wave around this article from New Scientist, and Shermer will probably criticize said article on the same grounds Yale neurologist Steven Novella already has on his blog. That’s really all I got for this for [...]

  17. daedalus2u on 08 Jan 2010 at 3:53 pm

    I think that Tallis makes the erroneous assumption that by observing an organism “at rest”, we can infer that the characteristics of the organism “at rest” are the primary traits that evolution has selected for. For example, an organism not in sepsis doesn’t need a strong immune system, a weak immune system will do fine. An organism with a raging infection needs a much stronger immune system than the organism with its immune system “at rest”. When considering the evolution of teeth, it is not what kinds of teeth are needed during the best of times, but rather what teeth are needed during the worst of times. What kind of teeth are needed when conditions are so bad that everyone without the right kind of teeth dies? That is where evolution exerts most of its selection, when times are so bad that individuals are killed by those bad conditions.

    I think that, by focusing on mental states “at rest”, the true mental states that have mostly driven evolution are not considered. I suspect that the most important mental states are those that occur under extreme stress, for example when running from a bear. What mental state is most important for a mouse? The state when it is alone? Or the state when it is running from a predator? I think the latter.

    I suspect that not a small fraction of the brain is redundant, and that during extreme, life-threatening stress, some amounts of the brain shut down, or are diverted into other tasks due to changes in functional connectivity (this redundancy is probably holographic, so it doesn’t show up other than as only a fraction of neurons in a volume element firing simultaneously, this redundancy also compensates for neuron loss over time). Survival then depends on how those parts of the brain function together during that episode of extreme stress.

    I suspect that consciousness during times of rest might be the mechanism by which all those other parts are kept “in sync” for the periods of extreme stress which may never come, but which are life threatening if they do come. If the brain needs to have all this functional redundancy for emergency situations, better to make use of it all the time. You have to keep using the brain for something, or it deteriorates.

    I think that many researchers make the assumption that the state “at rest” is the most simple state. That is not correct. The state of “at rest” is actually the most complicated state because all the other states that the organism can enter into are held “at the ready” to be entered in a heart beat if necessary. There are myriad control systems that keep the conditions “at rest” stable and within a certain range. That there are so many independent control systems is evident in that virtually all of the parameters that are measured are seen to be chaotic (for example heart beat interinterval variability). When a heart is healthy, it exhibits a chaotic beating frequency. As a heart becomes more and more pathological, the beating gets more and more regular. My interpretation is that as the various control systems get out of “range”, they drop out and fewer control systems remain and so the variable becomes more regular.

  18. CW on 08 Jan 2010 at 3:53 pm

    After noodling this around for a bit, I have concluded that I must be ignorant about the subject of consciousness. I can either resolve this by hoping that it’s discussed on an episode of SGU between 100 and 200 (that’s what I got left to listen to) or do some googling for articles that can bring me up to speed on the topic.

    It doesn’t seem that Tallis offers an alternative theory or idea though, that’s for sure. A testable hypothesis would be nice, but since he seems to have established credibility, I would just settle for some kind of wild guess on what his beliefs are regarding the origin/nature of consciousness.

  19. artfulD on 08 Jan 2010 at 4:02 pm

    Consciousness is the experience of being alive. If you want to posit that it exists outside of life, you will be talking of something else entirely, having assumed without evidence that consciousness has come to you outside of your experience.
    All we can safely conclude is that there is no consciousness without life, just as there is arguably no life without some form of consciousness.
    There is almost surely no free-floating consciousness out there in the cosmos, quantum or otherwise. But perhaps conversely there are almost surely entities out there that are able to have some form of related experience.

    From an Interview with Lynn Margulis, Astrobiology Magazine October 1, 2006:
    “If you look up consciousness in the dictionary, it says, ‘awareness of the world around you,’ and that’s because you lose it somehow when you become unconscious, right? Well, you can show that microorganisms, or bacteria, are certainly conscious. They will orient themselves, they will work together to make structures. They’ll do a lot of things. This ability to respond specifically to the environment and to act creatively, in the sense that that precise action has never been taken before, is a property of life. Of course, it has to be moving life, or you can’t tell. You can’t tell if a plant is thinking, but in organisms that move, you can tell their intelligence.”

  20. Zelocka on 08 Jan 2010 at 6:27 pm

    This is too metaphysical. There is no way to prove his statement without knowing or scientifically defining what Consciousness is in the first place.

  21. CW on 08 Jan 2010 at 7:13 pm

    Where do you think the need to explain/define/prove consciousness comes from – a desire to find some characteristic that separates humans from all other animals or organisms? Or is it just to further some ideology?

  22. Winsl0w on 08 Jan 2010 at 9:02 pm

    Zelocka: Yes, Tallis’s argument is one from ignorance. The majority of his argument (and the arguments of many philosophers who reject physicalism on account of consciousness) presumes that qualia are non-physical events. There is no adequate explanation of what qualia are. We can say very little about them beyond what we know of them from our own experience and that they are caused by the brain (as Dr. Novella and Tallis point out). I suspect that the future of neurology will shed some light on this, but we simply do not know enough about the brain to make a conclusion. The best we can do is say that we cannot currently explain the makeup of qualia in physical terms. I am willing to admit (as I might imagine Dr. Novella would agree, though I would hate to presume too much) that it is very difficult and perhaps mystifying to imagine what a physical explanation of phenomenological experience might look like. I believe this is what leads many philosophers to assume, prematurely, that it is impossible. But while I can appreciate their bewilderment, it is even more mystifying to imagine what a non-physical explanation might look like. So mystifying in fact that it would require us to rethink much more than our conception of consciousness, but of existence entirely. The reason that Tallis does not posit an alternative understanding of consciousness is because any other understanding would be even more bewildering.

    It also seems that we have reason to be wary of Tallis’s argument on the grounds that he tries to sneak free will in through the back door at the end. Desire to posit free will and human meaning are two of the most common reasons for denouncing physical explanations of consciousness. As Canadia noted: “This feels to me like the kind of tingly awestruck appreciation for natural phenomena that philosophy creates in people on occasion. For some reason, to many people, thinking that something can be explained makes it not special.” There seems to be an inherent conflict of interest in solving the riddle, and Tallis may be showing bias for some degree of mystery by citing free will as a problem of physical explanations. I cannot help but be wary when people display aversion to a particular conclusion before it is drawn. Even the title of his essay “You won’t find consciousness in the brain” somewhat suggests that we shouldn’t even bother looking there, like he is asking us not to even look before we can possibly know. I feel certain that solving the hard problem of consciousness would take supreme creativity and ingenuity, best not to limit ourselves this early in the game.

    In addition, I think Dr. Novella is on strong ground here in emphasizing possibility over probability. While probability is still quite relevant to the discussion, I don’t believe the possibilities he suggests (attention and filtering) are highly improbable relative to the alternatives that Tallis suggests, though it may not be entirely clear how exactly this might have happened. It seems naive to try to posit some way that humans “ought” to have evolved, given that adaptive pressures can only select traits which are more or less randomly available, not to mention that our knowledge of what our selective pressures might have been is still fairly general. The existence of numerous suboptimal adaptations ought to be noted as well.

    I read his article earlier today and was mildly disappointed in it. I was glad to see that there was a response to it on here.

  23. tmac57 on 08 Jan 2010 at 10:42 pm

    “His primary point seems to be that consciousness is not necessary and would not provide any unique survival advantage,”
    Well, without consciousness, would humans have been able to utilize fire, fashion clothing, build shelters, and create vaccines and antibiotics for survival? Those seem like unique survival advantages to me.

  24. John D. Draeger on 08 Jan 2010 at 10:53 pm

    Dr. Novella,
    As usual your analysis is great, but you’re giving this guy Tallis too much respect. The Tallis article is clearly an opinion piece—it says so right at the top of the link you’ve included. NewScientist seems to be all about getting subscriptions, not promoting truth. It’s not a science journal like Science or Nature, it’s a magazine—people can write whatever drivel they want.

    Ray Tallis is part of a small minority that are simply wrong. You call him a sophisticated thinker, but he’s not thinking in a sophisticated manner on human consciousness. Just because he “oversaw a major neuroscience project” at Univ. of Manchester does not make him a good neuroscientist. Need I remind you of that neurosurgeon who believes he was abducted by aliens? And philosophers need not know anything about the human brain to be philosophers, so that qualification is irrelevant. I couldn’t bear to read his entire baloney-laden article, but towards the end he says, “Material objects require consciousness in order to “appear”.” He “appears” to believe stuff doesn’t exist without us thinking of it. It’s like some of the nonsensical misinterpretations of quantum mechanics.

    There’s no amount of explaining human consciousness by brain activity that will satisfy him, just like there’s no number of fossil intermediates that will satisfy a creationist. They want magic. Steve, there are atheists that still believe in nonsense. If you are correct that Tallis is an atheist, he must still believe he has a soul/spirit. The burden of proof then lies with him to show what that magical soul is made of.

    Sometimes when people become advanced in age their brain exhibits poor judgement—as was seen recently with a guy named Randi—our hero. Do you recall some poor judgements made by other famous people later in their lives–James Watson? Linus Pauling? I’ve already decided to shut my mouth for the last 10 years of my life!

    Clearly Tallis believes in magic to some extent or has fallen off his rocker. Steve, you should NOT do more digging on Tallis; he’s not worth your time.

  25. HHC on 08 Jan 2010 at 11:27 pm

    Consciousness is a prerequisite for human culture. It was Darwin’s consciousness which shaped his scientific musings, just as Tallis provides us his own version of mind. But Tallis is talking about unconscious benefits as an armchair philosopher, not, as Darwin did, by sailing the oceans. There are certain states of mind which involve minimum processing: senility, psychosis, drug-induced stupor. But how adaptive are any of these semi-conscious states to the world at large? For example, in the mixed martial arts arena, watch any UFC Hall of Fame champion, such as Royce Gracie with his skills in ju-jitsu: his mental astuteness and physical prowess induce temporary unconscious states in his challengers. Losing requires loss of consciousness as well as control.

  26. canadia on 09 Jan 2010 at 1:07 am

    I agree with John D.

    Tallis has crossed into pseudo-science with this piece. Not that he can’t come back or anything – just that this one article is very new age, in a bad way.

  27. sonic on 09 Jan 2010 at 5:27 am

    To understand Tallis better try this–

    http://newhumanist.org.uk/2172/neurotrash

    (I think he makes his points more clearly in this piece)

  28. tmac57 on 09 Jan 2010 at 11:49 am

    sonic- I read the piece that you linked to, and I do understand better what Tallis is saying; however, I found his argument that the human brain is not the seat of consciousness unconvincing. He makes a case against simplistic uses of neural scans in understanding brain phenomena, and against using neuroscience for policymaking, but his ideas about consciousness itself seem to argue that it is a phenomenon outside of the individual. He does this by invoking the complexities of culture, human experience, and how we are woven into the fabric of our world, and mistakes the apprehension of these things for the ability to apprehend them. He gives no direct idea of where he really thinks that consciousness arises from, but you get the sense that it might be some intrinsically unknowable source, so why even try to understand it. Sort of like the Christian idea of God’s mind being unknowable.
    I got the sense that maybe he views the human brain as something like a television set, that while it can receive and translate complex information, that it, in itself, is just an inert set of circuits, and that it is totally dependent on the flow of stimuli from the outside, with no innate ability to form consciousness.

  29. BubbaRich on 09 Jan 2010 at 11:56 pm

    I may have missed it in reviewing the exchanges above, but it seems obvious to me that there is adaptive advantage to models of the environment in planning to obtain resources and to avoid dangers. The animal brain started as complexification of even earlier neural systems, and was nearly a purely input/output machine. Intermediate neurons and especially neural loops added the ability to record state information, although even the basic construction of neural systems enabled recording of SOME state information as a basic reflex loop. But more intermediate neurons and more redundant systems enabled a more complex model of aspects of the environment (the umwelt). Aspects of the self (such as position, pain) were important to model. These models became more complex, including feedback loops between them. That sounds very hand-waving, but I don’t see any reason to assume even a state change, much less an impossible evolutionary leap.

  30. sonic on 10 Jan 2010 at 2:59 am

    tmac57-

    First we can look at Pashler-

    http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

    (This is about the current use of fMRI studies of emotion, personality and social cognition)

    “We show how this nonindependent analysis inflates correlations while yielding reassuring-looking scattergrams…. In addition, we argue that, in some cases, other analysis problems likely created entirely spurious correlations.”

    From this we can understand how someone who is basically skeptical would consider the possibility that much of the ‘perfect correlation’ between mental events and brain events might be overstated.
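    As a rough, purely illustrative simulation of that non-independence problem (the subject count, voxel count, and selection rule are invented here; this is not Vul et al.’s actual analysis): generate pure-noise “voxels”, select the ones most correlated with a behavioral score, and then report the correlation on the same data – it looks impressive even though there is no signal at all.

    ```python
    import numpy as np

    # Illustrative only: circular (non-independent) analysis on pure noise.
    rng = np.random.default_rng(0)
    n_subjects, n_voxels = 20, 5000

    behavior = rng.standard_normal(n_subjects)
    voxels = rng.standard_normal((n_voxels, n_subjects))   # no true signal anywhere

    corrs = np.array([np.corrcoef(v, behavior)[0, 1] for v in voxels])
    top = voxels[np.argsort(corrs)[-20:]]                  # the 20 "best" noise voxels

    signal = top.mean(axis=0)                              # their average "activation"
    inflated = np.corrcoef(signal, behavior)[0, 1]         # re-tested on the same data
    honest = np.corrcoef(signal, rng.standard_normal(n_subjects))[0, 1]  # fresh data

    print(f"circular (non-independent) correlation: {inflated:.2f}")  # spuriously large
    print(f"correlation against independent data:  {honest:.2f}")     # near zero
    ```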

    Having questioned the fundamental faith that is the current underpinning of neurophilosophy, one then could go on to other questions.

    It seems that Ray has a number of issues with the current model of neuroscience.

    Sometimes the wisest person is the one who understands that he doesn’t know.

  31. YairR on 10 Jan 2010 at 7:04 am

    I am afraid I disagree with Novella on this issue. I think he makes several errors of miscommunication and misunderstanding.

    First, Tallis (like Chalmers, and me) does not doubt that human consciousness was brought about by evolution. It is perfectly clear that the mind is what the brain does, and that the brain evolved. What isn’t clear is why is it that the brain activity is accompanied by phenomenological experience (down to the details – why this activity implies this consciousness, and so on). Answering this question is key to extending consciousness beyond the human sphere – there have been many examples upthread about grades of consciousness in the animal kingdom and so on, but unless we can understand why certain physical states (or dynamics) are conscious, we don’t really have a good basis to make such calls.

    Novella’s (and Dennett’s) “solution” to this problem is essentially to miss it. As M. Davies asked – why is it that “[reward mechanisms] have to be experienced by the organism in some way”? Why don’t affinity and aversion, present in bacteria as Novella said, require subjective experience? And if they do, why don’t attraction and repulsion, present in all physical interactions?

    Clearly, human consciousness is “what emerges, sometimes, from an evolving central nervous system that provides for the ability to respond to stimuli and process it in increasingly complex ways, and to filter and selectively attend to information (both internal and external)”. No one is arguing otherwise. But what about non-human consciousness? And what is it about some human brain states that leads to consciousness? If we seek to understand consciousness, we need to delve deeper.

    Tallis’ point about evolution is that phenomenological experience (what something feels like, subjective experience) does not have any causal power and therefore is not a player in evolution (or, for that matter, science). Behavioral “affinity and aversion to various stimuli in the environment” has nothing to do with consciousness, with what these stimuli feel like or whether they feel like anything at all. To be more precise – it isn’t clear why there is a connection between behavior and what something feels like, and this ‘why’ is the (hard) problem at hand.

    It is therefore absolutely necessary for consciousness to be an evolutionary epiphenomenon. This doesn’t go against Tallis’ point, it stems from it. Upthread, there were many comments about the evolutionary advantages of consciousness; there are none. Or at least, no one has even given a satisfactory account of why some systems (such as reward systems), that have clear evolutionary advantages, are also accompanied by consciousness.

    The problem is that such epiphenomenalism isn’t satisfactory. We don’t want to merely conclude that human consciousness arose accidentally – we want to know what it is that it consists of.

    We find ourselves in the curious position of knowing the answer, but not the question. It is intuitively clear that consciousness does have an advantage, at times, that a large part of the brain’s evolution is an evolution of consciousness. I don’t accept that it’s all an accident. Yet, consciousness cannot have an advantage.

    —–

    What we seem to be missing here is a psychophysical rule – a rule that says how physical states [and this includes dynamic, not just static, states] feel. A successful rule will allow us to reconstruct, from the bottom up, synapse by synapse, how a human feels; yet be as simple and universal as possible (in accordance with scientific aesthetics). It will allow us to understand why certain, conscious, structures had an evolutionary advantage and evolved as adaptations, while others evolved accidentally. From such a rule, we will be able to deduce conclusions about the consciousness of non-humans, going beyond neural correlates of consciousness.

    No such rule has been established. And even given such a rule, no one has been able to understand why this rule applies, and not some other.

    The only serious suggestion raised thus far for such a rule is “consciousness is information processing”. It is scientifically elegant – it talks about a universal phenomenon, not ethnocentrically focusing on the human. It seems to explain why the evolution of consciousness will relate to the evolution of the most sophisticated information processing system ever developed (to our knowledge) – the human brain. But it is too vague – it doesn’t make clear what experiences will feel like, and it doesn’t allow us to truly reconstruct a model consciousness and see if it fits a human. And it has some strange ramifications too – it implies that every single physical particle has some sort of consciousness (panpsychism), that a single physical system can have many different conscious states associated with it simultaneously (since there are different representations, in terms of information processing, for the same process), and that there are layers of consciousness within all objects (each particle’s, each molecule’s…).

    Dennett flirts with this law in his Multiple Drafts Model of consciousness. But he veers off into behaviorism, ignoring the existence of subjective experience and the ramifications of his own theory regarding the experience for non-neural parts of our bodies and for the non-human.

    I, myself, do not feel this law is sufficient. I suspect it is true, but it lacks the specificity and applicability to make it more than a broad outline. More work is needed to expand and truly apply it.

  32. That Guy Montag on 10 Jan 2010 at 4:47 pm

    I’m going to have to say I’ve read a bit of Tallis and I’m with Steven on this one.

    First, a quick burst on terminology, “aboutness” is philosophical slang: he’s talking Intentionality here. We have thoughts about things and it’s this aboutness that supposedly defines the specifically mental.

    The next thing is that I understand Tallis in his model of the mind to be very much a proponent of the extended mind. This is the sense that we can’t make sense of the mind just by looking at the brain, and M. Davies gave us all a very big hint as to where and how to think about this with his comments on water. We’re basically supposed to think here of Hilary Putnam and his Twin Earth thought experiment.

    http://en.wikipedia.org/wiki/Twin_Earth

    The idea here is that supposedly you can get two brains that are in an identical state and yet mean fundamentally different things. Another way to think of it would be we could use the sentence “Let’s go to the pub.” many times and each time go to a different pub. The analogy is a little strained but should get you there eventually. Putnam’s point was as he put it “meaning ain’t in the head” and that to understand meaning you can’t just rely on the brain state, you need (and it pains me to write this because it is such a cliche) “context.”

    Me, I find this fascinating because it really is a puzzle where we draw the line between our thinking and the world. That said, I’m with Dennett in that it is only a problem of consciousness because dualism has been smuggled into the conversation by everyone involved, even the staunch materialists. C-fibres are pain. It might be hard to conceive of it, but tough. Our mind is a worrying kludge of processes and we shouldn’t expect it to be capable of grasping every truth about the world. We also shouldn’t be surprised if how it works throws the occasional spanner in our progress.

  33. jstunkel on 10 Jan 2010 at 9:20 pm

    Steve,

    Understanding consciousness drives me crazy. I don’t understand. It seems like we simply don’t have a good explanation for why neural activity creates awareness.

    You make the statement: “I prefer the summary that the mind is what the brain does”. How does the brain create the mind by what it does?

    Is this an unsolvable problem?

  34. HHC on 10 Jan 2010 at 10:32 pm

    I am sorry to say that Ray Tallis is quite behind the times in his thinking. B. F. Skinner’s stimulus-response chains explained all living behaviors without the concept of cognition. Cognitive theory would come later in the form of S-O-R, stimulus-organism-response chains. Your philosophical discussions are sounding like the early history of psychology as a science.

  35. dlb on 11 Jan 2010 at 12:33 am

    Very briefly:
    1. The aboutness point is a reference to intentionality, as mentioned above. This is a problem for theories of mental content generally: how can thoughts be about anything, such as objects? Even supposing externalism about mental content, as Putnam and Burge motivate (with the twin-Earth argument or the arthritis/tarthritis argument), there is still a question about how thoughts can be about, or directed toward, their contents (whether those contents are in the head or outside of it). One typical response here is to assume a representational theory of mind, and then give some account of representation that makes sense of the contentful-ness of our thoughts.
    At any rate, it is not obvious that all of our conscious mental states are contentful: Searle likes to cite background affect as lacking any particular object, such as generalized anxiety. Furthermore, even supposing we could understand consciousness (i.e. subjective properties of experience or ‘qualia’), it is not obvious that we would thereby solve the problem of intentionality. These are two separate issues.

    2. On the mushing without mashing or what-not:
    Seeing as Tallis seems to cite one other general problem from the philosophy of mind (intentionality: see pt 1 above), and without knowing anything about Tallis at all, his points about the unity of consciousness despite the diversity of its contents seem to refer to a famous (in philosophy) characterization of consciousness that dates to Kant: the so-called transcendental unity of apperception. I would not be surprised if that is what he is obliquely referring to.
    The transcendental unity of apperception is… i. extremely complex to understand; ii. especially for a blog comment; and iii. extremely contentious in some philosophical circles (see: the Churchlands).
    At any rate, consciousness surely has different properties, such as intentionality and (possibly) a transcendental unity, aside from the subjective properties of experience. But it is the subjective properties of experience that purportedly cause the real headaches, or so Tallis (and Chalmers and Kim and…) believe.
    Again, without knowing anything of Tallis, at the very minimum he seems to be mushing together lots of different problems.

  36. Steven Novella on 11 Jan 2010 at 9:02 am

    YairR – If that is Tallis’s position, he was very unclear.

    In any case – it still amounts to a hyperadaptationalist position. Why is explaining consciousness as an evolutionary epiphenomenon unsatisfactory? To clarify – it is satisfactory from an evolutionary point of view – maybe not from a neuroscience point of view, but that is irrelevant to the evolution question.

    The point is, evolution does not require consciousness to have a specific advantage. We still want to know why we are conscious, but that is (or may be) irrelevant to evolution.

    Further- we don’t know that consciousness is an epiphenomenon. Doing the processing the brain does may require some form of consciousness – we don’t know enough to say that it doesn’t.

    I think what we have here are philosophers trying to understand or at least describe consciousness as a phenomenon, and, failing that, claiming that there is some “hard” problem with consciousness. But this is purely a problem of philosophical understanding. It is not a problem of neuroscience.

  37. artfulD on 11 Jan 2010 at 2:22 pm

    “But this is purely a problem of philosophical understanding. It is not a problem of neuroscience.”

    So the philosophical problem of understanding the evolutionary value of subjective experience is not a problem for neuroscience because what, that neuroscience isn’t concerned with evolution of the cognitive apparatus?

    Give us a break here. Odds are that it’s the subjective experience that is crucial to all aspects of an organism’s adaptive potential.

  38. Pixy Misa on 12 Jan 2010 at 2:30 am

    YairR, you say:

    “The problem is that such epiphenomenalism isn’t satisfactory. We don’t want to merely conclude that human consciousness arose accidentally – we want to know what is it that it consists of.”

    You make a mistake here. The explanation of conscious experience as an epiphenomenon of complex cognition may or may not be personally satisfactory. It may or may not be correct. But if it is an epiphenomenon, that in no way implies that it is an accident; it just implies that it is not the specific function that was selected for, just as blood was not selected for redness.

  39. YairR on 12 Jan 2010 at 9:45 am

    Novella –

    “Why is explaining consciousness as an evolutionary epiphenomenon unsatisfactory? To clarify – it is satisfactory from an evolutionary point of view – maybe not from a neuroscience point of view, but that is irrelevant to the evolution question.”
    If you only want to talk about the biology of the brain, without mentioning consciousness, I suppose it is satisfactory. But if you want to talk about consciousness – the evolution of it, the neuroscience of it, and so on – you need to know what you’re talking about. Which means knowing what phenomena it is epi to. But here you run into the root of the hard problem – science cannot investigate epiphenomena, just phenomena, so we are left without our greatest tool (science) when we come to resolve this question. All we can do scientifically is essentially to establish correlations between verbal reports of consciousness and other phenomena, and that’s revealing but just not enough.

    “Further- we don’t know that consciousness is an epiphenomenon. Doing the processing the brain does may require some form of consciousness – we don’t know enough to say that it doesn’t.”
    But that won’t make it not an epiphenomenon. As an analogy – the redness of human blood follows from how it functions, so we understand the evolution of it, but we also understand that the evolutionary pressures were on the metabolic functions and the color is an epiphenomenon. Likewise, knowing that the processing the brain does requires consciousness (and I’m sure it does) would help us understand the evolution of consciousness, but it would still be an epiphenomenon. The problem is that while we understand enough chemistry to figure out when proto-blood is red and why blood is red, we don’t have a similar understanding regarding the processing. We don’t know when processing becomes conscious and why.

    “I think what we have here are philosophers trying to understand or at least describe consciousness as a phenomenon, and failing claim that there is some “hard” problem with consciousness. But this is purely a problem of philosophical understanding. It is not a problem of neuroscience.”
    To give an analogy, consider the study of motion within Aristotelian physics. For Aristotle, there was no separate category of movement; it was just part of “change”. Only once people, in the late middle ages, started to look at movement as a separate phenomenon were they able to formulate mathematics to describe it, which led, in time, to Newtonian physics. It took the right conceptual framework to study the phenomenon.

    I would posit that the same is true for neuroscience. We currently lack the conceptual framework that will allow us to formulate the right questions. And that is a problem with philosophy, not pure science (although the two are obviously connected).

    I would note that I disagree with Tallis’s and most other philosophers’ descriptions of consciousness. I think they’re relying too heavily on their own subjective experience of human consciousness, and are too enamored with linguistics. I do not think consciousness is united or about something. I think that’s just an illusion, a delusion that is part of human consciousness, much like our sense of psychological self. I believe such issues only serve to confuse the study of consciousness as such, obscuring the hard problem with a host of other issues – Tallis is “mushing together lots of different problems”, as dlb said. But the hard problem remains; it is not a problem with the philosophical description of consciousness but rather with the applicability of the empirical method.

    Pixy Misa -

    It is an accident precisely in the sense that it is not the function selected for, so its arising is due only to chance and not to selective pressures. It is an accident precisely in the sense that the redness of our blood is an accident, no more and no less.

  40. benshums on 14 Jan 2010 at 7:45 am

    I learned 3 new words from this post!

    Coy, Epiphenomenon, and conflate. Thank you Steve!

  41. hornungerous on 30 Jan 2010 at 10:35 pm

    @HHC: This was most certainly not Skinner’s position. Skinner would have said that consciousness is something the brain does (consciousness is behavior), along the lines of what Dr. Novella said about the “mind” being what the brain does. Ray Tallis is a new kinda somethin’ with his philosophy; have to read it to see what it’s about.

  42. Mong H Tan - PhD on 16 Apr 2010 at 3:59 pm

    RE: Deciphering Tallis’ writing on consciousness!?

    Steven Novella concludes above that “In my opinion Tallis does not put forward one valid argument against a purely materialistic neuroscience view of consciousness – that consciousness is brain function. His evolutionary arguments misrepresent evolutionary theory. His neuroscientific arguments are simply false, and do not reflect the state of the science. And his philosophical arguments are failed semantic gambits that are ultimately incoherent.”

    I thought that is pretty much the same conclusion that I just reached, after reading his brief bio and work elsewhere; and I presented my comment therein here: “Underrated Raymond Tallis — RE: Underrated Tallis? — An aspiring modern philosopher of ME (Mind & Emotion, including morality & ethics)!?” (StandpointMagUK; April 14).

    I think Tallis — even Daniel Dennett — has fallen into the trap of the pseudoscientific reductionism of biology — and of consciousness — of the prolific neo-Darwinist writer and purveyor Richard Dawkins; and I am eager to see if Tallis would or could recover from the fever that I have diagnosed and prescribed for in my comment above!?

    Best wishes, Mong 4/16/10usct2:59p; practical science-philosophy critic; author “Decoding Scientism” and “Consciousness & the Subconscious” (works in progress since July 2007), Gods, Genes, Conscience (iUniverse; 2006) and Gods, Genes, Conscience: Global Dialogues Now (blogging avidly since 2006).

  43. Hector Morales on 18 Apr 2010 at 10:54 pm

    And the Lord God formed man of the dust of the ground, and breathed into his nostrils the breath of life: and man became a living soul
