Jan 08 2010

Ray Tallis on Consciousness

Raymond Tallis is an author and polymath; a physician, atheist, and philosopher. He has criticized post-modernism head on, so he must be all right.

And yet he takes what I consider to be a very curious position toward consciousness. As he writes in the New Scientist: You won’t find consciousness in the brain. From reading this article, it seems that Tallis is a dualist in the style of Chalmers – a philosopher who argues that we cannot fully explain consciousness as brain activity, but that what is missing is something naturalistic – we just don’t know what it is yet.

Tallis has also written another article arguing that Darwinian mechanisms cannot explain the evolution of consciousness. Curiously, he does not really lay out an alternative, leading me to speculate what he thinks the alternative might be.

The Evolution of Consciousness

While Tallis is clearly a sophisticated thinker who does not appear to have an agenda (and therefore deserves to be taken seriously), he constructs what I feel is a very flawed argument against the evolution of consciousness.

His primary point seems to be that consciousness is not necessary and would not provide any unique survival advantage, and therefore purely Darwinian mechanisms would not select for it. He writes:

Even if we were able to explain how matter in organisms manages to go mental, it is not at all clear what advantage that would confer. Why should consciousness of the material world around their vehicles (the organisms) make certain (material) replicators better able to replicate? Given that, as we noted, qualia do not correspond to anything in the physical world, this seems problematic. There may be ways round this awkward fact but not round the even more awkward fact that, long before self-awareness, memory, foresight, powers of conscious deliberation emerge to give an advantage over those creatures that lack those things, there is a more promising alternative to consciousness at every step of the way: more efficient unconscious mechanisms, which seem equally or more likely to be thrown up by spontaneous variation.

One error in Tallis’s reasoning is the unstated assumption that evolution will always take the most advantageous path to survival. There may be more efficient methods of survival than consciousness, but so what? One might as well ask why birds fly, when flight is such a waste of energy and there are more efficient ways of obtaining food and evading predators.

Life through evolution does not find the solution to problems, but many solutions. Life is also constrained by its own history – so once a species heads down a certain path, its descendants are constrained by the evolutionary choices that have been made.

Consider, for example, that many forms of life on earth have very limited (if any, depending on your view) consciousness. Much of the invertebrate world, including clams, sea stars, and worms, lacks a sophisticated central nervous system and does just fine without anything approaching human consciousness.

In fact Tallis’s point that there are more likely solutions than consciousness conforms nicely to the natural world – evolution seems to have solved the problem of survival much more often without resorting to consciousness. Humans are the exception, not the rule.

His arguments are, ultimately, extremely naive about evolution. They are excessively adaptationist, for example. Not everything that evolves was specifically selected for in all of its aspects. There are many epiphenomena – properties of life that arise as a side consequence. That is because life is messy.

Tallis also fails to consider possible advantages for even primitive consciousness, or how it may emerge out of neural functions that themselves provide useful functions. M.E. Tson goes over this issue in an interesting article. But I will give my take.

The most primitive roots of consciousness may have been in the affinity and aversion to various stimuli in the environment – the ultimate roots of emotion. This could be as simple as a bacterium moving toward food and away from toxins.

As behavior became more complex, so did the systems of aversion and affinity, allowing for pleasure and pain, which in turn allow for a reward system. Once you have a chemical system that rewards certain behaviors and discourages others, you have a foothold into the evolution of complex psychological motivations and emotions. But these have to be experienced by the organism in some way – the foreshadowing of consciousness.
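To make this concrete, here is a toy sketch in Python – entirely my own illustration, with made-up behaviors, probabilities, and numbers, not anything Tallis or any specific model proposes – of how a simple reward signal can strengthen some behaviors and discourage others:

```python
import random

# Entirely illustrative: an "organism" with two possible behaviors whose
# tendencies are strengthened or weakened by a crude reward signal.
# The behaviors, probabilities, and learning rate are all invented.

tendencies = {"approach": 1.0, "withdraw": 1.0}  # start with no preference

def environment(behavior):
    """Hypothetical world: approaching usually finds food (+1),
    occasionally something noxious (-1); withdrawing is safe but earns nothing."""
    if behavior == "approach":
        return 1.0 if random.random() < 0.8 else -1.0
    return 0.0

def choose(tendencies):
    """Pick a behavior with probability proportional to its current tendency."""
    behaviors = list(tendencies)
    weights = [tendencies[b] for b in behaviors]
    return random.choices(behaviors, weights=weights)[0]

for _ in range(1000):
    behavior = choose(tendencies)
    reward = environment(behavior)
    # reinforce or discourage that behavior, never letting a tendency reach zero
    tendencies[behavior] = max(0.1, tendencies[behavior] + 0.01 * reward)

print(tendencies)  # "approach" ends up strongly favored over "withdraw"
```

The point is only that once behavior can be nudged by an internal reward signal, more elaborate motivational systems have something to build on.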

Another factor that could lead to consciousness is the need to filter all the information coming into the organism. With a certain degree of sophistication in visual, auditory, tactile, and chemical sensing systems, the organism’s programmed responses can easily be overwhelmed. The world is complex, and not every shadow is a predator. There can also be multiple competing stimuli – should an organism go after food or avoid a predator?

It is easy to imagine that the same neural system that collects all of this sensory input would also develop a way to filter out the most useful information from the less useful, or even distracting, information – to prioritize inputs. This is the functional equivalent of attention, which is a component of consciousness.
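Here is another toy sketch, again entirely my own illustration with invented stimuli and salience numbers, of what a crude prioritizing filter might look like – only the most salient inputs get passed along for a response:

```python
# Illustrative only: the stimuli and salience values are invented.
stimuli = [
    {"source": "shadow overhead",  "salience": 0.9},   # possible predator
    {"source": "food odor",        "salience": 0.6},
    {"source": "background noise", "salience": 0.1},
    {"source": "light breeze",     "salience": 0.05},
]

def attend(stimuli, capacity=2):
    """Keep only the few most salient inputs; the rest are filtered out."""
    ranked = sorted(stimuli, key=lambda s: s["salience"], reverse=True)
    return ranked[:capacity]

for stimulus in attend(stimuli):
    print("responding to:", stimulus["source"])
# responding to: shadow overhead
# responding to: food odor
```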

Life does not have to evolve down such a pathway, nor does this even have to be the most likely pathway. It may, in fact, be very unlikely. It just needs to be possible.

The brain and consciousness

Tallis begins this article by acknowledging that consciousness does in fact correlate with brain activity – there is no consciousness without brain activity. He also acknowledges that most neuroscientists are content with the notion that the brain causes consciousness and is a sufficient explanation for it.

He therefore departs from the self-serving and patently false propaganda of intelligent design dualists who would have you believe that neuroscientists are abandoning materialism in droves (right, just like they are abandoning evolution), or who will pretend that consciousness does not correlate closely with brain activity (just search my blog for Michael Egnor to read my dismantling of these arguments).

Here is where Tallis departs from the mainstream of neuroscience:

It is about the deep philosophical confusion embedded in the assumption that if you can correlate neural activity with consciousness, then you have demonstrated they are one and the same thing, and that a physical science such as neurophysiology is able to show what consciousness truly is.

Here he commits a bit of a straw man in saying that the position of neuroscience is that brain activity and consciousness are “one and the same thing.” I prefer the summary that the mind is what the brain does. But understanding consciousness cannot be reduced to neurons firing any more than an appreciation for a masterwork painting can be reduced to the chemical structure of paint or wavelengths of light. There is a higher order of complexity to art, just as there is to consciousness.

But this subtle straw man opens the door for Tallis to exploit, unintentionally, vagueness in the language to create the impression of contradictions where none exist (ironic for someone who has been such a foe of post-modernism). For example, he then writes:

Many neurosceptics have argued that neural activity is nothing like experience, and that the least one might expect if A and B are the same is that they be indistinguishable from each other. Countering that objection by claiming that, say, activity in the occipital cortex and the sensation of light are two aspects of the same thing does not hold up because the existence of “aspects” depends on the prior existence of consciousness and cannot be used to explain the relationship between neural activity and consciousness.

I find this paragraph to be an incoherent linguistic mess. You can see how the straw man of saying that brain function and consciousness are the exact same thing leads to his curious rejection of the idea that the brain explains consciousness. He then introduces another straw man – that the brain and consciousness are aspects of some third thing.

The core problem of understanding here is that language is inadequate to capture the nuance of concepts needed to wrap one’s brain (pun intentional) around the concept of consciousness and its relationship to the brain. The brain is an object. Consciousness is a brain phenomenon – a dynamic manifestation of brain function.

He extends this point when he writes:

If it were identical, then we would be left with the insuperable problem of explaining how intracranial nerve impulses, which are material events, could “reach out” to extracranial objects in order to be “of” or “about” them. Straightforward physical causation explains how light from an object brings about events in the occipital cortex. No such explanation is available as to how those neural events are “about” the physical object. Biophysical science explains how the light gets in but not how the gaze looks out.

Again, I find this little more than word play, originating from the false premise that the neuroscience position is that consciousness is identical to the brain. And what does he mean – exactly, operationally – by “aboutness”? Does he mean the abstract concept? How an object is represented in the brain? These all have neural correlates too.

He next makes a point that I have not encountered before, so he gets some points for originality. But I think he should have consulted a neuroscientist before making this point, for he does not acknowledge what seems to me to be the obvious answer. He writes:

My sensory field is a many-layered whole that also maintains its multiplicity. There is nothing in the convergence or coherence of neural pathways that gives us this “merging without mushing”, this ability to see things as both whole and separate.

He is saying that neuronal activity cannot explain how we have experience of multiple independent things at the same time, without those information streams becoming mushed together. But in fact our understanding of brain function accords nicely with the experience Tallis describes.

Our brains are massively parallel in their organization, and there are neurons that make thousands of connections to other neurons. Networks of neurons are discrete, and can store and convey discrete sensations, thoughts, memories, etc. And yet they are meshed with numerous other networks of neurons carrying other discrete sensations. This setup is perfect for allowing meshing without mushing.

But also – there is mushing in that memories do merge together. We get information mixed up all the time, because the discreteness of memories in the brain is not perfect. But this probably goes along with the fact that our brains are excellent at pattern recognition – one network of neurons overlaps or connects in some way with another network, and so one thought reminds us of another – we make connections, we see patterns and associations – we mesh, with some mushiness.

This objection of Tallis is simply not valid. Nor is his next:

“A synapse, being a physical structure, does not have anything other than its present state. It does not, as you and I do, reach temporally upstream from the effects of experience to the experience that brought about the effects. In other words, the sense of the past cannot exist in a physical system.”

Tallis is just overthinking the issue. What is the distinction between a “sense of the past” and storage of information about the past? Storage of information is a present physical state, but the information is about the past.

In fact neuroscientists have discovered neurons in the brain that “time stamp” events. This is where a little more knowledge of the latest in neuroscience would have helped Tallis immensely. Understanding time is just another function of the brain.

In fact Tallis next makes a very telling statement:

This is consistent with the fact that the physics of time does not allow for tenses: Einstein called the distinction between past, present and future a “stubbornly persistent illusion”.

First, he shows how he is overthinking this issue, trying to understand the brain’s sense of time as a basic feature of physics. But if we take Einstein’s quote at face value – that the distinction between past, present, and future is an illusion – that accords nicely with the standard materialist neuroscientific view of consciousness: that it is analogous to an illusion our brains construct for our conscious selves to experience. That would include a sense of time.

This is absolutely not to say that reality is an illusion. Reality exists. But we have an internal model of reality in our brains – a very dynamic model that is part of our internal processing or self-reflection. That model is a constructed “illusion” – it has a very functional and adaptive relationship to external reality, but it is not a simple reflection of it. What we call “optical illusions” are just one manifestation of the ways in which our internal model of reality is an imperfect representation of external reality.

Tallis’s final points are these:

There are also problems with notions of the self, with the initiation of action, and with free will. Some neurophilosophers deal with these by denying their existence, but an account of consciousness that cannot find a basis for voluntary activity or the sense of self should conclude not that these things are unreal but that neuroscience provides at the very least an incomplete explanation of consciousness.

I don’t think these three things can be conflated. The notion of self is again a function of the brain – there are parts of the brain, networks, that produce the sense of self as part of our model of reality. A distinct but related function is to place our sense of self inside our physical bodies, and to make it separate from the rest of the universe. These are clearly identified brain functions – functions we can localize, and turn off with interesting results.

Initiation of action also localizes, and there are disorders (such as Parkinson’s disease) that interfere with the ability to initiate actions. There are parts of the brain that generate activity – keep the neurons firing, and provide for the initiation of specific thoughts or actions. Rather than thinking about initiation as firing up neurons from nothing, it is more accurate to imagine neurons firing throughout the brain all the time (at least while awake), with this activity following different patterns depending upon external stimulation and the internal conversation.

Free will is a more difficult concept to deal with. There are certainly those who believe free will does not exist because the brain is a deterministic (if very complex) machine. A meaningful discussion of this topic is beyond the scope of this blog post. I will just say that I think the discussion of free will falls victim to semantics as well.

What is clear is that people can make choices. Sure, those choices do not occur as a result of some non-material external will. They are just another function of brain activity.

The bottom line is that free will does not present a problem for the neuroscientific view of consciousness. The extent to which we can say that it exists is also the extent to which we can say it is a brain function.

Conclusion

In my opinion Tallis does not put forward one valid argument against a purely materialistic neuroscience view of consciousness – that consciousness is brain function. His evolutionary arguments misrepresent evolutionary theory. His neuroscientific arguments are simply false, and do not reflect the state of the science. And his philosophical arguments are failed semantic gambits that are ultimately incoherent.

But I am curious as to what Tallis thinks consciousness is, if it is not brain function and its existence cannot be explained by Darwinian evolution. I acknowledge he has written a great deal that I have not read – I do not claim to have exhaustively searched for an answer. But he is certainly being coy in these two articles, which is an interesting omission.

I am especially curious as Tallis seems to be an intellectual with whom I likely agree about a great deal. I’ll have to do some more digging.
