Jun 18 2013
Mind and Morality
One of the themes of this blog, reflecting my skeptical philosophy, is that our brains construct reality – meaning that our perceptions, memories, internal model of reality, narrative of events, and emotions are all constructed artifacts of our neurological processing. This is, in my opinion, an undeniable fact revealed by neuroscience.
This realization, in turn, leads to neuropsychological humility – putting our perceptions, memories, thoughts, and feelings into a proper perspective. Thinking that you know what you saw, that you remember clearly, or that your “gut” feeling is a reliable moral compass is nothing but naive arrogance.
Perhaps the most difficult aspect of constructed reality to fully accept is our morality. When we have a deep moral sense of what is right and wrong, we feel as if the universe dictates that it is so. Our moral senses feel objectively right to us. But this too is just an illusion, an evolved construction of our brains.
Before I go on, let me point out that this does not mean morality is completely relative. I discuss the issue here and here, and if you have lots of time on your hands you can wade through the hundreds of following comments.
The neurologically constructed nature of morality means that neuroscientists (including psychologists) can investigate how our morals are constructed, just like anything else the brain does. A recent series of experiments published in Psychological Science did just that.
Researcher Adrian Ward looked at the effect that morality and agency have on each other. Agency is the notion that some other entity in the world has a mind and therefore it has intentions, plans, and feelings.
For a little background, when it comes to our moral feelings, we do not mentally assign agency based upon a scientific understanding and analysis of the true nature of the entity. Assigning agency, or assuming that another entity has a mind, is just one more thing that our brains do subconsciously.
That subconscious agency detection uses an evolved process. We do not simply assign agency to all other humans and not anything else. Animals have some degree of agency, and so it was important for our ancestors to behave as if predators, for example, are agents who want to kill and eat us. In fact we appear to have evolved hyperactive agency detection – we err hugely on the side of feeling as if something has agency if it simply acts as if it does.
The question then becomes – what are the rules by which our brains subconsciously assign agency to things in our environment? One apparent rule is that we tend to assume agency if an object moves in a non-inertial fashion. For example, we have no problem assigning agency and even emotions and character to two-dimensional shapes simply by how they move.
Ward’s experiments explore the relationship between moral calculation and the assignment of agency – which can also be thought of as theory of mind, the notion that other entities have minds like ours and can think, plan, and feel.
Ward found two things. The first is that if an entity to which we would normally not assign agency is victimized, we assign mind to it. He studied subjects’ attitudes toward corpses and robots, and found that when these were the target of abuse, subjects assigned more mind to them. If they were the target of moral harm, then they must have a mind, because only entities with minds can be morally harmed.
What this means is that, not only do we assign moral value to entities with minds, we assign minds to entities with apparent moral value. The two concepts are linked in our brains.
Ward also found that for entities to which we assign full mind at baseline, being victimized caused subjects to assign less mind to them – they were dehumanized. This may be a way of reducing our moral pain, perhaps related to cognitive dissonance.
This all makes sense when you put it together. We can do whatever we want to a rock, because a rock has no mind or agency. We should not feel any remorse or moral pain from smashing apart a rock. But if an entity has agency, if it has a mind, then all of our moral emotions come online.
This is, to some extent, a binary calculus – things either have a mind or they don’t. But for those things that do have a mind, there appears to be a spectrum: they can have more or less of a mind.
Other research indicates, for example, that we treat in-group and out-group members differently with regard to our moral calculus. We generally assign great empathy to members of our in-group, but are capable of dehumanizing members of an out-group. We know they have agency, but they are not full mental beings. They are automatons who can be killed if necessary.
I think this attitude is reflected in our fiction. Enemy soldiers, who need to be killed in large numbers, are generally faceless automatons. They are uniforms, not people. Think of the Stormtroopers in Star Wars. A general rule of science fiction is that there are certain things the heroes can kill with abandon (without moral judgment): robots, insects, undead, monsters, and Nazis. Nazis can be thought of generically as any faceless evil enemy soldiers, which is perhaps why so many science fiction enemy armies have a Nazi vibe to them.
Also, friendly aliens tend to look more human, while enemy aliens that we need to kill in large numbers are often more insectoid or reptilian – they are monsters, not persons.
All of this has huge implications for our morality and ethics. We need to recognize that we have a hard-wired ability to dehumanize people – to reduce our emotional assignment of mind and therefore morality to individuals or groups.
This also has implications for things like animal research. How much agency and mind do different people assign to animals, and should this be the basis of our treatment of them, vs a more scientific approach?
What will be our attitude toward and treatment of robots as they become more and more human in appearance and behavior? What will happen when we encounter aliens – how will they fit into our moral calculus?
Understanding how our brains construct morality will not determine morality for us, but it will hugely inform the conversation.