Jun 18 2013

Mind and Morality

One of the themes of this blog, reflecting my skeptical philosophy, is that our brains construct reality – meaning that our perceptions, memories, internal model of reality, narrative of events, and emotions are all constructed artifacts of our neurological processing. This is, in my opinion, an undeniable fact revealed by neuroscience.

This realization, in turn, leads to neuropsychological humility – putting our perceptions, memories, thoughts, and feelings into a proper perspective. Thinking that you know what you saw, or that you remember clearly, or that your “gut” feeling is a reliable moral compass, is nothing but naive arrogance.

Perhaps the most difficult aspect of constructed reality to fully accept is our morality. When we have a deep moral sense of what is right and wrong, we feel as if the universe dictates that it is so. Our moral senses feel objectively right to us. But this too is just an illusion, an evolved construction of our brains.

Before I go on, let me point out that this does not mean morality is completely relative. I discuss the issue here and here, and if you have lots of time on your hands you can wade through the hundreds of following comments.

The neurologically constructed nature of morality means that neuroscientists (including psychologists) can investigate how our morals are constructed, just like anything else the brain does. A recent series of experiments published in Psychological Science did just that.

Researcher Adrian Ward looked at the effect that morality and agency have on each other. Agency is the notion that some other entity in the world has a mind, and therefore has intentions, plans, and feelings.

For a little background, when it comes to our moral feelings, we do not mentally assign agency based upon a scientific understanding and analysis of the true nature of the entity. Assigning agency, or assuming that another entity has a mind, is just one more thing that our brains do subconsciously.

That subconscious agency detection uses an evolved process. We do not simply assign agency to other humans and to nothing else. Animals have some degree of agency, and so it was important for our ancestors to behave as if predators, for example, are agents who want to kill and eat us. In fact we appear to have evolved hyperactive agency detection – we err hugely on the side of feeling as if something has agency if it simply acts as if it does.

The question then becomes – what are the rules by which our brains subconsciously assign agency to things in our environment? One apparent rule is that we tend to assume agency if an object moves in a non-inertial fashion. For example, we have no problem assigning agency and even emotions and character to two-dimensional shapes simply by how they move.

Ward’s experiments explore the relationship between moral calculation and assigning agency – which can also be thought of as theory of mind, the notion that other entities have minds like ours and can think, plan, and feel.

Ward found two things. The first is that if an entity to which we would normally not assign agency is victimized, then we assign mind to it. He studied subjects’ attitudes toward corpses and robots, and found that when these were the target of abuse, subjects assigned more mind to them. If they were the target of moral harm, then they must have a mind, because only entities with minds can be morally harmed.

What this means is that, not only do we assign moral value to entities with minds, we assign minds to entities with apparent moral value. The two concepts are linked in our brains.

Ward also found that for entities to which we assign full mind at baseline, being victimized caused subjects to assign less mind to them – they were dehumanized. This may be a way of reducing our moral pain, perhaps related to cognitive dissonance.

This all makes sense when you put it together. We can do whatever we want to a rock, because a rock has no mind or agency. We should not feel any remorse or moral pain from smashing apart a rock. But if an entity has agency, if it has a mind, then all of our moral emotions come online.

This is, to some extent, a binary calculus – things either have a mind or they don’t. But, for those things that do have a mind, apparently they can have more or less of a mind; there does appear to be a spectrum.

Other research indicates, for example, that we treat in-group and out-group members differently with regard to our moral calculus. We generally assign great empathy to members of our in-group, but are capable of dehumanizing members of an out-group. We know they have agency, but they are not full mental beings. They are automatons who can be killed if necessary.

I think this attitude is reflected in our fiction. Enemy soldiers, who need to be killed in large numbers, are generally faceless automatons. They are uniforms, not people. Think of the Stormtroopers in Star Wars. A general rule of science fiction is that there are certain things the hero(es) can kill with abandon (without moral judgement): robots, insects, undead, monsters, and Nazis. Nazis can be thought of generically as any faceless evil enemy soldier, which is perhaps why so many science fiction enemy armies have a Nazi vibe to them.

Also, aliens that are friendly look more human while aliens that are our enemies and we need to kill in large numbers are often more insectoid or reptilian – they are monsters, not persons.

All of this has huge implications for our morality and ethics. We need to recognize that we have a hard-wired ability to dehumanize people – to reduce our emotional assignment of mind and therefore morality to individuals or groups.

This also has implications for things like animal research. How much agency and mind do different people assign to animals, and should this be the basis of our treatment of them, vs a more scientific approach?

What will be our attitude toward and treatment of robots as they become more and more human in appearance and behavior? What will happen when we encounter aliens – how will they fit into our moral calculus?

Understanding how our brains construct morality will not determine morality for us, but it will hugely inform the conversation.

41 thoughts on “Mind and Morality”

  1. Bill Openthalt says:

    Thanks for a very nice post, Steven.

    As far as animals are concerned, it is quite clear that people tend to assign full human agency to their pets, and are even capable of ranking humans (even close relations) lower than their pets. And of course, humans tend to favour their kin, their neighbours over the other inhabitants of the city, their countryfolk over foreigners, etc.

    It is probably useful to see morality as a large-group-building device. If one doesn’t personally know all the members of a group, one needs a framework to determine how to deal with strangers. The better relative strangers cooperate, the more successful they will be.

  2. Yehouda Harpaz says:

    > We need to recognize that we have a hard-wired ability to dehumanize people

    How do you know that it is “hard-wired”, rather than learned?
    Before that you used the term “evolved process”, which sounds to me as also meaning hard-wired (by evolution). If that interpretation is true, how do you know that it is a hardwired process?

  3. Harpaz – that’s an excellent question. The inference is from the universality of the trait. It does not seem to be culture-specific. I am not sure how culturally extensive the research has been, but as far as it has been investigated it appears to be universal. I can look deeper into that specific question.

  4. mufi says:

    Bill said: It is probably useful to see morality as a large-group-building device. If one doesn’t personally know all the members of a group, one needs a framework to determine how to deal with strangers. The better relative strangers cooperate, the more successful they will be.

    Which goes a long way towards explaining morality as a cultural variable. That is, different cultures arrive at different solutions for dealing with strangers.

    In small-scale tribal societies, killing a stranger that you happened upon was no big deal – it may even have been positively sanctioned by the tradition (e.g. as Jared Diamond observed in tribal New Guinea) – whereas in larger, more complex agrarian societies, let alone in modern-industrial societies, we’ve acquired a more universalist moral stance (albeit, with some important exceptions, like those Steven mentioned).

    However, I doubt that social scale and complexity can explain all of the variation here – e.g. a moral framework might be more universalist and less violent (at least in principle) than another – even when both have emerged from similar socioeconomic backgrounds.

  5. daedalus2u says:

    It may be that the ability to dehumanize others is “hard wired”, sort of like how the ability to learn a first language is “hard wired”, even though every language must be learned and essentially any language can be learned.

    I am quite sure that dehumanizing specific “others” is not “hard wired”, and is only learned. I discuss my explanation of how xenophobia happens.


    I don’t think it is possible for it to be innate because the neural networks that do the pattern recognition cannot be innate. As I see it, recognizing a phenotype can only occur after the neural network pattern recognition has developed, and then categorizing that phenotype as human/non-human can only occur after that.

    Whether it is innate or not innate, it is still unacceptable. If you are unable to recognize someone as a human being, you should recognize your deficit and compensate for it.

  6. daedalus2u:

    I’m willing to bet that most people aren’t going to be able to recognize such a deficit in themselves because they probably don’t consciously express it. It’s very easy to have the sorts of biases that show up on, for example, implicit association tests, without realizing they might have greater applicability.

    For example, white jurors might be more willing to sentence black defendants to death, but if asked about it, they would probably rationalize their decision by latching on to some piece of evidence from the case that makes the black defendant more “deserving” than a white one. But it’s not a conscious decision to treat a black person as less human, so it’s very hard to see it in yourself. (I mean, how many jurors get multiple, identical death penalty cases with the only difference being the race of the defendant?)

  7. Bill Openthalt says:

    @ Yehouda
    The confusion might stem from the fact that “humanising” is not sufficiently specific, as it doesn’t render the hierarchy implicit in the relationships between humans. Individuals only know a very small number of people, and they only really care for those close to them. Humans favour kin, and that’s a small number of people. Absent (the perception of) personal knowledge, we simply do not care.

    Active dehumanisation in times of conflict is easier to understand, as one’s own survival depends on one’s ability to harm the opponent.

    What seems to be hardwired is the need to belong to (a hierarchy of) groups, which implies a us/them distinction, which in turn leads to dehumanisation.

  8. SARA says:

    It’s a fascinating post.

    I am one of those people who assign my pets as much agency as humans. I am constantly reminding myself that they really don’t think as complexly as I tend to act like they do.

    But even logically acknowledging that, I still place them higher in my personal “in” group than most of the humans I know.

    I guess what I’m saying is that morality has an emotional bias that is not really discussed here. And I think that emotional bias is also helping to define our “in” and “out” groups.

  9. Bill Openthalt says:

    @ SARA
    Emotions are how we become aware of the results of the calculations/considerations of our subconscious parts (like the one that evaluates social closeness).

  10. pseudonymoniae says:

    Related to Yehuda’s point, while I buy arguments of universality as indicating a probable evolutionary or hard-wired component, I definitely don’t see these as absolute. At a minimum clarification is in order. I would note that nearly all humans share fairly similar environments. We develop in societies with social norms and rules and similar survival demands, and these norms themselves have likely undergone thousands of years of selection. Thus, the people who survive today receive strong and broadly convergent societal demands to think and behave in specific ways, regardless of what the human “hard-wiring” must look like. (Even when people do not follow these universal rules of human behavior we tend to exclude them from normative analysis — they are “abnormal” or deviant. Because of the high cost of failure to follow such rules, we should expect that humans who are not classified as abnormal would learn to apply or fake these universal traits, even if they were not hard-wired.)

    As for my own question, I can see how humans may have developed a hard-wired ability to sense “life”, e.g. based on non-inertial motion or other factors, because living things tend to be interesting, and non-living things often are not. But this doesn’t directly imply that we have a hard-wired ability to detect agency. Life detection might arise very early in development out of an obligate process (e.g. assuming relatively normal intelligence), and perhaps as young humans learn about the concept of agency they find it very easy to apply agency to these easily identified living things? Accepting that the vast majority of humans develop within social environments where agency detection is important, it follows that there might very well be universal social support for this transformation from life detection to agency detection.

    Therefore, is it not possible that humans rely on a largely obligate (hard-wired) neural machinery capable of detecting certain key features of our environments, like life, but that this trait is almost universally coopted by human social demands for use in agency detection? At least a weak form of this hypothesis seems fairly reasonable to me.

  11. Steven

    I recently finished reading two books that address the issue of morality (or what humans call morality) from some interesting perspectives.

    On Being Certain: Believing You Are Right Even When You’re Not, by Robert A. Burton MD

    Supernatural Selection: How Religion Evolved, by Matthew Rossano PhD

    I strongly recommend both to you and members of your readership who are interested in this topic.

    Thank you for another fine post.


  12. Heptron says:

    “for entities to which we assign full mind at baseline, being victimized caused subjects to assign less mind to them – they were dehumanized. This may be a way of reducing our moral pain, perhaps related to cognitive dissonance.”

    Dr. Novella, just to make sure I understand this, were you saying that if I assign full mind to something and it victimizes me, I will dehumanize it?

    Also, the first thing I thought of while reading this was people who have issues with hoarding. Maybe they assign too much mind to too many things and feel guilty about getting rid of them. I always felt bad about throwing away toys when I was a kid.

  13. No – if you see someone victimized by someone else, you will reduce your estimation of mind in the person being victimized.

  14. DOYLE says:

    If we are hard wired to survive as organisms, doesn’t it follow that we are hard wired to dehumanize based upon a kill-or-be-killed premise?

  15. daedalus2u says:

    Another way of thinking about this is as a social power hierarchy. Those at the top are *by definition* the most important, those at the bottom are *least important* and if you are low enough down you are no longer human and so have no human rights (because human rights are only for human beings).

    To a large extent, this is the whole point of top-down social power structures.

  16. Hoss says:


    “Another way of thinking about this is as a social power hierarchy. Those at the top are *by definition* the most important, those at the bottom are *least important* and if you are low enough down you are no longer human and so have no human rights (because human rights are only for human beings).

    To a large extent, this is the whole point of top-down social power structures.”

    I think it has more to do with “in groups” than a “social power hierarchy”. In this “social power hierarchy”, an individual will be more likely to dehumanize groups outside of the spectrum they associate themselves with, regardless of their place on that spectrum. The people who run government, to an extent, dehumanize the general public, but in contrast the general public also dehumanizes the people in government.

    There are multiple spectrums people associate themselves with in society and politics. Generally, the fewer people in the spectrums (not all spectrums have the same value though) an individual associates with, the greater the political disenfranchisement for that individual.

    Mostly political power belongs to the “majority”.

  17. Hoss says:

    Mostly, political and social power belongs to the “majority”.

  18. Bruce Woodward says:

    “No – if you see someone victimized by someone else, you will reduce your estimation of mind in the person being victimized.”

    At first this seems counter-intuitive to me, but when you think about it, it makes evolutionary sense in that we will feel more “attracted” and willing to please the “stronger” person. Those who run to defend the one being victimised will often become the target of that victimisation themselves (no matter how many Disney movies tell you otherwise).

  19. Bruce Woodward says:

    Dick raises an interesting point: do we dehumanise those who are ill or sick? If so, do we have thresholds or points at which we start to give people less agency?

    And at what point do we stop assigning baseline mind to a human? We associate all kinds of agency to dead bodies, but will spit on someone who supports a rival football team.

    I guess what I am trying to get at is: is there a point where an illness or disability switches us over from assigning baseline mind (and therefore diminished agency) to seeing them as not having enough of a baseline mind, so that we start giving them more agency again?

  20. Bruce Woodward says:

    I think it best not to derail this thread with that issue, perhaps best to take it to one of the other threads where mental illness denial has been thrashed out.

    I think the idea that an illness, whether it be perceived by society or real, might dehumanise people is an interesting one. We are talking about perceptions here, and the reality or non-reality of mental illness is not the issue; the perception of it, and how it fits into our assignation of agency, is quite fascinating to think about.

  21. Bill Openthalt says:

    @ Dick and Bruce
    We have a subconscious functional part that evaluates our relationship to people we meet. This is based on knowledge of the person, but also on our (again subconscious) appreciation of their usefulness (be polite to an apparently powerful person) and risk (a sick person can make you sick, better stay away). The results of these evaluations are brought to our consciousness module as feelings. If people behave in ways you do not know, they might be dangerous, and you will feel anything from uncomfortable to downright hostile. These feelings are generated by modules that are barely influenced by the rational module, so even if you manage to control your behaviour, you will still experience the emotions.

    Agency is necessary for “humanisation”, but we recognise agency in non-human actors as well, so it is not sufficient. To be seen as “belonging to my (current) group”, people need to show the essential characteristics of membership, so someone exhibiting obvious non-group behaviour (like wearing the wrong jersey) will be placed outside the group. Depending on circumstances, this leads to ceasing to consider them “human”, while recognising they have agency.

    Human is actually wrong unless you accept that subconsciously, humans define “human” as “part of my group”. Humanity is an abstract concept without meaning to the more ancient brain parts.

  22. BillyJoe7 says:


    “Or banned from commenting on Novella’s blog”

    CS didn’t get banned for disagreeing, he got banned for hijacking every thread to promote his agenda and for attacking the person rather than the argument and for continuing to do these things after being warned.

    I don’t think you’ll ever get banned because you are nowhere near as good as CS at doing what he did.
    That’s a compliment.
    In fact, you will likely be quietly ignored.
    That’s not a compliment.

  23. Bruce Woodward says:


    Yeah, I think that people dehumanise a lot more easily than we would ever want to admit. It brings up a pet hate of mine in that I really dislike the idea of nationality and patriotism; it creates artificial barriers in the minds of most people. News reports here in the UK often cite disasters in other countries as having a certain number of British casualties, and the extent of the report depends on how large that number is as opposed to the total number of casualties. One British death seems to trump 100 foreigner deaths… this is in a country where being patriotic is almost frowned upon (or at least used to be before the 2012 Olympics).

    What interests me is the idea that we see some people as so non-human that we then start giving them “false” agency. Do ill/disabled people lose agency the further from the normal that they go and is there a point at where they start to be given more agency because of their apparent lack of mind?


    What the DSM says or does not say is not the issue; those with mental illnesses behave differently from “neuro-typicals” and will therefore invoke different levels of agency and humanisation. This is a fact of society, and whether you endorse the medical side of it or not is not relevant to this discussion. I think your view might be useful if you are willing to put down your DSM bashing for a bit and engage in what the thread is actually discussing.

  24. daedalus2u says:

    I discuss a lot of the details in the blog post I linked to above.

    I think the mechanism for human agency detection is based on communication: if you and the entity you are trying to communicate with have consilient “theories of mind”, you can communicate. If you can communicate, then you detect human agency in that person. If you can’t communicate, you don’t, and that triggers xenophobia via the uncanny valley effect.

    What constitutes “humanness” is polydimensional. What constitutes “what is human enough to treat well” is (I think) solely a function of the human social power structure.

    “Humanness” can be projected. That is what most people do, and that is what is always done when human traits are attributed to non-human objects or agents. It is the misperception of agency. Usually that misattribution is to impute agency where none exists. Xenophobia is to deny agency where it does exist.

    This is where a lot of the politics of race, religion and discrimination come in. Consider the recent report of a politician wanting to ban abortions because he believed that a 20-week fetus was masturbating. Newborn infants don’t have the control over their muscles to do something like that. That imputation of motivation is pure projection due to hyperactive agency detection. It allows the male politician to focus on the “rights of the fetus” and discount the rights of the woman the fetus is inside to zero. The fetus is a moral agent because it “wants” to masturbate; the woman is not a moral agent because she wants an abortion.

    I am pretty sure that there is some “hard-wired” stuff that puts infants in a different category. Infants can’t communicate, and very likely don’t have a “theory of mind”. I think it is only after they acquire language that they become perceivable as “human agents”. I think that that time is also when the infancy-derived agency expires.

  25. daedalus2u says:

    At SBM there is a blog post by Dr Gorski about a woman killing her autistic son. I think a lack of consilience between the neurologically typically developing theory of mind and the particular theory of mind of someone autistic is what causes such antipathy. They then rationalize those feelings of antipathy as something else.

  26. Bill Openthalt says:

    @ Bruce

    What interests me is the idea that we see some people as so non-human that we then start giving them “false” agency. Do ill/disabled people lose agency the further from the normal that they go and is there a point at where they start to be given more agency because of their apparent lack of mind?

    We are favourably disposed towards the members of our group. If they are in trouble, we will help them at (reasonable) cost to ourselves. Obviously, there is a cost/benefit analysis, and once the costs exceed the actual and potential benefits, the motivation to continue to expend resources will diminish. We don’t get insight into the actual calculation, but we feel demotivated, disheartened, etc. Small societies with limited resources do give up on old people, handicapped babies, etc.

    The overabundance of resources we enjoy today is exceptional, and it allows us to keep extremely premature infants and brain-dead adults alive, and to perform expensive heart surgery on 85-year-olds. Compensating for the raw deal some individuals get is possible, and it makes us feel good (because we ignore the real cost of the resources we’re expending as they are provided by “society”). But for those people directly confronted with the costs, the calculations can sometimes result in them “giving up”.

  27. daedalus2u:

    I briefly blogged on transhuman topics about 4 years ago (under a different name). In one post, I talked about communication and understanding between humans and non-human entities, and it became useful to talk about this interaction in terms of “mutual sentience.” While I did not develop my idea as thoroughly as you have, I think there is a lot of, as you say, consilience between your ideas about shared theories of mind and my ideas about mutual sentience.


  28. edamame says:

    Mr Steal: I agree you should not be banned as a troll, because you are not a troll. Trolls put in some effort to encrypt their nonsense under a patina of rationality. You have done no such thing.

  29. edamame says:

    Dr Novella wrote:
    our brains construct reality – meaning that our perceptions, memories, internal model of reality, narrative of events, and emotions are all constructed artifacts of our neurological processing

    This is a bit close to postmodernism for my liking…. To say that reality is a construct is to make too tempting the slide to the claim that reality is a mere construct, a useful fiction.

    I would say that our brains discover reality, even though our perceptual access to reality is filtered through our brains, and therefore imperfect. Hence the need for replication, multiple confirmations of the same result using different methods, peer review, etc..

  30. Kawarthajon says:

    Steve, I find it interesting that you assign this dehumanizing trait to everyone. Let me ask you whether you think that there are some people who are more naturally resistant to this? For example, despite intense dehumanizing of Jews in Germany during the Nazi reign, there were people (few, of course) who were able to resist the propaganda and help out the Jews, despite enormous personal risk. In any human rights campaign, it seems as though there are always people who are ahead of others in terms of bringing equality to people who are treated unequally, while there are others who are more likely to encourage and support the oppression and dehumanization of certain groups. What are your thoughts about this?

    I believe, as others have mentioned, that there is a strong element of socialization in the process and it becomes a kind of mob mentality, although some people are resistant to this mentality and are better at maintaining their own beliefs against the pressure of the mob.

    BTW, a great example of how the dehumanization process works from a sociological point of view can be found in Gwynne Dyer’s documentary “War”. It is a timeless examination of the process, which, I can imagine, was used way back in ancient times all the way up through the present day (i.e. Greeks vs Barbarians, Romans vs Gauls, Han Chinese vs Mongols, Christian vs Muslim, Settlers vs Natives, America vs USSR, etc…).

  31. daedalus2u says:

    I think what Dr Novella meant was that we all generate our own representation of reality – that is, we have no access to any actual reality, only to the representation of reality that we construct with the sensory data that our senses transmit to our brain.

    Knowing that what we think of as “reality” is only our representation of reality as we know it is not the same as constructing a reality in the postmodern sense (if I understand postmodernism correctly which may be in doubt). I am pretty sure that Dr Novella is thinking along these terms, that we can’t “know” what actual reality is, we can only know what our representation of that reality is, and that we should not confuse our representation with actual reality, and when we find non-correspondence between our representation and actual reality what needs to change is our representation, and that not all representations are equivalent as postmodernism claims.

  32. daedalus is correct – it is the internal model of reality, our experience of it, that is constructed. There is a real external reality, and our internal model obviously has a functional relationship to it, but it is highly biased and imperfect. That is why we need things like logic and science.

  33. Dick Steele was a sockpuppet. Sorry he slipped through. Gone now.

  34. ccbowers says:


    I think it’s pretty clear that Steve did not mean that our brains construct reality in a metaphysical sense (or like Deepak Chopra might say), but I see how that quote, read quickly, may appear that way. The rest of the post makes it clear what he means – he is referring to our internal understanding of reality constantly being constructed, altered, and reconstructed by our brains.

    And what he is describing is a bit more than a simple flawed “discovery,” which I think is also not exactly right, but perhaps I misunderstand you. The flaws are often systematically altered, and it is difficult for people to realize this individually because we are under the illusion that we are always accessing and remembering THE reality. Yes, this does have a postmodernist flavor to the idea, but let’s not let that cause us to bias our attitudes against the idea when it is an important one. It is this ‘neuropsychological humility’ that Steve mentions that the general public has little clue about.

    That does not necessarily deny that there is an actual reality, however, and you are correct in that multiple approaches of attacking the same questions will help improve our resolution and accuracy.

  35. ccbowers says:

    I guess I didn’t refresh my screen, and did not see the D2U and Steve comments before posting mine. I like how D2u ended that comment.

  36. Bill Openthalt says:

    @ daedalus2u

    (if I understand postmodernism correctly which may be in doubt).

    All interpretations of postmodernism are equally valid, though philosophers show a distinct preference for those interpretations that result in tenure. Anyway, take your pick, it’s all in your head (to quote Mr Tweedy).

  37. daedalus2u says:

    Bill, if all interpretations of postmodernism are equally valid, then I do not want to understand it. In other words, the concept of postmodernism does not map onto the neural networks that I use for thinking.

  38. Martin Lewitt says:

    Human social intelligence tends to imbue not just animals and humans with agency, but anything that impacts our lives and is beyond our control: wind, storms, fire, coincidence, the vagaries of circumstance that don’t seem to be anyone’s fault, i.e., fate. We can even imbue such things or processes with moral agency. The laws of physics can “punish” bad decisions, natural selection gets imbued with intent, evolution with design, and social Darwinism imbues them with moral legitimacy; markets are imbued with an “invisible hand” that has been regarded as moral or immoral rather than amoral. Collectives and collective identities can be imbued with agency. For example, Hegelianism viewed collective identities like races, classes and nation states as organisms with agency and with moral rights that supersede those of the individual. Our minds can demonize and dehumanize collectives, not just because they are the other, but even as a defense mechanism to blame others for our own failings.

  39. Bill Openthalt says:

    Daedalus, who said postmodernism has anything to do with thinking?

  40. Pjaypt says:

    I have not read all of the posts, so I may be repeating something already said.

    I think sometimes we are too binary about the hard-wired/learned question. Many times, what is hard wired is the capacity to learn something more easily than another, even when the two are linked. It may be that dehumanizing others is hard wired in the sense that it is easier to learn.

  41. palebluedot89 says:

    I highly recommend the independent video game Thomas Was Alone to anyone interested in the idea of attributing agency to shapes.

    It’s a puzzle game at its core, but the characters you work with are just shapes. The trick is that the shapes are given names by the narrator, and personalities. The tall one that can jump far is very proud of himself and a little self absorbed. The short one can jump very high and is jealous of all the others, but some parts of the game are not possible without him. There are others but I don’t want to spoil too much. The game is basically a metaphor for friendship, all of their different skills come together to make the game possible. I could go on for a while about it but it would be kind of OT. In any case, it is very interesting that a game can get you that interested and dare I say emotionally invested in shapes, but this research sheds some light on that.
