Jun 18 2012

Facial Processing

When you see a person for the first time, your eyes quickly scan their face, and in less than a second your brain has gathered a tremendous amount of information about this person, processed that information, and come to many simultaneous conclusions. Think about all the different kinds of information we quickly and simultaneously process – the age and gender of the person, their race (or more generally, the genetic group to which they belong), their personality and mood, their attractiveness, the status of their health, and whether or not we have ever seen them before (do we know them).

Our brains perform this processing quickly, efficiently, and subconsciously, and so we tend to take it for granted. There are regions of the brain dedicated to processing sensory information about human faces. We are still teasing apart all the various aspects of this subconscious processing, which not surprisingly is very complex and involves multiple layers.

It also always fascinates me to find that there is a scientific community and robust research, complete with ongoing controversies, around even the narrowest slice of knowledge – in this case, how we process facial information. Modern science has drilled down deeply on even narrow questions, which I feel is part of its strength.

It is not surprising that humans are so good at facial processing. We are social creatures who live in cooperative groups. The information gained from another’s face can be critical to survival. Since humans are essentially tribal, it makes sense that we would specifically process facial information to recognize when someone belongs to our group vs another group – and not just some other group, but which other group. Are they a member of a tribe with which we are currently hostile or friendly? This is apparently the root of stereotyping. When we are not familiar with a specific individual we judge them based upon the only available information that we have: the group to which they appear to belong.

Research has found this to be generally true. Within fractions of a second we can recognize a familiar face, based on specific structural information. This, too, is a complex processing task (something at which humans are still better than computers). We need to be able to recognize individuals from different angles, in different lighting, in different contexts, while they display different facial expressions, and perhaps even through a disguise. Think about famous actors playing different roles, even with extreme makeup on. I remember watching Pirates of the Caribbean (the second one) and recognizing actor Bill Nighy behind all the Davy Jones makeup. There was something about the way his mouth moved and the expression in his eyes that “clicked” in my brain, and then I could see Bill Nighy.

Further, we are able to determine when someone merely resembles a person we know rather than actually being that person. Even subtle differences are enough for us to conclude that the person is a “lookalike” and not the person themselves.

When we determine that we know someone, that information supersedes the next stage of facial processing about gender, social category, etc. In other words – we now see the person as an individual rather than a generic member of a group or category. This also helps explain the “cross-race effect”: the fact that it is more difficult to recognize individual members of an unfamiliar race or group (yes, it is a real effect).

Facial recognition and information processing is an excellent example that can be used to investigate how the brain organizes and processes information in general. Questions actively being researched include the degree to which facial processing occurs in parallel vs in sequence. There do appear to be early and late stages of facial processing – a hierarchy of information. Some research, however, suggests that there are different modules processing facial information at the same time and that the results of these parallel processes are mixed together. Our net response to a human face, therefore, may be pulled in multiple directions simultaneously.

Further, our response to a human face is not fixed, but is influenced by our current social situation. Psychologists are therefore able to manipulate such responses by priming and other methods. This generally reflects our neurological function – there are built-in biases and processes; these are then affected by our memories (do we recognize an individual, past experiences with a group), and then further modified by our current mood and situation. This also makes sense, as the brain is a tool for adapting to our environment and situation.

Further, there appear to be conscious and subconscious aspects to facial processing. There are some conclusions that we are immediately consciously aware of, like whether or not we recognize the individual and our estimate of their age and gender. However, we have other reactions that are subconscious, such as how trustworthy we feel the other person is. Research has shown that we estimate both trustworthiness and dominance quickly upon viewing a face. Researchers have teased apart some of the variables that contribute to this; for example, features of maturity tend to equate with dominance. Related to this are studies that look at our judgment about whether or not someone is a criminal. Criminality judgments are at least partly explained by perception of high dominance and low trustworthiness.

This is a type of subconscious heuristic or emotion – our brains evolved to make quick, subconscious social judgments that have some adaptive value but are not necessarily accurate. We trade a decrease in accuracy for an increase in the speed of decision-making. We see someone and immediately have a feeling about whether or not they are safe to approach, probably erring on the side of being cautious. Now, however, those same evolved reactions can affect judgments concerning a police lineup or a defendant in a trial – situations in which we want maximum accuracy.
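
To make that trade-off concrete, here is a minimal signal-detection sketch (my own toy numbers and thresholds, purely illustrative and not taken from any of the research discussed here). Shifting the decision threshold toward caution cuts down on dangerous misses, but only by generating many more false alarms about harmless strangers:

    import random

    # Toy model: "threat" evidence for safe and unsafe strangers overlaps,
    # and a single threshold decides who gets flagged as a threat.
    random.seed(0)
    safe = [random.gauss(0.0, 1.0) for _ in range(10000)]    # truly harmless people
    unsafe = [random.gauss(1.5, 1.0) for _ in range(10000)]  # truly dangerous people

    def error_rates(threshold):
        false_alarms = sum(s > threshold for s in safe) / len(safe)   # harmless judged dangerous
        misses = sum(u <= threshold for u in unsafe) / len(unsafe)    # dangerous judged harmless
        return false_alarms, misses

    for threshold in (1.5, 0.5):  # a neutral criterion vs. a "cautious" one
        fa, miss = error_rates(threshold)
        print(f"threshold={threshold}: false alarms={fa:.1%}, misses={miss:.1%}")

The cautious threshold produces far more false alarms while substantially cutting misses: a reasonable bargain for a quick gut check about a stranger, but a poor one for a police lineup.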

Individuals also vary in terms of their ability to detect various aspects of the human face – such as the ability to recognize individuals or to sense the emotional state of another person. This involves not only inherent ability but also the amount of attentional resources that we allocate to the task. In one interesting study, for example, giving subjects the social hormone oxytocin increased their sensitivity to “hidden” or subtle emotional expressions.

The science of facial information processing and recognition is fascinating in itself, but I am also interested in what it tells us about brain organization and functioning generally. The same story emerges no matter what narrow aspect of brain function researchers are looking at. There appear to be multiple factors simultaneously at work, with conscious and subconscious elements, with a mixture of inherent and situationally malleable tendencies. Further, there are generic neurological factors, such as attention, that strongly influence the specific function.

Finally, the more we study aspects of brain function like facial processing the more apparent it becomes that our conscious experiences and choices are just the tip of a subconscious iceberg. We largely operate under the illusion that our thoughts and behaviors are the result of a conscious rational process, when in fact a large body of research, looking at human behavior and brain function from many perspectives, finds that our thoughts and behaviors are more determined by subconscious processing – our evolved tendencies modified by recent memory and our current situation.

52 Responses to “Facial Processing”

  1. SARA on 18 Jun 2012 at 9:23 am

    I wonder how much of that unconscious processing is influenced by unconscious learning? Do we learn to categorize from what we watch on TV? By what we see our parents doing? Or is some portion just hard wired?

  2. locutusbrg on 18 Jun 2012 at 9:44 am

    I would think that the vast majority of our “higher” brain functions are on auto pilot (out of our conscious control). If social identification and risk stratification took the concentration or precision of calculus it would be onerous. Like all biologic organisms we are an evolutionary sum of dead ends, innovations and trade-offs. The brain has finite resources obviously. Just like mechanical respiration, we can affect it and control it at some level, but really it takes over on its own most of the time. In my opinion it is only egoism and human arrogance that convinces us that we are in conscious control of these things. Steve, I guess that my lack of expertise in your field makes me much less surprised by these findings.

  3. DevoutCatalyst on 18 Jun 2012 at 11:05 am

    I wonder how well crows can truly recognize human faces, and how they do it. Can they differentiate identical human twins, or impersonators?

  4. elmer mccurdy on 18 Jun 2012 at 11:37 am

    I think the world would be a better place if we recognized one another by sniffing each other’s crotches.

  5. Bronze Dog on 18 Jun 2012 at 1:14 pm

    This also helps explain the “cross-race effect” – the fact that it is more difficult to recognize individual members of an unfamiliar race or group (yes, it is a real effect).

    Of course, this means someone who is subject to the effect should recognize it’s their lack of familiarity that causes the problem, not an alleged homogeneity of the unfamiliar group. Getting your average racist to realize that is the hard part.

    On non-human animals’ individual recognition, I remember a show that analyzed the sounds of prairie dogs and found that they essentially shouted “names” for the things they spotted, down to an individual level. They wouldn’t just shout ‘coyote’ if one approached, but a name for that specific coyote. The researcher found out something else: if she approached in different outfits, they would say variants of her “name” according to outfit. Of course, prairie dogs have different specs on their senses than humans, so no telling what exact methods they use.

  6. daedalus2u on 18 Jun 2012 at 5:32 pm

    SARA, virtually all of it is. Essentially all of face recognition is learned and is not innate. There simply isn’t enough information in the genome to code for face structure and then to code for the neuroanatomy to recognize the facial structures that the “face genes” code for. It has to be learned.

    Much of that learning likely takes place very early after birth, as the visual circuits are remodeled to optimize pattern recognition. Over time, that plasticity is reduced and it becomes more difficult to learn new faces, new languages, or a new language without an accent.

    People not exposed to certain sounds as infants are not able to reliably hear or say them as adults. There was a lot of work done with cats which showed that if they were not exposed to certain objects as the visual centers were being programmed, they were unable to see them later in life.

    Blood hounds can distinguish identical human twins.

  7. etatro on 18 Jun 2012 at 7:54 pm

    Blood hounds can distinguish identical human twins.
    By their faces or by their scents?

  8. jt512 on 18 Jun 2012 at 11:00 pm

    Daedalus2u wrote:

    Essentially all of face recognition is learned and is not innate.

    Then how do you explain prosopagnosia?

  9. daedalus2u on 19 Jun 2012 at 8:04 am

    jt512, the lack of the neuroanatomy to do the signal processing needed for facial recognition. As Dr Novella mentioned, people are less able to distinguish faces in ethnic groups they are less familiar with. In a sense, such people all have a degree of selective prosopagnosia (but perhaps to a sub-clinical degree).

    etatro, it was by scent.

  10. BillyJoe7 on 19 Jun 2012 at 8:23 am

    locutusbrg; “In my opinion it is only egoism and human arrogance that convinces us that we are in conscious control of these things.”

    Actually, it doesn’t require egoism or arrogance. It comes naturally. Everyone thinks they are in control until they realise that this cannot be so. Either that or there is a bit of magic going on in there. A ghost in the machine. A spirit or a soul. And we don’t believe that. Do we.

  11. steve12 on 19 Jun 2012 at 9:14 am

    daedalus2u:

    I see where you’re coming from, but there are definitely specialized modules in the ventral stream, where higher order vision is processed, that seem to be “hard wired” for all sorts of visual features independent of learning (V4 will process color, MT will process motion). We simply do not find people who process basic edges in small receptive fields in LOC and objects with large receptive fields in V1.

    It’s not w/o controversy, but the fusiform face area (FFA) has been implicated time and time again as a brain area that is essential for processing faces (see Kanwisher et al.), though others believe it reflects expertise (and we’re all face experts, essentially); see Gauthier’s work. Damage to this area leads to prosopagnosia.

    I engage in the identical reasoning you do here all of the time, but the brain can beg to differ!

  12. Shelley on 19 Jun 2012 at 12:58 pm

    Interesting.

    Back in the day when I was in grad school, I was interested in (but never had the opportunity to pursue) the idea that when we have a close relationship with someone, we ‘learn’ an attraction or repulsion to their characteristic facial features when similar features occur in others. Consequently, we might be attracted to (or repelled by) a new acquaintance based on nothing but that feature.

    The idea was initiated by something I read by Descartes, oddly enough.

    He wrote, “As a child I was in love with a girl of my own age, who was slightly cross-eyed. The imprint made on my brain by the wayward eyes became so mingled with whatever else had aroused in me the feeling of love that for years afterwards, when I saw a cross-eyed woman, I was more prone to love her than any other, simply for that flaw—all the while not knowing this was the reason. But then I reflected and realized it was a flaw: I am smitten no longer.” René Descartes

    There is a key (conditioned?) emotional response to features we associate with love, arousal, fear etc.

  13. daedalus2u on 20 Jun 2012 at 7:32 am

    Steve12, I guess it depends on what the definition of “learn” is. My working definition is an interaction with the environment that changes neuroanatomy which results in changed future behaviors.

    Much of the “fine structure” of those neuroanatomy changes is unconscious and happens automatically, a “tuning” of pattern recognition neuroanatomy to achieve a more appropriate balance of type 1 and type 2 errors. That balance is set by physiology, but is not (always) subject to conscious control. The anecdote Shelley mentions is an example. The pattern recognition neuroanatomy noticed this correlation and attributed causation to it, and configured itself to always attribute causation.

    That this learning occurs in specialized parts of the brain is irrelevant to whether or not learning is happening.

    I am familiar with Kanwisher’s work.

  14. steve12 on 20 Jun 2012 at 11:17 am

    daedalus2u:

    I wasn’t commenting on whether we’re conscious of our face learning mechanisms. We’re obviously unaware of all but a small fraction of what the visual system does.

    When you said…

    “Essentially all of face recognition is learned and is not innate. There simply isn’t enough information in the genome to code for face structure and then to code for the neuroanatomy to recognize the facial structures that the “face genes” code for. It has to be learned.”

    …I took that to mean that you were making an information-limitation argument against Kanwisher’s position, which I think would be in error. But after your reply, I’m not really sure what you mean, TBH.

  15. daedalus2u on 20 Jun 2012 at 5:39 pm

    I have read some of Kanwisher’s work, and I am not in disagreement with any of it.

    I agree that we are experts in human faces. There may also be specialized structures that have evolved to deal specifically with faces. How those specialized structures get loaded with the specific pattern recognition neuroanatomy that instantiates facial recognition is a form of learning and occurs after infants are born.

    There are a variety of nature vs nurture debates. I simply wanted to emphasize that facial recognition neuroanatomy must develop after an infant is born, when there are examples of faces in the vicinity to use as models to tune that facial recognition neuroanatomy.

    Facial recognition cannot be largely genetic. That people are better able to recognize faces of their own ethnic group results from greater exposure to that ethnic group during formative time periods, not because of shared genes. It is very much like learning a first language with the accent of the group that you learned it from. Language is learned, it is not coded for in the genome. The neuroanatomy that allows for the generation of neural structures that instantiate language is coded for (in ways that are completely mysterious) by DNA, but the specific mapping between sounds and meanings is not.

    In the same way the neuroanatomy that recognizes letters is not coded for genetically, but people can learn to read, and then it can be difficult to look at letters without unconsciously trying to decode them into words and sentences. Letters not in the alphabet you learned are more difficult to recognize and remember.

  16. steve12 on 20 Jun 2012 at 6:38 pm

    “I have read some of Kanwisher’s work, and I am not in disagreement with any of it.
    I agree that we are experts in human faces. There may also be specialized structures that have evolved to deal specifically with faces. How those specialized structures get loaded with the specific pattern recognition neuroanatomy that instantiates facial recognition is a form of learning and occurs after infants are born.”

    Kanwisher doesn’t believe that FFA reflects expertise; she believes that FFA is a module that has evolved to recognize faces. Gauthier and others believe that we are face experts by exposure. Of course, Kanwisher wouldn’t claim to have evidence that it’s evolutionarily hard coded; this is just what she believes, and she knows the difference.

    What does it mean for a structure to have evolved to “deal with” faces vs. facial recognition? I don’t mean picking out individuals, or races, etc. I mean “face recognition” specifically as it’s used in the cog neuro literature, e.g., this is a face, not a tree.

    “There are a variety of nature vs nurture debates. I simply wanted to emphasize that facial recognition neuroanatomy must occur after an infant is born and so has examples of faces in the vicinity to use as models to tune its facial recognition neuroanatomy.”

    Must is a little strong (actually a lot). The FFA could indeed be hard coded genetically to develop in such a way that it at some level recognizes faces vs. other forms. We just don’t know. Speaking this dispositively on the matter is just as wrong as concluding it must be genetically coded because FFA seems so specialized.

    “Facial recognition cannot be largely genetic. That people are better able to recognize faces of their own ethnic group results from greater exposure to that ethnic group during formative time periods, not because of shared genes. “

    Agreed, but that’s not face recognition.

    “It is very much like learning a first language with the accent of the group that you learned it from. Language is learned, it is not coded for in the genome. The neuroanatomy that allows for the generation of neural structures that instantiate language is coded for (in ways that are completely mysterious) by DNA, but the specific mapping between sounds and meanings is not.”
    In the same way the neuroanatomy that recognizes letters is not coded for genetically, but people can learn to read, and then it can be difficult to look at letters without unconsciously trying to decode them into words and sentences. Letters not in the alphabet you learned are more difficult to recognize and remember.”

    I don’t think recognizing a face vs. other forms is analogous to recognizing letter/word forms, as far as what we’re quibbling about goes, because one can be plausibly shaped by natural selection and one cannot be. Also, some of your analogy is not about recognition.

    I’m not saying that facial recognition is hard coded – I tend to favor the expertise view because it’s more parsimonious, but FFA specialization is incredibly strong for how specific it is. The jury is out.

    Something of interest here might be Project Prakash, a humanitarian project/experiment that will restore sight to kids with eye ailments (can’t remember the specifics) and then see how their visual abilities develop after being deprived of sight during a critical period.

    Another thought: if face recognition “cannot” be hard coded, why can color and motion and grating edge orientation be hard coded? If they’re not, why do they develop with the cortical organization that they do?

  17. daedalus2u on 20 Jun 2012 at 7:31 pm

    I don’t think that edge coding is hard coded. It pretty much can’t be, because the connections from the retina have some degree of randomness; there isn’t a mechanism for tracing the path of the connection from a specific spot on the retina to a specific spot in the visual cortex in the absence of visual signaling.

    It is known that the mapping of the retinal cells to the visual cortex in chick embryos requires visual signals to work. In the absence of signals, the pathways never get refined and vision never develops.

    As I remember what I have read of Kanwisher, I don’t recall her saying that face recognition is hard coded. In looking at some of her work, she pretty clearly says it isn’t.

    http://web.mit.edu/bcs/nklab/media/pdfs/OP236.pdf

    On page 12, she says:

    “Second, six-month-olds can discriminate individual monkey faces but nine-month-olds, like adults, have lost this ability [35]. This loss of an initial ability with nonexperienced face types, rather than just improved ability for experienced types, is similar to the loss of initial ability seen in language for discriminating nonexperienced phonemes.”

    I agree with her, and the analogy to letter recognition holds. Letters can be recognized as components of language, in which case they are treated differently than if they are symbols that are not in the alphabet that the individual has learned. The differential treatment of letters vs non-letters occurs via differential remodeling of neuroanatomy, what I call “learning”.

    Face recognition is mostly a form of communication. Facial expressions are an important part of language, and that facial language needs to be learned.

  18. daedalus2u on 20 Jun 2012 at 7:43 pm

    Sorry, html fail end italics

  19. daedalus2u on 20 Jun 2012 at 7:44 pm

    end italics now end italics now

  20. steve12 on 21 Jun 2012 at 3:03 am

    I think we have a different definition of hard coded. That these areas require some input to become active doesn’t mean that they can’t be evolutionarily programmed to detect certain kinds of input that relate to certain forms. Of course they need that input to develop – that doesn’t mean there’s not some genetic coding that will lead to the same end: a face detector (or what have you). Maybe the genetic coding counts on a particular environmental event? I’d call that coded for face detection.

    And Kanwisher acknowledging this is not tantamount to her saying that the FFA is not a naturally selected face detector.

    “I agree with her, and the analogy to letter recognition holds. Letters can be recognized as components of language, in which case they are treated differently than if they are symbols that are not in the alphabet that the individual has learned. The differential treatment of letters vs non-letters occurs via differential remodeling of neuroanatomy, what I call “learning”. “

    It’s not the same because letters are made up – there was no evo pressure to make a letter detector because they didn’t exist. Phoneme recognition (as Kanwisher pointed out) is different. It’s like a face in that it was subject to evo pressure. A phoneme and a letter are two entirely different things at the level of recognition, and Kanwisher knows this. This is why she said phoneme and not letter.

  21. steve12 on 21 Jun 2012 at 3:04 am

    same issue with the italics!

  22. steve12 on 21 Jun 2012 at 3:07 am

    Knowing Nancy Kanwisher, I figured there’d be something like this in the pdf you cite above:

    “This theory proposes that a face template has developed through evolutionary processes, reflecting the extreme social importance of faces.”

    Can’t get more direct than that….

  23. daedalus2u on 21 Jun 2012 at 7:49 am

    Yes, letters are made-up, but so is every other component of language, including phonemes, words, sentences, ideas. Language can also be instantiated through other means, as in American Sign Language, Braille, Morse Code.

    We know that language is not coded for genetically, but that it is learned. We also know that the neuroanatomy that compels language learning is innate (although we don’t know what that is or how it works). Is there any reason to think that facial recognition is different? They are likely pretty similar.

    I don’t dispute that there are hard-wired processes that compel neuroanatomy to self-modify so as to instantiate human face recognition of high fidelity using observed faces as templates. I suspect that those processes are analogous to the processes that compel language acquisition.

    I suspect that the mechanism is that infants are born with generic face recognition, that they can recognize the gross shape of a face, maybe just a blob. That generic face recognition pattern recognition then gets “pruned” so that the number of faces that can be recognized becomes smaller, but the specificity becomes higher. The trade-off of type 1 and type 2 errors becomes modified for the specific faces the infant is exposed to, at the expense of face-like objects the infant is not exposed to. This explains the acquired reduced ability to recognize monkey faces, and why faces of other ethnic groups are more difficult to recognize. If the infant is not exposed to certain kinds of faces, the ability to recognize them with high fidelity is pruned away. To some extent this pruning is probably irreversible. Just as the ability to learn a first language goes away at some time.

    People can learn new languages, but virtually all people who learn new languages acquire them with an accent that is different than how the language is spoken by the people the new language is learned from, and this accent is stable over many years, decades even.

    What this tells me is that the neuroanatomy that instantiates the ability to learn a first language isn’t there any more, and so the ability to learn a first language without an accent is lost. Presumably this occurs so that the brain resources (volume, neurons, blood flow, substrate utilization) can be used for things that evolution has determined are “more important” than retaining the ability to learn a new first language. This could be due to the loss of certain structures or the loss of plasticity in other structures. It is probably some combination.

    Over evolutionary time, the ability to learn multiple first languages was not very important because tribes and groups of people didn’t speak multiple first languages, they spoke a single first language. Once you learned that, you could communicate with everyone you needed to communicate with.

  24. ccbowers on 21 Jun 2012 at 9:27 am

    “Yes, letters are made-up, but so are every other component of language including phonemes, words, sentences, ideas.”

    Yes, I had the same reaction to Steve12’s explanation of the distinction between letters, phonemes, and faces. It is not whether they were ‘made up’ or not… that is irrelevant, and whether they were subject to ‘evo pressures’ is a separate issue from this.

    “We know that language is not coded for genetically, but that it is learned”

    These recent exchanges seem to be based upon a different understanding of what ‘coded for genetically’ means. In looking at how this is being used, I’m not sure what is meant here, and that seems to be what the disagreement is about.

    “People can learn new languages, but virtually all people who learn new languages acquire them with an accent that is different than how the language is spoken by the people the new language is learned from, and this accent is stable over many years, decades even.”

    I’ve noticed that for people with accents there are often certain words or phrases that are spoken with little or no accent. I have yet to determine if they are words or phrases learned early in life as a second language or if they are words or phrases used or heard often so that native pronunciation is reinforced. I think it can be either, since I can think of examples of both.

    I’ve also noticed that accents can be very situation specific. If I call my mother (or other relative with an accent) while at work, I will hear almost no accent, but if she is visiting relatives the accent is significantly stronger. When she is home it is somewhere in between.

  25. daedalus2u on 21 Jun 2012 at 10:20 am

    CC, yes I think you are right, that the term “coded for genetically” is not well defined and I think the standard usage is wrong.

    My understanding of the “standard usage” (which I disagree with), is that things like how many arms a person has is “coded for genetically”. This is not correct. What is coded for genetically is a process to grow a phenotype where that phenotype will have a number of arms and usually that number is two.

    We know the number of arms is not coded for genetically in a “hard-coded” sense because things like thalidomide can change the number of arms without changing the genetics. If things like organ number were coded for genetically, then people like this

    https://en.wikipedia.org/wiki/Abigail_and_Brittany_Hensel

    would have to have a very specific and very different type of genome. They don’t. This example essentially proves that very little in development is “hard-coded”. If it were “hard-coded”, then a disruption in development that caused these differences would very likely also affect the much more subtle processes such as brain neuroanatomy, language acquisition, memory and so on. This example essentially falsifies the hypothesis that there is a “top-down” control of body form encoded in the genome. If there was a “top-down” “hard-coded” control of body form in the genome, then each non-typical body form would require its own unique genome.

    I think this is a lot of the problem with how genomics is being looked at today. People are looking for the “genes for disease X”, but physiology is much more complicated than that. Autism is a field I am quite familiar with, and is thought to be “mostly genetic”, but there is no gene that is responsible for more than a few percent of autism causation (as determined by GWAS with multi-thousand cohorts). If genes for a condition can’t be found, in my opinion, there is no justification for saying a condition is “mostly genetic”.

    People disagree, and point to twin studies where MZ twins share more traits than do DZ twins. That excess sharing is usually considered to be due to shared genetics and not shared environment. What would a twin study say about the genetics of organ number? MZ twins sometimes share organs, DZ twins very rarely do (a mosaic individual could be thought of as DZ twins with shared organs).

    one last try at italics fix

  26. ccbowers on 21 Jun 2012 at 10:44 am

    d2-

    “We know the number of arms is not coded for genetically in a ‘hard-coded’ sense because things like thalidomide can change the number of arms without changing the genetics.”

    Your description of the standard usage is a bit extreme (I think you acknowledge this), and I’m not sure that most people knowledgeable about the subject view it that way. One cannot completely separate genetics from environment, but there are circumstances, such as your ‘number of limbs’ example, in which genetics determines the results in 99.99% of likely environments (yes that is a made up number). That is because the vast majority of possible environments do not influence the result, and you brought up one of the situations in which it does: the exposure to a chemical during development. I don’t view that as removing the role of genetics in determining the limb number any more than being near a circular saw during development does.

  27. daedalus2u on 21 Jun 2012 at 11:52 am

    I think that considering traits that develop “normally” in 99.99% of cases to be “genetic” is not correct and does lead to erroneous ways of thinking.

    The thalidomide example does demonstrate that genes don’t code for the number of arms, but rather that genes code for developmental signaling pathways which determine the number of arms but which can be disrupted by things like thalidomide.

    There is a very big and very fundamental difference between those two framings of the data (which I think we agree on) that 99.99%+ have 2 arms and a few individuals have a different number.

    Abigail and Brittany Hensel are an extreme case, but a case where there is no known non-genetic cause and no known genetic cause. We can rule out a direct genetic cause because they are a one-off, a sporadic appearance of a unique and uniquely complex phenotype.

    Production of a complex body form can only occur via control by a complex control system that has a number of degrees of freedom commensurate with the degree of complexity that is controlled. In the case of Abigail and Brittany Hensel, the complex control system that controls the (very) complex and unique phenotype they developed is not instantiated in the genes. If it was instantiated by the genes, then their phenotype would require a very different genome than the genome of their parents and siblings.

    We can also rule out a complex environmental insult. The pregnancy they resulted from was reportedly uneventful. An environmental insult that compelled development of such a complex body form would be exceedingly complex. It would require complex intervention at every stage of development, in the first trimester and at every stage later on.

    That something like thalidomide can disrupt many aspects of fetal development does tell us that while there can’t be a few genes that are responsible for controlling the complexity of fetal development (not enough data), there can be a few signaling pathways that do, and when thalidomide disrupts those pathways, there are pleiotropic disruptions in phenotype development in many tissue compartments.

    You won’t find the complex control system that controls body plan in the genes because it isn’t there. The complexity of the body plan is an emergent property of individual cells doing their own individual thing, interacting with their close neighbors via relatively few signaling pathways. The complexity comes from the complex geometry of the developing fetus and the complex geometry of the range of the different signaling pathways that individual cells are using to communicate with and so control the proliferation and differentiation of themselves and the cells they communicate with.

    Gene regulation is mostly on/off. The signaling that regulates gene regulation is differential due to the differential distances that the signaling molecules need to diffuse before activating a sensor (or not). Because the fetus becomes large compared to the distance these signaling molecules can diffuse, the same signaling pathways can be used to do things that are separated either by time or by space.

    The signaling that triggers development of fingers can use the same pathway that triggers the development of toes because toes and fingers are separated by space that is large compared to the dimensions of the fingers and toes. Thalidomide can disrupt development of toes and fingers by interfering with the common signaling pathway.

    People are trying to find a “top-down” control of body plan in the genome. It isn’t there. This is the essence of the search for genetic causes of neuropsychiatric disorders. People are looking for the genes that “cause” neuropsychiatric disorders. All neuropsychiatric disorders are disorders caused by the neuroanatomy of the brain (neuroanatomy in the sense of physical arrangements of matter that affect behavior of the brain). The minute details of that neuroanatomy can’t be regulated by the genome because there isn’t enough data in the genome to do so. The minute details have to be regulated by local signaling made extremely complicated by the already existing extremely complex local neuroanatomy.

  28. elmer mccurdy on 21 Jun 2012 at 12:48 pm

    I was thinking of adding some bold or color to the mix, since the end italics thing doesn’t seem to be working, but nah.

  29. steve12 on 21 Jun 2012 at 3:01 pm

    “I don’t dispute that there are hard-wired processes that compel neuroanatomy to self-modify so as to instantiate human face recognition of high fidelity using observed faces as templates. I suspect that those processes are analogous to the processes that compel language acquisition.”

    Then we don’t disagree re: what Kanwisher et al. think about face recognition. I did not think that you would have agreed with the above quote given some of your other statements, but this is the correct reading of Kanwisher. I think that our disagreement was simply over the characterization of “hard coded”, as you guys discussed.

    Re: letters and phonemes, I’ll add that phonemes and letters are not the same, and their genesis is important where recognition is concerned. We did not make up (i.e., culturally invent) phonemes as we did letters. We culturally invented letters as an abstraction to capture the information conveyed by phonemes, which arose in the development of language. Ergo, phoneme recognition and face recognition are similar in their ecological salience (and therefore recognition of these might be coded genetically in some way, regardless of what you might call it) while there’s no reasonable way to postulate the same for letter recognition.

  30. steve12 on 21 Jun 2012 at 3:02 pm

    The italics is not growing on me….

  31. BillyJoe7 on 22 Jun 2012 at 7:37 am

    Well, let’s see if we can put a stop to that then…

  32. BillyJoe7 on 22 Jun 2012 at 7:38 am

    Stop NOW!

  33. BillyJoe7 on 22 Jun 2012 at 7:40 am

    Okay NOW

  34. BillyJoe7 on 22 Jun 2012 at 7:40 am

    :(

  35. DevoutCatalyst on 22 Jun 2012 at 7:44 am

    The new server is in Italy, Pisa I think.

  36. sonic on 22 Jun 2012 at 5:23 pm

    I think daedalus2u is onto a better model in that his conception of the situation accounts for more of the evidence–
    for example–
    http://www.newscientist.com/article/dn12301-man-with-tiny-brain-shocks-doctors.html
    Clearly it is possible for the brain to function in a number of configurations. Clearly there is a ‘norm’. But any model that involves lots of hardwiring is going to find it difficult to account for the odd cases, such as those found by Lorber.

    It seems DNA might be thought of as an instruction code for the making of a life form and that the code includes a number of different instructions that are based on contingencies.
    It appears that some of what has been called ‘junk’ DNA is actually the regulatory scheme by which the DNA directs the building of the life form based on the various contingencies.
    It is amazing to think the brain is built by a series of algorithms that are coded in the form of DNA. And with only the four symbols!

    An excellent book on the topic of environment and development is here–
    http://www.amazon.com/The-Dependent-Gene-Fallacy-Nurture/dp/0805072802

  37. daedalus2u on 22 Jun 2012 at 7:21 pm

    In thinking more about it, I don’t think I agree that there is a “face template” that neuroanatomy has evolved to detect. The quote from the Kanwisher paper posits an extreme version, which can easily be taken out of context to suggest she is advocating a high fidelity hard-wired template, rather than proposing an alternative to the expertise hypothesis which I agree is wrong.

    The template she is arguing for is very simplistic, and the existence of the kind of template she is arguing for does not explain high fidelity facial recognition. I don’t think that an innate face template is required to achieve the results observed. Simply a large enough volume of neural substrate that has the capacity to self-modify into a high fidelity pattern recognition system would be enough. It doesn’t need to be pre-configured into a face-like template (which is how I am interpreting the face template hypothesis).

    I am not so much arguing against a “face template”, as I am arguing for a “things that humans do while interacting with other humans” template. Looking at faces is a part of that and also a part of language acquisition, but the acquisition isn’t so much face specific the way it would have to be if there were a “hard-wired” face template.

    I think the compulsion to look at the “face” of an animal is something that prey animals and predators do also, and not for communication purposes, and that may be innate and hard wired. You can get children to look in a certain direction if you look in that direction and make an exclamatory face. It takes them multiple trials to be able to suppress this. I was once doing this with a toddler to give myself time to do magic “sleight of hand” tricks, but at my skill level I needed fairly long time intervals, which the toddler would give me because of the compulsion to follow the gaze of an adult who appears to be looking at something interesting. He knew I was tricking him to look away while I did my trick, but he couldn’t help himself. Perhaps I scarred him for life.

    It may also be the end result of trying to find the part of an animal that first signals a later action. That is going to be easier to evolve than a face template, is useful in more situations and would accomplish the same things and more.

    A compulsion to attend to what and how other humans are doing while interacting with each other and funneling that sensory information into language processing regions would also work to generate language that is not sensory mode specific (body language, speech, gestures, text). I think that is a better hypothesis than a “face template” hypothesis. It also better explains the evolution of acquisition of facial communication via facial muscle movements. Humans have very complex facial muscles compared to non-social animals. The utility of those complex face muscles only occurs along with the complex neuroanatomy to control those muscles and the complex neuroanatomy to decode that information. It took many evolutionary steps to achieve the facial muscles, neuroanatomy to control them, and neuroanatomy to decode the meanings of those movements. To achieve the final end, there had to be positive selection in all three systems (muscles, control, decoding) all along the way. A template hypothesis doesn’t do that. An attending hypothesis does.

    Looking at faces, or the source of sounds which in humans is the face, may be innate for humans. Once you start looking at a face, neuronal remodeling in sensory processing regions is going to make that a higher fidelity recognition system.

    I think that neuroanatomy evolved to generate a face recognition template that matches the faces that one is exposed to through neuronal remodeling. This is different than saying neuroanatomy evolved a face recognition template, which I don’t agree with. In other words, any “template” is generated after the fact by the remodeling of neuroanatomy that evolved to generate a high fidelity template of whatever it was exposed to, be it a face or something else. In virtually all cases with humans it is a face.

    There was a Twilight Zone episode

    https://en.wikipedia.org/wiki/The_Eye_of_the_Beholder

    If humans had evolved a face recognition template, that would result in beauty being non-subjective. There would be a “template” that humans evolved to recognize as the optimal face.

    I think that if a human infant were exposed to monkey faces instead of human faces, that the infant would develop high fidelity facial recognition of monkey faces and would have poor recognition of human faces. The initial “template” must be very simple because the visual cortex isn’t wired up with the retina to give high fidelity visual images at first.

    It is the same way that the auditory pattern recognition isn’t wired up at birth. There is some, due to exposure to sounds in utero, but making the pattern recognition high fidelity requires exposure to high resolution stimuli. For visual stimuli, that can’t happen in utero because there are no high resolution images in utero.

    I would be interested in language acquisition in marine mammals that communicate with sound. The fetus would be exposed to essentially all adult generated sounds in the vicinity. The generation of sound requires air, so the fetus may not be able to respond, but should be able to hear and start decoding language in utero.

    The feeling that babies are cute is not something that is specific to human babies. There is a default “babies are cute”, which holds even for non-human babies. When it is your own baby, the level of “cuteness” is of a very different fidelity. That higher fidelity “cuteness” is due to neuronal remodeling and is also known as parental bonding.

    I am not sure I agree with you on your last statement. Once any sensory modality becomes used as a communication media, then I think the language acquisition neuroanatomy generates high fidelity pattern recognition neuroanatomy that “short-cuts” normal non-language sensory processing.

    Gestures for communication, as in American Sign Language, were “made-up” the way that letters and words were made-up, and the way that sentences are made-up. Spoken words are made up of phonemes the way that letters are made up of line segments. Letters in a language are not sensed as collections of line segments. There was some interesting work where people were assigned to read text on a computer screen that was composed of some lower case letters and some upper case letters. People didn’t notice when they were switched while they were reading. It was completely obvious to people who were just looking at the screen but not reading it.

    I can acquire information via reading many times faster than I can acquire it by listening to someone speak. If reading required non-language sensory processing before it could be decoded into language, that would not be the case.

  38. steve12 on 22 Jun 2012 at 10:46 pm

    I more or less agree with you. I lean toward FFA probably being an expertise-related area that’s flexible, though the studies that are most in line with this (Gauthier’s) used experts in things with face-like structures or faces (cars, birds, and made up stimuli called “greebles”; you can look those up if you like, I don’t feel like explaining).

    But really, supposing only takes us so far. Just because something makes sense doesn’t mean it’s true, and I can’t dismiss Kanwisher’s ideas (based on data, not supposition) simply because they seem sub-optimal.

    Also, we know the amazing thing about the brain is its plasticity – no one’s denying that by saying something might be “hard-wired” (I’m not going to get charged up about the labels for things that no one understands). And invoking this doesn’t explain anything, though it does invite one to be skeptical, fair enough.

    Another point: Kanwisher et al. are talking about face recognition. When you say high fidelity face recognition, I don’t know what you mean – do you mean picking out an individual? If so, that’s considered a different process. Discerning faces is not the same process – ERP work showed this quite well (see Tanaka’s “Joe/no Joe” paper – he has other work replicating this). Most people think that if there is a face template, as you’re referring to it, it would be low fidelity, not high. IOW, can I see that something is a face and not a chair. Probably global processing, gist, low spatial frequency kind of representation.

    I didn’t understand most of the language stuff, especially when you said:

    “I can acquire information via reading many times faster than I can acquire it by listening to someone speak. If reading required non-language sensory processing before it could be decoded into language, that would not be the case.”

    Who knows if this is true? Is there work that shows this? Introspection is a great jumping off point for investigation, but that’s about all. And letter forms have to be extracted visually in order to be read, and this ability can be lost with patterns of damage to visual areas that process local info without loss of literacy (see Farah’s meta-analysis of patterns of loss in agnosias re: global vs. local processing organization). A cost has even been found, neurally and RT-wise, for repetition with different fonts (Chauncey & Holcomb).

    I’m no expert in this area, but language evolved auditorily and gesturally, but not lexically, right? You would think that if there is anything analogous to a face template for language (and there are genes like FOXP2 that are necessary, if not sufficient, for language) it would have co-evolved in the modalities language was communicated with. That our brains are flexible enough to transfer our abilities to written language isn’t surprising, but I would be very surprised if there were a letter form template, not so much for phonemes. I have no idea if there is one though.

  39. daedalus2u on 23 Jun 2012 at 12:21 pm

    We don’t know how language evolved, but we know that it did evolve. It probably started out as odor recognition which many non-social mammals do, then included gestural language and body language, then included facial expressions and only later involved sound generation, detection and decoding into meaning.

    What I mean by “high fidelity” face recognition is face recognition with low type 1 and type 2 errors.

    I am not dismissing the hypothesis of a facial template, I just hold it to be less likely than the scenario(s) I have outlined. As far as I know, my scenario explains all the data I am aware of, including all data that was in the Kanwisher paper, while the facial template hypothesis does not.

    A very large constraint on potential face processing neuroanatomy is that it evolved. We know this because our early ancestors didn’t have facial processing neuroanatomy; they didn’t have faces or neuroanatomy at all; they were single celled. Once an organism has a face, and neuroanatomy, then it could evolve that neuroanatomy to have a template of a generic face, but evolving a template for face recognition is only of utility to social organisms, so being social had to come before evolving a face template.

    Many mammals don’t recognize conspecifics based on vision, they do so based on smell. If our ancestors first recognized conspecifics based on smell (likely because single celled organisms recognize conspecifics based on quorum sensing compounds they release and detect), then those smell based recognition systems would get co-opted to detect individuals based on other sensory modalities.

    Smell recognition may still be the archetypal social recognition mechanism. Maternal bonding is triggered via smell. Maternal bonding is the archetypal social behavior in mammals. All mammals do it, even non-social mammals.

    http://www.ncbi.nlm.nih.gov/pubmed/9262400

    Smell detection is much easier to evolve than image detection. All that is needed are the right receptors expressed in the right regions. There is essentially no requirement for pattern recognition. Pattern recognition requires specific relative spacing and connection of neurons. If the retina does not map with high fidelity to the visual cortex, then high fidelity pattern recognition cannot be done on visual signals until there is a high fidelity mapping. I would expect that the fidelity of the signal processing increases across the neural processing chain together, rather than there being a high fidelity module that gets connected to low fidelity input signals.

    The first smells an infant is exposed to may prime the infant to attend visually to the source of those smells, which leads to attending to the face and eventually to high fidelity face recognition.

    If that is how social recognition evolved, we would expect to see the brain regions that do the different sensory demodulation into person recognition to be close together, and for some types of trauma to affect multiple detection modalities. Looking at PubMed, that appears to be the case: face and voice recognition disorders tend to co-occur.

    http://www.ncbi.nlm.nih.gov/pubmed/21569784

    My experience in acquiring information faster via reading is data. It is an n=1 anecdote, but other people have reported the same thing. I am aware of no data that suggests reading is always a slower information acquisition modality; I suspect there is none, but I haven’t looked for it. In any case, my anecdote disproves it.

    Any use of an alphabet is very late in human evolution, the last 10,000 years or so. Most humans have been illiterate until very recently, last hundred years or so. That is not enough time for de novo letter recognition neuroanatomy to evolve. Already existing neuroanatomy had to be co-opted via plasticity to be used for that purpose. If that information processing neuroanatomy was gated by pre-existing template recognition, then the capacity to increase the bandwidth of that neuroanatomy would be limited.

    If the evolution of symbolic language originated via gestures, then visual stimuli may have a more direct involvement in language than do sounds.

    When people get brain damage, as from a stroke, that damages parts of the brain that are crucial for decoding language, there is plasticity such that other parts of the brain can compensate. If acquisition of specific skills required an already existing “template”, the degree of plasticity would very likely be much less. In other words, if a facial template is necessary for face recognition, and the neuroanatomy that instantiates the facial template is damaged, there would be no way to recover function. If the facial template is generated from neuroanatomy through plasticity, then function could be recovered.

  40. tmac57 on 23 Jun 2012 at 1:10 pm

    daedalus2u, since when did your statements become soooo slanted?

  41. sgh on 24 Jun 2012 at 5:30 am

    @daedalus2u
    There is evidence to suggest that there is less plasticity in face recognition than in language. For example, patient J.M. (Schmalzl et al., 2009) suffered from congenital brain abnormalities leading to visual agnosia and prosopagnosia and had a pattern of performance on neuropsychological tests which agrees with there being a specialised neuroarchitecture at birth for the processing of faces. Johnson and Morton (1991) and Johnson (2005) have done a lot of work on what they called CONSPEC and CONLERN which I won’t go into here, because I’m lazy, but there’s a fair amount of evidence for that as well. Neonates also have an innate attentional bias towards faces and face-like stimuli (Johnson et al., 1991). All of this generally seems to suggest that there is at least a crude template of a face at birth, which of course necessarily must be encoded in the genome. Steven Pinker has said that he gets infuriated when people say that “there isn’t enough information in the genome for X” and I’m inclined to agree. While it is true that the human genome contains around 20-25k protein-coding genes, this doesn’t mean that you can’t have the development of specific neurocognitive architecture of absurd complexity. One gene doesn’t invariably serve one function, and different proteins expressed in combination can cause widely different effects than any of these in isolation. The face template previously mentioned doesn’t seem to be specific to humans, however. I think the Pascalis and Kelly paper goes on to talk about how monkeys can go on to develop a human-like ability to recognise human faces when being raised among humans (and away from conspecifics). It also says that a similar pattern can be seen in human infants, although of course, one cannot for ethical reasons take this line of inquiry to its next logical step.

    This ended up being way longer than I had hoped. Point is, language isn't encoded per se, and neither are faces. However, the neuroarchitecture which facilitates the development of both abilities is probably written in our genome (i.e. Chomsky's Universal Grammar and a crude face template for language and faces, respectively). For some reason (which I have never quite understood) there seems to be more plasticity in the language domain than in the face processing domain (e.g. early-life lesions to the perisylvian cortex seem to cause reorganisation of the language faculty, whereas early lesions to the fusiform gyrus almost invariably cause prosopagnosia). If anyone has any explanation as to why that is I would really like to know. I do generally get where you're coming from though, but one has to be really careful when one makes claims about the extent of specialised neural machinery as these issues are bound to be contentious. Language has been, and still is, very controversial with respect to the extent to which people are born with language-specific neural machinery (e.g. Steven Pinker, 1994 vs Elizabeth Bates, 1994/1996). In terms of faces, there does seem to be some consensus that, at the very least, children are born with an attentional bias that specifically evolved to orient to faces.

  42. BillyJoe7 on 24 Jun 2012 at 7:00 am

    Empathy with everyone else on this thread.

  43. daedalus2u on 25 Jun 2012 at 8:21 am

    With all due respect to Steven Pinker, there isn’t information in the genome to code for things like a face template because the genome didn’t evolve such that a face template could be coded for. There are not “templates” for anything in the genome. The genome only tells individual cells how to proliferate and differentiate. The resulting patterns that end up being the phenotype are due to the interaction of those cells with the environment.

    Is there a template for a two-headed person in the genome (see above)? Is there a template for every congenital difference? If the phenotype develops because there is a “template” of that phenotype in the genome, then the genome has to have a “template” of every phenotype trait that can be expressed. There isn’t enough information content in the genome to code for a template of every potential phenotype trait.

    You can’t have a high fidelity face recognition template at birth because visual processing is not high fidelity at birth. The development of high fidelity vision requires high resolution visual input and there isn’t high resolution visual input in utero.

    Current data is insufficient to differentiate between two hypotheses: one where the genome codes for an innate face template, and one where it codes for neuroanatomy that is sufficiently general to instantiate multiple face templates and is primed at birth to prune and self-modify into a finely tuned face recognition system keyed to the specific faces the infant is first exposed to. General pattern recognition neuroanatomy that prunes to match what it is exposed to is enormously easier to evolve, much easier to encode in the genome, doesn't depend on facial features specific to modern humans, and could have evolved over a much longer period of time. Why would we adopt a hypothesis that is more complicated (an innate human face template) when a simpler hypothesis fits the data?

    What evolutionary value does an innate face template have over a compulsion to attend to sources of sound and visually key on those sources of sound? There is exposure to sound in utero, so pattern recognition of maternal voice could develop in utero and be a very strong cue for infants to key on.

    We don’t actually know if there is more or less plasticity in the facial domain than in the language domain because those occur over very different time scales and infants can’t report or be tested with sufficient precision. Language is an interactive process where people need to develop the neuroanatomy to both decode meaning and generate language that encodes meaning. Face recognition is pretty much one-way and passive.

    Maternal bonding occurs very quickly (hours) and persists for a lifetime. It only takes a single exposure to something to form a memory of that exposure. A memory happens to be instantiated in neuroanatomy such that the memory is accessible to consciousness. Why couldn’t there be unconscious memories also instantiated by neuroanatomy which are just as fast and just as persistent?

    I agree that there is a lot of thought that there are “templates” of various sorts encoded in the genome and expressed in neuroanatomy in utero. I think that much of this thinking is mistaken and derives from the human compulsion to see “top-down” control due to hyperactive agency detection. Humans are primed to detect agency, even when it is not there. This is the same hyperactive agency detection that wants to see a “mind” in “control” of the brain, causing the brain to do what it does from the “top-down” control by the mind.

    There are known mechanisms by which high fidelity pattern recognition can be instantiated following exposure to high resolution signals (Hebbian remodeling). What is the mechanism by which any type of pattern recognition can be instantiated by the genome? Pattern recognition is necessarily an emergent property of many neurons, millions at least. How do those millions of cells "know" how to connect and arrange themselves to recognize a pattern arriving over nerves that have not yet carried sensory input? There isn't any non-teleological mechanism.

  44. steve12 on 25 Jun 2012 at 10:58 am

    Daedalus – I'd check out some of the cites I provided above – they've already tested some of your very reasonable suppositions. It's always nice when someone else has done the leg work, and it obviates the need for speculating one's way through a question. I'll again bring up Project Prakash, which has shown surprising face detection ability in kids who've had their sight restored way past what was thought of as critical periods for developing form recognition.

    And I'll just throw this out there again to you et al. doing the same: don't say what the brain (or nature generally) can and cannot be or do when no one understands the mechanisms involved, and this is doubly so with the products of evolution. This sort of reasoning will almost always lead you astray. I would think in terms of likelihoods, keep an open mind, and look at existing work in testing your ideas.

    We absolutely CAN have a face detector, phoneme detectors etc., even if this seems unlikely (and I agree that it's less parsimonious and should be treated with skepticism). There is an incredible amount of stability in the cortex's visual ventral stream for body areas, landscapes, faces, ecological size, just as there are clear modules in visual cortex for edges through shapes, color, and motion. We have little idea how they come about.

  45. daedalus2u on 25 Jun 2012 at 4:57 pm

    It turns out the newly sighted do not recognize by sight objects that they can recognize via touch.

    http://www.ncbi.nlm.nih.gov/pubmed/21478887

    Do you have a citation for recognition of faces?

    In the only ones I could find relating to faces, there was a long delay between the restoration of sight and the testing of face recognition.

    The report on S.B. by Gregory states:

    “S.B.’s first visual experience, when the bandages were removed, was of the surgeon’s face. He described the experience as follows:— He heard a voice coming from in front of him and to one side: he turned to the source of the sound, and saw a “blur”. He realised that this must be a face. Upon careful questioning, he seemed to think that he would not have known that this was a face if he had not previously heard the voice and known that voices came from faces.”

    http://www.richardgregory.org/papers/recovery_blind/contents.htm

    This seems to indicate that S.B. was not able to recognize faces by sight a priori, but needed to know that he was looking at a face to be able to appreciate that it was a face. That such an association can be made in seconds shows how difficult it would be to demonstrate a "template" before there was exposure to such a visual stimulus.

    I agree that it is possible to have visual detection neuroanatomy pre-coded by DNA. I just don't think there is any compelling evidence for it, or to reject the more parsimonious idea that facial recognition develops just like every other type of visual pattern recognition develops. If the visual processing system has plasticity such that detection of some objects needs to develop, how is that consistent with facial recognition not needing to develop?

    Faces can still be "special" in a no-template, development-via-plasticity model. Other sensory modalities, such as sound (mother's voice), smell (mother's odor), touch (mother's kiss), and taste (mother's milk), can all be additional cues that key the infant to attend to the visual object that is the source of all of these good things.

    There is sound exposure in utero, so sound pattern recognition at birth doesn’t need to be pre-coded by DNA. Olfactory neurons are directly connected to specific parts of the brain, so no development is needed for olfactory detection.

    I think that insects probably do have pre-coded visual pattern recognition neuroanatomy. But the insect retina and its associated immediate signaling are completely specified and fixed, so there is a one-to-one mapping between visual signals and the decoding of the images those visual signals represent.

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2871012/?tool=pubmed

    The vertebrate visual system is not pre-coded and fixed, so there has to be plasticity to get the light detection cells and the visual data processing cells connected together with the right mapping. Without the connection with the right mapping, there can't be high fidelity pattern recognition.

    It is perfectly acceptable to figure out what evolution can and can’t do. Evolution is not magic. Evolution is not teleological. Evolution can only generate organisms via physically realizable processes and those organisms must obey the laws of physics. We should be extremely skeptical of features that violate some of these principles.

    I don’t dispute that people have postulated a “time window” beyond which visual plasticity is zero. Showing that this is not the case does not demonstrate that there is a face template that is coded for by DNA.

    When people state that a feature is directly coded by DNA, there needs to be some evidence that this is the case and that development via plasticity models are excluded by data. Plasticity can explain every feature of the brain that I am aware of. I appreciate that my view that plasticity explains essentially everything is a minority view, but I think it is human hyperactive agency detection that compels people to look for a “top-down” control system that is directing the development of observed features.

    Wanting there to be a “top-down” control system is a major example of flawed thinking behind much of science. There is no “top” in physiology. All global aspects of physiology are only emergent properties of bottom-up systems.

  46. sgh on 26 Jun 2012 at 8:34 am

    @daedalus2u
    Let's try to avoid putting up strawmen. I never claimed there's a human face template; I claimed there's a template for faces generally. Nor did I claim that this was high resolution. Johnson and Morton's (1991) hypothesis, for example, explicitly states that the early bias to faces is driven by a subcortical low spatial frequency detection mechanism. Whether their hypothesis is veridical is a matter of on-going debate, of course. Infants up to 9 minutes old have been found to have an attentional bias towards faces, so either there is a genetic basis for this preference, or it occurs within the first 9 minutes of life. There's a group of Italian researchers (e.g. Simion et al., Macchi Cassia et al., Turati et al., etc, etc – you will forgive me if I don't remember the years of publication off the top of my head) who've found that this innate bias is driven by a bunch of properties such as congruency, top-heaviness, the presence of an outline (hairline) and so forth. By manipulating all of these things independently one can increase/decrease the attentional bias of infants.

    I am unfamiliar with any potential mechanisms by which this could develop in utero, but there is pretty overwhelming evidence that it does, and that it is very general to begin with (e.g. preference for congruent objects that are top heavy), but soon develops into a higher resolution face recognition mechanism, together with, as you rightly point out, the improving visual acuity of the visual system. Of course, any talk of a template is a bit misleading, because all this would mean is that you have a particular system that responds to the general geometric relationships that tend to be present in faces, such as the properties identified by the Italian researchers mentioned above.

  47. steve12 on 26 Jun 2012 at 12:23 pm

    My bad, Daedalus. Apologies. You have, once again, single-handedly bested my entire field through pure reason. I'm sorry for suggesting that you be more careful, or acquire a richer understanding of the literature, before making dispositive pronouncements about how the brain works at the systems level.

    I now realize how wrong I was……..

  48. daedalus2u on 26 Jun 2012 at 12:24 pm

    sgh, you (and others) are committing the fallacy of the excluded middle.

    “Infants up to 9 minutes old have been found to have an attentional bias towards faces, so either there is a genetic basis for this preference, or it occurs within the first 9 minutes of life.”

    These are not the only two options consistent with the data of facial attention at 9 minutes. A preference for looking at objects with a spatial frequency similar to that of faces is sufficient, and that need not be genetic; it could be developmental.

    A difficulty with the data generated to support the CONSPEC and CONLERN ideas is that it uses dark eyes/mouth on a white background. We know that humans evolved in Africa. Presumably that is where any “face template” also evolved. We should expect any genetically encoded “face template” to better match the faces experienced over long evolutionary time, which would be black faces with light eyes.

    When researchers are using stylized images to test infant visual cuing, they don't know what aspects of the images the infants are actually cuing on. It might be a simple spatial frequency, or something else. Adults have such highly tuned face pattern recognition that pareidolia is extremely common.

    http://www.pnas.org/content/102/47/17245.full

    In primates, many if not most spontaneous births occur at night, when visual cues to attend to faces would be absent.

    Infants have much less contrast sensitivity than do adults. Images that are compelling to adults may not be visible to infants and they may be cuing on something other than a face-like pattern.

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2765046/?tool=pubmed

    Spontaneous waves of activation would be sufficient to allow Hebbian remodeling to achieve sensitivity to spatial frequency because spontaneous waves do contain relative spatial information. Spontaneous waves don't contain face-like template information.

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2946625/

    I simply don't agree that there is overwhelming evidence that a visual human face template develops in utero. I don't think there is sufficient evidence to reject the null hypothesis that the visual system develops the ability to discriminate objects by their geometry after being exposed to objects with different geometries, with that exposure producing the neuronal remodeling that instantiates pattern recognition.

  49. sgh on 27 Jun 2012 at 7:38 am

    What precisely do you mean by "developmental" in this context? I said that it is either genetic (as there is little input from face stimuli in the womb) or it "occurs" in the first 9 minutes, which is a bad way of saying that it develops within the first 9 minutes. A spatial frequency preference cannot account for the evidence at all. If you look at the work I referenced earlier, by Simion, Turati, Macchi Cassia, etc., you'll find that they kept the spatial frequencies the same while modulating other properties (such as top heaviness and congruency) and found that this modified the extent of the attention bias (I can't remember exactly how old the neonates were, but I seem to recall that they were virtually straight out of the womb). Of course, if spatial frequency alone accounts for the attention bias, then varying other variables shouldn't have any effect. Also, I think in Farroni et al. (2002), they found that newborn infants respond preferentially to gaze direction (which is very odd seeing as you need relatively high resolution to discern the pupil from the sclera, around .5-1 c/d).
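
    To make the logic of that control concrete, here is a toy sketch (my own hypothetical illustration in Python, with made-up blob stimuli rather than the actual stimuli or analysis from those studies): flipping a top-heavy arrangement upside down leaves its Fourier amplitude spectrum unchanged, so any looking preference between the two versions cannot be explained by spatial-frequency content alone.

    ```python
    # Hypothetical stimuli (not those of Simion, Turati, et al.): a top-heavy
    # arrangement of blobs and its vertical flip contain exactly the same Fourier
    # amplitudes, so they are matched on spatial-frequency content while differing
    # in configuration.
    import numpy as np

    def toy_stimulus(size=128):
        """Light field with three dark blobs arranged top-heavily (two up, one down)."""
        img = np.ones((size, size))
        yy, xx = np.mgrid[0:size, 0:size]
        for cy, cx in [(40, 44), (40, 84), (88, 64)]:   # two blobs above, one below
            img -= 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 6.0 ** 2))
        return img

    top_heavy = toy_stimulus()
    bottom_heavy = np.flipud(top_heavy)      # same elements, inverted configuration

    amp_top = np.abs(np.fft.fft2(top_heavy))
    amp_bottom = np.abs(np.fft.fft2(bottom_heavy))
    # The flipped image's spectrum is a re-indexing of the original's, so the
    # amplitude values are identical as a set:
    print(np.allclose(np.sort(amp_top.ravel()), np.sort(amp_bottom.ravel())))   # True
    ```

    Only the amplitude spectrum is matched, of course; the phase spectrum (and hence the configuration) differs, which is exactly the sort of variable those experiments manipulate.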

    It's an interesting proposition that, given that there is some sort of template, it ought to respond more readily to black faces than to white faces, irrespective of the child's "race". Of course, this would mean that the template we're referring to is sufficiently specific for this, which might not be the case at all. For example, Pascalis and Kelly, as I mentioned above, cited some really cool studies done with monkeys where they lost their conspecific face recognition ability (analogously to how Japanese children lose their ability to differentiate between the English phonemes L and R in the language domain). I don't know about this one, I haven't seen any work on it, but it would certainly be interesting to test.

    I don't mean to invoke authority here, but this is really sort of dragging on. The consensus among face researchers seems to be that there is innate specialisation for face recognition (see e.g. Pascalis and Kelly, 2009; Kanwisher, 2000; Chien, 2011; Johnson, 2005), but there is less agreement as to the extent of this specialisation. Indeed, the only ones that I'm aware of that have actually questioned this are the Italian researchers previously mentioned, who've argued that these are domain-general factors – that is, it's not the geometric properties of faces per se that trigger the attention bias, but other more general geometric properties (such as the previously mentioned top heaviness and congruency) that are properties of the visual system as a whole (or something along those lines). This position is generally not "mainstream", as it were, and Chien (2011) found that these biases disappear by the age of ~3 months, which agrees with Johnson and Morton's CONSPEC/CONLERN model (and in fact a proposition by Pascalis and Kelly that the low spatial frequency detector CONSPEC is suppressed like a lot of other developmental reflexes, e.g. the grasping reflex). I don't have the expertise in this field to advocate one view over the other; I can just reflect the consensus.

  50. sgh on 27 Jun 2012 at 7:43 am

    Oh, and one last thing. How do you suppose we could test the hypothesis that all these different biases develop superduper fast, as you suggested (if I'm understanding you correctly), as opposed to being already present at birth? 9 minutes is evidently not enough and anything shorter than that is not practically feasible.

  51. ccbowers on 27 Jun 2012 at 9:59 am

    “We know that humans evolved in Africa. Presumably that is where any “face template” also evolved. We should expect any genetically encoded “face template” to better match the faces experienced over long evolutionary time, which would be black faces with light eyes.”

    Actually, that may not be true, for a couple of reasons. The most important is that I think it is incorrect to assume that Africa = "black faces." Africans have the most skin color diversity in today's world, and I think it is a mistake to homogenize the people in that region. If the range of skin color is greatest in Africa today, I'm not sure exactly what that says about the skin color of humans 100,000 years ago, but it would make more sense in this context that such a "face template" would be sufficiently flexible to account for a range of skin colors. If there are differences depending on skin color, I'm not sure that we can assume your "black faces with light eyes" is a better option.

  52. daedalus2u on 27 Jun 2012 at 10:04 am

    The working definition of "neurodevelopment" that I am using is the process by which physiology directs/produces changes in neuroanatomy to produce changes in brain function, behavior or properties. Normal development is what occurs in the absence of xenobiotic effects. This is normal as process, not normal as outcome. Physiology doesn't exhibit teleology; it does what it is configured to do, not to produce a result that it "wants".

    I consider "neuroanatomy" to be the physical configuration of matter in the brain that instantiates brain function, behavior or properties. Any changes (not involving xenobiotic effects, drugs, trauma) in brain function, behavior or properties can only occur via neurodevelopment and are due to the changes in neuroanatomy that neurodevelopment produces.

    The idea that there can be a purely genetic process in neurodevelopment is not correct. The properties of the brain are emergent properties of large ensembles of cells working together, each cell doing something different, with the large ensemble instantiating the emergent behaviors of the brain: thinking, memory, pattern recognition. For brain cells to work together, they must exchange signals to coordinate and synchronize their respective behaviors. This synchronization requires signaling between cells. The genome inside one cell doesn't "know" what the genomes inside other cells are doing unless there is communication between cells.

    Hebbian remodeling is an example of a process in neurodevelopment mediated through communication between cells. The cells "wire together" as a consequence of "firing together". The resulting neural network depends on both the genetic instructions in the cells and the pattern(s) of signals. The patterns of signals affect the ongoing development of the neural network, so it becomes extremely complicated very quickly. This type of development is the product of many non-linear coupled interactions, so it is chaotic in a mathematical sense.
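
    As a toy sketch of what "fire together, wire together" can do (my own illustration in Python, not a model from the literature): a single Hebbian update rule with weight normalization, fed inputs in which a few units merely tend to be co-active, ends up concentrating its weights on exactly those correlated units, with no template of the pattern specified anywhere in advance.

    ```python
    # Toy Hebbian/Oja-style learning: weights grow where input and output are
    # co-active, so correlations present in the input get written into the wiring.
    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_steps, lr = 20, 5000, 0.01

    def sample_input():
        """Mostly random firing, except that units 5-9 tend to fire together."""
        x = (rng.random(n_inputs) < 0.1).astype(float)
        if rng.random() < 0.5:
            x[5:10] = 1.0
        return x

    w = rng.random(n_inputs) * 0.1    # small random initial weights, no structure
    for _ in range(n_steps):
        x = sample_input()
        y = w @ x                     # output activity of a single linear unit
        w += lr * y * x               # Hebbian term: change proportional to pre * post
        w /= np.linalg.norm(w)        # normalization keeps the weights bounded

    print(np.round(w, 2))             # the largest weights end up on units 5-9
    ```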

    I think that the problem some people have with understanding my approach is that they are stuck in thinking of “neuroanatomy” as what you can see if you open up a brain, and there is the default idea (perhaps unconscious) that the brain “works” by having the “mind” do “stuff” with the “neuroanatomy”. My conceptualization is that there is only neuroanatomy (the physical arrangement of matter in the brain) and it is that neuroanatomy that is responsible for everything that the brain does. We don’t know or understand all of what neuroanatomy is, or how it instantiates what the brain does.

    From my perspective, all changes in the properties of the brain are consequences of neurodevelopment. That includes forming memories. Memories can be formed in less than a second. Memories are instantiated by changes in the neuroanatomy of the brain (by processes we do not understand). Maternal bonding occurs very rapidly (minutes). I have no conceptual problem with face pattern recognition happening very rapidly.

    If we want to understand how pattern recognition neural networks could develop, we need to figure out how cells could connect themselves together to be sensitive to a specific pattern. It is easy to see how Hebbian remodeling could produce good pattern recognition by having the network instantiate more gain from prior pattern exposure. How could a network develop pattern recognition without a pattern? Various parts of the visual system do exhibit spontaneous firing, which triggers activation that spreads out in waves. Waves of this type do have spatial and temporal information. Depending on the spacing between nerves and the wave propagation velocity, spontaneous waves could be used to generate pattern recognition of spatial frequency, timing frequency, and some other things.

    I think it is quite likely that the human visual system is optimized to detect spatial frequencies that are important. This is instantiated by things like eye size, lens shape, retina cell density, minicolumn size and density, cell firing thresholds and so on. This may (probably does) include spatial frequencies found in faces.

    Pattern recognition for particular shapes could also be instantiated by spontaneous wave propagation. Linear waves could generate linear pattern recognition, and linear waves propagating in diverse directions could program linear pattern recognition in many orientations. Circular waves propagating from random single initiation points could program circular pattern recognition. There might be pattern recognition for little circles inside a big circle. Something like that might "look" like a face to an adult researcher, but a face would not produce the highest detection signal.
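
    A toy sketch of that idea (again my own illustration, assuming an idealized one-dimensional sheet of cells and sinusoidal waves): if the same Hebbian update as above is driven by nothing but spontaneous traveling waves with a characteristic wavelength and random phase, the learned weights settle into a periodic profile at that wavelength – spatial frequency selectivity instantiated without any structured visual input, let alone a face.

    ```python
    # Toy illustration: drive the same Hebbian/Oja-style rule with nothing but
    # spontaneous traveling waves (fixed wavelength, random phase). The weights
    # converge to a sinusoidal profile at the wave's spatial frequency.
    import numpy as np

    rng = np.random.default_rng(1)
    n_cells, wavelength, lr, n_steps = 64, 16.0, 0.01, 20000
    positions = np.arange(n_cells)

    def wave_snapshot():
        """One frame of a traveling wave with a random phase."""
        phase = rng.uniform(0, 2 * np.pi)
        return np.cos(2 * np.pi * positions / wavelength + phase)

    w = rng.normal(scale=0.1, size=n_cells)   # unstructured initial weights
    for _ in range(n_steps):
        x = wave_snapshot()
        y = w @ x                             # activity of one downstream unit
        w += lr * y * x                       # Hebbian update, as in the sketch above
        w /= np.linalg.norm(w)                # keep the weight vector bounded

    # The learned weights are periodic with a period of ~16 cells: the unit is now
    # tuned to the wave's spatial frequency without ever having been shown an object.
    dominant = np.argmax(np.abs(np.fft.rfft(w))[1:]) + 1
    print("dominant period:", n_cells / dominant, "cells")
    ```

    Whether anything like this actually happens in the developing visual system is the empirical question; the sketch only shows that waves plus Hebbian remodeling are sufficient in principle to instantiate spatial frequency tuning.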

    How to test these? I am not sure. Maybe fMRI, EEG, or MEG in utero could be used to see what patterns of neuronal firing are occurring, and so to infer whether or not certain brain regions are active and whether or not they have instantiated pattern recognition yet. I don't know if fMRI has been done in utero yet. It might not work because the O2 levels in utero are very different, but then fetal hemoglobin is different too, so it might.

    Looking at the evoked response due to a flash of light in utero might allow an inference of spatial frequency detection, but the fMRI signal is not due to neuronal firing; it is due to differential hemodynamics. There might not be the strong correlation between fMRI and neuronal activity in utero that there is postnatally.

    I take pretty strong objection to people stating that something is "genetic" without identifying the genes and/or DNA that are responsible. We are in the post-genomic era. If there is a claim that something is genetic, let's see the genes that are responsible. My perspective is that the people who have been claiming stuff is "genetic" are having difficulties finding the DNA that they claim is responsible.

    I take especially strong objection to claims that brain properties are "genetic", and especially to the idea that intelligence and IQ are "genetic". The scholarship in the genetics of IQ and intelligence is very poor, with very fundamental aspects remaining undefined (like what intelligence is and how it can be reliably measured). Intelligence is a property of a phenotype, not a genotype. I am most familiar with the genetics of autism, and there is no gene that is responsible for more than a few percent of autism incidence (from multi-thousand GWAS). Autism is a disorder with extremely high heritability, but they can't find the genes.

    Autism (like all neuropsychiatric disorders) has to be a problem of development. Either neurodevelopment led to a dysfunctional neuroanatomy that instantiated bad functionality, or neurodevelopment didn't lead away from a dysfunctional neuroanatomy instantiating bad functionality. No doubt genes are important, but genes are not and cannot be the whole story.
