May 03 2011

Categorizing Brain Function

This week on the SGU I will be interviewing Jon Ronson about his latest book, The Psychopath Test, just being released in the US. I am not going to write about the book here (I will do that after the interview, although I have already read a preview copy). Rather, as a prelude to the interview I want to discuss some background thoughts about how we think about brain function in the context of psychology and psychiatry. What I am actually going to give you is my own current synthesis, acknowledging that there is lots of wiggle room for interpretation and opinion, and my own thoughts have been constantly evolving over the years.

Hardwiring

It is somewhat of a false dichotomy to think of brain function in terms of hardware and software. That compelling computer analogy tends to break down when you apply it to the brain, because in the brain hardware and software are largely the same thing. Memories are stored in the same neurons that make up the basic structure of the brain, and experiences can alter that structure (to some degree) over time. The brain is neither hardware nor software – it’s wetware.

But it is still useful to think of brain function in terms of long-term structures in the brain – modules and networks that make up the basic functioning of the brain and change slowly (if at all) over time – and short-term structures and processes that subsume short-term memory, our immediate experiences, mood and emotions, and our attention and thoughts. The latter are as close as we get to “software”.

First let’s consider what processes lead to the basic neuroanatomy of the brain – the factors that determine, for example, that the occipital lobe will process visual information, and that it will do so in a very specific way. We can talk about Wernicke’s area in the brain because everyone seems to have one; it is always in the same place (although it can sometimes be on the opposite side), always serves the same function (processing language), and always makes the same connections to other parts of the brain. Yes – there is variation in neuroanatomy, just as there is variation in every biological parameter, but the consistency at this scale is very high.

As we delve into finer and finer details of anatomy, individual variation becomes greater and greater. While everyone has a Wernicke’s area, some people seem to be born with greater language facility (perhaps “potential” is a better term) than others. What determines this?

It is clear that the ultimate cause is our genes – they contain the instructions for growing a brain. To borrow an analogy from Richard Dawkins (which was pointed out to me by a friend of mine, Daniel Loxton), the genes are not a blueprint, but are rather a cookbook. In other words – they do not determine where every neuron goes. They determine the rules by which the neurons are laid out, but by following those rules greater complexity emerges. Patterns are repeated, and neurons are mapped to the body and to sensory input. Our sensory cortex, for example, maps itself out to the surface of the body. This is a dynamic process that requires information input, and it is for this reason that the brain can contain much more information than the entire genome, let alone just those genes that code for brain proteins.

In addition to genes there are also epigenetic factors – environmental factors that influence how the genes are expressed. Genes can be turned on and off in various cell populations, and further this is not a binary state – meaning that genes can be turned on to various degrees. A particular gene can be a little active, making a small amount of protein, or can be very active and crank out large amounts of its protein. The environment in the womb, for example, exerts powerful epigenetic influences on gene expression in the developing fetus. This includes the stress of the mother, the diet of the mother, and the levels of various hormones in the blood.

The third factor is developmental. The genes, modified by epigenetic factors, may have a plan for the brain, but that plan still needs to be executed in the developmental process. And that process can go awry, or be interfered with by external factors, like infection, or the presence of a twin.

The combination of genetic, epigenetic, and developmental factors then results in the final structure of the brain. Now it gets really interesting, and increasingly difficult to make categorical statements.

Environment

The brain is an organ evolved to interact with the environment, to be adaptive and to learn. That is the whole point of having a brain – to respond to the environment much more quickly than genes themselves can allow (even with epigenetic factors, which do allow for single generation responses to the environment). It is therefore no surprise that after birth (and one can argue even before birth), as the brain grows and matures, it is adapting itself to the environment and responding to all the various inputs it is receiving. Experiences, culture, family life, and other environmental factors all influence brain function.

The never-ending question, however, is to what degree are the functions of the brain determined by hardwiring (shorthand for the genetic, epigenetic, and developmental factors I described above) vs environmental factors. Here is where opinion seems to outweigh evidence. My personal opinion is that both are involved in almost every aspect of brain function, and to various degrees. Some aspects of brain function are dominantly determined by hardwiring. This applies to all the basic functions of the brain, like vision, motor function, wake-sleep cycle, breathing, and the like. Other aspects are perhaps dominantly determined by environment, such as language and culture. And many things are somewhere in the middle.

Most relevant to psychiatry is the question of personality. To what degree are individual personality traits determined by hardwiring vs environmental factors? Here our ability to categorize brain function is stretched to the breaking point. Scientists argued bitterly about where to draw the line around the category of “planet.” They had to deal with only a few variables – size, shape, gravitational influence, the presence of moons, and perhaps a couple of others. And yet what they found was a confusing continuum of objects, and no truly objective way of using the identified variables to come up with an operational definition for “planet” that was not controversial.

Psychologists and psychiatrists have hundreds of variables to consider, which interact with each other in complex ways. Categorization is all but hopeless. However, there still appear to be “islands of stability” – personality profiles that peak above the noise and can be identified and treated as real entities. But we can never get away from the complexity.

Let me back up a bit, however, and get back to personality traits. The first challenge is identifying what these traits actually are. Is there really a part of the brain that determines how extroverted vs introverted we are? Is extroversion even a real brain function, or is it the end result of deeper underlying functions? This gets to one of the problems with thinking about human psychology. We are generally identifying three factors: mood, thoughts, and behaviors. We largely rely upon people to tell us how they feel and what they are thinking, and we can observe behavior. We then infer from these three end results what the underlying personality traits might be. We are like chemists before the periodic table of elements was formulated. We are not sure if we are dealing with the fundamental units of personality (although I think we are in some cases). It is still very much a work in progress.

However, there is another layer of complexity in that mood, thought, and behavior occur within various simultaneous contexts. Movies exploit this all the time – we may see a character behaving in a certain way that seems puzzling, or that makes us jump to certain conclusions about their personality. Only later is the context revealed, and we realize that the character was simply reacting to their situation in a way we might feel is reasonable. The issue of context is critical.

So mood, thought, and behavior are end results of underlying personality tendencies interacting with the environment. The environment not only includes the immediate situation, but also the recent experiences of a person, and even the long term experiences that may have taught them to react in a certain way, their family life, their culture, and any subcultures in which they may be involved. Before any conclusions can be drawn about a person’s personality, we must therefore know a great deal about their individual context.

Another layer of complexity is that individual personality traits, assuming we can even identify them, do not exist in isolation but also interact with each other. Someone who is extroverted and aggressive will behave differently from someone who is extroverted and shy, or extroverted and highly empathic.

The number of variables we are dealing with is now staggering, and the result is chaos (in the mathematical sense).
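As an illustrative aside (my sketch, not part of the original argument): “chaos in the mathematical sense” means that tiny differences in starting conditions grow until outcomes become unpredictable in practice, even when the underlying rules are simple and fully deterministic. The textbook logistic map shows this with just one variable:

```python
# Chaos in the mathematical sense: deterministic rules, unpredictable outcomes.
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is chaotic at r = 4.0.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from starting value x0."""
    x = x0
    traj = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        traj.append(x)
    return traj

# Two starting points that differ by one part in a million...
a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)

# ...stay close at first, then diverge until they are effectively unrelated.
early_gap = abs(a[5] - b[5])                      # still tiny
late_gap = max(abs(x - y) for x, y in zip(a, b))  # order of the whole range
```

If one variable behaves this way, hundreds of interacting variables will be at least as sensitive – which is the sense in which detailed personality outcomes resist prediction even if the underlying rules were fully known.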

Conclusion – The Challenge of Psychology/Psychiatry

At this point it should seem like folly to place a label on someone’s psychological condition, and to some extent it is. However, as I said, there are recognizable islands of stability in the chaotic sea of psychology. Some people have a personality trait that is at one or the other extreme end of human variation, and tends to dominate their mood, thought, and behavior. For example, someone may have their anxiety cranked up to maximum, to the point that they are anxious in situations that would not make most people anxious. Their anxiety controls their life, and overshadows other aspects of their personality.

They are still an individual, with many other personality traits and their own complex individual context, and therefore they are different from every other person with anxiety. But it is still meaningful to think of them as a person with an anxiety disorder, and to treat the anxiety to bring it down to a more functional level. The overwhelming complexity of the human brain does not mean we should throw up our hands and abandon all attempts to help people with what can meaningfully be called psychological “disorders”.

But it does mean that we need to proceed with extreme caution. We need to be skeptical of the tentative labels that we use to help guide our thinking about treatment. No person can be reduced to a label – to a single feature about them. People are not “schizophrenics” – they are complex individuals who have a suite of personality tendencies that together fit into a vague and fuzzy, if still recognizable, category we call “schizophrenia.” And this is not just being PC – it reflects the importance of recognizing how we think about brain function at every level, with all of the limitations that are implied.

I had all this in mind when I read The Psychopath Test by Jon Ronson, which details his personal journey to understand just a single psychiatric diagnosis and the quagmire it led him to. I look forward to discussing his book with him this week.


49 Responses to “Categorizing Brain Function”

  1. daedalus2u on 03 May 2011 at 10:03 am

    Nice post. Only one thing to disagree with: the primacy of genes, and the idea that subtle differences in regional brain anatomy must be ultimately due to differences in genes. That isn’t precisely what you said and I am pretty sure it isn’t what you meant, but it could be interpreted that way.

    Yes, ultimately the structures of the brain do depend on genes, but differential neuroanatomy in different individuals does not necessarily depend on differential genetics.

    Every physiological process that is important is regulated by physiology to accomplish certain things. That includes the sizes of different regions of the brain. During neurodevelopment, those regions will grow larger until they are “large enough” and physiology generates signal(s) that stops growth.

    The “large enough” signaling has to happen many times during brain development in utero. That “large enough” signaling is transmitted between cells, so it depends on the geometry of the configurations of those cells and also the diffusion of signaling molecules. Anything that perturbs that signaling will change the timing of the “large enough” signal used for stopping growth. Because there is already active control using that “large enough” signal, there is no threshold for changes in that signal to change the size of the region it is controlling.

    It is likely that there are only a few “large enough” signals. There would be no reason for evolution to require many signals; at first there must have been just one, and it is likely that archetypal signal was elaborated on to produce a multiplicity of signals (if a multiplicity is necessary). Those few “large enough” signals are then used multiple times, each time there is a neuroanatomy region that needs to grow and then stop growing. I suspect that some of the neuroanatomy of the brain is produced this way, with fluctuations in growth generating structures that later expand beyond their signaling range, sort of how inflation expanded quantum fluctuations.

    The thresholds for those “large enough” signals have been set by evolution to produce the neuroanatomy that humans exhibit. That is, the dispersion we see in sizes of different brain regions must be due to differences in when the “large enough” signal triggered the stopping of growth.

    If evolution favors tribes with a dispersion in abilities (which it apparently does), then evolution will favor dispersion in the sizes of the brain regions that instantiate those abilities. In a limited gene pool, there is limited genetic diversity. Dispersion in sizes of brain regions can still be achieved by coupling the signaling that regulates size to noise via stochastic resonance.

    If the dispersion in size of brain regions is due to environmental noise, then genes for abilities that depend on the size of particular brain regions will not be found. I think that this is what the large genome studies are finding. They are finding a few genes that when “broken” cause disorders, but the vast majority of the common disorders have no good candidate genetic causation.

  2. Gehackte on 03 May 2011 at 10:05 am

    Thanks very much for your opinion on this, and I’m looking forward to the SGU episode.

    Recently I was poking through the science library at the University of Toronto, and they had a lot of interesting psychology books that book stores (Chapters/Indigo specifically) don’t seem to carry, which is too bad because they seemed much more informative, and less… questionable.

    I may have to go through and do an ISBN hunt so I can order some of them, to get a more rounded opinion on the subject.

  3. Steven Novella on 03 May 2011 at 10:28 am

    daedalus,

    I am talking about more than specific disorders. There is also how the different brain modules are wired, and how they are wired together. There are also different brain proteins that are involved with brain function independent of the pattern of neuronal connections and the size of regions. These are largely determined by genes, modified to various degrees by epigenetics.

    You are simply pointing out one factor that seems to be determined more by development. Remember – I pointed to three factors: genes, epigenetic factors, and the developmental process.

    I do think it is an interesting idea that some aspects of development evolved to be variable, in order to specifically generate greater diversity than is found in the genome. We see this also in other aspects of biology, like the immune system, which needs to generate great diversity in the developmental process.

  4. elmer mccurdy on 03 May 2011 at 10:42 am

    Well, I guess this is relevant enough to a question I’ve been wanting to ask that I feel I can ask it without being too far off topic. A recent LA times article on chronic pain included the following quote: “Within hours of an acute injury, we see small little nerve fibers sprouting. Sometimes they not only transmit pain and increase sensitivity, but they also begin to produce their own pain.”

    I was going to say that I emailed Dr. Lipman about this and got no response, but looking at the article just now and comparing it to other versions available online I see that they’ve clarified the attribution. But anyway, as long as I’m here, can somebody provide a citation for this phenomenon? I’ve been looking at various things about neurological changes associated with chronic pain and haven’t seen any references to it, nor have I managed to find one by searching, and I would like to find something better than a newspaper article. Alternatively I guess I could contact, um (checking…), Dr. Lynn Webster, medical director of the Lifetree Clinical Research and Pain Clinic in Salt Lake City.

  5. magra178 on 03 May 2011 at 12:14 pm

    I read the description of the book, and it includes his involvement with the Broadmoor patient claiming to have lied to get a lighter sentence. I listened to that “This American Life” episode where Ronson chronicled meeting him, and it left me uneasy. He didn’t come off very skeptical in it, so I’m really interested to hear the interview and, hopefully, read the book, as I do enjoy Ronson’s storytelling.

  6. petrossa on 03 May 2011 at 12:46 pm

    I tend to disagree a bit. To me the dichotomy isn’t a dichotomy at all, but just ‘the program’ being in place at birth, and another ‘program’ created during development after birth.

    A horse gets on its legs within minutes after birth. That surely can be called ‘hardwired’. It can’t develop the necessary structures in a few minutes; they were put in place, preprogrammed, during gestation. In fact everything your body does without conscious intervention was there beforehand. It got fine-tuned maybe, but not created.

    So to say there is a dichotomy is already a false assumption, and evidently the logic built on it can’t be correct.

    So to my mind your post is a logically coherent piece, but not based on solid facts.

    The brain, cliche as it may be, is really a computer. It does have basic preconfigured modules carved in stone (the brainstem, for example), it has basic preconfigured modules that are somewhat more flexible but preprogrammed nonetheless (the limbic system), and it has loosely preconfigured modules (the neocortex) which are more flexible.

    Overspecialization is the bane of any profession, but nowhere more so than in this one. It’s so easy to get lost in the multitude of complex interactions that one doesn’t see the forest for the trees.

    One confuses ‘us’ with ‘US’. We assume that our consciousness is US and, reasoning backwards, that it can’t be preprogrammed.

    But ‘us’ is just a tiny part, an unwanted, largely ineffective side effect of an overly developed central control system. US, however, is the entire body with all its independent neural control systems, which tell ‘us’ afterwards what happened.

    We create the illusion of continuity the same way a 16-bit sampler creates music from scratchy CDs. The real-life interaction takes place on a much lower level and gets transmitted upstairs for integration later on.

    We are the monkey on the back of a mammal. We can control it to a point, but only insofar as it tells us what it does.

    All this wouldn’t be possible without ‘hardware’

  7. Steven Novella on 03 May 2011 at 12:55 pm

    petrossa – I know I was working through a lot of concepts quickly, but I think you missed some of my points.

    I wrote:
    “Some aspects of brain function are dominantly determined by hardwiring. This applies to all the basic functions of the brain, like vision, motor function, wake-sleep cycle, breathing, and the like.”

    Which is exactly what you are saying. Also, I never said there was a dichotomy. My point is that there are many factors that determine brain function, and the balance of these factors is different for different functions, and various functions interact with each other to produce end results (of mood, thought, and behavior) and so it’s tricky to reason backwards to basic underlying function.

  8. petrossa on 03 May 2011 at 1:58 pm

    True, I read too fast. A stupid habit I should get rid of. Sorry.

    However i stick by my point:

    US, we, consciousness, character – whatever you want to call it – is a mere side effect of dataflow.

    In my mind I see it as an untuned TV that captures the universe’s background noise. On that noise you can lay a template, like from a punch card. The holes that show up seem to the spectator mostly always lit, due to the speed at which the pixels on the TV change.

    This ‘template’ is a person. The person doesn’t really exist as such; it just seems so. So that part can never be traced to a specific neural network.

    But the ‘hardwired’ survival systems can be. They are fixed; we call them emotions. Their aspect only varies due to the interpretative analysis of the neocortex’s ‘emotion interpretation’ module.

    Our emotions are common over the whole mammal group, and by association I assume over all the vertebrates with a brain.

    It’s the way they get expressed that differs.

  9. Jeremiah on 03 May 2011 at 2:00 pm

    petrossa and daedalus:
    It’s the functional set of instructional apparatus that determines the geometrical nature of the structure, not the sizing of the structure that governs in some way the functional duties of its apparatus.

  10. James Fox on 03 May 2011 at 2:04 pm

    Great post Steve. I work with parents and their teenage children who have been identified as being at risk. I constantly find it fascinating how often issues such as undiagnosed ADD/ADHD, Learning Disorders, Depression, and/or Bi-Polar Disorder play a role in these circumstances and family dysfunction. Parents who had similar struggles as a youth are often completely unaware that their child is facing similar issues they had or have been dealing with their whole life. Clearly there is a strong genetic component in many of these situations which has led me to be more inquisitive about parental mental health and dysfunction histories when making recommendations for evaluations and services for their adolescents.

  11. Daniel Loxton on 03 May 2011 at 4:14 pm

    To borrow an analogy from a friend of mine, Daniel Loxton, the genes are not a blue print, but are rather a cook book.

    I found this is a very useful analogy, but I should mention (as I do in the book) that it isn’t original to me: I got it from Richard Dawkins.

  12. ccbowers on 03 May 2011 at 10:08 pm

    “The brain, cliche as it may be, is really a computer.”

    Recently I’ve been more skeptical of this claim (in reference to being able to “upload a brain,” and some singularitarian claims). I see the brain–computer relationship as nothing more than an OK analogy. Steven brought up an important point about software and hardware not really being appropriate terms for the brain, since there is no good distinction between these to be made in biology (although they are occasionally OK conceptual terms during discussions).

  13. petrossa on 04 May 2011 at 6:46 am

    Jeremiah
    Nowhere was size mentioned. At least not by me. The only issue at hand is that preconfigured structures are put in place during gestation as a result of genetic coding. Environmental issues during gestation may impact full development (an alcoholic mother, too much testosterone, or something) but basically the layout is the same.

    ccbowers

    I guess we are crosswired here about the concept: ‘computer’
    In my definition computer means:
    A computer is a programmable machine that receives input, stores and manipulates data, and provides output in a useful format.

    Within this definition the brain is a biocomputer.

    What everyone here is at odds with is the definition of ‘US’. Existentialism. I can’t stand philosophy, so I see things in a more down-to-earth way.

    ‘US’ is just a tiny part of the human species – dysfunctional in survival terms. However, since ‘US’ can’t handle that concept, it tends to make up all kinds of stories to give itself meaning where there isn’t any.

    A quote from myself on religion:
    “Unfortunately there are lots of people with a less developed notion regarding the origin and nature of conscience who take themselves very seriously. So immensely serious that it is for them unacceptable that their existence has no meaning. And then they will look for something which will give their existence the grandeur they imagine it to have.

    Old books such as the Bible, Koran, and Torah come in very handy, because just like the writings of Michel de Nostredame they can be interpreted in any which way to suit whatever you want to believe.

    The simple solution that we simply are procreating little primates that exist because we exist is too humiliating to them.

    We logically have an anthropocentric world view. We assume ourselves to be superior because we believe we are superior. ”

    The brain is a mechanism for controlling the body. As such it’s by definition predetermined. The abstraction it creates as a side effect plays no role in it other than to the abstraction itself.

  14. Steven Novella on 04 May 2011 at 7:45 am

    Daniel – thanks for the clarification.

  15. daedalus2u on 04 May 2011 at 9:31 am

    Dr. Novella, I think we are essentially in complete agreement, just talking about different specific aspects, which, while seemingly minor, can have large implications. There is a fad in science now to impute just about everything to genes. I think (hope?) that this fad is starting to fade as the large GWA studies don’t find the actual genetic associations that were assumed to be present.

    I think the fad of genetics in all disorders is unfortunate because it minimizes the potential impact of treatment. Changing the genetics of someone is going to be extremely difficult if it is even possible at all.

    I completely agree that epigenetics is extremely important. Most epigenetic programming occurs during differentiation. The only difference between a liver cell and a nerve cell is epigenetic programming. That difference is not small and is something we are only beginning to understand. There may be even more epigenetic programming of cells according to location than we appreciate. Differentiation of cells into somatic cells with different properties is also signaled and so is also subject to the same kinds of noise-mediated dispersion that every other signaling pathway is sensitive to.

    Jeremiah, I am involved in a discussion on intelligence on an unrelated thread, so differential sizing of different brain regions is something I am thinking about right now. Regulation of all aspects of neuronal function is also what I am doing research on at the moment. It takes neuronal resources to do neuronal computations. The more nerves you have working on a problem, the more resources you have to do computations to solve it. If you have a larger Wernicke’s area containing more neurons, then you might have the potential to do more language-type processing of some type.

    We know a first language has to be learned, so whatever Wernicke’s area is doing with language, it is something that it learned to do after the language was learned. The Wernicke’s area can’t be pre-programmed to do something with “data” that the brain was not able to instantiate when the Wernicke’s area was formed. In other words, a language concept can’t be instantiated before the brain knows a language. Neurological structures to manipulate a language concept can’t be generated before the ability to instantiate that language concept exists.

    The size of the infant brain at birth is limited by the size of the maternal pelvis and every cognitive, sensory, motor, and other CNS mediated control system requires some volume for the neural network that instantiates those abilities. A large part of human evolution must have been trading-off the various brain functions while maintaining a total brain size that can be successfully born. In the absence of medical C-section, cephalopelvic disproportion kills a percent or so of mothers and/or infants per pregnancy. What this tells me is that the evolutionary value of a large brain is so great that having one a little bit bigger is worth a significant risk of maternal death.

  16. Shelley on 04 May 2011 at 9:40 am

    When we look at something like psychopathy, we’ve got those clear cases that fit cleanly into those “peaks” – clusters of characteristics that fit together well. The problem is that some individuals lie pretty close to, or on the edge of, those peaks. They may not fit tightly within that cluster, but they are more like those clusters than anything else. So if you needed to generalize about how that person might behave or think or feel, that cluster or category is going to give you a fair idea.

    Having said that, situational factors have the biggest impact on actual behavior and not personality (in the right circumstances, the psychopath can behave in a selfless manner). So we can predict generally how people will behave, but we are not very good at predicting how specific individuals will behave: situational factors play a very big role.

    There’s a book out recently that traces the Russell Williams case in Canada (he was a well-respected colonel in charge of a large military base who murdered two women, including one under his command – and committed a number of other crimes).

    The author’s thesis is that Williams is a ‘New kind of monster’ and argues that he is not a psychopath because he doesn’t neatly fit into that category (he sometimes behaved in a way that appeared selfless).

    The author is quite wrong. We don’t need a new category; people need to recognize that there is play around the edges of those categories. Some psychopaths function quite effectively in society (see Hare’s book on this topic).

  17. daedalus2u on 04 May 2011 at 1:02 pm

    Petrossa, I understand your definition of computer; I think it does not quite go far enough, and misses the fundamental differences between brains and computers.

    To me, a “computer” is a Turing Equivalent. That is, it can perform the operations that a Turing Machine can, and so can compute any computable function.

    http://en.wikipedia.org/wiki/Turing_machine

    The definition you propose:

    “A computer is a programmable machine that receives input, stores and manipulates data, and provides output in a useful format.”

    I find problematic. To me, an important aspect of a “computer” is that the data is stored in substrate independent ways, that is the physical form of the data storage (magnetic core, magnetic tape, printed tape, relays, or transistor states) doesn’t affect the properties of the data. Also, the “program” and the “data” are or can be stored in the same memory.

    The “output in a useful format” constraint depends on things outside the computer. Changing things outside the device should not change whether the object is a computer or not. Changing the boundary of what is “inside” or “outside” the device shouldn’t change whether it is a computer or not either. Does a brain that is isolated from the environment become a non-computer because of that isolation? For something to be useful, an entity must make a value judgment that it is useful. A Turing Machine can produce a tape of all zeros. Is that “useful”?

    Computers use hardware to run programs that manipulate data. The “program” is also data, but it is data of a special type. There is a distinction between hardware and software, there isn’t necessarily a distinction between software and data.

    Brains do not run programs. Data in brains is not stored in substrate independent ways. Brains do not operate on software.

    I don’t like the term “wetware”. It implies that there is something different about the brain that makes it not hardware or software. I don’t see any need for a separate term. The brain does not use software. Software is data that is held in memory that is used to modify how the data held in memory is manipulated.

    We know that everything a brain does, it does because of physical things that happen inside it. In that sense the brain is only hardware, but that hardware changes with sub-second time constants. The hardware of the computers of our usual experience remains unchanged over time. The hardware of our brains does not. The hardware of the brain can be changed under the direction of the brain. That is how memories are formed: there are physical changes in the brain, produced by the physiology of the brain, that cause the now-changed brain to have memories.

    A truism of computing is that anything you can do in software you can also do in hardware, and with less “overhead”, but also with less flexibility. A Turing Machine can do anything that any other computer can do, just at a different rate.

  18. Jeremiah on 04 May 2011 at 1:20 pm

    Daedalus:
    Brains are able to be larger because their functions evolved to handle more tasks. They didn’t find the need to handle more tasks because they first found the need to get larger. And in any case, small brains in some species can outperform large brains in other species. Functional efficiency is the key, not size.
    The general rule in nature is that form follows function. Biological forms are no general exception.

    You say that “the evolutionary value of a large brain is so great that having one a little bit bigger is worth a significant risk of maternal death.” That’s simply getting it backwards. The evolutionary development of its functional and instructive systems, and especially with our increasingly intelligent use of language, required a larger brain to follow that development. Which, if it could, it would, and in our case did.

  19. petrossa on 04 May 2011 at 1:29 pm

    daedalus2u

    I’ve covered the Turing principle in my posts. Read carefully:

    first:
    “US, we, consciousness, character, whatever you want to call it, is a mere side effect of dataflow.”

    then

    “What everyone here is at odds with is the definition of ‘US’. Existentialism. I can’t stand philosophy, so I see things in a more down-to-earth way.

    ‘US’ is just a tiny, dysfunctional in survival terms, part of the human species. However, since ‘US’ can’t handle that concept, it tends to make up all kinds of stories to give itself meaning where there isn’t any.”

    This contains the Turing idea. I call it ‘US’ for lack of a better word.
    What you describe is the effect of a computer running: an application running on an OS running on a computer, if you will. It’s not an objectively definable thing. The abstract that defines itself, and as such is incapable of looking beyond those limits.

    To the abstract, it is the man. What it does is important, meaningful.

    To the uncaring universe it has no consequence, and to natural progression/survival it’s a negative factor.

    After a mere 100,000 years we as a species are in dire straits. After 200 million years with a brain the size of a walnut, a crocodile has already outperformed us by 199,900,000 years.

    As things are looking now, I hardly see our species reaching 1 million years.

    Effectively, objectively, as a species we are a dead end, Turing test or not.

  20. D.P. on 04 May 2011 at 4:47 pm

    It is somewhat of a false dichotomy to think of brain function in terms of hardware and software.

    Actually, this dichotomy is not perfect even when applied to computers, because there is such a thing as firmware. It can be seen as a form of software, but it is hard-wired inside a device (such as a graphics card). Without functioning firmware the device is completely useless, but firmware provides only the most low-level functionality; you cannot take real advantage of the device without additional software (drivers and libraries).

    Another thing that does not fit in this dichotomy is the FPGA, which is an integrated circuit, but one designed to be programmed using a hardware description language. So it is modifiable hardware.

    So the idea of hardware as something fixed and software as something that can be easily changed is not true in general. Usually it is a good approximation as long as we deal with the von Neumann architecture. Brains use signal processing, which is very different from the von Neumann architecture, and emulating even the most simplified model of a brain on a von Neumann computer is extremely expensive.

  21. trrll on 04 May 2011 at 5:33 pm

    I have a quibble with your description that is probably somewhat peripheral to your main point. It’s probably just a matter of emphasis, but I think that your description tends to reinforce the idea that the structure of the brain is defined by genetic and environmental factors (and most commonly by the interaction between the two). I think we should give more thought to the idea that there may also be some real randomness. You write somewhat vaguely of developmental factors, and mention that development can go “awry,” but that implies that there is some “correct” developmental program.

    I wanted to raise the possibility that there might be some degree of randomization actually built into the process. There are some evolutionary reasons to think that this might be advantageous. Even in populations where there is not a great deal of genetic diversity, it would be advantageous for behavior to be to some extent unpredictable, e.g. in predator-prey interactions. And it is easy to envision biological randomization mechanisms. For example, the activity of a gene may depend upon the binding of a small number of transcription factors, and ion concentrations within a growing neuronal process could depend upon a small number of channels randomly flickering open and closed, so this stochastic noise could easily be amplified to influence connectivity. If this is the case, even identical twins, raised in absolutely identical environments, might exhibit differences in behavior in identical circumstances.
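    The scenario of a few randomly flickering channels can be sketched as a toy simulation (the model, parameter values, and function name are hypothetical illustrations of the idea, not real biology):

```python
import random

def grow_connections(seed, steps=1000, channels=5, p_open=0.5, threshold=3):
    """Toy model: a growing neuronal process extends a new branch whenever
    enough of a handful of ion channels happen to be open at once. With so
    few channels, chance dominates, so identical parameters ("genes" and
    "environment") still yield different wiring."""
    rng = random.Random(seed)
    connections = 0
    for _ in range(steps):
        open_now = sum(rng.random() < p_open for _ in range(channels))
        if open_now >= threshold:  # noise crosses threshold -> new branch
            connections += 1
    return connections

# Two "identical twins": same parameters, different noise histories.
twin_a = grow_connections(seed=1)
twin_b = grow_connections(seed=2)
print(twin_a, twin_b)  # generally different counts despite identical parameters
```

    The point is only that a small number of stochastic events, amplified through a threshold, produces individual differences with no genetic or environmental cause.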

  22. daedalus2u on 04 May 2011 at 8:35 pm

    Jeremiah, humans are unique among mammals in how many females die giving birth because their infant’s brain is too large. Evolution is trying to minimize the number of deaths from an infant having too big a brain while still maximizing the number of descendants that mother and infant will have. That trade-off, in “the wild,” results in infants and their mothers dying because the infant brain is too big. Big brains and the usefulness of the mental activities that big brains could accomplish didn’t happen one before the other; they had to happen simultaneously, because you can’t have one without the other.

    trrll, biological systems do use stochastic resonance. The control pathways of physiology are coupled to the environment at the level of noise. That is when everything is working right. I am not sure that “random” is the right term, because that has the connotation of haphazard, while physiology is (in general) not at all haphazard.

    Physiology is composed of multiple coupled non-linear processes. It is like the weather: inherently chaotic and not predictable over the long term. We know there will be weather every day a year from now; that weather just can’t be predicted now, because there are so many coupled non-linear parameters at work. Some of them are not knowable now even in principle, for example the pattern of ionization in the atmosphere due to cosmic rays that are light-months away. The details of that ionization will influence the details of droplet condensation, which will influence heat and mass flow, and ultimately the details of the weather.
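    That kind of sensitivity can be illustrated with the logistic map, a standard textbook toy for chaotic non-linear systems (a sketch of the general phenomenon only, not a model of weather or physiology):

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), a classic minimal
    example of a chaotic non-linear system (fully chaotic at r = 4.0)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # perturbed by one part in a million
print(abs(a[5] - b[5]))    # still tiny: the trajectories agree early on
print(max(abs(x - y) for x, y in zip(a[35:], b[35:])))  # order-1: prediction lost
```

    A disturbance of one part in a million is invisible for the first few steps and completely dominates a few dozen steps later, which is the sense in which long-term prediction fails even for a fully deterministic rule.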

    Details like that influence neurodevelopment too. If disrupting events happen too early in development, physiology can’t correct them and the adverse effects are permanent. An adverse effect later might be compensated for, or even over compensated for so that the organism ends up bigger and stronger. That is what hormesis does, but the dose and timing really matters and those details are not well understood.

    Physiology is highly non-linear. That is why models that attempt to fit physiology to something linear, like X% environment and Y% genetics will fail.

  23. Jeremiah on 05 May 2011 at 12:52 am

    daedalus
    Again, your explication that “big brains and the usefulness of the mental activities that big brains could accomplish didn’t happen one before the other, they had to happen simultaneously because you can’t have one without the other” is even worse than just getting the sequential aspects of the process backwards.
    This is not a chicken-or-egg conundrum. The process has been easily observed to have a causative sequence, and not one where two things somehow coincidentally cause each other.
    And you must know that one can have a big brain with less usefulness than some very useful small brains have been known to exhibit.
    And you seem to have mistaken the sequential aspects of all causation as necessarily linear. They are not.

    Everything else you’ve written here is just a smokescreen.

  24. petrossa on 05 May 2011 at 1:41 am

    daedalus2u

    It works roughly like this (to quote myself again):
    “There was once a mammal. It needed a lot of little bits of operating systems in order to let all the components of its body function properly. Over time they became so numerous that it needed a system to coordinate the other bits. That system became so complex that it was capable of reprogramming itself in order to assimilate the ever-increasing flow of information. It called itself: consciousness.
    It is objectively impossible to determine whether it exists, since consciousness itself determines the criteria defining consciousness.

    That consciousness, in an attempt to preprogram future acts of the body, starts to tell a tale to itself.
    A continuous flowchart enabling it, by correlating previous events and extrapolating from them, to arrive at a predefined future action.”

    As Jeremiah so correctly observed, it is a gradual process. Furthermore, I’d be very curious to see the data that supports the claim that women die in childbirth as a result of babies’ heads being too big.

    Since the head is flexible, gets elongated during birth, and is not bigger than the shoulders, it seems improbable that your statement could be true.

  25. rlquinn on 05 May 2011 at 5:38 am

    Dr. Novella:

    THANK YOU for discussing psychology. I so rarely get to read it in the context or opinion of a genuine skeptic, aside from the occasional lesson in sense and perception, which is always related to illusion and magic.

    A faculty mentor of mine and I were recently discussing issues in psychology. One of which we tended to agree on is the arbitrariness in much of personality psychology, where labels can change based on which scale is used.

    However, there still appear to be “islands of stability” – or personality profiles that peak above the noise and can be identified and treated as a real entity.

    I like your description and will probably bring it up at a future meeting. It added clarity and legitimacy where I (admittedly) wouldn’t have looked for it before.

  26. Shelley on 05 May 2011 at 8:41 am

    To rlquinn

    Arbitrariness? Labels do not really change depending on what measure is used. Those “islands of stability” are fairly important in psychology, as is something known as ‘concurrent validity.’ If there is a new measure for psychopathology, for example, the researcher needs to demonstrate that it makes sense given what we already know about psychopathology: does it correlate with established measures, and what new does it bring to the table?

    I frequently work with anxiety disorders, and there are a number of measures used in that domain. Some allow a basic general measure of anxiety, which I might use as a basic check if I think anxiety isn’t the most important problem. Others offer more specific information: is this social anxiety or panic attacks? The distinctions are important, and it is important that we recognise both the advantages and limitations of various measures.

    But arbitrary? Not really.

  27. ccbowers on 05 May 2011 at 9:17 am

    “And in any case small brains in some species can outperform large brains in other species. Functional efficiency is the key, not size.”

    It is meaningless to compare across species that are very different from each other. How can we assess the absolute “performance” of brains that are very functionally different? It really makes little sense until we get to species that are very closely related, and at that point we generally don’t see large differences in size. For a given species, however, it is irrelevant, since it has its own physiological/developmental constraints on evolutionary change. In a given scenario, a larger brain may be what is required for additional ‘functions.’

  28. petrossa on 05 May 2011 at 12:26 pm

    [quote]However, there still appear to be “islands of stability” – or personality profiles that peak above the noise and can be identified and treated as a real entity.[/quote]

    Mmmm. Sounds very ethnocentric to me. What’s normal (“islands of stability”) varies enormously across time and culture.

    As such, “islands of stability” cannot be universally true. For example, in some cultures it’s deemed perfectly normal to stone a rape victim to (near) death, while in another that’s considered a heinous act of a psychopathological nature.

  29. Jeremiah on 05 May 2011 at 1:24 pm

    # ccbowers
    >In a given scenario, a larger brain may be what is required for additional ‘functions.’<
    So then does the brain get itself larger in anticipation of the need for additional functions? Or is anticipation itself the product of an evolving function?

  30. rlquinn on 05 May 2011 at 1:53 pm

    Shelley:

    Perhaps it is different in your school/clinic/hospital, and it may be different elsewhere—I’ll admit as a grad student my experience is limited—but anxiety disorders are not taught as part of personality psychology (aside from a mention with neuroticism in the big five). And I am talking about personality psychology, not psychology in general. These would be examined in abnormal, cognitive, or behavioral neuroscience. Anxiety disorders come with highly predictable behaviors, and we have a number of tools to help patients overcome or work with their anxiety, such as cognitive behavior therapy. When anxiety interferes with normal functioning, it moves from being a personality trait to a disorder. That’s not arbitrary.

    “Much” was extreme, and you were right to call me on it. I retract. However, I still take issue with a number of false dichotomies in personality psychology, such as splitting Type A and B personalities, or introverts and extroverts (not all tests include ambiversion). Those “islands of stability” are exactly that. It is unfair to pigeonhole someone who exists in the sea of gray as part of that island. That’s what I mean when I say “arbitrary.”

    Perhaps it wasn’t clear in the last sentence of my first comment, so I’ll restate: I appreciate a different view or description of those subjects of which I am skeptical. It helps me to look at the topic in a new way and understand it. I found Dr. Novella’s illustration helpful in that manner.

  31. petrossa on 05 May 2011 at 2:40 pm

    # Jeremiah
    So then does the brain get itself larger in anticipation of the need for additional functions? Or is anticipation itself the product of an evolving function?

    Neither. It’s a feedback loop.

  32. Jeremiah on 05 May 2011 at 3:40 pm

    petrossa, I know that, but apparently ccbowers doesn’t. I’d add however that it’s an evolving feedback loop. Some argue that it’s self-evolving as a consequence but that gets me in over my head.

  33. Shelley on 05 May 2011 at 4:24 pm

    To rlquinn,

    Agreed. :-)

    I took exception to the “much” and mostly agree with you. Though there are some fairly solid measures around personality, the areas around the edges of the peaks (the grey areas) can be problematic. I’m also not enamored with the Type A/B personality stuff, which is (IMO) more ‘pop’ than psychology. Also, I would say that the introversion/extroversion dichotomy can be useful provided we recognize that it can be either a (temporary) state or a relatively enduring trait, and it does matter which it is.

    Categorization is tricky: we really can only know what people will do generally and we are not terribly good at specifics. Still, knowing generally can be useful at times. When you NEED to predict behavior, it might be the only thing you have, and at least it gives you a hint (profiling is notoriously difficult, but at least somewhat useful).

    On the other hand, personality-style tests have been widely used and abused with no regard for test limitations, no question.

    Steve has done his usual great job of pointing out the difficulties in this area.

  34. daedalus2u on 05 May 2011 at 5:27 pm

    Rather than try to speculate as to the truth value of my statement in probabilistic terms, one can look it up. PubMed has 445 citations under the heading of cephalopelvic disproportion. An open access one

    http://www.ajcn.org/content/72/1/291S.full

    says that cephalopelvic disproportion is responsible for 1-5 deaths per 1000 live births in Bangladesh.

    Feedback loop? That term does not mean what you think it means.

    Most of the increase in brain cell number occurs in utero, before birth, and before the brain has learned how to do anything. A brain can’t “know” that it doesn’t have enough cells to do something before it has tried to do that something and failed. A brain can only try to do something after it has begun learning how to do it. In the case of language, that only occurs some time after birth, long after the number of cells in the brain is pretty much fixed.

    The “islands of stability” are the “strange attractors” discussed in mathematical chaos theory. They are metastable states that can be reached via trajectories that are only differentially separated, due to the chaotic dynamics. This is why the brain regulates itself in a chaotic state near the critical percolation threshold. It takes only differential signaling for the brain to change state and enter one of the states described by the “strange attractor.”

  35. petrossa on 05 May 2011 at 5:43 pm

    # Jeremiah

    Due to the fact that cultural drivers can affect genetics quite quickly (in evolutionary terms), and culture is a result of higher-order processes, there is clear feedback between the ethereal consciousness and real-life basic building blocks.

    As such, one could argue that there is a self-evolving feedback part. IMO one sees this happening with adaptations of the white matter in the brain. The environmental impact of a modern (as in the last 10,000 years) society is slowly causing the suppression of pathways that handle low-level/high-level interaction traffic. There is just too much noise from the anachronistic limbic system (it’s totally incapable of dealing with the demands of society), so it gets tuned out.

    As a result we (will) see more and more occurrences of white-matter discrepancies. Unfortunately, as with all evolution, many must fail for viable ones to take the lead.

    Hence much work for the profession at large. I suggest they start by throwing DSM away and start from scratch.

  36. Jeremiah on 05 May 2011 at 6:09 pm

    # daedalus2u
    > A brain can only try to do something after it has begun learning how to do it. In the case of language, that only occurs some time after birth, long after the number of cells in the brain is pretty much fixed.<

    How wrong or backward can you get here? The brain has to relearn language every time it's replicated, i.e., a new brain is born? The cell structure hasn't changed to accommodate that need since there's no way the function can anticipate a repetition of the way it has come for centuries to be used? Where can I find that at BioMed Central?

  37. daedalus2u on 05 May 2011 at 6:53 pm

    Jeremiah, yes. Each individual learns language for themselves only after their brain has gotten to the right stage to do so. That brain is not relearning anything, it is learning the language for the first time.

    That is why essentially any infant can learn essentially any language as a first language and speak (or sign) it with no accent.

  38. Jeremiah on 05 May 2011 at 7:45 pm

    You didn’t answer the more important parts of the question, which were whether or not the replicated brain’s cell structure has changed to accommodate that language learning need; and if not, was this because, as you’ve indicated, there’s no way the function can anticipate a repetition of the ways it has come to be used in the evolutionary past?

  39. daedalus2u on 05 May 2011 at 8:20 pm

    Jeremiah, I don’t understand your question.

    The brain cells of infants destined to learn English are not different from the brain cells of infants destined to learn Chinese. Any infant can learn either English or Chinese depending on what environment that infant is raised in.

    The brain cells have proliferated, differentiated and are mostly completely in place before the infant learns either English or Chinese or sign language.

    Virtually all brains can only learn a first language once, but can learn multiple first languages during that time frame. If it doesn’t happen in a certain time frame, that brain is out of luck, it mostly can’t go back because the plasticity to do so isn’t there any more.

    The number of brain cells doesn’t change that much (usually it goes down). What changes are the connections between brain cells, not the cells themselves.

  40. Jeremiah on 05 May 2011 at 8:38 pm

    I’m wondering how you could have said earlier we don’t know the correct meaning of feedback loop when you can’t seem to understand a question that involves your version of its meaning.

    But I’ll try to make the question simpler:
    Has what you referred to as the “fixed” number of brain cells evolved over time to accommodate the need to learn what Chomsky and others would refer to as an evolved capacity for language?

  41. Jeremiah on 05 May 2011 at 8:44 pm

    And in addition, or in the alternative, has there been an evolutionary change in the connections that you refer to which was in some way due to the need to learn more and more complicated linguistic patterns?
    And did any of this accompany the happenstance that human brains grew larger?
    And if so, was that all purely coincidental?

  42. daedalus2u on 05 May 2011 at 11:00 pm

    Yes, there has been change in the number of nerve cells in the brain over evolutionary time, and the connections that those cells make to each other have changed over evolutionary time.

    The number of nerve cells in a particular individual’s brain is not a consequence of “feedback” over how many nerve cells that particular individual needs in order to instantiate the learning of a language.

    In other words a brain does not try to instantiate language, find it has insufficient neurons to do so, then send a feedback signal that triggers proliferation and differentiation of more nerve cells until the brain does have enough nerve cells to instantiate language at which time the brain sends a feedback signal to cease the proliferation and differentiation of nerve cells because now there are enough.

    The proliferation of brain cells happens before those brain cells are (mostly) “doing anything” except proliferating. They can’t be doing things like language because they haven’t learned language yet. Maybe they are arranging themselves on a more primitive level in order to instantiate the next steps of learning a language, but that is quite speculative.

    If you think there is feedback between the number of nerve cells and brain function, could you tell us how it works? There might be a (relatively) very tiny number of neural stem cells, but those are (usually) pretty minor.

  43. Jeremiah on 05 May 2011 at 11:56 pm

    “Maybe they are arranging themselves on a more primitive level in order to instantiate the next steps of learning a language, but that is quite speculative.”

    Not according to Chomsky. But at least you can see the necessity to be less certain of your proclamations in that respect.

    As to feedback in the numbers, it’s never been just about the numbers, it’s about the evolved functional capacities of the neurons.

  44. petrossa on 06 May 2011 at 1:57 am

    # Jeremiah, # daedalus2u

    Because complex language has been seen across species, it follows that language is a basic preinstalled module.

    All that gets learned is the specifics, not the basics. As such, you are both right and wrong at the same time, for different reasons.

    It’s like the ballistics module. You are born with it; practice makes it perfect, and you can predict the trajectory to throw the stone to the spot where the animal you want to kill will be.

    But throwing the stone in a certain desired direction needs no practice. That comes preset.

    Brain plasticity can lead intensively used modules to grow both in size and in neural interconnection.

  45. Jeremiah on 06 May 2011 at 2:24 am

    The so-called module for the shared aspects of a communication function still needed to evolve in the human brain to let language become expressed in symbolic writing. And it’s the development of the written symbolism that accelerated the evolution of the neocortex. There was nothing preset about that development.

  46. petrossa on 06 May 2011 at 1:56 pm

    I fail to see the difference between the appendage/eye coordination that lets animals build complex structures and humans using the same mechanism for expressing other forms of output in art, writing, what have you.

    Energy being at a premium, evolution refines and reuses; it doesn’t just start from scratch. “Hey, let’s build a completely new writing module” doesn’t make sense.

  47. Jeremiah on 06 May 2011 at 2:59 pm

    Nevertheless, we learned to do it, and other animals so far haven’t.
    So there’s a difference in there somewhere.
    http://webspace.ringling.edu/~dhiggins/type/Printable_Files/lect1.pdf

  48. petrossa on 07 May 2011 at 5:56 am

    # Jeremiah

    Sure, we are Homo sapiens. And then what? A dolphin isn’t likely to use sign language, so it doesn’t need a system to transfer it to hard copy.

    A species develops what suits it best, evidently.

    Apparently our species needs written language. We see that as a superior gift, since we can pass on knowledge across time.

    But it’s only superior to us, making it a circular argument.

    The baseline is: how long does your species survive? At this point in time our ‘superiority’ only serves to augment the ancient programming of the limbic system, leading to total chaos, mayhem, mass bloodshed, unnatural procreation, and gene-pool pollution.

  49. Jeremiah on 07 May 2011 at 12:55 pm

    “Then I commended mirth, because a man hath no better thing under the sun, than to eat, and to drink, and to be merry: for that shall abide with him of his labour the days of his life.”
