Aug 23 2010

Kurzweil vs Myers on Brain Complexity

There is an interesting blog debate going on between PZ Myers and Ray Kurzweil about the complexity of the brain – a topic that I too blog about and so I thought I would offer my thoughts. The “debate” started with a talk by Kurzweil at the Singularity Summit, a press summary of which prompted this response from PZ Myers. Kurzweil then responded here, and Myers responded to his response here.


You can read the exchange for all the details. I want to focus on just a couple of points – predicting our efforts to reverse engineer the brain, and the question of how complex the brain is.  Kurzweil has predicted in the past that we will reverse engineer the brain – model its function in a computer, basically – by 2030. It was reported that in his talk he said 2020, but Kurzweil has clarified that this is not correct, he said 2030, sticking to his earlier predictions.

That’s a minor (but interesting) point, and Myers points out that it was not the focus of his original criticism. I agree with Kurzweil on some basic principles. First, we do have an active research program that is using computer modeling to reverse engineer the brain. These efforts are progressing nicely, and I do think that eventually they will succeed. I also agree that some technologies progress at an exponential rate, and they surprise those who were making predictions based upon a linear progression. Kurzweil gives an excellent example of this – the genome project. This project started out very slow, and many thought it was lagging behind predictions, but as technology improved the effort to decode the human genome accelerated geometrically and actually finished years ahead of schedule. Now we can decode the genome of other species in a fraction of the time, and the pace continues to accelerate.

So Kurzweil has a legitimate point here – information-based technologies are accelerating, and if you account for that acceleration you get a better handle on predicting their future course. I do think, however, that Kurzweil is cherry-picking a bit, for some information-based technologies have fallen short of prediction, such as speech recognition (an area of his particular expertise). Speech recognition works, but the technology has seen diminishing, rather than accelerating, returns in terms of accuracy, and this has delayed its adoption – which falls far short of what Kurzweil predicted in the past.

I think the example of speech recognition represents a factor that Kurzweil seems to underappreciate. While our information tools may get better at an accelerating rate, some problems become exponentially more difficult as you try to eke out incremental gains. In other words, it seems that for some technologies (to use illustrative figures) each 1% improvement is 10 times more difficult than the previous incremental improvement. This offsets our exponential progress. The complexity of the genome project was linear – decoding the last 10% was as difficult as the first 10% – so it was the perfect example for Kurzweil. But other problems, like understanding how the brain works, are not linear in complexity. As our knowledge of the brain deepens, we are uncovering greater and greater levels of complexity.
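To make this offsetting effect concrete, here is a toy calculation of my own (purely illustrative, not a model of any real technology): suppose our tools get ten times more powerful each generation, but each successive 1% improvement costs ten times more effort than the last.

```python
# Toy illustration: exponentially improving tools chasing a problem whose
# difficulty also grows exponentially with each increment of progress.

def increments_solved(tool_power: float) -> int:
    """Count how many 1% increments are affordable when the n-th
    increment costs 10**n units of effort and we have tool_power units."""
    n = 0
    while 10 ** n <= tool_power:
        n += 1
    return n

# Tools improve tenfold each generation...
for generation in range(1, 6):
    print(generation, increments_solved(10 ** generation))
# ...yet each tenfold jump in capability buys only one more increment,
# so measured progress looks linear, not exponential.
```

The numbers are made up, but the shape of the result is the point: when problem difficulty compounds as fast as tool capability, exponentially improving technology yields merely steady progress.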

Further, while I think Kurzweil’s characterization of technological progress is generally correct when you consider the broad brushstrokes of advancement, it is very difficult to apply them to any individual technology. There are hurdles, roadblocks, and breakthroughs with any individual technology or scientific problem that are impossible to predict.

On the point of predicting the future I am somewhere between Myers and Kurzweil. Kurzweil has some legitimate points to make, but I think he overapplies them and cherry-picks favorable examples. Myers also has some legitimate criticisms – Kurzweil does not quantify some problems (like how much of the brain we currently understand), and does not account for the fact that we do not know how much we do not know. There may be hidden layers of complexity of brain function we haven’t tapped into yet. But I think that Myers overall is a bit harsh on Kurzweil and does not give partial credit where it is due.

Will we reverse engineer the brain by 2030? I guess we will have to wait and see. Kurzweil gives himself a bit of an out by saying that we will reverse engineer the “basic functions” of the brain – this is vague enough that you can declare victory at any point along the way. You might argue we understand the brain’s basic functions now. I think we will succeed eventually, even to the point of being able to make an artificial brain, but I would not hazard a guess as to when.

Brain Complexity

The more interesting point of contention, and a real teaching point, is the question of how much we can infer about the complexity of the brain by looking at the genome. A separate question is whether or not you can reverse engineer the brain by examining the genome. Here both Myers and Kurzweil agree – you cannot. But Kurzweil says he never made that claim – it was misreported or misinterpreted. So we can put that aside – no one is arguing that the design of the brain is in the genome. You have to examine the brain to reverse engineer the brain.

But Kurzweil is still claiming that we can infer something about how much complexity is in the brain from the genome. He writes:

The amount of information in the genome (after lossless compression, which is feasible because of the massive redundancy in the genome) is about 50 million bytes (down from 800 million bytes in the uncompressed genome). It is true that the information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.

This is profoundly problematic, and reflects the fact that Kurzweil truly does not understand the process by which the brain develops. From a developmental point of view – there is no such thing as the brain prior to its interaction with the environment. First – is Kurzweil talking about a newborn infant’s brain? Does he understand the significant differences between that brain and a fully developed adult brain?

I think, to be generous, Kurzweil is trying to differentiate the design of the brain from the information contained within it (our memories, etc.). This could be analogous to a computer vs the software, or reverse engineering a generic human brain vs duplicating PZ Myers’ brain.

But that was never the point at all – the point Myers was making (which I also discussed this week on the SGU) is that the design of the brain is dependent upon interaction with the environment. Myers focused on brain proteins interacting with each other in a complex way, while I focused on the neurological functions of the brain. The genome provides a set of processes by which brain design unfolds – but that program is dependent upon input from the brain’s environment, which includes the body of which it is part. The basic systems within the brain develop and organize themselves in response to sensory input or use. Our visual cortex requires visual stimulation, binocular vision requires seeing with both eyes, our motor system requires use against gravity, our language cortex requires exposure to language, etc.

Brain design, then, is a combination of genetic rules laying out neurons and connections in a pattern that is dependent upon feedback from some kind of input – and that feedback adds complexity and information to the brain. So again – what is Kurzweil talking about when he refers to a brain prior to interaction with the environment? He seems not to understand the process of brain development, and therefore he overestimates the degree to which information in the genome constrains information in the brain – or he underestimates the increase in information that derives from this interactive development process. Therefore his basic premise – that the brain is not so complex because the genome does not contain that much information – is flawed and invalid (which was Myers’ original criticism).
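For what it’s worth, the raw numbers in Kurzweil’s quote are easy to reproduce – the dispute is over what they imply, not over the arithmetic. A quick sketch (the ~3.2 billion base-pair figure is the standard estimate for the human genome; the 50-million-byte compressed figure is Kurzweil’s own):

```python
# Reproducing the arithmetic behind Kurzweil's genome figures.

base_pairs = 3.2e9     # approximate length of the human genome
bits_per_base = 2      # four possible bases -> log2(4) = 2 bits

uncompressed_bytes = base_pairs * bits_per_base / 8
print(f"uncompressed: ~{uncompressed_bytes / 1e6:.0f} million bytes")
# roughly the 800 million bytes Kurzweil cites

compressed_bytes = 50e6   # Kurzweil's post-compression estimate
print(f"implied compression ratio: ~{uncompressed_bytes / compressed_bytes:.0f}x")
```

None of that is in question; the flaw lies in treating those 50 million bytes as a bound on the information in a developed brain.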

Kurzweil adds another line of reasoning to his argument, writing:

For example, the cerebellum (which has been modeled, simulated and tested) — the region responsible for part of our skill formation, like catching a fly ball — contains a module of four types of neurons. That module is repeated about ten billion times. The cortex, a region that only mammals have and that is responsible for our ability to think symbolically and in hierarchies of ideas, also has massive redundancy. It has a basic pattern-recognition module that is considerably more complex than the repeated module in the cerebellum, but that cortex module is repeated about a billion times. There is also information in the interconnections, but there is massive redundancy in the connection pattern as well.

Here again, Kurzweil is grossly underestimating the complexity of the brain based upon some faulty assumptions. I agree with his point that there are modules or patterns in the brain that are repeated billions of times. But they are not simply repeated. You cannot describe this aspect of brain design by describing one module and then saying, “repeat one billion times.” With each repetition there is a novel and meaningful pattern of interconnectedness to other brain regions and to the body. Kurzweil seems to recognize this when he says: “There is also information in the interconnections, but there is massive redundancy in the connection pattern as well.” But he seems to be brushing it off too easily. We cannot assume that the pattern of interconnectedness is simply a redundant pattern.
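To see why the interconnections matter, consider a deliberately naive back-of-envelope bound of my own, using commonly cited round numbers (~10^11 neurons, ~10^4 synapses per neuron): if the connection pattern were not redundant at all, writing it down explicitly would dwarf the genome by many orders of magnitude.

```python
import math

# Naive upper bound on the information needed to specify the brain's
# wiring explicitly, with no redundancy assumed. The neuron and synapse
# counts are common round-number estimates, not measured values.

neurons = 1e11              # ~100 billion neurons
synapses_per_neuron = 1e4   # ~10,000 synapses each
genome_bytes = 50e6         # Kurzweil's compressed-genome figure

bits_per_synapse = math.log2(neurons)   # naming one target neuron
wiring_bytes = neurons * synapses_per_neuron * bits_per_synapse / 8

print(f"~{wiring_bytes:.1e} bytes to list every connection")
print(f"~{wiring_bytes / genome_bytes:.0e} times the compressed genome")
```

The true figure lies somewhere between this bound and Kurzweil’s 50 million bytes, depending entirely on how redundant the connection pattern really is – which is precisely the open question.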

We also have to consider that the added levels of complexity from the pattern of interconnectedness likely vary from brain region to brain region. Kurzweil might have a point if you are talking only about the primary visual cortex, for example – where there is a literal grid of neurons that correspond to the visual fields. Here the patterns are somewhat simple and repeated, and it is therefore not surprising that our efforts to reverse engineer these brain regions have progressed the most. But this is the lowest-hanging fruit, and should not be considered representative of other brain regions and functions.

If we move to brain regions that subsume our most complex abstract thought and planning, there is no simple somatotopic pattern of neurons whose function we can easily infer. We have no idea, for example, how a pattern of neuronal connections equals a specific word, and connects to our knowledge of how to say the word, how to spell it, what the word means in all its complexity, memories of the word’s use, and its relation to other words and parts of words. But most importantly – we really don’t know yet how complex this problem even is, and so predicting how long it will take to solve the problem strikes me as utter folly.


I find the entire discussion between Myers and Kurzweil to be a fascinating topic, and an opportunity to explore various aspects of neurology in the context of a specific and interesting application – reverse engineering the brain. This amounts to an elaborate thought experiment, but those are a fun way to challenge our understanding of a topic.

Ultimately I come down closer to Myers’ position – Kurzweil does not seem to understand the brain or brain development, at least in certain key aspects, and this dooms his arguments to failure. He would do well to take the criticisms coming his way seriously, and also to check his ideas with some actual neuroscientists. Myers, on the other hand, came off too harsh, but that seems to be his style. Kurzweil is a mix of provocative ideas, genuine insights, and some serious flaws that border on crankery. This makes him an intriguing character whom I would not casually dismiss, but I would also take everything he says with a skeptical grain of salt.


26 Responses to “Kurzweil vs Myers on Brain Complexity”

  1. Watcher on 23 Aug 2010 at 10:21 am

    I didn’t realize that this was such a hot topic. I’ve always viewed it as a thought-provoking aside rather than something to be discussed seriously. I’m surprised (not really) that PZ took such an extreme stance in response to this.

    I don’t really have a problem with the timeline per se, I’ll leave that to someone with a better understanding of current and future technology expectations. I do take exception to the amount of information it would take to model the brain though. I mean, even if we (simplistically) viewed each neuron as either being in an on or off state, any simulation would probably take more than a million lines of code. Is he imagining something like a massive if-then statement tree where the individual units of processing aren’t neurons, but modules? Even that probably wouldn’t satisfy the million lines of code requirement in just the cerebellum alone. Purkinje cells synapse with thousands of other cells. If modules in that area follow suit, we’re talking potentially billions of different pathways through the modules depending on how they’re connected.

  2. Ira on 23 Aug 2010 at 10:37 am

    Watcher, I think what RK is hoping to find is that most cells and superstructures in the brain behave so similarly that we’ll have a pretty short code (a million lines) and a huge database of states and connections on which that code operates. It’s not impossible, but pretty hopeful to think about it that way.

    My own knowledge of the subject is modest – only what I could learn from Markram’s talks on the web about the Blue Brain project, and a course by Prof. Segev of the Hebrew University, “From Synapses to Free Will” (highly recommended, but it’s in Hebrew). From the little I understand, 1mil lines won’t cut it, but the people of the Blue Brain project think 10 years for a whole-brain simulation is a reasonable guess. Keep in mind we are talking about simulation of the brain’s physics and electricity, not necessarily precise function, intelligence, learning and interaction, etc.

  3. B Hitt on 23 Aug 2010 at 10:49 am

    I’m so glad you covered this, Dr. Novella. It really is a fascinating debate. I certainly come down more on PZ’s side and for me, the thing I just can’t let go of is the idea that we can know anything about the brain’s complexity by looking at the genome. The genome is meaningless without the context.

    I think a good analogy would be that the genome is like software which needs an operating system to make any sense. In this case the operating system would be the protein biochemistry and cell biology that determines how cells, tissues and organs develop. Needless to say, the information content of such an operating system is nearly impossible to estimate, but I think it’s safe to say that it would dwarf the information content of the genome. It’s kind of an insult to molecular, cell, and developmental biology for Kurzweil to ignore all that complexity that we only understand a tiny fraction of after billions of man-hours of research by very smart people.

  4. stompsfrogs on 23 Aug 2010 at 11:02 am

    I think PZ’s cranky because futurist guys like Kurzweil write these cheques (We’ll be able to reverse engineer the brain by 2030!) that the scientific community has to put money in the bank for, or else the public gets pissy when the cheque bounces. In other words, he makes science look bad when it fails to live up to his predictions.

    People still talk about some crank or another who told them in the 70’s that they’d have flying cars by now. Aspiring science popularizers who go too far succeed at making a name for themselves and selling books and getting shows on the Discovery channel, and “science” as a whole gets blamed for their wildly inaccurate predictions.

  5. daedalus2u on 23 Aug 2010 at 12:25 pm

    I think stomps has it. In other words what Kurzweil is pushing is “vaporware”.

    If I may make a prediction, the first “uploading” of brains into electronic hardware won’t be done by doctors and scientists, it will be done by sCAM artists. Just like the first “stem cell” treatments are not being done by doctors and scientists but by scammers.

    It will be first done somewhere where the laws allow it. It will be much more expensive than stem cell treatments. The clinics will be run by brave maverick doctors and scientists. They will have a cult following just like the stem cell clinics have.

  6. cwfong on 23 Aug 2010 at 3:33 pm

    “From a developmental point of view – there is no such thing as the brain prior to its interaction with the environment.”
    The brain evolved its instinctive algorithmic “patterns” from its history of interactions with the environment. It doesn’t go into its newborn environment blindly.
    This is what Kurzweil either doesn’t understand, or more likely has a complete misunderstanding of – the multifaceted strategies involved, and the ability of any computer to replicate them without some simulation of their motivating purposes among its capabilities.

  7. petrucio on 23 Aug 2010 at 6:05 pm

    I think there was some misunderstanding about what Kurzweil meant in regards to genome and brain complexity. What he means is that nature has found ways to create very complex processing units with much simpler rules.

    While we do not have access to the ‘operating system’ (nice analogy, Hitt) of the genome, and we can’t look into it to reverse engineer the brain, it’s conceivable that we could also find ways to create uber-complex structures from not-so-complex rules.

    I also think it doesn’t make much sense to think of lines of code simulating neurons here. Perhaps each neuron could be an electronic component – a ‘neuristor’ (a transistor with several outputs) – and the simulation would happen in the hardware itself, rather than software. Sure, it’s conjecture, but it shows that amount of information and lines of code are not necessarily roadblocks – there may not be a single line of code after all if you think outside the box.

    I do not necessarily agree with Kurzweil here – I just think he has a very good point when he says people can’t really see far ahead because we have problems thinking geometrically and outside the box. Thinking about lines of code for this problem is like predicting, decades ago, before transistors kicked in, that computers would be “cheap as a car and light as a piano by 2000”.

    Sure, if we get stuck with transistor and lines-of-code type technology, brain emulation probably won’t happen in this timeline. There is some amount of luck and unpredictability involved.

  8. Al Morrison on 23 Aug 2010 at 7:21 pm

    I am not sure how the following relates to the conversation; however, here it goes.

    At the very earliest stages of human development we see an undifferentiated cell mass, the morula, that has to eventually divide and differentiate into a blastocyst, which has to divide and differentiate into our different body systems, eventually forming an integrated, yet highly differentiated complete human fetus.

    When reading PZ, and Steve, I thought this is where at least part of the conversation was heading: The genome is the same in every totipotent cell of the morula. The “environment” here – the spatial arrangement and adjacent cells – affects the differentiation of each of the morula cells. So the initial development of every system, including the nervous system, occurs in utero, in the embryonic environment, in conjunction with other systems. Brain development never occurs in isolation; it is always in an environment.

    So, the question this raises (at least for me) is, “Is it not necessary to understand the mechanism by which all differentiation and concomitant and subsequent development occurs from a single, homogeneous genome to understand how the brain can begin to form?” If so, then Kurzweil has even more work ahead of him than he may think.

    Very interested in your feedback.

    Thanks. A very interesting controversy.

  9. zencat on 23 Aug 2010 at 8:14 pm

    Be certain to check into the ‘brain-based’ devices being developed at the Neurosciences Research Institute under the direction of Dr. Gerald Edelman. There are a series of machines codenamed Darwin, which are at the frontier of technologies based on human brain structures to accomplish complex tasks without standard algorithmic programming procedures.

  10. B Hitt on 24 Aug 2010 at 12:11 pm

    @Al Morrison: You’re exactly right that there are important and poorly understood developmental mechanisms that are more a function of how the genome is interpreted than a property of the genome itself. This is part of what I meant by the unknown information content of the “context” (biochemistry, cell biology) of brain development rather than the instructions (genome), and is why it is wrong to claim that the design of the brain is contained in the genome.

    It’s important to reiterate that Kurzweil doesn’t aim to use the genome to reverse engineer the brain. He was merely making a point about the feasibility of modeling the brain, saying that it may entail less information than it seems at first due to redundancy in the structure of biological systems.

  11. iqguy001 on 24 Aug 2010 at 2:16 pm

    I don’t know neurology, but I am familiar with a common pitfall that critics sometime fall into when analyzing the feasibility of a task. A critic can make a task appear more complex than it really is by finding irrelevant complexities. So I wonder, are there elements of complexity in the brain that PZ has identified that are not necessarily relevant to the task of modeling the brain?

    If we had a discussion about how complex a cake is and how difficult it would be to reverse engineer it, no doubt a highly detailed analysis of the cake might make it appear a very complex task, and yet we know we don’t need to understand much of the complexity of a cake in order to make a cake. If you know the ingredients and have most of the instructions, it’s relatively easy. Can this same principle be applied to recreating the brain?

  12. CivilUnrest on 24 Aug 2010 at 5:46 pm


    The same principle (which I like to call the “black box approach”) CAN be applied to recreating A brain. We should be able to, given the right conditions, grow brains. This would be very difficult, but not totally out of the realm of possibility. Hell, if stem cell research gets unfrozen, it could just be a matter of laying down neuronal stem cells into an appropriate matrix with a 3D organ printer. We would still not really know how it worked, though.

    The problem is that Kurzweil wants to reverse engineer THE brain, not just create a brain. To have a computer mimic not just the abilities of a brain, but the actual processing that goes on inside of one is a much different task.

    A better (but still imperfect) analogy than making a cake would be making a computer. Given the components of a computer, it’s pretty easy to put one together. To reverse engineer a computer, however, requires the knowledge and ability to understand and construct circuit boards, magnetic platter, etc from scratch. To make Kurzweil’s task more difficult, he wants to reverse engineer an organic computer and then run it on non-biological hardware.

    I’m not saying we’ll have to understand the brain perfectly to, as Kurzweil says, reverse engineer its basic functions. But we sure as hell will need a lot more than just a recipe of what it’s made of.

  13. iqguy001 on 24 Aug 2010 at 7:20 pm

    Thanks for the explanation CivilUnrest.

  14. TheRedQueen on 25 Aug 2010 at 1:21 am

    Since the Singularity has not happened yet, I am damn glad that stents are not all being mis-diverted into correcting CCSVI, and that instead 5 stents were inserted into PZ Myers’ cardiac region today.

    He is already posting again this evening.

    A worthwhile insertion of stents indeed.

  15. johnmatthewson on 25 Aug 2010 at 6:51 am

    The most remarkable aspect of this debate is that computer simulations are currently hardly used at all in medicine although they are being used in some neurophysiological investigations and have been for the past 30 years. This suggests that even if we did reverse engineer a brain there is yet another huge hurdle: the linking of this “brain program” to the real world of the patient and doctor.

    This leads me to wonder about how useful an “information nexus” will be for creating real world applications.

    As an example, we know a lot about magnetic fields and materials and have known a great deal about these for a long time but we still do not have a thermonuclear power plant. The development of such a plant seems to need creativity as well as brute knowledge. I am sure the physicists at JET ran millions of computations on supercomputers to model all sorts of plasmas but they only just exceeded break even on power output and the crucial steps seem to have been creative as well as logical.

  16. daedalus2u on 25 Aug 2010 at 11:24 am

    JM, the history of controlled fusion is an interesting example of technology prediction too. Like AI, it has been 20 years away for more than half a century.

    You are right, the behavior of electrons and ions in magnetic fields is known with exquisite precision. What is being attempted is pretty simple: just confining a hot plasma for as long as possible. The problem with controlling plasmas with magnetic fields derives from there being “many” parameters and the coupling between them being non-linear. That leads to inherently chaotic behavior that is not predictable beyond a certain time.

    In systems of coupled non-linear parameters, “many” parameters means more than 3 or so. In biological systems there are at least thousands or tens of thousands.

  17. CivilUnrest on 25 Aug 2010 at 3:49 pm

    As an addendum to daedalus2u’s comments,

    I recently heard a fusion scientist from ITER describing the difficulty of maintaining and confining a thermonuclear reaction.

    He said the task was akin to attempting to inflate a balloon, except instead of being made of a single piece of material, the balloon is made up of thousands of tiny rubber bands.

    Personally, I think the chaps at the National Ignition Facility in Livermore will produce usable fusion power years before ITER generates their first fusion event. Of course, I could be biased because I’m in NoCal and I met the guy who runs NIF (he gives a really good presentation).

  18. Norwegian Shooter on 26 Aug 2010 at 12:35 pm

    Your post is fascinating, but on a meta atheist / skeptic topic. Yes, I mean the atheist-punching / don’t-be-a-d*ck theme. You say “Ultimately I come down closer to Myers’ position,” but you never once disagree with any of Myers’ points. All you say is:

    “But I think that Myers overall is a bit harsh on Kurzweil and does not give partial credit where it is due.” and “Myers, on the other hand, came off too harsh, but that seems to be his style.” The only problem you have with Myers is his tone. That puts you in very popular company and prompts me to try to get at why atheists / skeptics have this common reaction. Is it sub-conscious? Did you realize you didn’t disagree with Myers even once in this post? Why do you want to distance yourself from his tone?

  19. CivilUnrest on 26 Aug 2010 at 4:10 pm

    Norwegian Shooter,

    Dr. Novella, in previous posts about what tone is appropriate to take in skeptical debates, has explained why he prefers to use a more moderate tone than the Gorski/PZ Myers smackdown-style.

    With this particular argument, I understood Dr. N’s position as “PZ is correct, but his characterization of Kurzweil as an ignorant fool goes a little far.”

    Kurzweil is wrong, but it’s not because he’s a complete moron — it’s because he optimistically applies exponential trends in scientific progress too generously, ignoring the complexity that progress often uncovers.

  20. petrucio on 26 Aug 2010 at 4:53 pm

    Something I received today, totally relevant to this discussion, especially to the point I was making:

  21. daedalus2u on 28 Aug 2010 at 11:10 am

    I had a realization last night of another reason why Kurzweil’s assertion that intelligent machines can be built up as self-organizing structures is flawed. The problem is that the “intelligence” of the end product is only a result of the organizing structure that constructed it.

    A good example is the visual system in humans. It is self organizing, it takes light from cells in the retina, produces signals, transmits those signals to the visual cortex, does signal processing on those signals to extract meaningful information from them. It is all self-organizing and doesn’t require conscious control to direct its construction.

    But there is a problem. The visual system is susceptible to optical illusions. Part of what makes the visual system so useful (that it is self-organizing, self-repairing, sets its own gain, fills in gaps due to equipment failure) also makes it susceptible to optical illusions – rare quirky visual effects that exploit flaws in the self-organizing structures. You know they are not real, but your visual system doesn’t have the capacity to process the visual data any other way, so you get optical illusions.

    This has to be an inherent property of any self-organizing information system. It will be unable to process information at an “intelligence” beyond that of its organizing system. At the level of the organizing system, everything the visual system is doing is exactly right. The visual system can’t perceive that there are optical illusions because it doesn’t have the capacity to detect them. If you added another level of organization to the visual system so it could detect the first order optical illusions, then it would still be susceptible to higher order optical illusions. That also makes the organization of the visual system very much more complicated and would very likely undo some of the valuable features it has (being self-repairing via local interactions).

    This does apply to human intelligence too. The genome doesn’t specify a brain that instantiates “intelligence”. The brain has to be conditioned to think intelligently. The brain has to modify itself to remove those parts that are thinking stupidly (i.e. that perceive cognitive illusions analogous to the optical illusions of the visual system) and expand those parts that are thinking intelligently. That is what an education does, and decades of thinking skeptically.

  22. trrll on 28 Aug 2010 at 12:22 pm

    The information in the genome is sufficient to specify the brain, given the boundary conditions, which include the organization of the fertilized ovum and the molecular interactions of the developmental process. The former is probably not that difficult to work out to the level of precision required–the development from the ovum has to be robust in the face of brownian perturbation, so the exact location of most individual molecules is unlikely to be critical. On the other hand, understanding the developmental process may well turn out to be less tractable than understanding the function of the brain. It would require, for example, determining the 3-dimensional structure of every biological molecule involved, as well as modeling their molecular interactions, and simulating a very large number of molecules at probably a sub-millisecond resolution over a period of (at a minimum) several months. It may well be easier to reverse engineer the brain based upon functional and structural investigations. The genome is a bit like an encrypted message–we know the maximum number of bits of information that the decrypted message can contain, but that does not necessarily get us very far without the key and the encryption algorithm.

  23. trrll on 28 Aug 2010 at 12:31 pm

    @daedelus2u “This has to be an inherent property of any self-organizing information system. It will be unable to process information at an “intelligence” beyond that of its organizing system. ”

    I’m not sure what this means. Ultimately, the organizing system for all living organisms is evolution. This does not have “intelligence” in the way that we normally think of it, although it is a powerful algorithm for generating complexity and solving problems.

    I don’t think that visual illusions are informative with respect to the difficulty of the task. Evolution optimizes to the point that the results are “good enough,” given the most frequently encountered circumstances and the costs and constraints, so it tends to take shortcuts rather than seek the perfect solution to a problem. In particular, organisms need to make rapid decisions based upon a model of the 3-dimensional world around them that is likely underspecified by the visual information available at that moment, so an algorithm that usually lets you rapidly judge the distance of prey or predator, but in rare circumstances causes you to misjudge the size of objects, will be favored by natural selection.

  24. daedalus2u on 28 Aug 2010 at 5:12 pm

    Trrll, no, the genome specifies how individual cells behave. The collective behavior of many cells then determines the emergent properties of larger assemblies of cells. The cells interact, and through that interaction change what they do, but the action of the genome is on individual cells, not on collections of cells. The genome codes for cells that assemble together to form a self-organized system. Each cell has all the instructions (the genome); the specifics of what it does depend on its interactions with the cells it is touching (or on diffusible signals from more distant cells).

    The wiring of the visual cortex is not specified in the genome. There are vastly too many cells for their interconnections to be specified by the genome. The wiring “rules” that the cells comprising the visual sensory system follow are specified by the genome, and each cell along the various pathways executes those rules “blindly”, as arbitrary “rules” of DNA transcription and protein synthesis. Wiring of the visual sensory system requires sensory input. Animals held in darkness never develop the appropriate wiring and remain unable to acquire meaningful sensory data.
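    [The idea of a small local rule producing organized wiring only in the presence of input can be sketched with a toy model. The Hebbian-style update below is an illustration of the general principle, not a claim about the actual biology:]

```python
# Toy sketch: identical local "rules" (here, a Hebbian-style update
# applied blindly by each connection) produce organized wiring only
# when sensory input arrives. The rule is tiny; the wiring it builds
# is not, and without input no wiring forms at all.

def develop(weights, input_patterns, rate=0.1):
    """Strengthen each connection whenever its input is active --
    a purely local rule, with no global blueprint."""
    for pattern in input_patterns:
        for i, active in enumerate(pattern):
            if active:
                weights[i] += rate * (1 - weights[i])  # saturating growth
    return weights

# Correlated "visual" input: channels 0 and 1 always fire together.
patterned_input = [[1, 1, 0, 0] for _ in range(50)]

sighted = develop([0.0] * 4, patterned_input)  # normal development
dark    = develop([0.0] * 4, [])               # "raised in darkness"

print("with input:", [round(w, 2) for w in sighted])
print("no input:  ", dark)
```

    With input, the active channels saturate toward 1 while the silent ones stay at 0; with no input, nothing develops. The same few lines of “genome” yield very different wiring depending on experience.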

    The wiring “rules” are good, but they are not perfect, and the result is a visual sensory system that is susceptible to optical illusions. The optical illusion of pareidolia is a good example. Humans tend to see faces in random and non-face situations because for humans faces are important, and humans have very strong pattern-detection neuroanatomy that is exquisitely sensitive to detecting faces; so sensitive that it will detect faces when they are not there. If someone were never exposed to any human faces, their face-detection pattern-recognition neuroanatomy would not develop as well. (There might still be some innate face detection; we just don’t know, and can’t do the experiment because it would constitute horrific abuse. People with cataracts early in life don’t develop the kinds of pattern recognition that people without cataracts during those formative years do.)

    Without exposure to human faces, the visual cortex can’t learn what faces look like and so can’t “tune” itself to find patterns that are similar in the midst of noisy signals. The same has to be true of “intelligent” ideas. The genome can’t specify neuroanatomy that can recognize “intelligent” ideas because there isn’t enough data in the genome.

    Similarly, language is not specified genetically. Essentially any infant can learn essentially any human language as a first language. Every human language does have specific common properties, showing that the neuroanatomy that generates the ability to understand a language has certain properties, but language is either learned from adults or synthesized de novo (as Creoles are synthesized).

  25. daedalus2u on 28 Aug 2010 at 10:15 pm

    If you think about the bonding of newly hatched ducklings to the first thing they see, that has to be the way evolution configured their brains to work. The amount of information needed to code a hard-wired mother-duck detector into a duckling’s brain would be gigantic, and it couldn’t be as high-fidelity as an adaptive neural network can be.

    It is much easier to generate a neural network that is primed to form an attachment to the first thing it sees. That is also how the mother bonds: she bonds to the first infant she sees after giving birth, or after her eggs hatch. That renders some birds vulnerable to parasitic eggs via a “bonding illusion”, a defect in the bonding and attachment system.

    But the fidelity of that bonding can be exquisitely detailed, such that a mother bird can pick out her chick from among hundreds or even thousands of others. A genetically coded system could never do that.
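    [The imprinting argument reduces to a strikingly small rule, which can be sketched as a toy model. The class and feature tuples below are invented for illustration; they are not a claim about duckling neurobiology:]

```python
# Toy sketch of imprinting: a one-line rule ("attach to the first
# thing you see") plus learning gives both high-fidelity recognition
# and the "bonding illusion" vulnerability described above. The
# feature tuples are made-up placeholders.

class Duckling:
    def __init__(self):
        self.mother = None  # no innate mother template in the "genome"

    def see(self, features):
        if self.mother is None:
            self.mother = features  # imprint on the very first sighting
        # Afterwards, recognition is just comparison to the learned template.
        return features == self.mother

duck = Duckling()
print(duck.see(("brown", "quacks")))  # first sighting: imprints -> True
print(duck.see(("white", "honks")))   # a different bird -> False
print(duck.see(("brown", "quacks")))  # the imprinted "mother" again -> True
```

    Note that the rule is cheap but exploitable: whatever is seen first gets imprinted, whether or not it is actually the mother, which is exactly the opening that brood parasites use.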

  26. Norwegian Shooter on 30 Aug 2010 at 11:45 am

    quote: With this particular argument, I understood Dr. N’s position as “PZ is correct, but his characterization of Kurzweil as an ignorant fool goes a little far.”

    But you are reading that in. Dr. N doesn’t say PZ is correct; he says he is closer to PZ’s position, which is the most wishy-washy way possible to say that he agrees with PZ. Why does he state it that way? I think he does want readers to read into the post what you have; my question is why.
