Aug 08 2014

IBM’s Brain on a Chip

38 Responses to “IBM’s Brain on a Chip”

  1. Bronze Dog on 08 Aug 2014 at 8:47 am

    The massive parallel processing has the potential for being faster, more efficient, adaptable, and scalable. … Other tasks, however, such as image recognition, are more efficiently performed by massively parallel architectures.

    Makes me wonder if it’d be good for Dwarf Fortress. I can already hear ToadyOne groaning at the very idea of starting over from scratch.

  2. Bill Openthalt on 08 Aug 2014 at 11:09 am

    Steven –

    I will just say that I think it’s likely future AI will use architecture much more similar to current neuromorphic chips than traditional von Neumann chips.

    Or a hybrid. We know brains aren’t all that good at ‘rithmetic :)

  3. jsterritt on 08 Aug 2014 at 12:02 pm

    “There is a huge barrier to adoption of this computer design for the average personal computer…”

    Personal computers have reached a ceiling — very few people require faster computers — so it is unlikely that demand for new architectures will come from consumers. Maybe such architectures can speed up the bottlenecks on the increasingly sophisticated “back end” (e.g., server services). Sufficiently nimble and scalable servers would be able to offload much or all of the processing that is currently being done on computers and devices. Such servers would serve memory and processing power, not just data (the neuromorphic architecture seems well-suited to this). Consumer devices can become more and more passive, with fewer components, lower energy requirements, less heat, etc. When consumer devices become simple input/output devices for more powerful computers working behind the scenes (e.g., in the cloud, at a server farm), they will become the cool and helpful products consumers will demand (I’m thinking wearable devices, ‘Minority Report’ displays, virtual reality, all kinds of “smart” technologies, a holodeck in every home and robots everywhere).

  4. RC on 08 Aug 2014 at 2:33 pm

    “There are some things that traditional computing does well – like much of what most people use their computers for”

    This seems a bit chicken-and-egg to me. Much of what people do on their computers is determined by what those machines do well.

    As to Dwarf Fortress, hardware has never been the problem – poor algorithm choice and poor coding have been. I think that’s the real reason Toady won’t let anyone see his code – he’s basically the Wizard of Oz.

  5. eean on 08 Aug 2014 at 9:07 pm

    GPUs are already massively parallel; that’s sort of the whole point. And we’ve now been working with that architecture for a while. I’m sure not all parallel architectures are the same, though. It’s hard to get my head around how programming this chip would work.

  6. LDoBe on 08 Aug 2014 at 10:21 pm

    @eean
    GPUs and artificial-neuron computers are very different. GPUs are what are called vector computers. A vector computer takes a large set of data and a single instruction at a time, splits the data into as many chunks as there are stream processors in the GPU, applies that single instruction to the entire dataset simultaneously, and then recombines the chunks back into one piece.

    It’s basically the same as having a thousand regular desktop processors hooked together, executing the same instructions in lockstep while different, but related, data are fed to each processor.
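
    A rough Python/NumPy sketch of that lockstep idea, purely for illustration (the explicit chunking just mirrors the description; a real GPU does this scheduling in hardware):

        # One "instruction" applied to a whole dataset, then the same result
        # produced by explicitly splitting the data into chunks and recombining.
        import numpy as np

        data = np.arange(1_000_000, dtype=np.float32)

        result = data * 2.0 + 1.0                      # single vectorized op

        chunks = np.array_split(data, 8)               # pretend we have 8 lanes
        recombined = np.concatenate([c * 2.0 + 1.0 for c in chunks])
        assert np.allclose(result, recombined)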

    A neuromorphic chip allows each artificial neuron to execute its own instructions on its own data and each neuron can be programmed independently, or groups of neurons can be programmed together. They all communicate by signalling to their neighbors via synapses. If you have enough artificial neurons, and enough synapses, and the minimum programmable components are basic enough, you can have a neuromorphic chip run many different programs simultaneously, all sectioned into spatial domains on the chip, instead of needing all the neurons/processors to work in lockstep.
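
    By way of contrast, a toy sketch of the neuromorphic style described above (this is not TrueNorth’s actual programming model; the Neuron class and its parameters are invented for illustration): independently updating units that communicate only by spike events.

        import random

        class Neuron:
            def __init__(self, threshold=1.0, leak=0.9):
                self.potential = 0.0
                self.threshold = threshold
                self.leak = leak
                self.targets = []                      # (neuron, weight) pairs

            def connect(self, other, weight):
                self.targets.append((other, weight))

            def receive(self, weight):
                self.potential += weight

            def step(self):
                # Leak, then fire if over threshold; return spike events.
                self.potential *= self.leak
                if self.potential >= self.threshold:
                    self.potential = 0.0               # reset after firing
                    return list(self.targets)
                return []

        # A small random network, driven by an external input spike each tick.
        neurons = [Neuron() for _ in range(16)]
        for n in neurons:
            for target in random.sample(neurons, 3):
                n.connect(target, random.uniform(0.2, 0.6))

        for tick in range(20):
            neurons[0].receive(1.2)
            events = [e for n in neurons for e in n.step()]
            for target, weight in events:              # deliver for next tick
                target.receive(weight)

    The point is only the shape of the model: state lives with each neuron, and communication is event-driven rather than lockstep.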

  7. Ribozyme on 08 Aug 2014 at 10:40 pm

    Steve, what do you think about the idea that neurons aren’t simple units, as in this chip, but that complex processing of signals happens inside individual neurons, and even that different zones of a single neuron carry out different processes? That would make neurons vastly more difficult to simulate on a chip.

    With regard to the von Neumann architecture, perhaps there was no choice, given the technology available when the first electromechanical and electronic computers were built, but to implement the different functions in wildly different physical processes. For instance, processing has always relied on switching (whether relays, vacuum tubes or transistors), but one of the earliest forms of memory worked by sending sound waves through a mercury-filled tube. That is part of my hypothesis, along with the fact that using discrete components for discrete functions made it far easier to design not only the machinery but also the software.

  8. BillyJoe7 on 09 Aug 2014 at 2:06 am

    Ribozyme,

    “what do you think about the idea that neurons aren’t simple units, as in this chip, but that complex processing of signals happens inside individual neurons”

    If you mean the Penrose/Hameroff hypothesis, we have dealt with this on another thread.
    Basically it’s wild speculation beyond the evidence.

  9. Ribozyme on 09 Aug 2014 at 3:22 am

    BillyJoe7:

    Can you link the discussion? I’d like to know more about that Penrose/Hameroff hypothesis.

    On the other hand, I wasn’t basing that on a mathematical or engineering hypothesis, but on what I know from neuroscience about how neurons modify their parts locally (for instance, by forming spikes, by differences in the distribution and type of neurotransmitter receptors expressed, and by how signals modify the receptor proteins and their associated proteins through post-translational modifications, just to give a few examples), and about how they respond differently and adaptively to signals of very similar nature depending on where they come from and where they impinge on the neuron’s body.

  10. Bill Openthalt on 09 Aug 2014 at 8:02 am

    Ribozyme –

    Can you link the discussion? I’d like to know more about that Penrose/Hameroff hypothesis.

    It’s in the monster discussion on “The brain is not a receiver”, roughly here:

    http://theness.com/neurologicablog/index.php/the-brain-is-not-a-receiver/#comment-81651

  11. Steve Cross on 09 Aug 2014 at 12:10 pm

    Apologies to the regulars if this has been addressed before (I just recently discovered this blog), but can anyone comment on how many (if any) of Jeff Hawkins’s theories (as presented in “On Intelligence”) are similar to current thinking on brain function?

    As this novice understands Hawkins’s theories, the IBM approach seems to be a very intriguing possibility. The whole pattern recognition / feedback loop paradigm seems to be ideally suited to take advantage of this chip architecture.

    As to programming, if Hawkins is correct, the only programming involved would be to provide sensory/data input to allow patterns to be recognized and correlations/predictions to be inferred.
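
    A toy illustration of that “programming by exposure to data” idea (this is not Hawkins’s HTM algorithm; the SequencePredictor below is a made-up first-order model that simply learns whatever transition patterns appear in its input stream):

        from collections import defaultdict, Counter

        class SequencePredictor:
            def __init__(self):
                self.transitions = defaultdict(Counter)   # symbol -> next-symbol counts
                self.previous = None

            def observe(self, symbol):
                # Learn only from the data stream; nothing is hand-coded.
                if self.previous is not None:
                    self.transitions[self.previous][symbol] += 1
                self.previous = symbol

            def predict(self, symbol):
                counts = self.transitions[symbol]
                return counts.most_common(1)[0][0] if counts else None

        predictor = SequencePredictor()
        for ch in "the cat sat on the mat " * 50:          # the "sensory" stream
            predictor.observe(ch)

        print(predictor.predict("h"))   # prints 'e': learned, not programmed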

  12. Insomniac on 09 Aug 2014 at 1:10 pm

    Steve Cross: Don’t robots do that? Taking input data and making decisions based on what is known to them?

    As mentioned, systems with neuromorphic architectures do some things better than current computers and vice versa. Therefore, if neuromorphic solutions are to be considered for personal use in the future, I expect processors could include both approaches: conventional digital units for hard computing and neuromorphic units for video/image recognition.
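
    A purely hypothetical sketch of that hybrid split (both “units” below are stand-ins invented for illustration; no real hardware API is implied): exact arithmetic goes to a conventional core, fuzzy best-match recognition to the brain-like side.

        # Stand-in for the conventional side: precise, sequential arithmetic.
        def conventional_unit(values):
            return sum(v * v for v in values)

        # Stand-in for the neuromorphic side: nearest-pattern recognition.
        STORED_PATTERNS = {"cat": "0110", "dog": "1001"}

        def neuromorphic_unit(noisy_bits):
            def distance(a, b):
                return sum(x != y for x, y in zip(a, b))
            return min(STORED_PATTERNS,
                       key=lambda name: distance(STORED_PATTERNS[name], noisy_bits))

        print(conventional_unit([1, 2, 3, 4]))   # exact answer: 30
        print(neuromorphic_unit("0111"))         # closest stored pattern: "cat"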

  13. Steve Cross on 09 Aug 2014 at 2:10 pm

    Insomniac: Well, if you want to get pedantic about it, isn’t that what people do? Take input data and make decisions based on what is known to us.

    Sorry, I’m not trying to start an argument. I’m just genuinely curious about what progress has been made on understanding how the brain works. I first read “On Intelligence” about 10 years ago and I was blown away by how much his theories seemed to explain about how the world, and specifically “people”, seem to work. I’m admittedly a complete layman regarding this topic, but his hypothesis seemed to have great explanatory value.

    I’ve never had enough time to delve deeply into this topic (or any topic that doesn’t directly pay the bills) but I am curious about everything and I strive to have at least a basic understanding of as many different topics as possible.

    I know that I will never be able to be an expert on this or many other topics, but I would like to be able to identify which “experts” I should rely on to separate good science from bad science. In other words, while quantum mechanics will probably always mystify me, I did make an effort to learn enough to figure out that Deepak and his kindred spirits are just blowing smoke.

    So, back to my original question. Is Jeff Hawkins’s book (“On Intelligence”) a good source of information for a layman, or are there better, more current (easily accessible) alternatives? I’m not really asking if neuromorphic systems will be useful to us either generally or as individuals. I’m asking if this approach might be closer to how the brain actually works as we currently understand it.

  14. trog69 on 09 Aug 2014 at 9:04 pm

    I only care if it’ll allow me to play my games at 4K resolution with my present GPU!

  15. trog69 on 09 Aug 2014 at 9:07 pm

    Now that I’ve read the comments, thank you LDoBe, for explaining the difference in terms even I can understand.

    I can still hope, right?

  16. LDoBe on 10 Aug 2014 at 1:27 am

    @Steve Cross

    If you’re looking for an authority to distinguish the current scientific consensus from what our community would label pseudoscience or quackery, you’ve unfortunately come to the wrong place, and missed the point of scientific skepticism.

    Most of us here reject the idea of an authority when it comes to knowledge, and much prefer to use the concept of scientific consensus, combined with careful scrutiny of the data provided.

    What this means is, it’s very difficult to become confident of knowledge claims, because accepting a given commentator’s conclusions on claims first requires knowing that they do their own skeptical research. Secondly, one must know the biases and pet concepts of the commentator. For example, Jay and Bob Novella both adhere to the idea of an imminent POSITIVE technological singularity. For the sake of full disclosure, I admit to believing in the same idea, but that’s not important. What is important is that this belief is not supported by research, but is rather an extrapolation of current ideas and information, and contains no hard evidence. We could be completely wrong about the singularity, and that would still be fine. But it’s important to note that our predictions of the future are colored by these unsupported and not completely rational beliefs.

    TL;DR: A good skeptic doesn’t seek authorities in arguments and conceptual debates. Rather, good skeptics are better served by doing their own research while practicing cognitive humility as best they can.

  17. hardnose on 10 Aug 2014 at 2:02 pm

    “And of course, we have to ask if such neuromorphic computing brings us any closer to AI.”

    No.

  18. BillyJoe7 on 10 Aug 2014 at 5:18 pm

    :D

  19. Steve Cross on 10 Aug 2014 at 7:53 pm

    Sorry if I didn’t make myself clear. I was rather hoping that the quotes around “experts” would make it obvious that I was being somewhat ironic. I was merely asking for good sources of information — plural. I felt that a blog written by a neurologist (especially when said neurologist happens to be Steve Novella) would be a good place to get recommendations on good unbiased sources of information.

    And BTW, LDoBe, while I’m sure your advice on research and due diligence was well intentioned, when it comes right down to it, MOST of us, MOST of the time, must necessarily rely on various experts to make sense of the ever more complex modern world. The best we can hope to do is pick the right experts. Since very few of us can be expert (or even particularly knowledgeable) on more than a few subjects, we are forced to base our opinions on our best understanding of scientific plausibility coupled with the consensus opinion of genuine experts.

    Which is exactly why I posed my original question on this particular blog, i.e. “Is Jeff Hawkins’s book (On Intelligence) a reasonable description of current thinking on the workings of the brain, or is there a better source for a curious layman to get a good overview?” I never said (or meant to imply) that this would be my only source of information. But based on past experience, it seems like it ought to be a pretty good place to start.

    TL;DR: Don’t teach your grandfather to suck eggs.

  20. grabula on 10 Aug 2014 at 11:24 pm

    @hardnose

    “No.”

    Another profound response…

    Any advance or discovery that is successful in computing could get us one step closer to AI. At this point it’s hard to say if this will pan out. I believe that once we understand how to build a brain, we’ll be that much closer to AI. Using this type of technology could get us there but it’s anyone’s guess at the moment.

  21. grabula on 10 Aug 2014 at 11:29 pm

    @Steve Cross

    Haven’t read Hawkins’s book, so I can’t comment on that.

    “And BTW, LDoBe, while I’m sure your advice on research and due diligence was well intentioned, when it comes right down to it, MOST of us, MOST of the time, must necessarily rely on various experts to make sense of the ever more complex modern world. The best we can hope to do is pick the right experts.”

    You follow this comment up with the right thinking, but I wanted to clarify. It’s true we need to look to the proper experts in order to get a good idea of a given subject. Those experts are hopefully knowledgeable about the science as well as the consensus and evidence. What’s more important, however, is the consensus. For example, you can probably find some climatologists who deny anthropogenic climate change. They should know better, being experts in the field, but it happens; that’s why we refer to consensus.

    You’ll hear a lot of ‘too early to tell’ or ‘not enough evidence yet’ from good skeptical sources. Like my response to hardnose’s absolute statement above, it’s too early to actually tell if this will get us any closer to AI. I believe any knowledge gained in a specific field is helpful in one way or another, but not always, and it’s often a matter of degrees.

  22. Steve Cross on 11 Aug 2014 at 10:10 am

    @grabula,

    Thanks. As I tried to explain in the sentence immediately following the one you quoted, I agree completely that the consensus opinion is important to understand and take into consideration. After all, Sagan’s well-known “extraordinary claims …” comment clearly implies the corollary that extraordinary claims are considered to be so primarily BECAUSE they disagree with the general consensus.

    But the consensus opinion is what I have been searching for all along. Hawkins tried to present an objective view of the (then) current thinking on brain function, along with his specific disagreements with the conventional wisdom. All skeptics realize just how difficult it is to judge our own objectivity, and I was hoping to get some outside opinions on how good a job Hawkins did.

    I felt (and still do) that this site should be a good place to get a relatively unbiased view of the current general consensus. Having said that, I’m starting to get a little nervous that apparently no one here has any familiarity with (or at least any strong opinions on) the Hawkins book. A decade ago, it was a huge topic of conversation in my field (I.T.), probably because Hawkins is an I.T. guy (Palm Pilot, Treo, etc.). Among other things, he completely slammed the whole AI community by claiming that their approach was all wrong because it had nothing to do with how brains (i.e. intelligence) actually worked in nature.

    Because I’m well aware of my own weaknesses, I have always regarded his beliefs as only a very interesting hypothesis (albeit an extremely plausible one, at least to me). Because it was so fascinating, I recently reread the book, but thought it might be a good idea to get a sanity check, so here I am.

  23. The Other John Mc on 12 Aug 2014 at 2:03 pm

    Steve Cross,

    I read Hawkins’s book a few years back, so I’m a pinch hazy on it, but I remember liking his idea of the brain being a “future predictor” machine. I think he nails its hierarchical and recursive nature, but to this concept I would also add that the brain is a “present guesser” machine as well, meaning that a lot of processing is going towards building up an accurate mental representation of the external physical (and social!) environment. We spend a lot of time ruminating about the past, and interpreting the present, and trying to understand ourselves and other people, as well as thinking about (and trying to predict) the future.

    In regards to your earlier comment: “As to programming, if Hawkins is correct, the only programming involved would be to provide sensory/data input to allow patterns to be recognized and correlations/predictions to be inferred.” I think *some* aspects of brain function will permit such easy implementation, specifically those relating to pattern detection (image analysis, language, etc.), because a lot of this relies on solving complex pattern-recognition problems. But I’m willing to bet my bottom dollar that in emulating a more general type of human-like intelligence, it will not be so easy and we will not be so lucky. Building independent ‘cognitive modules’ (e.g., face detection, space perception, etc.) is one challenge; hooking them all up in an appropriate and/or human-like way is a whole other challenge that isn’t necessarily a pattern-recognition problem, but will be more a matter of reverse-engineering, systems-control, neural imaging & analysis, and computational challenges.

    Over Hawkins’s, I personally would recommend these popular books about the mind/brain (not all are necessarily AI-heavy, and they are in no particular order):

    Steven Pinker’s “How the Mind Works” (really anything by Pinker is magical)
    Tor Nørretranders’s “The User Illusion”
    Douglas Hofstadter’s “I Am a Strange Loop”
    Sebastian Seung’s “Connectome”
    Dorion Sagan’s “Up From Dragons” (a ‘sequel’ to Carl Sagan’s “Dragons of Eden”)
    Dan Dennett’s “Consciousness Explained” (really anything by Dennett is at least interesting)
    Ray Kurzweil’s books – intriguing though overly bombastic

    More professional treatments at the intersection of AI and neuroscience are available under the topic of “computational neuroscience” or “computational cognitive neuroscience”. My money is on the new field of “connectomics” a la Sebastian Seung, Olaf Sporns, and the like, who are trying to incorporate data from neuroscience and neural imaging, with the goal of turning this info into computational models.

    The ‘cortical column’ idea espoused by Markram’s Blue Brain project (and I think also mentioned in Hawkins’s book) is intriguing, and worth pursuing, though there seems to be considerable doubt about the concept of these columns being functionally independent units, for example:
    http://human-brain.org/columns.html

    Hope this helps? Great questions, would love to hear some other thoughts on this.

  24. The Other John Mc on 12 Aug 2014 at 2:12 pm

    Another link on the ‘cortical column’ debate I meant to add:
    http://www.pnas.org/content/105/34/12099.full

    Googling this topic should bring up quite a few more, but I’m not a neuroscientist, so I’m not quite sure what to make of it all…

  25. Steve Cross on 12 Aug 2014 at 7:01 pm

    The Other John Mc,

    Thanks so much for the recommendations. I will definitely check out as many as I can. It is not so much that I’m interested specifically in AI, but as Hawkins pointed out (correctly IMHO), we really need to understand what intelligence is before we attempt to make an artificial version of it.

    My education, particularly the self-directed portion, has always been eclectic to say the least, but I’ve always been very curious about the origin of consciousness. I can sympathize with the dualists to the extent that “I” don’t feel like the program output of a meat computer; however, every other explanation I’ve ever encountered has always seemed to be even less likely or understandable. My own brain’s Occam’s razor circuitry has always been stuck in overdrive. I can actually recall reaching the conclusion in early grade school that the Tooth Fairy, Easter Bunny, Santa Claus, and God were all just stories that parents tell their kids to either make them behave well or simply just to feel good.

    I strongly suspect that, sooner or later, science will be able to explain consciousness just as well as it has explained countless other mysteries that were once thought to be supernatural. Having said that, I do agree with you that we probably still have quite a way to go. I mostly threw out the Hawkins quote about sensory data input being all that was necessary as a conversation starter. As much as I found Hawkins’s theories to be fascinating and plausible (at least from a layman’s perspective), I’m always suspicious of answers that appear to be just a little bit too perfect — True Believer syndrome, confirmation bias, etc.

    Having said that, I’m well aware that I don’t have the knowledge to make any definitive statements about whose theories are best. Usually, particularly in evolving fields such as neurology, I just content myself with trying to get a good general overview of the current thinking in the field. That’s the price I pay for being curious about virtually everything but having a finite number of hours in the day. However, I will be making a serious stab at the list you have so generously provided, especially since it does appear that tremendous progress is being made and the field may be starting to coalesce around some general principles.

    Thanks again. As I may have mentioned, I just recently discovered this place. I have been an SGU fan for many years and I had Neurologica in my RSS feed reader, but I had never “clicked through” to the comments section, based on the mistaken assumption that the signal to noise ratio would be as bad here as it is almost everywhere else. I’ve been very pleasantly surprised. I’ve been working my way through the “brain as a receiver” thread that was mentioned earlier, and I must say that even the “loyal opposition” (cough – trolls – cough) seem to be of a higher caliber than on the wild Internet.

  26. BillyJoe7 on 13 Aug 2014 at 9:21 am

    Steve Cross,

    “I can actually recall reaching the conclusion in early grade school that the Tooth Fairy, Easter Bunny, Santa Claus, and God were all just stories that parents tell their kids to either make them behave well or simply just to feel good”

    At the tender age of five, whilst still in pre-school/kindergarten, my son once said in relation to the above: adults shouldn’t tell kids lies! (We had been very careful not to reveal our attitudes towards religion and other issues until our children had come to their own conclusions).

    “I just content myself with trying to get a good general overview of the current thinking in the field”

    There are some posters on this blog who would do well to follow that course. Indeed, you have to be well versed in the current thinking in any field before you even start reading what the fringe dwellers are saying. Otherwise you’ll never realise that they are wrong, or why they are wrong, and you risk ending up lapping up their fringe views.

    “I had never “clicked through” to the comments section, based on the mistaken assumption that the signal to noise ratio would be as bad here as it is almost everywhere else”

    Yes, a pleasant surprise, isn’t it? By any chance, have you ever followed a certain blogger referred to as “the teddy bear”? I lasted about two weeks reading the articles and the commentary, and another couple of months reading only the articles, before realising why he had such a shitty bunch of followers, and I have never been back.

  27. Steve Cross on 13 Aug 2014 at 10:04 am

    BillyJoe7,

    “At the tender age of five, whilst still in pre-school/kindergarten, my son once said in relation to the above: adults shouldn’t tell kids lies! (We had been very careful not to reveal our attitudes towards religion and other issues until our children had come to their own conclusions).”

    Indeed, I’ve often thought that the whole idea of telling kids fairy tales as if they were true was mind-bogglingly stupid.

    1) You are inevitably going to create trust issues when the kid finds out you have been lying to him.

    2) You are seriously undermining the kid’s ability to think skeptically (i.e. successfully) in the future.

    3) And most important of all for believers in any type of woo (although they will never recognize it as an issue), how in the world is the kid supposed to differentiate between false fairy tales and “true” fairy tales?

  28. The Other John Mc on 13 Aug 2014 at 3:18 pm

    Steve Cross,

    No problem…always love chatting about my favorite books! :-)

    In regards to: “we really need to understand what intelligence is before we attempt to make an artificial version of it.” My thinking on this is more along the lines of Steve Pinker (which I think he lays out in his book I mentioned, How The Mind Works). Briefly, the challenge of ‘figuring out’ mind/consciousness/intelligence is so difficult that it’s not completely clear which approach will work best. Is it computer and software engineers trying to do “AI” without a full understanding of the brain? Or is it neurologists, neuroscientists, or psychologists unraveling the mind, the brain, or studying behavior, which will allow accurate models to be developed? Is it more promising to do engineering, or reverse-engineering? I dunno, both will probably be fruitful, if not totally successful.

    My guess is that it really is going to rely on all of these fields, using various approaches, and this is the whole concept of the umbrella term “cognitive science”, which is intended to be interdisciplinary because each field (should) inform the others. But again, it is a great question, and I’m interested to see who gets there first! 20 bucks on neural imaging leading to computational models (see “connectomics”).

  29. grabula on 13 Aug 2014 at 10:06 pm

    @Steve Cross

    “Indeed, I’ve often thought that the whole idea of telling kids fairy tales as if they were true was mind-bogglingly stupid.”

    I’m torn on this. Kids have active imaginations and they’ll often create their own fantasy worlds as if they are real regardless. I think it’s OK to encourage a little fantasy and imagination until they get to a certain age where they can really comprehend the issue.

    “1) You are inevitably going to create trust issues when the kid finds out you have been lying to him.”

    I don’t agree with this. I don’t have any trust issues in regard to having once believed in Santa Claus, for example. In fact, I barely remember the time before I realized that Santa was really just my family. It’s possible some more sensitive kids might have issues, but I don’t think this gives children enough credit.

    “2) You are seriously undermining the kid’s ability to think skeptically (i.e. successfully) in the future.”

    I don’t know about this either. In fact, if you sit down with your child once they are old enough to understand the truth, it might make for a good skeptical moment to help the child through.

    I’m working through this now with my one-year-old daughter. I want to encourage her imagination and help her to enjoy her childhood years before things have to start getting serious. I’m realizing that when I sit down for a bedtime story, it’s not really important whether she believes it’s true or not, and I don’t think it really crosses a child’s mind. As she develops I’ll help her figure things out as best I can, and I’ll answer any questions she has honestly.

  30. BillyJoe7 on 14 Aug 2014 at 9:28 am

    grabula,

    I don’t think a child having a good imagination (i.e. an imaginary friend) is the same as having adults foisting a lie (i.e. a god) upon the child. Most children realise soon enough that their imaginary friend is not real, but most children who have god foisted upon them by adults never realise that he is not real.

  31. Steve Cross on 14 Aug 2014 at 4:25 pm

    @grabula,

    I’m torn on this. Kids have active imaginations and they’ll often create their own fantasy worlds as if they are real regardless. I think it’s OK to encourage a little fantasy and imagination until they get to a certain age where they can really comprehend the issue.

    Believe it or not, I do realize that life, the universe and everything consists mostly of various shades of gray, in spite of my unfortunate tendency to make black and white comments about complicated issues when I get passionate. And I am pretty passionate about this issue because I agree with BillyJoe7 that too many people never grow out of the whole god delusion thing.

    I completely agree that an active imagination is important as well as a necessary component of creativity of all kinds, and I do think parents should actively encourage it. Who doesn’t enjoy a good game of “make believe”?

    But I also think that the most important part of any child’s development is learning to survive in “the real world”, and being able to distinguish fact from fiction, or real from make-believe, is a skill that everyone must have to be able to make good decisions and choices in life.

    I’m not advocating that we give instruction to pre-schoolers on how to avoid logical fallacies. Rather, I just think that we should always do our best to give honest answers to every child’s natural curiosity. Children seem to be “hard-wired” to explore and learn about the world, and try to understand it. Giving them incorrect information will only make that process more difficult and frustrating. The experience of BillyJoe7’s son is a good example of how this type of thing can be upsetting to at least some children.

    In any event, I do think that our opinions are more similar than they are different. In many situations, the child doesn’t know or care if a story is “real” or not, and it often doesn’t matter. I’m only concerned about situations in which a story is presented as “true”, but is later revealed to be untrue and, what’s worse, the parent knew it all along (i.e. Tooth Fairy or Santa Claus). It seems to me that this would tend to muddy the definition of “truth” as well as somewhat diminish the child’s trust in his parents’ veracity.

    Admittedly, hundreds of millions of children have survived to adulthood with no obvious major trauma after having been raised in exactly this environment, so I’m not claiming that the old approach will bring about the end of the world. On the other hand, billions of people do seem to believe in some type of deity or other supernatural woo, so perhaps it is worth a try.

    Of course, I do realize that life is not quite so simple. Some parents will probably not thank you if your child happens to prematurely (at least in the other parent’s opinion) destroy their child’s illusions about Santa Claus or anything else. But (at least IMHO) it is never too early to learn about tolerance and that many other people have widely varying opinions for wildly different reasons. This is also the time to (gradually) teach them how to tell the difference between good reasons and bad reasons for belief. I think we would all be better off if, in addition to parental instruction, the schools would also stress critical thinking skills.

    Side note: I’ve been trying to work my way through the monster thread on “brain is not a receiver” that was mentioned earlier in this thread — ARGHHHH!!! I really, really, really wish that someone had taught Leo100 why anecdotes are not evidence when he was a child.

  32. grabula on 14 Aug 2014 at 9:56 pm

    @BJ7

    “Most children realise soon enough that their imaginary friend is not real, but most children who have god foisted upon them by adults never realise that he is not real.”

    The difference is that while at some point Santa Claus and the Easter Bunny are explained as being make-believe, God almost never is, so a child is forced to come to that conclusion, or not, on its own as it grows older.

    In the case of Christmas, for example, Santa may come up in our family while we’re raising our children. I’m not going to sweat it, but I may emphasize the giving aspects of Christmas over any sort of myth. If my child asks whether Santa exists or not, I’m happy to explain the truth to them. The same with God.

    @Steve Cross

    I agree, I think we are probably mostly on the same page.

    As for the “brain is not a receiver” thread – it lost its worth well before the 1,000th post. After a while it’s literally the same arguments rehashed over and over.

  33. BillyJoe7 on 15 Aug 2014 at 9:03 am

    grabula,

    Myths were never part of our family’s Christmas. Santa Claus and god never came up, so there was never any reason for our kids to ask if they were real. By the time they were exposed to these myths through relationships outside the family, the game was well and truly up. I had a strict religious upbringing, and it was a real struggle within myself to break free of the shackles. It pretty well ruined my adolescent years. I’m glad our kids didn’t have to go through that.

  34. mikelaughs on 21 Aug 2014 at 4:16 am

    How did this devolve into a discussion comparing God and Santa Claus? Wow… maybe brain-like chips are a bad idea after all. You’ll ask for the sum of two numbers and instead it will tell you how it believes it came to be.

  35. BillyJoe7 on 21 Aug 2014 at 8:28 am

    mikelaughs,

    Didn’t you mean “evolve”, mike? :)

    :)

  36. mikelaughs on 23 Aug 2014 at 2:32 am

    Nah, devolve, as in:

    deteriorate: to deteriorate slowly over time

  37. Ribozyme on 06 Sep 2014 at 1:29 pm

    Let’s see if there is someone left who wants to reply to this, by John Wilkins:

    https://www.youtube.com/watch?v=rQ7GKXo3dss

  38. sias on 12 Sep 2014 at 3:07 am

    Many other technological breakthroughs have come from taking principles that occur in nature and implementing them with a twist – this kind of brain-mimicking computing technology will no doubt play an important role in the future. Check out Kevin Kelly from Wired (http://edge.org/conversation/the-technium) talking about his vision of what lies ahead in the next tech revolution.
