Apr 29 2014

Neuromorphic Computing




50 Responses to “Neuromorphic Computing”

  1. Bronze Dog on 29 Apr 2014 at 3:24 pm

    It’s certainly intriguing, though it’s a bit abstract to me. I understand they’d speed up neural simulations, being more neuron-like than digital circuits jury-rigged to pretend they’re analog, which is really sexy for the scientists.

    I’m curious how they’ll fit into the future of computer science. What are the pros and cons of these neurocircuits compared to conventional circuits? If I had to guess, I’d imagine they’d do well for ‘animaly’ things like robotics, fuzzy logic, and interpreting sensory data. I wonder if they might have trouble with precise math and consistency, though.

  2. Paulz on 29 Apr 2014 at 5:22 pm

    Wow, are we really at this stage already? This sounds almost too good to be true.

    Keep us posted. I’m deeply excited.

  3. hardnose on 29 Apr 2014 at 7:21 pm

    “There is no reason why eventually we will not arrive at a piece of hardware with the ability to perform human brain processing in real time. And then, of course, once we get to that point we will then surpass it, creating computers increasingly more powerful than the human brain.”

    You are going very far beyond the evidence with that statement.

    For all we know, the brain might be a very different kind of machine than anything computer scientists have come up with.

    I don’t think a real skeptic would unhesitatingly accept science fiction scenarios. You sound more like a technology worshiper.

  4. Steven Novella on 29 Apr 2014 at 9:13 pm

    hardnose – then give me a reason. Why won’t continued incremental improvement lead to a computer that can perform human-level processing in real time? There is nothing about the brain that is magical. There is no reason to think that the circuits in the brain cannot be duplicated in another medium. Please give me one if you think there is.

  5. Insomniac on 30 Apr 2014 at 3:13 am

    This is actually very exciting, especially for neuroscientists, who may, hopefully in the not-too-distant future, come to see these products as new tools for their work.

    However, one of the main motivations for this kind of research is the fact that we’re on the verge of reaching the fundamental limits of CMOS technology. That is, we won’t be able to make transistors any smaller, and there’s obvious concern about their tremendous energy dissipation. Indeed, only 1% of the energy used does anything useful; the rest turns into heat.

    There is an urgent need to come up with a new paradigm for computer chips, and neuromorphic devices are currently the leading candidate. And it’s worth noting that there is more than one possible way to actually build those things. Different teams across the world imagine different technical solutions to mimic the properties of a synapse (e.g. plasticity). The general term for such a device is a “memristor”, as opposed to the conventional transistor.

    We may be far from the 100 billion neurons, but we’re not compelled to physically build that many neuronal circuits on these chips, insofar as their switching frequency is far higher than the firing rates of neurons in the brain. Therefore speed may make up for the relatively small number of units.
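
    To make that concrete, here is a back-of-envelope sketch in Python. The update rates are illustrative assumptions, not measured figures; the point is only that time-multiplexing fast circuits across many slow virtual neurons changes the required circuit count by orders of magnitude.

        # Hedged sketch with assumed numbers: if a silicon neuron circuit can
        # update ~1e6 times/s while a simulated neuron only needs ~1e3
        # updates/s, one physical circuit can serve many virtual neurons.
        silicon_update_hz = 1e6        # assumed per-circuit update rate
        required_update_hz = 1e3       # assumed rate per simulated neuron
        neurons_per_circuit = silicon_update_hz / required_update_hz
        target_neurons = 100e9         # the 100-billion figure quoted above
        circuits_needed = target_neurons / neurons_per_circuit
        print(f"{neurons_per_circuit:.0f} virtual neurons per circuit")
        print(f"{circuits_needed:.2e} physical circuits for a brain-scale model")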

  6. Insomniac on 30 Apr 2014 at 3:20 am

    By the way Steven, the US BRAIN initiative is more about monitoring the activity of every neuron in a human brain (although they’ll probably first try it with Drosophila, mice, etc.), not about building a computer whose design is based on neural circuits. That’s a different line of research.

  7. BillyJoe7 on 30 Apr 2014 at 6:54 am

    A bit of trivia:
    There are about as many neurones in the human brain as there are stars in our galaxy, and about as many stars in our galaxy as there are galaxies in the universe.

  8. Bill Openthalt on 30 Apr 2014 at 7:21 am

    hardnose –

    For all we know, the brain might be a very different kind of machine than anything computer scientists have come up with.

    The brain is indeed a very different kind of machine; for starters, it’s not based on silicon, and its signalling mechanism is part chemical, part electrical. But as long as it is an information processing (computing) machine and Turing complete (and there is little doubt about either), its functions can be duplicated on any other Turing complete device.

    You would have to show the human brain has functions or abilities that are beyond those of a computing device; but you have already stated it is a machine…
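
    As a toy illustration of that Turing-equivalence point (a sketch, not a brain model): the host Python program below knows nothing about the “physics” of the machine it runs; it just executes an arbitrary transition table, here an invented example that increments a binary number.

        # A Turing machine as pure data, executed by a host that duplicates
        # its function without sharing its substrate.
        def run_tm(tape, state, rules, blank="_", max_steps=1000):
            cells = dict(enumerate(tape))
            head = len(tape) - 1                   # start at rightmost symbol
            for _ in range(max_steps):
                if state == "halt":
                    break
                write, move, state = rules[(state, cells.get(head, blank))]
                cells[head] = write
                head += 1 if move == "R" else -1
            lo, hi = min(cells), max(cells)
            return "".join(cells.get(i, blank)
                           for i in range(lo, hi + 1)).strip(blank)

        rules = {
            ("inc", "1"): ("0", "L", "inc"),   # carry: 1 -> 0, keep moving left
            ("inc", "0"): ("1", "L", "halt"),  # flip the first 0 and stop
            ("inc", "_"): ("1", "L", "halt"),  # ran off the left edge: prepend 1
        }
        print(run_tm("1011", "inc", rules))    # -> "1100"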

  9. hardnose on 30 Apr 2014 at 9:15 am

    “Why won’t continued incremental improvement lead to a computer that can perform human-level processing in real time? There is nothing about the brain that is magical. There is no reason to think that the circuits in the brain cannot be duplicated in another medium. ”

    I think there are computer-like machines in the brain, but I think it must include other kinds of machines as well.

    Nothing about the brain is magical? Well that would depend on how you define “magical.” There are things that science has not even begun to understand, and things that science has not even begun to imagine might exist.

    Man-made computers follow predetermined steps, and that is ALL they do. Yes, they can appear to make random choices, which might give an illusion of unpredictability. But they must be programmed to, at certain points, make selections based on a pseudo-random algorithm.

    In reality there is nothing at all unpredictable about any computer.
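
    To make the pseudo-random point concrete, a minimal Python sketch (the seed values are arbitrary): seed the generator identically and it replays the exact same “choices” every run.

        # A seeded PRNG is fully deterministic: same seed, same sequence.
        import random

        def choices(seed, n=5):
            rng = random.Random(seed)        # fixed seed -> fixed stream
            return [rng.randint(0, 99) for _ in range(n)]

        print(choices(42))   # same list...
        print(choices(42))   # ...every time
        print(choices(43))   # different seed, equally predetermined list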

    Now of course you will say that humans, and all living things, merely follow predetermined programs. Well that could lead into one of those endless useless philosophical debates.

    Throughout our lives we are constantly learning and forming new habits. I think these habits are encoded as automata in our brains. Much of what we do is automatic, and controlled by these circuits — in that sense, our brains are like computers.

    However, I believe we do much more than follow predetermined algorithms. There is always a leading edge that cannot be explained as mere computation. Every computer must have programmers, and there is something in us that programs our brains.

    That is just one problem; I am trying to keep this short.

    If you consider the ideas of Roger Penrose, for example, about the limitations of computers, you might see what I mean.

  10. Steven Novella on 30 Apr 2014 at 9:34 am

    hardnose – I am not saying we currently understand everything we would need to know about brain function. Part of this research is also using computers to help us explore brain function.

    But I think you are dismissing the major objection to your position as merely philosophical. There is no reason to think that brain function is anything other than complex processing algorithms in wetware capable of plasticity. So far we have not discovered anything the brain does that cannot be modeled in a computer. Virtual simulations of brain circuits seem to function just fine.

    But I agree that we will not know for sure until we fully get there.

  11. The Other John Mc on 30 Apr 2014 at 10:14 am

    To be honest, I choked on the same line that hardnose did:

    “There is no reason why eventually we will not arrive at a piece of hardware with the ability to perform human brain processing in real time. And then, of course, once we get to that point we will then surpass it, creating computers increasingly more powerful than the human brain.”

    Although I didn’t choke as hard as he did. I think you are essentially right Dr. Novella, but I also think this is a strong statement given the current state of understanding. Strong is OK as long as we all acknowledge it as such. Also, your phrasing sort of suggests that once human-level intelligence is achieved in machines, “scaling it up” to go beyond human capabilities will be a straightforward matter. Here’s how I would slightly modify your sentences to get at the same ideas:

    There is no clear reason why eventually we will not arrive at a piece of hardware/software/wetware with the ability to perform brain-like computations in real-time, and that will demonstrate human-like intelligence and performance, however broadly defined. Once we reach such a point, which is distant but seemingly certain, we will likely be able to create virtual intelligences that would be less-restrained in terms of processing power, speed, size, interconnectedness, surpassing the physical/biological constraints of our bodies and nervous systems, etc.

    Just my thoughts, and I share your excitement that these are really amazing developments and it will be VERY interesting to see where this all leads. It’s an exciting time to be alive.

  12. Npsychdoc on 30 Apr 2014 at 10:27 am

    Steven and hardnose – I don’t disagree with the idea that, per Steven, “There is no reason to think that the circuits in the brain cannot be duplicated in another medium.” However, I have to think that modeling a system to mimic primary sensory/motor and even unimodal association cortices is more feasible than other aspects of the brain, and this has been achieved to some degree. What I’m less sure about, and perhaps this is what hardnose is getting at, is how the more affective/motivational aspects of brain function, mediated in part by the heteromodal association areas, could be duplicated. It’s perhaps a moot argument (perhaps not), as I don’t know why anyone would want to produce a system that is geared toward cohesion (and thus bias) at the expense of accuracy when we’re talking about computer modeling and neuroprosthetics.

  13. hardnose on 30 Apr 2014 at 10:37 am

    “So far we have not discovered anything the brain does that cannot be modeled in a computer”

    Oh please don’t get me started. Where would I start? If that were true, something would have passed the Turing test, for one thing.

    But beyond that — many of the most basic things that the brain does to keep us alive from one instant to the next have not been modeled on a computer.

    In order to be modeled on a computer, a process must be understood completely.

    Are you saying that all physiological processes are understood completely?? It would be quite amazing if any medical researcher claimed that.

    And then there are mental processes — are you saying these are completely understood?? Hard to imagine anyone with any knowledge of medicine or psychology would say that.

  14. The Other John Mc on 30 Apr 2014 at 11:29 am

    This discussion would benefit from clarification about what exact level and fidelity of “modeling” is being referred to: we could model gross motor behaviors of an organism, whole societies of organisms, retinal computations of contrast, neurotransmitter exchanges in synapses, etc., etc., ad nauseam. We are not necessarily talking about the same things with the use of the word “model” here.

    Again I would reword Dr. N.’s statement here to “So far we have not discovered anything the brain does *computationally* that could not at least theoretically be modeled in a computer”. Although now this reads almost as a tautology, so I’m not sure if I made it worse.

  15. Bruce on 30 Apr 2014 at 11:54 am

    Hardnose:

    “For all we know, the brain might be a very different kind of machine than anything computer scientists have come up with.

    I don’t think a real skeptic would unhesitatingly accept science fiction scenarios. ”

    followed by:

    “Nothing about the brain is magical? Well that would depend on how you define “magical.” There are things that science has not even begun to understand, and things that science has not even begun to imagine might exist.”

    You are, in essence, dismissing a projection from current science as “science fiction” and replacing it with something “magical”… which is fantasy.

    I would remind you of Clarke’s third law at this point: perhaps we don’t know everything, but that does not mean there is something magical in the air; it just means we don’t know it all yet.

    And to pick up the modelling statements: it is much easier to model something than it is to actually build it. It might be years, decades, or centuries before a model becomes reality, but a model can be used as proof of concept and help us understand things a little bit more.

    This is exciting stuff, and my first thought when I read through it was all about uploading someone’s brain to a computer. I think our true test of consciousness will come then: if someone alive and well has his brain uploaded to a computer that can mimic a human brain, will they “move” over to the computer, will they be conscious of being in both, or will there be two of those consciousnesses in existence?

  16. hardnose on 30 Apr 2014 at 12:54 pm

    “perhaps we don’t know everything, but it does not mean that there is something magical in the air, it just means that we don’t know it all yet.”

    Well I never said there is anything magical. I believe that everything is “matter” and that everything is potentially understandable.

    But there might be things — and I think there must be — that are utterly different from what science currently understands.

    So knowing that we don’t know it all yet, we should not extrapolate from current knowledge.

    The current assumption is that the brain is a computer-like machine. But there are other kinds of machines that we know about, and probably many others that we don’t know about.

  17. Bruce on 30 Apr 2014 at 1:24 pm

    “So knowing that we don’t know it all yet, we should not extrapolate from current knowledge.”

    So you want to extrapolate from what we don’t know?

    “The current assumption is that the brain is a computer-like machine”

    No, actually, you have it the wrong way around: we are saying that it might be possible to create a brain-like machine by using computer technology. We might not be able to create it exactly, but we can get darn close, and the closer we get, the more we will understand; the more we understand, the closer we can get. It is called progress.

    I am unsure what point you are trying to make.

  18. The Other John Mc on 30 Apr 2014 at 1:50 pm

    hardnose: “the current assumption is that the brain is a computer-like machine.”

    65 years of research provide overwhelming converging evidence that nervous systems are *information-processing* machines, which a “computer” also is:

    http://en.wikipedia.org/wiki/Cognitive_revolution

    This doesn’t mean brains compute the same way your desktop computer does (they clearly don’t; serial versus parallel for one). But at its core, info processing is info processing, and the logic and underlying methods can be translated from one to another and across different mediums. Thus: whatever or however the brain is computing, these processes should be able to be implemented or precisely modeled in whatever substrate we wish.
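
    As a minimal sketch of that substrate point, here is a leaky integrate-and-fire neuron, one of the simplest textbook neuron models, running in ordinary Python. The parameters are illustrative, not tuned to any real cell.

        # Leaky integrate-and-fire: membrane voltage decays toward rest,
        # integrates input current, and fires/resets at a threshold.
        def lif_spike_times(current, dt=1e-4, t_max=0.5, tau=0.02,
                            v_rest=-0.065, v_thresh=-0.050,
                            v_reset=-0.065, r_m=1e7):
            v, spikes = v_rest, []
            for step in range(int(t_max / dt)):
                v += ((v_rest - v) + r_m * current) / tau * dt
                if v >= v_thresh:                # threshold crossing = spike
                    spikes.append(step * dt)
                    v = v_reset
            return spikes

        print(len(lif_spike_times(2e-9)), "spikes in 0.5 s of 2 nA drive")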

  19. hardnose on 30 Apr 2014 at 2:32 pm

    “we are saying that it might be possible to create a brain-like machine by using computer technology. We might not be able to create it exactly, but we can get darn close, and the closer we get, the more we will understand; the more we understand, the closer we can get.”

    The OP doesn’t say we MIGHT, it says we WILL.

    And you are saying “we can get darn close.” How do you know that? You don’t.

    If the brain contains other kinds of machines in addition to computer-like machines, then you can progress forever down your chosen road but never get there.

    And AI researchers have been forging ahead on that road for about 65 years now, and do not seem to be getting any closer to your supposedly inevitable goal.

  20. hardnose on 30 Apr 2014 at 2:34 pm

    “So you want to extrapolate from what we don’t know?”

    Do you have to extrapolate and make up science fiction fairy tales? You have no interest in logic or evidence?

  21. ccbowers on 30 Apr 2014 at 3:20 pm

    “The OP doesn’t say we MIGHT, it says we WILL.”

    What he actually says is closer to this: he sees no reason why not. You object to this characterization, which is fine, but if you do then you should be able to come up with reasons why not. Where is the theoretical obstacle to “hardware with the ability to perform human brain processing in real time?”

    Your only argument is to bring up uncertainty regarding what we don’t know. Yes, what we don’t know presents obstacles, but the point is that as science progresses there doesn’t seem to be a theoretical obstacle to simulating human brain processing. The implications of that are a whole other topic. Perhaps you are over-extrapolating from what he wrote to what it could imply.

  22. Bruce on 30 Apr 2014 at 4:36 pm

    “Do you have to extrapolate and make up science fiction fairy tales? You have no interest in logic or evidence?”

    No, you would rather make up some magical force made up of stuff we don’t know yet.

    Being curious and making projections and rough estimates is part of what drives science and discovery. If we all just said “we don’t know, it is MAGIC!” we would not have made any advances at all, but if we look at things and say “Hey, we might be able to do this!” we have a chance of finding things out.

    We don’t know for certain, but you know, after 65 years of research we are finally getting to a point where our technology can start to answer some of those questions. In time, as technology advances we will discover more. If there are other “machines” that we don’t know, perhaps we will find those, but by sitting back and believing in magic we are not going to find out.

    Science wants to find out, even if that finding out is discovering we were wrong… and in order to do that we need to extrapolate into the unknown from what we know.

  23. hardnose on 30 Apr 2014 at 5:28 pm

    [If there are other “machines” that we don’t know, perhaps we will find those, but by sitting back and believing in magic we are not going to find out.]

    I never said anyone should sit back and believe in magic. The meaning of what I said is that AI research has gone down the wrong path for a long time, and has consistently failed. Yes, there are useful by-products, but never any real AI.

    As long as you take for granted that the brain works like a computer, and nothing else, then I think you will fail eternally.

    I am being scientific and rational, you and the blog author are living in a fantasy.

    You cannot find how the brain actually works if you stubbornly insist it works like a computer.

    And I did say I believe computer-like machines make up parts of the brain, but not all. If the brain worked entirely like a computer, I think 65 years would be long enough to get at least a tiny speck of AI working.

  24. Gotchaye on 30 Apr 2014 at 8:18 pm

    Hardnose – What I find myself not understanding, reading this comment thread, is what you have in mind when you talk about machines that are not computer-like. You mention upthread that we know of several such machines, but I don’t see examples, and I have a hard time imagining what seems to me to be a relevantly non-computer-like machine.

    I mean, trivially, there are machines that we don’t think about as computers. A lever is not a computer, in conversational English. But levers are easy to model with computers, and it seems fair to say that if you’ve got a box containing levers and computers that accepts information as input and produces information as output, then you’ve really just got a big computer. In principle you could build a massive computer out of everyday objects, including a bunch of levers. The existence of this sort of non-computer-like machine doesn’t seem relevant to the question of whether or not it makes sense to think about the brain as a computer.

    So what are these examples of non-computer-like machines you’ve got in mind? It’s not clear to me if you mean to be denying that brains are deterministic, or, if so, if you’re denying that computers can accurately model the relevant random processes.

  25. Richard on 30 Apr 2014 at 10:01 pm

    “If the brain worked entirely like a computer, I think 65 years would be long enough to get at least a tiny speck of AI working”

    The obvious question here would be “what would you consider *real* AI?” AI today can do many impressive and useful things. What makes that less “real” than what humans and animals do? Defining intelligence is notoriously difficult; if you claim that no AI is working after 65 years of research then your benchmark is clearly very specific, because I would say AI has made some rather interesting advances over the past 65 years, though we are still a long way away from understanding human minds (and of course have probably been asking a lot of wrong questions. But that’s OK).

    Which bits of the brain do you think work “like a computer”, and which don’t?

    The Other John Mc made a good suggestion: “This discussion would benefit from clarification about what exact level and fidelity of “modeling” is being referred to”. Marr’s levels of analysis provide a useful reference for this ( https://en.wikipedia.org/wiki/David_Marr_(neuroscientist)#Levels_of_analysis ). I think Steve is saying that *in principle* there is nothing at the physical level that could not be simulated (though we are not yet able to do this). As I understand it, you would probably also agree with this, though perhaps with some caution? I don’t think Steve would suggest we have the algorithmic or computational levels worked out at all (though correct me if I’m wrong Steve!).

    While perhaps it will be possible to simulate brain physics adequately, I think we’re still struggling with many conceptual issues so at this point it seems unclear whether we will ever be able to achieve “human like” intelligence, whatever that means. Given all models are wrong, however “human like” your model appears to you, someone else will say that it is fatally flawed in some way based on their opinion of how human intelligence works.

    Bruce: as for “brain/mind uploading”, I would be willing to make a million dollar bet that it will never be achieved, unless you have a much more modest goal than what I think of as mind uploading ;)

  26. grabula on 30 Apr 2014 at 11:47 pm

    hardnose’s arguments always kill me, mostly because, unlike many naysayers in these comments, I think he is sincere, just really misguided.

    Hardnose, if you existed around 1940 and someone told you we would soon have complex calculating machines capable not only of doing math, but of playing and making music, winning at trivia contests, and becoming the core of a large part of Western societies’ job market, you would have scoffed and called it fantasy.

    You do realize you are arguing with a practicing neurologist, right? Dr. Novella, I imagine, knows more about the brain than you do, and understands the complexities involved in duplicating it. That’s not to say he is always right, but he argues from a position of some knowledge, and you have to assume at the very least that he understands the issues faced from that side, right? I feel like you don’t.

    His extrapolation isn’t from fantasy or science fiction really. We’ve made a ton of headway in computing power over a meager 4 or so decades. I understand your point that it might require a different tack in order to finally jump over that hurdle and reach something more than functional AI. Notice he’s not trying to establish a timeline and I don’t believe he’s implying it’s just around the corner (5-10 years maybe :D ), but that it’s something we will eventually reach. I believe we might even see it in our lifetime, roughly the next 50 years. I think you went from 0 to 60 on your assumptions here and all you’ve had to offer to refute it is some vague references to what amounts to magical thinking and a brief mention that you don’t think we’re close yet.

  27. Bruce on 01 May 2014 at 3:09 am

    “The meaning of what I said is that AI research has gone down the wrong path for a long time, and has consistently failed”

    I beg to differ, there have been many advances:

    http://www.nytimes.com/2013/10/15/technology/the-rapid-advance-of-artificial-intelligence.html?_r=0

    Just because your arbitrary measure for success has not yet been reached due to any number of technical and economic reasons does not mean that with a few (or many) further advances in technology we cannot reach even your measure.

    Grabula said it above, it might take 5, 10 or even 50 years, but there is a very high likelihood we will get to where we are going.

  28. The Other John Mc on 01 May 2014 at 8:03 am

    hardnose: “As long as you take for granted that the brain works like a computer, and nothing else, then I think you will fail eternally”

    As Gotchaye pointed out, you seem confused. No one is claiming the brain works *exactly* like current typical hardware or software. In fact, we know it in many ways doesn’t. But like a computer, the essence of what the brain is doing is information processing, agreed? It’s the whole reason we have nervous systems to begin with, to process information from and about the environment, and to act in the world.

    Now with the right hardware and/or software, any program or computation that can be done on one type of information processor (brain) can be replicated or modeled in another (modern computers). Nothing the brain does that can be considered interesting or intelligent seems to be anything other than information processing. That’s why Dr. Novella and most every scientist studying such topics are comfortable concluding we’ll be able to model the brain or human-level intelligence to any desired degree of fidelity once we figure out how this type of information processing is accomplished.

    “If the brain worked entirely like a computer, I think 65 years would be long enough to get at least a tiny spec of AI working.”

    There are undeniably lots of “tiny specks” of AI that are working; you just seem to have adopted a particular definition of AI in your head (try googling Weak versus Strong AI). Us lowly naked primates, with only a few hundred years of modern science, have invented machines that can identify faces or objects, translate written language, transcribe language from voice to text, detect fraud, land airplanes, drive cars, perform logistics, play chess and Jeopardy. Underlying these advances is, in most cases, 50 to 100 years of intensive research & development. This is undeniably intelligent information processing, in most cases performing brain-like computations, and on this point you are just plain mistaken.

  29. hardnose on 01 May 2014 at 9:19 am

    “You do realize you are arguing with a practicing neurologist right? Dr. Novella knows more about the brain I imagine than you do, and understands the complexities involved in duplicating it.”

    Novella doesn’t know very much about the brain; no one does. I studied both computer science and neuroscience, so I know enough to know how little we actually understand.

    Novella thinks he knows an awful lot about the brain, and many other things. That is not the same as actually knowing, and understanding.

    Even if you have memorized all the parts of the brain and can recite the Latin names, and even if you have read and memorized every paper on neurology ever written, your understanding of the brain will still be minimal.

  30. hardnose on 01 May 2014 at 9:21 am

    “We’ve made a ton of headway in computing power over a meager 4 or so decades.”

    Yes, ICs got much smaller and cheaper. That is the reason computing power increased, not any improved understanding. Computers today use exactly the same kind of logic as they always did.

  31. hardnose on 01 May 2014 at 9:32 am

    “We’ve made a ton of headway in computing power over a meager 4 or so decades. ”

    Computers can be programmed to do certain kinds of things; we all know that already. EVERYTHING they do is foreseen and planned by the programmers. Robots can do specific and routine tasks, but nothing requiring actual intelligence.

    If you have ever tried to communicate with one of the phone answering systems, then you know how utterly idiotic and brainless computers really are. Yet all the big companies are using them now, because a salesperson must have somehow convinced them this is real AI. If you can get through one of those “conversations” without screaming “F–K” into the phone, you are a saint.

    Turing thought the real test of AI would be conversation, and artificial conversation has not progressed AT ALL.

    And even the sensory and motor stuff is severely limited, however much the NY Times and others like to rave about it.

  32. hardnose on 01 May 2014 at 9:55 am

    “Nothing the brain is doing that can be considered interesting or intelligent, seems to be anything other than information processing. ”

    A computer takes in data, processes it, and outputs something. The brain does that, but more. I mentioned Penrose in my first comment — he explains that the brain cannot be a mere computer, since no computational system can understand itself (Godel’s Theorem).

    A computer requires a programmer. That programmer could itself be a computer. Which also needs a programmer, which also could be a computer. But somewhere somehow, something has to be not a computer. It’s kind of like the Matrix movies, where systems are within systems within systems, etc.

    I am NOT saying I know how this works; obviously no one does. But you could at least consider some of the objections to standard AI research.

    Consider that AI research has consistently failed the Turing test, because the programmers cannot possibly foresee what humans will say. Except maybe in the most stupid and banal conversations. But in my experience with phone answering systems, they can’t even do that.

    And btw, we do have machines that do things other than process information. Your cell phone receives data before processing it, for example. There is NOTHING at all in physics or biology that says the brain can’t include devices that receive data that does not enter via eyes, ears, etc.

  33. Bruce on 01 May 2014 at 9:55 am

    Dude,

    You are equating an Interactive Voice Response system with AI…

    “Novella doesn’t know very much about the brain; no one does.”

    Ah, the old Hardnose favourite “We don’t know everything therefore we know nothing” gambit.

    So you are saying something you admit to not knowing a lot about (because no one does) is not going to be able to do what we think it might be able to do because these other things (which you seem to have little knowledge of either) are not passing one test that may or may not indicate something you seem to lack?

    Good luck with Mr Vaguey-Vaguerson guys and gals, I am out.

  34. The Other John Mc on 01 May 2014 at 10:34 am

    Penrose, while a respected physicist and author, is largely a crank to the AI community, simply because when he speaks about it, he doesn’t make sense.

    “[Penrose] explains that the brain cannot be a mere computer, since no computational system can understand itself (Godel’s Theorem).”

    No one claimed the brain could “understand itself” whatever that might mean. Godel’s Theorem involves, as far as I know, logical completeness of axiomatic systems, not an information-processing system being able to “understand itself”. Penrose is literally just having mental diarrhea with these concepts.

    “A computer requires a programmer.”

    Brains aren’t the type of information processor that relies on familiar formal serial programming techniques. You seem to not be getting this.

    In any case, the information processing done by brains WAS formed by outside forces, courtesy of a few billion years of evolutionary selection.

    “But you could at least consider some of the objections to standard AI research.”

    The AI community is *more* than aware of such objections; they have considered them and rejected them as philosophical meanderings and they have proceeded on with their work building all the fancy trinkets in our pockets, homes, and lives that you dismiss so casually.

    “Consider that AI research has consistently failed the Turing test, because the programmers cannot possibly foresee what humans will say”.

    You are assuming the Turing Test is the one and only legitimate goalpost for AI? The Turing Test hasn’t been passed, but not because programmers can’t foresee what humans will say: passing it requires mimicking the function of basically an entire brain, which of course we are nowhere near (and no one said we were), yet you claim to know we are all going down the wrong path. Please God don’t read Penrose or Searle for your “understanding” of AI.

  35. etatro on 01 May 2014 at 11:20 am

    In other news, researchers at UC San Diego claim that the brain can communicate with dead people. I bet no computer will ever be able to do that. Bam! http://www.ncbi.nlm.nih.gov/pubmed/24312063

  36. hardnose on 01 May 2014 at 11:55 am

    ” Please God don’t read Penrose or Searle for your “understanding” of AI.”

    I read Dennett and most of the pro-AI guys as well. I don’t agree with Searle, but I think Penrose makes a lot of sense. A computational system cannot have any perspective on itself. It must be a component in a larger system. This is a mathematical fact.

  37. The Other John Mc on 01 May 2014 at 12:08 pm

    I can take a picture of my computer, then put that picture on my computer: a computational system with a perspective on itself = mind-blown

  38. The Other John Mc on 01 May 2014 at 12:14 pm

    Or how about this one: pull up your desktop computer, then click on the calculator and add up some numbers. You have just used one computational system (your desktop computer) to virtually model the information processing of another computational system (a handheld calculator). Philosophical problem solved.

  39. Steven Novella on 01 May 2014 at 12:51 pm

    The programmer of the gaps. Nice.

    Can you build a virtual computer inside a computer? I bet you can, although it would be slower than the computer itself.
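
    (As a toy sketch of exactly that: a few lines of Python interpreting a made-up instruction set, a computer running inside a computer, slower than the host but fully functional. The instruction set is invented purely for illustration.)

        # A tiny virtual machine hosted on another computer: set/add/dec
        # registers, jump backward/forward if a register is non-zero.
        def run_vm(program):
            regs = {"a": 0, "b": 0, "c": 0}
            pc = 0
            while pc < len(program):
                op, *args = program[pc]
                if op == "set":
                    regs[args[0]] = args[1]
                elif op == "add":
                    regs[args[0]] += regs[args[1]]
                elif op == "dec":
                    regs[args[0]] -= 1
                elif op == "jnz" and regs[args[0]] != 0:
                    pc += args[1]
                    continue
                pc += 1
            return regs

        # Multiply 6 * 7 by repeated addition on the virtual machine.
        prog = [("set", "a", 0), ("set", "b", 7), ("set", "c", 6),
                ("add", "a", "c"), ("dec", "b"), ("jnz", "b", -2)]
        print(run_vm(prog)["a"])  # -> 42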

    Also, let me get this straight, the brain can’t be a computer because no computer can understand itself, but we don’t understand the brain according to you, so – there’s no problem.

    In any case, I reject all these premises.

    To say our understanding of the brain is “minimal” is absurd, and subjective. We have an incredible amount of knowledge about the brain. There is also an incredible amount we have yet to learn. But science progresses (once a field is well-established) not by invalidating older knowledge, but by deepening it. Our knowledge of the brain is getting more nuanced and complex, but is not invalidating what we already know.

    It’s like saying we have “minimal” knowledge of genetics because of all the stuff we’re just figuring out. It misrepresents reality in a meaningful way.

  40. hardnose on 01 May 2014 at 1:03 pm

    “To say our understanding of the brain is “minimal” is absurd, and subjective. We have an incredible amount of knowledge about the brain. There is also an incredible amount we have yet to learn. But science progresses (once a field is well-established) not by invalidating older knowledge, but by deepening it. Our knowledge of the brain is getting more nuanced and complex, but is not invalidating what we already know. ”

    Some people would admit that the more we learn about the brain (and nature in general), the less we understand.

    You say the opposite.

  41. hardnose on 01 May 2014 at 1:05 pm

    “Can you build a virtual computer inside a computer? I bet you can, although it would be slower than the computer itself. ”

    As I already carefully explained — every computer needs a programmer, and that programmer can itself be a computer. But ultimately you will need something other than a computer, since every computer needs a programmer.

  42. steve12 on 01 May 2014 at 1:07 pm

    Let me preface my comment by saying:
    1. I don’t buy any of the philosophical objections to AI, for reasons already talked about. Godel doesn’t actually apply, and Penrose is just spouting sayings.
    2. A machine that can really emulate complex functions of a brain is essentially an engineering problem – there is no reason it can’t happen eventually, once we understand the brain better and have the technology to instantiate said understanding. What we don’t have much of a clue about is how the algorithms the brain uses result in efficient cognition, to make the most oversimplified understatement of all time. But there is no magic.

    That said….

    AI has always annoyed me with its estimations of when we’ll get there (“there” being roughly Turing – inexact, but you get the point). Maybe this is part of what hardnose is getting at. This is a little unfair, because many in the field do not exaggerate progress, and everyone knows Ray Kurzweil is nuts (though an incredible genius at the same time). I was reading an AI article from the ’70s saying that by the new millennium we would have passed the Turing test. It seems that we’re always 30 years away. I’ve talked to many AI people from the CS end of things at Society for Neuroscience, and I get supremely annoyed at their cavalier attitude re: reverse engineering the brain – we’re almost there, according to them. BS.

    I see us as very, very, very far away, and I see no major discovery that puts us in a substantially different position today than we were 10 years ago. But, if we don’t kill ourselves off as a species, I do think we’ll get there eventually. I think we’re some fundamental discoveries away, and the timelines on those kind of problems are notoriously hard to predict.

    Just ask physics, which after a flurry of discovery ~100 years ago has not been able to reconcile its two most successful theories, during a time when the tools it has had to work with have improved at a geometric rate.

  43. steve12 on 01 May 2014 at 1:11 pm

    “As I already carefully explained — every computer needs a programmer, and that programmer can itself be a computer. But ultimately you will need something other than a computer, since every computer needs a programmer.”

    These are just colloquial sayings coupled with a sufficiently vague definition of “computer” and too much reasoning by analogy.

    Evolutionarily coded instructions and environmental feedback are the “programmers”, how’s that? Ya gimme the wishy, I’ll hit ya with the washy.

  44. Steven Novella on 01 May 2014 at 1:36 pm

    Steve – I mostly agree with you. I refrain from predicting when we will achieve human-level AI, you will notice. Perhaps we can say when computers are likely to be as powerful as a human brain, but that is different than fully self-aware AI. I wasn’t even really writing about that above, although it was interpreted that way.

    In any case, I similarly refrain from saying it will be very far off. We tend to overestimate short term advance, but also underestimate long term advance.

    Regarding reverse engineering the brain, of course we are a long way from fully doing this, but we are making exciting progress. What I am interested in is the interplay between AI and neuroscience – how will they inform each other? We are already modeling parts of brains, like cortical columns. It may all come together quicker than you think if this kind of research continues to progress.

    But you are also correct in that we may hit a wall we don’t anticipate and be stalled for decades. That’s why you can’t predict timing.

  45. hardnose on 01 May 2014 at 1:52 pm

    My estimate is somewhere between 5 years and 5,000 years, give or take a little.

  46. The Other John Mc on 01 May 2014 at 3:10 pm

    Good one, hardnose. Classic.

    Just one last comment for anyone interested in this topic: Sebastian Seung’s book “Connectome” is an excellent and easy-to-read account of the state of the art in neural imaging and mapping, and of how this info can be translated into virtual, computational models. It provides a sobering and fair assessment without turning sensationalistic, gives a flavor of how hard the engineering problems are (regarding imaging, info processing, info storage/retrieval, etc.), and also clarifies that many of the goals of Strong AI are certainly possible to reach, only a matter of time and hard work.

    http://connectomethebook.com/?page_id=40
    He also did a TED talk on this topic: http://www.ted.com/talks/sebastian_seung.html

  47. grabula on 01 May 2014 at 11:07 pm

    @Hardnose

    “I studied both computer science and neuroscience, so I know enough to know how little we actually understand.”

    I’m calling bullshit on this one. I’ve seen you claim to have studied “stuff” in other blog entries as well; it seems to be a common tactic for you, implying YOU know what you’re talking about while the actual, proven neuroscientist writing this blog does not. Which is patently ridiculous. Cruising the internet for interesting tidbits on brains and computers does not study make. On top of making bullshit claims, you literally turn around and say you don’t know enough… Otherwise, provide some credentials to prove any of the claims you make about your background. Dr. Novella’s are easy to find.

    “Turing thought the real test of AI would be conversation, and artificial conversation has not progressed AT ALL.”

    Really, no progress at all? So much for studying computers… So you’re hung up strictly on Turing and Godel? No wonder you think we’re not making any progress.

    “A computer requires a programmer. That programmer could itself be a computer. Which also needs a programmer, which also could be a computer. But somewhere somehow, something has to be not a computer.”

    So by your reasoning we can NEVER have AI or produce computers that mimic the mind because they require programmers? You need to get off Godel’s balls and spend some time studying his theorems I guess.

  48. taerog on 05 May 2014 at 6:02 pm

    Aside from the current discussion, I wanted to make some comments on the numbers and on the miscomprehension of some factors in this article.
    First: yes, computers operate on binary math, but this is because it is easy, not because they “have” to.
    For instance, a memory cell stores a bit as a voltage (going very simplistic here).
    Below a certain voltage it is a 0; above it, a 1. This voltage demarcation line is “arbitrary” and set by many factors: overall operating voltage, materials, expected noise, etc. Older systems used higher voltages and thus had larger deltas between low and high (i.e., it was easy to read and differentiate between bits). Now the voltages are much smaller and the margin is much, much smaller too (it is easy for a bit to flip if the stored voltage wanders).
    Note what I said there . . . “voltage wanders.” The voltage IS analog and can be effectively anything between two points, and the demarcation line to “call it” 1 (high) or 0 (low) is picked for whatever works best to get binary. There is NO reason why you could not pick 2 or 3 or 10 different voltages that correlate to some state/value.
    It is just much easier to stick with binary high or low than with high, medium-high, medium, medium-low, low (for example): easier for error correction, easier to keep the values separate, easier for the logic.
    (Again, this is simplified.) But IC chips and most electronics do already work rather analog; the digital is a kind of overlay, something I do not think most people really understand.
    The benefit of using a single cell to store multiple states is a big one, though, and is being looked into (quantum computing is the be-all-end-all of this, BTW).
    And yes, if one cell could reliably hold enough distinct states, you could record a full byte rather than a bit in the same place, with all of the savings that would entail, i.e., 1/8 of the cells needed to store the same info, but 8x the data lost if a cell has a problem.
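
    As a sketch of that trade-off (numbers purely illustrative): a cell that distinguishes L voltage levels stores log2(L) bits, so 8 levels give 3 bits per cell and 256 levels a full byte, while the margin between adjacent levels shrinks accordingly.

        # Bits per cell grow as log2(levels); noise margin shrinks as 1/levels.
        import math

        usable_swing_v = 1.0                  # assumed usable voltage range
        for levels in (2, 4, 8, 256):
            bits = math.log2(levels)          # bits stored per cell
            margin_mv = usable_swing_v / levels * 1000
            print(f"{levels:>3} levels -> {bits:3.0f} bits/cell, "
                  f"~{margin_mv:5.1f} mV between levels")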
    Also, only recently has multithreading really taken off. Until rather recently, all computers did one process at a time, really fast. Now they can do 4 or 8 or 16 simultaneously, though most software still does not take full advantage of this (I work on a system that does 1024): all very fast and, if the software supports it, all simultaneous. This is growing VERY fast also.
    Now, the brain is not “fast” in the sense of doing any one process fast (as always, you can correct me on this, since brains are not my specialty), but in being able to parallel process very efficiently.
    Computers are now able to do the same thing at the hardware level (the software is still lagging here a bit). Add multi-state storage and they could grow in computing power many-fold.
    But again, problems arise . . . voltage can leak, a cosmic ray can hit a cell, the voltage range of a part can drift over time and use.
    Management, error correction, and most of all software become the biggest obstacles. Check out the grid computing concept. The hardware is not really the big deal here.

    So, in the end, this is very interesting but nothing really new. These chips are trying to do more with less via multi-state and parallel processing; cool, I like it. The analog part IMO is not wrong but rather over-emphasized. The big numbers are . . . big numbers that make everything sound impressive, and they kind of do more harm than good.
    Also, I would like Dr. Novella’s opinion on this if possible.
    It is my understanding that neurons work by changing the electrical potential (sodium ions, etc.) within the cell and triggering neurotransmitters. They are not transmitting electrical charges. So saying that the brain “runs” on electricity is . . . not exactly correct. The cells “run” on ATP, which lets them live and create an internal electrical differential that drives a complex chemical transmission to a nearby nerve. Right? So it would seem most of these computer analogies and “power” comparisons really do not map at all here when we are talking about a chemical wetware brain.

  49. Richard on 09 May 2014 at 6:41 am

    @taerog: “So, in the end this is very interesting but nothing really new”

    There’s an awful lot more communication going on between neurons than in standard parallel computing, where in general you aim to minimise communication overheads; this is really crucial to how the brain computes compared with parallel computers. Additionally, with the new neuromorphic hardware, if you can have circuit elements that directly implement the kinds of current flows found in neurons, then you don’t have to simulate them digitally (with binary or whatever other scheme you like), so you remove computational overhead.

    “It is my understanding that neurons work by changing the electrical potential (sodium ions, etc.) within the cell and triggering neurotransmitters. They are not transmitting electrical charges. So saying that the brain “runs” on electricity is . . . not exactly correct. The cells “run” on ATP, which lets them live and create an internal electrical differential that drives a complex chemical transmission to a nearby nerve. Right? So it would seem most of these computer analogies and “power” comparisons really do not map at all here when we are talking about a chemical wetware brain.”

    Neurotransmitters are charged ions, so when they move you have current flow (most models of neural activity describe only the neurons’ electrical dynamics, modelling the neurons as electric circuits). So yeah it is true that neurons “run” on ATP as they need energy, but the brain computes using electrical signals, as does a computer. The brain can do things that would be computationally useful if we could implement them, and it does these computations on very low power.

  50. robcha on 20 May 2014 at 11:33 am

    hardnose:

    “# hardnose on 30 Apr 2014 at 9:15 am
    Man-made computers follow predetermined steps, and that is ALL they do. Yes, they can appear to make random choices, which might give an illusion of unpredictability. But they must be programmed to, at certain points, make selections based on a pseudo-random algorithm.
    In reality there is nothing at all unpredictable about any computer.”

    This is not true; hardware random number generators have existed for many years for those who need them. There are even hardware random number generators included in the latest Intel chips (Intel Core i7 3770K / i5 3570K / i5 3550, Ivy Bridge).

    http://en.wikipedia.org/wiki/Hardware_random_number_generator
    http://en.wikipedia.org/wiki/Ivy_Bridge_%28microarchitecture%29

    “Now of course you will say that humans, and all living things, merely follow predetermined programs. Well that could lead into one of those endless useless philosophical debates.”

    No, the debate is only long if you are religious.
    You should check out Sam Harris’s book “Free Will”.
    Or even easier, check this youtube clip:
    https://www.youtube.com/watch?v=pCofmZlC72g

    “However, I believe we do much more than follow predetermined algorithms. There is always a leading edge that cannot be explained as mere computation. Every computer must have programmers, and there is something in us that programs our brains.”

    There are plenty of self-learning and genetic algorithms, where the software learns by itself.
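
    As a minimal sketch of one such genetic algorithm (the target string, mutation rate, and population sizes are arbitrary toy choices): the program is never told the answer directly, only given a fitness score, yet it converges on the target by selection and mutation.

        # Evolve a bit string toward an all-ones target using only a
        # fitness score: keep the fittest, breed mutated copies, repeat.
        import random

        TARGET = [1] * 20

        def fitness(genome):
            return sum(g == t for g, t in zip(genome, TARGET))

        def mutate(genome, rate=0.05):
            return [1 - g if random.random() < rate else g for g in genome]

        population = [[random.randint(0, 1) for _ in range(20)]
                      for _ in range(30)]
        for gen in range(200):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == len(TARGET):
                break
            parents = population[:10]              # survival of the fittest
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(20)]
        print(f"generation {gen}: best fitness {max(map(fitness, population))}/20")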

