Oct 17 2017

What Is Artificial Intelligence

A recent article by Peter Yordanov claims that Artificial Intelligence (AI) is nothing but misleading clickbait. This is a provocative way to state it, but he has a point, although I don’t think he expressed it well.

Yordanov spends most of the article describing his understanding of human intelligence, partly by walking through the evolution of the central nervous system. His basic conclusion, if I am reading it correctly, is that what we have today and call AI is nothing like biological intelligence.

This is certainly true, but it seems like he takes a long time to make what is essentially a semantic argument. The core problem is that the word “intelligence” means many things. Lack of a consistent operational definition plagues the use of the term in pretty much every context, and certainly in computer AI.

What we have now, and what is generally referred to as AI, are computer algorithms that display functions that resemble intelligence or duplicate certain components of it. Computers are good at crunching numbers, running algorithms, recognizing patterns, and searching and matching data. Newer algorithms are also capable of learning – of changing their behavior based on data input.
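To make the narrow sense of “learning” concrete, here is a minimal sketch (a toy illustration only, not any particular production system) of an algorithm whose behavior changes purely in response to the data it is fed: a simple perceptron that adjusts its internal weights whenever it makes a mistake.

```python
# Toy example of machine "learning": behavior changes only in response to data.
import random

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction                     # 0 if correct, +/-1 if wrong
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error                             # weights shift only when the data demand it
    return weights, bias

# Learns a simple AND-like rule purely from labeled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
print(train_perceptron(data))
```

Nothing in that loop understands anything; it is number crunching and pattern matching, which is exactly the point.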

Doing some combination of these things with a powerful-enough computer can enable AI systems to beat grand masters at chess or go, to compete with human champions at wide-ranging trivia games, and even to model human behavior and conversation. The latter are not yet able to consistently fool a human (the Turing test), but they are getting close and we will likely be there soon.

These are the things we call AI today. Yordanov is essentially saying that this is all well and good, but it is not the “intelligence” we mean when we refer to human intelligence. This is, of course, true. Computer AI is not self-aware, not truly thinking, and has no understanding. It is duplicating the effects of these aspects of human intelligence – with great sophistication and some brute computing force.

It would be nice if we had a generally accepted term for what we currently call AI to distinguish it from what most people think of as AI – meaning self-awareness. “Machine learning” is fine but doesn’t cover the whole spectrum. There are specific technical terms for the various components, but a new umbrella term for everything short of self-awareness would be optimal.

The deeper question is – will current AI extrapolate to what is sometimes called general AI, which includes self-awareness? Yordanov writes that he believes the answer to that question is no, and I agree.

I do not think we will get to general AI with more and more sophisticated algorithms running on more and more powerful computers. We will make systems that are better at duplicating the effects of general AI, but will not be truly self-aware. I do think something else is required.

That something else is not biology, and there is no reason it cannot be created artificially (whether that material will be silicon or something else doesn’t really matter). What is needed is a functionality that current computer chips do not have.

We are not quite sure yet what that functionality is, because we have not yet reverse engineered the mammalian brain. But we have some ideas. For starters, the brain is neither hardware nor software; it is both simultaneously – sometimes called “wetware.” Information is not stored in neurons; the neurons and their connections are the information. Further, processing and receiving information transforms those neurons, resulting in memory and learning.

That much we know, and computer chips that function more like neurons are already being developed. I do suspect that the path to true AI goes through neuronal chips rather than classic silicon chips.

But that also is not enough. Yordanov touches on this, but I want to emphasize it – the brain is wired to constantly talk to itself in an endless loop. Thoughts are information that feeds into the loop of processing, which also accepts external information through the senses and the results of internal networks constantly reporting to each other, and then uses that information to generate more results.

This endless loop of communicating and processing information is our stream of consciousness. What we are currently researching but have yet to unravel are the exact networks involved, how they interact, and how that interaction manifests as human-level consciousness. We have pieces, but not enough to put it all together.
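As a very loose illustration of that kind of loop (a toy sketch, nothing like an actual brain model), imagine a process whose own output is fed back in as input on every pass, alongside a stream of external “sensory” data:

```python
# A deliberately crude sketch of an endless processing loop: internal results
# are fed back in as new input alongside external (sensory) information.
import random

def external_input():
    # Stand-in for the senses: a stream of numbers from the outside world.
    return random.random()

def process(internal_state, sensed):
    # Stand-in for internal networks reporting to each other: combine the
    # previous internal result with the newly sensed value.
    return 0.9 * internal_state + 0.1 * sensed

state = 0.0
for step in range(10):          # a real brain's loop never terminates
    state = process(state, external_input())
    print(f"step {step}: internal state = {state:.3f}")
```

The point of the sketch is only the architecture: results feeding back into the same processing that generated them.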

This, I think, is where AI research and neuroscience will dovetail. We can use what we learn from neuroscience to design AI, which can then become an experimental model by which we can further advance our knowledge of intelligence and neuroscience.

Eventually we should be able to make a human brain in silicon. When we do there is every reason to think that that silicon brain will be self-aware – true general AI.

What is fascinating to think about is how it will be different from a human brain. We can experiment with turning different circuits up, down, on, or off and seeing how that affects the resulting AI. This, in turn, could provide a model for every mental illness.

I also suspect that this will force us to reconsider what we think we know about the basic components of neurological function (beyond the obvious, like motor movements and recording visual information). What is the neurological substrate of empathy, hostility, creativity, reality checking, and the feeling that we occupy our bodies?

We may never be able to fully disentangle all the circuits and their interactions – it is so complex that the number of possible interactions is too great, making it like trying to predict the weather. We can only take it so far before chaos reigns.

Another lesson from all this, which I have discussed previously, is that what we can accomplish with non-self-aware AI is greater than we previously thought. We assumed that general AI would be necessary to beat a grand master at chess, but that assumption was wrong. Limited algorithmic AI can do amazing and sophisticated things, like driving a car, without being on the path to general AI.

This is why I predict that a future of self-aware robot servants in every home will not happen. It won’t have to. Our robotic and computer infrastructure will be able to do everything we need it to do with limited AI. If we develop general self-aware AI it will be for the research, to better understand human and artificial intelligence, and just to see if we can. General AI may then find some useful function, but that will not drive its development.

That function may also be mostly to enhance humans.

It’s all hard to predict, but fun and interesting to think about.

125 Responses to “What Is Artificial Intelligence”

  1. Scott G on 17 Oct 2017 at 8:40 am

    I think you nailed it early in the article when you indicated it comes down to a semantic argument. We should continue to use Artificial Intelligence in the way it functions today – machine expert systems and the like. For what the average person thinks about as “AI,” we would probably be better off saying Artificial Sentience or (perhaps better) Artificial Sapience, where the machine is truly self-aware, has some form of feelings (not necessarily directly mappable to human feelings/emotions, especially in an environment devoid of hormonal/chemical influence). Maybe if we start using the terminology differently, more accurately, folks can at least be having the same argument as to whether something may or may not happen. In this mode, AI will happen – you can say it probably already has happened – but AS, as Sentience or Sapience, is much less likely, either because we won’t be trying to do it (don’t need it) or because we can’t make it work.

  2. MosBen on 17 Oct 2017 at 10:32 am

    Speaking of semantics, for true general intelligence I think “Manufactured” or “Machine” are better than “Artificial”. If it truly is intelligent in the same way that a human brain is intelligent, then there’s nothing artificial about it.

  3. SteveA on 17 Oct 2017 at 11:27 am

    Though ‘artificial’ does essentially mean ‘manufactured’. But I get your point.

    I also like the ‘sapience’ suggestion from Scott G.

  4. michaelegnor on 17 Oct 2017 at 11:43 am

    Steven said:

    [“The deeper question is – will current AI extrapolate to what is sometimes called general AI which includes self-awareness? Yordanov writes that he believes the answer to that question is no, and I agree. I do not think we will get to general AI with more and more sophisticated algorithms running on more and more powerful computers. We will make systems that are better at duplicating the effects of general AI, but will not be truly self-aware. I do think something else is required. That something else is not biology, and there is no reason it cannot be created artificially (whether that material will be silicon or something else doesn’t really matter). What is needed is a functionality that current computer chips do not have. We are not quite sure yet what that functionality is…”]

    That ‘functionality’ is a soul. Only living things have it. It cannot be made by man, it can only be created by God.

    Materialism runs up against reality, again.

  5. Beamup on 17 Oct 2017 at 12:01 pm

    One other interesting aspect of this is the question of how capable general AI will be once it’s developed. Many people who talk about this are, I think, doing so thinking about science fiction based on the premise that once a computer gets powerful enough it simply “wakes up” on its own; this premise naturally leads to the conclusion that such an AI will be of superhuman capacity. And hence to the risk that it’ll be dangerous.

    But if it’s something that ends up having to be specifically engineered, then it seems likely that its overall capacity will be part of the engineering and it’ll only be superhumanly intelligent if we deliberately make it so (and, I’m going to bet, invest a large pile of $$$ putting together the necessary resources to make it possible). Research into intelligence won’t require that – a subhuman level of capacity ought to be sufficient to learn a great deal.

    IMO this is another reason not to consider AI an existential threat.

  6. Daniel Hawkins on 17 Oct 2017 at 12:01 pm

    I’ve said it before, but I think you and the SGU would benefit from having an actual expert in machine learning/AI to discuss this. To be frank, it’s clear that you’re bringing a layman’s understanding to the AI aspect of the discussion.

    The deeper question is – will current AI extrapolate to what is sometimes called general AI which includes self-awareness? Yordanov writes that he believes the answer to that question is no, and I agree.

    I do not think we will get to general AI with more and more sophisticated algorithms running on more and more powerful computers. We will make systems that are better at duplicating the effects of general AI, but will not be truly self-aware. I do think something else is required.

    What is the evidence behind such an assertion? As you point out, we don’t really have a good understanding of how our brain creates consciousness, although we do have hints. But if we’re at such an early stage of understanding, both on the side of AI research and cognitive science, how can you justify the statement that we “will not get to general AI with … more sophisticated algorithms?” That statement only seems justified if you restrict “more sophisticated algorithms” to minor performance tweaks to existing learning algorithms. But that doesn’t reflect the actual progress we’ve made in machine learning, nor is it reasonable to suspect that future progress would be so limited.

    Take the major point of distinction you highlighted between how our brain operates and how computers operate:

    the brain is wired to constantly talk to itself in an endless loop. Thoughts are information that feed into the loop of processing, which also accepts external information through the senses, and the results of internal networks constantly reporting to each other, and then using that information to generate more results.

    Why wouldn’t researchers be interested in a system that could process information, direct “attention”, incorporate external stimuli into existing models, and use that information to generate more results? Right now the best machine learning algorithms require 1) highly sanitized and structured data in order to generate meaningful results, 2) hand-tuning of various parameters and 3) choices about the structure of the network—but researchers are working all the time on taking humans out of the picture as much as possible. On each of those 3 aspects, we have made significant progress.
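    For concreteness, here is a rough sketch of those three human touch-points, assuming a generic scikit-learn setup (my own illustration, not taken from any specific paper): (1) the data arrive pre-cleaned and get scaled, (2) the learning rate is hand-tuned, and (3) the network structure is chosen by a person.

    ```python
    # Sketch of where humans currently sit in the loop (assumes scikit-learn).
    from sklearn.datasets import load_iris
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    X, y = load_iris(return_X_y=True)        # (1) a small, already-sanitized dataset

    model = make_pipeline(
        StandardScaler(),                    # (1) human-chosen preprocessing
        MLPClassifier(
            hidden_layer_sizes=(32, 16),     # (3) human-chosen network structure
            learning_rate_init=0.01,         # (2) hand-tuned parameter
            max_iter=500,
        ),
    )
    model.fit(X, y)
    print(model.score(X, y))
    ```

    Progress in the field is largely about automating each of those choices away.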

    Imagine then, what the picture might look like in 20 years? Could we have a neural network take raw video/audio/sensory streams, parse them meaningfully, determine what’s important, and incorporate that into a general model of the world? Could we have a neural network detect and learn relationships between objects in the world? Learn semantic meanings? Use external knowledge (e.g. textbook knowledge of physical laws, language, history, biology) to constrain its model of the world? We can already see hints of these abilities now, and the steps from here to there may recapitulate the same kinds of solutions evolution developed to handle these tasks.

    Or it might look completely different, or not be possible at all. But it is a bit absurd to think that we can state with any confidence that “what is needed is a functionality that current computer chips do not have.”

  7. hardnose on 17 Oct 2017 at 12:16 pm

    “What we are currently researching but have yet to unravel is the exact networks and how they interact, and how that manifests in human-level consciousness. We have pieces, but not enough to put it all together.”

    Translation: We have absolutely no idea.

  8. SFinkster on 17 Oct 2017 at 12:44 pm

    I’m fascinated by those clueless people who are obsessed about making sure we all know how clueless they are.

  9. JimV on 17 Oct 2017 at 1:31 pm

    Turing showed that any computation that can be done (some things are uncomputable) can be done by a 1-bit computer (with sufficient time and memory). This implies to me that anything neurons and synapses can do can be simulated (with sufficient time and memory) by a digital computer using standard computer chips.
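    To illustrate (a toy sketch only, not a serious brain simulation), here is a single leaky integrate-and-fire neuron stepped forward on an ordinary digital computer:

    ```python
    # Toy simulation of one "neuron" on a standard digital computer.
    def simulate_neuron(input_current, dt=0.001, tau=0.02, threshold=1.0, steps=1000):
        """Return the time steps at which the simulated neuron 'spikes'."""
        v = 0.0
        spikes = []
        for t in range(steps):
            v += (-v + input_current) * (dt / tau)   # leaky integration of the input
            if v >= threshold:                        # fire and reset
                spikes.append(t)
                v = 0.0
        return spikes

    print(simulate_neuron(input_current=1.5))         # a steady input gives a regular spike train
    ```

    Scaling that up to tens of billions of interconnected neurons is a capacity and speed problem, not a computability problem.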

    Whether that is the most efficient way to go about it is another question, but it isn’t obvious to me that it isn’t, since integrated circuits have a lot of development behind them.

    One reason we aren’t closer to general machine intelligence is that no super-computer yet built has the capacity and speed to simulate our 70+billion-neuron nanotech brains. (Some may be capable of simulating part of a rat’s brain, the last time I checked.)

    One reason for wanting general machine intelligence is that once you can make one as smart as a typical human, you should be able to add more capacity (if this isn’t too expensive) and make one that is much smarter, perhaps able to answer questions such as, what is the secret to life, the universe and everything?

    (I believe it will answer, “Evolution.”)

    Will human civilization last long enough for this to happen, is another question. These days I doubt it. Some smarter species in a distant galaxy probably will accomplish it, though.

  10. Marshall on 17 Oct 2017 at 1:33 pm

    @michaelegnor at what point does this “soul” you refer to interact with the brain? At some point it has to influence the brain in such a way as to cause the necessary motor neurons to fire, causing behavior. For example, when your “soul” directs your consciousness to spew nonsense, it at some point has to result in your motor neurons firing to direct your fingers to contract in such a manner as to type.

    How does the soul do this, and what evidence do you have that there is a soul in there doing anything? Does it somehow influence the membrane potential of the motor neurons? If so, how? Does the soul have some hidden neurotransmitters somewhere in another dimension and it summons them whenever it wants? What is your current theory as to how the soul interacts with the brain?

  11. Pete A on 17 Oct 2017 at 1:36 pm

    SFinkster,

    Yep, they are the exemplars of artificial intelligence.

  12. fbrossea on 17 Oct 2017 at 3:44 pm

    I’m curious. Is it possible to be sentient without contemplating your mortality and questioning your reason to exist? If not then would there be a moral dilemma in forcing a sentient computer to perform tasks for free?

  13. michaelegnor on 17 Oct 2017 at 4:04 pm

    Marshall:

    [@michaelegnor at what point does this “soul” you refer to interact with the brain? At some point it has to influence the brain in such as way as to cause the necessary motor neurons to fire, causing behavior.]

    You are thinking of a Cartesian soul, which is not what I mean. I’m referring to the soul as understood by Aristotle. The soul is the form of the body–it is the intelligible principle of a human being. It is not a “thing” that directs neurons, etc. It is the direction of neurons itself, the organizing principle of a living thing, which includes anatomy, physiology, psychology, etc. Your view is reductionist, which is a mistake.

    [How does the soul do this, and what evidence do you have that there is a soul in there doing anything? Does it somehow influence the membrane potential of the motor neurons? If so, how?]

    The soul is a metaphysical thing, not an entity in nature (like the pineal gland) that can be excised and put under a microscope.

    [Does the soul have some hidden neurotransmitters somewhere in another dimension and it summons them whenever it wants?]

    You would do well to try to learn some of the actual metaphysical issues involved.

    [What is your current theory as to how the soul interacts with the brain?]

    I’m not a Cartesian, so I don’t believe the soul “interacts” with the brain, any more than I believe that the form of a chair “interacts” with the chair. The form is just what makes the chair a chair. The soul is just what makes a man a man.

    Intelligence is a power of the human soul. Machines cannot have intelligence, any more than a book can “have intelligence”. AI and computers have the representation, storage and manipulation of human intelligence, but no machine can ever have actual intelligence. Intelligence in nature is human, and only human.

    By intelligence, I mean the ability to think abstractly (about universals), to use reason, logic, etc.

    Animals obviously can think, and can be quite clever, but they do not have intelligence understood in that way. They have sensus communis, which is the classical term for the integration of perceptions.

    Again, you would do well to learn some metaphysics, especially the Aristotelian kind.

  14. MosBen on 17 Oct 2017 at 4:09 pm

    Guys, don’t engage with Egnor. He has proven time and again that he is impervious to outside arguments, and even when he is shown to be disastrously wrong and purely trolling, he won’t admit it. He just flees the comment section instead, gathering his same tired diatribes for the next post. He has shown no interest in honestly participating in a conversation. He is not worth anyone’s time.

  15. bachfiend on 17 Oct 2017 at 4:30 pm

    MosBen,

    Agreed. It’s useless interacting with Michael Egnor. His latest ‘contribution’ is just circular reasoning – listing the things that humans have, qualitatively or quantitatively, that other animals don’t have, and declaring it to be due to a God-given soul. Why can humans think? Because they have a soul. How do we know humans have a soul? Because they can think.

    I wonder whether he thinks the 4 other human species which went extinct within the last 50,000 years also had souls.

  16. BillyJoe7 on 17 Oct 2017 at 4:36 pm

    SFinkster,

    “I’m fascinated by those clueless people who are obsessed about making sure we all know how clueless they are”

    😀

    And it’s not the first time he’s said the equivalent of “we don’t know everything so we don’t know anything” (and denied that that’s what he’s saying!), and with the implication that “the $h!+ I pull out of my arse could therefore be true”.

  17. bachfiend on 17 Oct 2017 at 5:39 pm

    “You are thinking of a Cartesian soul, which is not what I mean. I’m referring to soul as understood by Aristotle. The soul is the form of the body – it is the intelligible principle of a human being. It is not a ‘thing’ that directs neurons, etc. it is the direction of neurons itself, the organising principle of a living thing, which includes anatomy, physiology, psychology, etc Your view is reductionist, which is a mistake.”

    I don’t think anyone would actually disagree, besides the quibble as to what the et ceteras are referring to. It’s perfectly reductionist. The ‘soul’ is the arrangement of the neurons within the human brain – anatomy and physiology (and psychology, which is a result of the first two).

    I wonder what the ‘etc’ is.

    I’m amazed that Egnor manages to shoot himself in the foot so often without realising it.

  18. Paul Parnell on 17 Oct 2017 at 5:57 pm

    [I do not think we will get to general AI with more and more sophisticated algorithms running on more and more powerful computers. We will make systems that are better at duplicating the effects of general AI, but will not be truly self-aware. I do think something else is required.]

    But this means breaking the Church-Turing thesis. That has interesting implications for programming, math and even fundamental physics.

    [That much we know and computer chips that function more like neurons are already being developed. I do suspect that the path to true AI goes through neuronal chips, rather than classic silicon chips.]

    The chips are done in hardware to make them faster. The same thing can be done by programs. You pay a speed penalty but the algorithm is the same.

    [But that also is not enough. Yordanov touches on this, but I want to emphasize it – the brain is wired to constantly talk to itself in an endless loop. Thoughts are information that feed into the loop of processing, which also accepts external information through the senses, and the results of internal networks constantly reporting to each other, and then using that information to generate more results.]

    Computers can and do do this. Even your computer is having a constant internal dialog with itself in order to function more efficiently. The Go program constantly played internal games with itself in order to find new strategies and paths to victory. Even the min/max algorithms of the old chess programs did this. There simply isn’t anything new here.
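    To illustrate what I mean by a program playing against itself internally, here is a bare-bones sketch: plain minimax on the toy game of Nim (take 1–3 stones, taking the last stone wins). It is a generic illustration, not the actual algorithm of any Go or chess engine.

    ```python
    # Minimax on Nim: the program simulates both sides of the game internally.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best_outcome(stones):
        """+1 if the player to move can force a win, -1 otherwise."""
        if stones == 0:
            return -1            # the previous player took the last stone and won
        # Try each of our moves, then assume the opponent replies optimally.
        return max(-best_outcome(stones - take) for take in (1, 2, 3) if take <= stones)

    def best_move(stones):
        return max((take for take in (1, 2, 3) if take <= stones),
                   key=lambda take: -best_outcome(stones - take))

    print(best_outcome(10), best_move(10))   # 10 stones: a forced win; best move is to take 2
    ```

    The “internal dialog” is just the recursion exploring both sides of every line of play.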

    [Eventually we should be able to make a human brain in silicon. When we do there is every reason to think that that silicon brain will be self-aware – true general AI.]

    Any such silicon brain will just be a computer program that can in principle be implemented on any universal Turing machine. And how do you tell if it is conscious? How do I know that you are conscious? I presume that you are but that is not a measurement. I can follow the causal chain of your neurons as deep as I wish but self-awareness is never a useful or necessary part of the analysis.

    There is a conundrum here that many people seem unable to grasp. You are failing my Turing test.

  19. RickK on 17 Oct 2017 at 6:12 pm

    Marshall asked: [How does the soul do this, and what evidence do you have that there is a soul in there doing anything? Does it somehow influence the membrane potential of the motor neurons? If so, how?]

    Egnor responded: “The soul is a metaphysical thing, not an entity in nature (like the pineal gland) that can be excised and put under a microscope.”

    Egnor didn’t answer, he dodged. Marshall’s question was perfectly reasonable – consciousness is unquestionably realized in the physical brain. So if consciousness comes from the metaphysical soul, how does it manifest in the brain? Or, in other words:

    1) if the soul is the arrangement of physical matter and energy that make up the person, then why can’t it be replicated at some future state of technology (e.g.Star Trek transporter)? Why do we think consciousness can’t be created if it’s all just about the proper arrangement of physical components?

    2) if the soul is separate from the matter and energy of the person, how does it interact with the physical to facilitate consciousness? (Marshall’s question)

    I know – a pointless discussion. Egnor’s answer is: “it’s the way I say it is because God. And it doesn’t have to make sense because Aristotle.”

  20. hardnose on 17 Oct 2017 at 7:04 pm

    “Animals obviously can think, and can be quite clever, but they do not have intelligence understood in that way.”

    And we know this because in ancient times people wanted to feel superior to animals. And whatever people want to believe has to be true.

  21. hardnose on 17 Oct 2017 at 7:07 pm

    AI is a materialist/progressive fantasy. But you don’t mind believing in fantasies, as long as they agree with materialism.

    I think the “AI is just around the corner” announcements should stop until you have a little evidence. It’s been going on for almost 70 years and it’s getting stupid.

  22. chikoppi on 17 Oct 2017 at 7:33 pm

    [hardnose] AI is a materialist/progressive fantasy. But you don’t mind believing in fantasies, as long as they agree with materialism.

    Why is AI a fantasy? Fusion was once thought impossible, as was sequencing of the genome and thousands of other advancements.

    Also, a “progressive” fantasy? As in an aspiration of people who identify with progressive politics? AI has nothing to do with progressivism. In fact, many of those who decry the viability or dangers of AI are otherwise aptly described as “progressives.”

    Argument by slurs is silly. Especially when one doesn’t understand the terms. As a nudnik/Luddite yourself you should recognize as much. (See what I mean?)

  23. bachfiend on 17 Oct 2017 at 7:37 pm

    Hardnose,

    ‘I think the ‘AI is just around the corner’ announcements should stop until you have a little evidence. It’s been going on for almost 70 years and it’s getting stupid.’

    What’s stupid is that no one thinks that ‘AI is just around the corner.’ Another strawman argument from an expert in producing strawman arguments.

  24. Willy on 17 Oct 2017 at 8:23 pm

    Maybe Dr. Egnor’s comments would be taken a bit more seriously if there was such a thing as a consensus of “metaphysicists”. Meanwhile, he is in a minority of both “metaphysicists” and philosophers. He’s choosing from a salad bar and only selecting the ideas that fit his world view.

    Dr. Egnor, do you still think Trump is qualified to be POTUS? Dogcatcher?

    Since medicine is your bag, I give you two “thoughts” from Trump:

    On May 11, 2017, Trump said: “But in a short period of time I understood everything there was to know about health care. And we did the right negotiating, and actually it’s a very interesting subject,”

    In his July 19 NYT interview, Trump said: “So pre-existing conditions are a tough deal. Because you are basically saying from the moment the insurance, you’re 21 years old, you start working and you’re paying $12 a year for insurance, and by the time you’re 70, you get a nice plan. Here’s something where you walk up and say, “I want my insurance.” It’s a very tough deal, but it is something that we’re doing a good job of.”

    Can you put lipstick on that pig? What do you think of a POTUS who brags that he “understands everything”, yet apparently can’t tell life insurance from health care?

  25. MosBen on 17 Oct 2017 at 8:41 pm

    Please don’t engage Egnor at all, but if you must, please PLEASE don’t engage him about Trump and/or politics in general. Down that way lies a bunch of ugly racist arguments.

  26. claude191 on 17 Oct 2017 at 8:46 pm

    I’m not qualified one iota to get into this in depth. That said, I did find the article seemed to treat “self awareness” as something an entity has or doesn’t have. Which I don’t think is correct.

    A human brain develops over time. Is a 1 week embryo self-aware? Is a 1 day old baby self-aware? Is my 16 yo daughter self aware? I can only answer the last one – a big NO!

    Anyway, I suspect that self-awareness evolves with the brain in terms of its potential (no. of neurons?) and its experience (no. of connections?). I imagine “general AI” is going to need both of these too.

  27. ImplausibleDeniability on 17 Oct 2017 at 8:55 pm

    Hi Steve

    You wrote: “This, I think, is where AI research and neuroscience will dove-tail. We can use what we learn from neuroscience to design AI, which can then become an experimental model by which we can further advance our knowledge of intelligence and neuroscience.”

    From this, I think you would be happy to learn about a recently funded IARPA project called MICrONS: https://www.iarpa.gov/index.php/research-programs/microns

    From the project page:

    “MICrONS seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain. The program is expressly designed as a dialogue between data science and neuroscience… Ultimate computational goals for MICrONS include the ability to perform complex information processing tasks such as one-shot learning, unsupervised clustering, and scene parsing, aiming towards human-like proficiency.”

  28. bachfiend on 17 Oct 2017 at 9:22 pm

    MosBen,

    I suspect Egnor is embarrassed by his earlier support of Trump. He realises that he made a very bad choice, and is desperate not to bring the subject up again.

  29. Paul Parnell on 18 Oct 2017 at 2:01 am

    Willy,

    Maybe Dr. Egnor’s comments would be taken a bit more seriously if there was such a thing as a consensus of “metaphysicists”. Meanwhile, he is in a minority of both “metaphysicists” and philosophers. He’s choosing from a salad bar and only selecting the ideas that fit his world view.

    It seems more likely he’s choosing from the garbage bin out back.

    I was just reading the wiki entry on metaphysics and the idea that metaphysics deals with the nonphysical is due to a translation error. Meta means beyond, upon or after. Usually in modern times we think of it as stuff beyond the physical. But the original usage is to refer to Aristotle’s works published after his publications on physics. It just means published after in the chronological sense. He never used the word himself to refer to his work.

    For my money all of philosophy is little more than a discussion of how to deploy language for descriptive purposes. It is essentially an exercise in applied linguistics.

  30. CKava on 18 Oct 2017 at 3:16 am

    I suspect Egnor is embarrassed by his earlier support of Trump. He realises that he made a very bad choice, and is desperate not to bring the subject up again.

    You are extremely optimistic. I would anticipate the exact opposite.

  31. bachfiend on 18 Oct 2017 at 3:33 am

    CKava,

    Well, if Egnor ever praises Trump again, all we need to do is note what Trump has done most recently.

  32. CKava on 18 Oct 2017 at 4:06 am

    I anticipate that having 0 impact on Egnor…

  33. ShelterIt on 18 Oct 2017 at 4:33 am

    Hi Steve. Yes, there are two things here:

    1. The semantics of “intelligence”
    2. What current AI is doing

    Let’s just agree with the first one, and move on. I work in the AI industry, and you’re absolutely right; what is currently happening in AI is nothing more than three-dimensional statistics with better means of feature extraction (i.e. converting a signal into a different signal better suited for machine processing, for example turning sound into phonemes for speech analysis).
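    To show what I mean by feature extraction, here’s a rough sketch (assuming NumPy; a generic illustration, not a production speech pipeline) that turns a raw audio-like signal into per-frame spectral features, the kind of representation the downstream statistics can actually work with:

    ```python
    # Rough sketch of feature extraction: raw signal -> per-frame magnitude spectra.
    import numpy as np

    def spectral_features(signal, frame_size=256, hop=128):
        """Chop a 1-D signal into frames and return each frame's magnitude spectrum."""
        frames = []
        for start in range(0, len(signal) - frame_size + 1, hop):
            frame = signal[start:start + frame_size] * np.hanning(frame_size)
            frames.append(np.abs(np.fft.rfft(frame)))   # magnitude spectrum of the frame
        return np.array(frames)

    # A fake "audio" signal: a 440 Hz tone sampled at 8 kHz.
    t = np.arange(8000) / 8000.0
    tone = np.sin(2 * np.pi * 440 * t)
    print(spectral_features(tone).shape)                # (num_frames, frame_size // 2 + 1)
    ```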

    Even the poster boy of Neural Nets / Deep Learning is just another set of filters and processes ordered in a network fashion. What’s going on is an order of magnitude away from what we normal people call intelligence.

    Hey, I wrote about a lot of this about two years ago at http://sheltered-objections.blogspot.com.au/2015/05/ai-and-bad-thinking-sam-harris-and.html and my opinion has not changed. In fact, I’ve taken some deep dives lately due to a top secret project I’m working on, and I’m more convinced than ever about the sham of AI. Just like when I started out 20 years ago in this business, it’s all about fake it till you make it. It’s a long line of filtering, extractions and big data (the latter being a huge enabler these days, which gives you a larger sample size to train your AI from, nothing more) cobbled together in a very hands-on tweaky way of getting the kind of results you’re after. But it’s *not* intelligent. In fact, it’s very crude and guided, even the stuff they call unguided is littered with constraints and frameworks.

    There’s so much hype in AI it’s really, really hard to see through the smog. Glad to see you squint through it.

  34. TheGorilla on 18 Oct 2017 at 4:35 am

    Is it really that hard to write in this topic without passing off unargued, controversial philosophical positions as obvious? Well, clearly not.

  35. TheTentacles on 18 Oct 2017 at 5:54 am

    Please don’t feed the troll, and then spend the rest of the time providing meta-commentary on how ineffective your feeding regime is!?!

    To get back to the topic of AI, Daniel Hawkins and Paul Parnell both raise complementary points.

    First off, Daniel’s general comments are spot on; there is actually a growing and highly active intersection of using “deep” brain-inspired concepts in refining existing AI models and forging entirely new ones. As one particularly fruitful area, the modelling of cortico-striatal pathways that govern reward and sensorimotor decisions has been analysed both from the actual neuronal networks and the overarching computations being performed. Taking the three-level analysis of David Marr (a highly influential and precocious computational neuroscientist who died far too early), we are understanding in parallel the overall computations, the algorithms and the biological implementation, both in greater detail and integrated across the levels.

    This is distinct from even 10 years ago, where neuroscientists recorded small groups of cells, and at most generated abstract general models, and AI guys used high-level concepts of reinforcement learning, but apart from being conceptually related, these were totally separated domains.

    For example, we now have spiking neural networks that broadly model the major components of the cortico-striatal network, closely inspired by the latest wet research, and implementing them in semi-autonomous robots that do useful(ish) stuff. We are actively taking the cognitive toolkits delineated by neuroscience (decision making, working memory, intuitive physics, emotion, attention, cross-modal integration) and building more non-linear, distributed cognitive sets of artificial neural networks. More than that, many AI researchers are realising that to move forwards, the many-layered recurrent NN must be dumped (see recent comments from the grandfather of RNN, Geoff Hinton). The concept of predictive coding, that brains build generative models that are tested against incoming information, is becoming increasingly appreciated. As many know from the fascination with visual illusions, our brain generates our perception, and this challenging idea (generative “perception” guided by input) is at the early stages of being implemented in AI systems. Yann Lecun has made a very strong case that this is the only way unsupervised and adaptive AIs can be created.

    These autonomous, generative, self-learning, highly interconnected and highly non-linear cognitive networks **will** create emergent and unexpected behaviours. Tononi’s “integrated information theory” of consciousness makes specific predictions about the types of networks, “machine” or “biological”, that support consciousness.

    Paul’s criticism of Steven’s point is that there is no magic hardware needed, and he is IMO correct. All neuromorphic hardware does is improve efficiency (i.e. make a complex model easier to implement), and unless we want to invoke spooky physics, there is nothing that a neuromorphic chip can do that a sufficiently advanced computational model couldn’t.

    In summary, brain-inspired AI is finally actually taking a much “deeper” inspiration from neuroscience (especially the cognitive aspects). Current deep convolutional NNs that create all the hot air are not much different from Fukushima’s Neocognitron from 1980. BUT we are actively building multiple subsystems that include generative models, emotions, desires, attentional preferences, all in highly reentrant systems that are fundamentally different from the neocognitron-like recurrent neural networks that currently dominate AI. They will scale better with custom hardware, but will not depend on it…

  36. RickK on 18 Oct 2017 at 6:21 am

    Bach,

    If you think Egnor is embarrassed by Trump, you really don’t understand him. The more the liberals are shocked and disgusted with Trump, the happier Egnor is with his vote.
    https://m.youtube.com/watch?v=h8JKZgwqH3g

  37. RickK on 18 Oct 2017 at 7:12 am

    From the perspective of a much more limited knowledge of AI research, I have to agree with those who say that the components of what we’re calling consciousness are likely achievable with the steady march of our technology. There may be some “aha” moments, or it may come gradually with little steps toward “awareness” (like Asimov’s robots that cluster together when unattended). But the past few years have seen stunning progress in so many areas we thought of as the exclusive territory of human minds. It seems very premature to decide now what is and isn’t possible with a few more iterations of Moore’s Law.

    The relevant question, IMHO, is not whether we’ll achieve true AI – it’s should we try? Yes, we can’t help but pursue knowledge. However, technology is advancing faster than human society and economics can absorb. Bringing some of the disparate comment threads together – the fundamental driving force behind Trump’s election wasn’t social issues, it was the huge and accelerating divide between the Haves and the Have Nots. The growth of technology continues to automate away the value of labor and progressively more skilled jobs.

    Income redistribution is not enough. People of all educational levels need jobs that pay a living wage. They need purpose – to make a daily contribution to society for which they’re rewarded, and the responsibility of putting food on the table. Not everyone – people are different – but most do. It’s a working societal formula and we haven’t found a superior one. And we’ve seen things get ugly when it breaks down.

    We have automated away vast amounts of labor from agriculture, mining and other extractive industries, accounting & finance, and manufacturing. In many U.S. states where the most common job was farmer, it is now truck driver. Those jobs will soon be gone, as will certain areas of journalism and even many medical professions. When telephone response systems become better than the people manning the phone banks, when the best diagnostician is Watson, when machines make better and more reliable food, investment decisions, drivers, pilots, doctors and lawyers – what will we pay people to do?

    Obviously there are many more forces than just technology at work in driving the growing inequality. But it is a factor. And historically the right answer has never been to stop advancing. But we have certainly had periods where technology moved faster than society, and they can be rough times. Before we rush into the world of creative, intelligent machines with instant access to all the world’s knowledge, we should probably consider what people will do to earn a living.

    Commentary courtesy of your local Luddite 🙂

  38. Nidwin on 18 Oct 2017 at 8:23 am

    It’s just a matter of time, knowledge and resources, but I don’t think AI on the level of human consciousness will be achieved anytime soon. Not because we can’t in the long run, but because it won’t be allowed for ethical reasons linked to higher sentiency.

  39. Pete A on 18 Oct 2017 at 10:00 am

    We’ll know when we’ve created a machine that is human-like because it will:
    1. use the same amount of power in sleep mode as when it’s awake;
    2. wake from sleep mode because it needs to use the bathroom;
    3. often refuse to obey instructions, especially going to sleep;
    4. claim that it has the flu whenever it has a virus;
    5. pretend it’s busy at work when it’s having an affair;
    6. occasionally swear for no apparent reason;
    7. occasionally laugh for no apparent reason;
    8. perform inefficiently on Monday mornings;
    9. sporadically burp and fart — if it can do both at the same time then it’s a truly multitasking machine.

  40. Willy on 18 Oct 2017 at 10:48 am

    MosBen: I think Dr. Egnor serves a very valuable function here–regularly demonstrating the flaws in critical thinking that Dr. Novella discusses–and I think to ignore him “always” is a mistake. Not that you used the word, but I also dislike the term “troll”, which seems to me to be used most frequently (on the ‘Net in general) as a term for anyone with whom one disagrees.

    Paul Parnell: Salad bar = garbage bin! ;«)

    CKava and RickK: I agree that Trump likely still admires Trump and if he doesn’t, he wouldn’t have the integrity to admit his mistake publicly anyway.

    Petea: Good one!

  41. DanDanNoodles on 18 Oct 2017 at 11:55 am

    It seems obvious to me that the true hallmark of intelligence is not the ability to learn, but the desire to do so. With machine learning systems, we have achieved the former, but not the latter.

    The necessity of curiosity — the desire to learn — for intelligence is why I don’t buy into doomsday scenarios about AIs supplanting humans. A true artificial intelligence will be limited in the same way that humans are limited, by the constant distraction of the world around it. It’s hard to take over the world when you can’t stop watching cat videos.

  42. Willy on 18 Oct 2017 at 12:00 pm

    Good Grief!!!!!!!!!!!! That should be “Dr. Egnor still admires Trump”!!!!!!!!!!!

    and “PeteA”, not “Petea”

    Must proofread more.

  43. hardnose on 18 Oct 2017 at 12:21 pm

    “What’s stupid is that no one thinks that ‘AI is just around the corner.’ Another strawman argument from a expert in producing strawman arguments.”

    When Novella describes AI or neuroscience, he usually says things like “We don’t completely understand yet …” or “We don’t exactly have all the details filled in ..”

    This kind of statement very strongly implies “Any day now we’ll understand” Or “Complete understanding is just around the corner.”

    Materialists MUST believe in AI because it follows from their ideology.

  44. hardnose on 18 Oct 2017 at 12:27 pm

    “It’s just a matter of time, knowledge and ressources but I don’t think AI on the level of human consciousness will be achieved anytime soon. Not because we can’t in the long run but because it won’t be allowed for ethical reasons linked to higher sentiency.”

    It’s nice to have perfect faith and to know the future.

  45. Pete A on 18 Oct 2017 at 12:39 pm

    Willy,

    Never, ever, worry over your perfectly-acceptable typically-human mistakes in your comments. If AI manages to advance to the stage of emulating humans then it will become totally redundant science and technology: at best, it will gain only the status of being exhibited in museums which endeavour to preserve things that are quaint [adjective: attractively unusual or old-fashioned]!

  46. Pete A on 18 Oct 2017 at 12:53 pm

    hardnose,

    Emulating you using AI would be an extraordinarily simple task.

  47. Willy on 18 Oct 2017 at 12:53 pm

    “It’s nice to have perfect faith and to know the future.”

    hardnose–I didn’t see any attempt by Nidwin to exhibit faith, perfect or otherwise, nor did I see any claim by him to “know” the future. Maybe comprehension isn’t your strong suit? Indeed, you just might be void in that suit.

  48. chikoppi on 18 Oct 2017 at 2:07 pm

    [hardnose] Materialists MUST believe in AI because it follows from their ideology.

    How so? What is it about AI that requires “materialism” to be true or vice versa? Conversely, what is it about “non-materialist” positions that preclude the possibility of AI?

  49. Paul Parnell on 18 Oct 2017 at 2:44 pm

    TheTentacles,

    I think Steven Novella’s article on AI is interesting and well done but I have two frustrating problems with it that I would like your take on.

    1) He does not seem to get the Church-Turing thesis. His silicon brain must necessarily be a computer program that can be implemented on any universal Turing machine. I suspect we are in general agreement here.

    2) An algorithm that generates certain abilities like playing chess, playing Go, proving Fermat’s last theorem or doing string theory does not and cannot explain consciousness. An algorithm – any algorithm – no matter how complex or convoluted, is in the end like a mathematical equation. It takes in input and produces output. What it feels like does not and cannot matter.

  50. Pete A on 18 Oct 2017 at 2:51 pm

    [chikoppi to hardnose] Conversely, what is it about “non-materialist” positions that preclude the possibility of AI?

    Precisely!

  51. Pete A on 18 Oct 2017 at 3:08 pm

    Paul Parnell,

    That is because consciousness is an ongoing temporal post-hoc rationalization process; it is most definitely not an output dataset of the brain.

    Our conscious awareness of reality lags actual reality by at least 150 milliseconds; frequently, the time lag is orders of magnitude greater than this.

  52. jpancoast on 18 Oct 2017 at 3:17 pm

    Say we do eventually build a self aware brain in silicon. Would it be ethical to turn it off or modify it in such a way so that we can use it to study mental illness?

  53. Richard Hewitt on 18 Oct 2017 at 4:42 pm

    I think people need to understand that self-awareness as we’ve observed it is a lot less quasi-mystical and a lot more mundane than it’s made out to be.

    I mean, I’m not actually sure we really need a much deeper understanding of the human brain at this point to sketch a basic portrait of what self-awareness(or general AI) is, and once that’s done, I think it’s a bit easier to see how our current AI toolkit is actually much closer to achieving some limited self-awareness than is typically thought.

    TheTentacles I think covered this fairly well, but I’ll spell it out perhaps a little more simply.

    You can reasonably call any closed loop feedback control system “self-aware” if it is capable of analyzing input data and spotting enough patterns within that data to construct a reasonably cogent model of reality.

    What constitutes a ‘cogent’ model is open for debate, but I’ll propose a rule of thumb: the system needs to be stable, in that it models itself and its environment well enough to generate outputs that, outside of extreme operating conditions, are unlikely to result in system failure.

    A system that just models its environment and compares it against a set of operating metrics isn’t sufficient however.

    The system needs to be capable of creating a model of itself — including not just its physical state but also modelling its own data processing.

    A system capable of doing the above, and combining them into a real-time simulation capable of modelling itself within its environment and generating cogent output responses, can reasonably be said — almost by definition — to be self-aware.
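    To make that less abstract, here is a deliberately trivial sketch (my own toy illustration, not a claim about any real architecture) of a closed-loop system that maintains a model of its environment and a model of its own processing, and uses both to choose its outputs:

    ```python
    # Toy closed-loop controller that models its environment AND itself.
    class SelfModelingController:
        def __init__(self):
            self.env_model = 0.0                                  # running estimate of the environment
            self.self_model = {"last_output": 0.0, "model_error": 0.0}

        def step(self, observation):
            predicted = self.env_model
            self.env_model += 0.5 * (observation - self.env_model)         # update environment model
            self.self_model["model_error"] = abs(observation - predicted)  # model of its own performance
            # Choose an output using both models: counteract the environment,
            # damped when it knows its own model has been unreliable.
            confidence = 1.0 / (1.0 + self.self_model["model_error"])
            output = -self.env_model * confidence
            self.self_model["last_output"] = output
            return output

    controller = SelfModelingController()
    for disturbance in [1.0, 0.8, 1.2, 0.9, 1.1]:
        print(round(controller.step(disturbance), 3))
    ```

    Whether you want to call that “self-aware” is exactly the semantic question at issue, but the structure (a system that models itself modelling its environment) is not mysterious.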

    So conceptually, I would argue, self-awareness isn’t difficult to define or even understand how deep-learning structures could be combined to generate it. Of course getting our energy from fusion power seems conceptually straightforward as well: in engineering terms it’s really difficult.

    I’m maybe younger than the average reader here, so maybe I’m less painted by the cynicism of a generation that thought progress would come faster than it did. Flying cars are in fact being sold now. Self-driving vehicles are just around the corner and there’s very good reason to expect that within my lifetime at least we’ll see some practical applications for fusion energy and general AI.

    I will caution that even achieving general AI may not be as monumental as people think. In fact, there’s good reason to believe several non-human species have or are very close to hitting the mark I outlined above.

    I frankly wouldn’t be surprised if even a perfect understanding of the human brain wasn’t sufficient to emulate human level intelligence. In point of fact, absent other human beings to communicate with, the human brain doesn’t seem to substantively outperform other higher-functioning animals.

    If you think about the human brain as, essentially, a data network, you also have to realize that for data networks only inputs and outputs matter: how or where the signal is generated is irrelevant… it’s just data.

    In network design that means you can create massive networks distributed across continents that all share and do the same thing.

    For humans it means understanding that we, long ago, co-opted using our physical environment to share data between the clusters of neurons we keep inside our skulls.

    And so it is ENTIRELY possible that the human brain alone is insufficient to generate human level intelligence: what we understand as human-level intelligence may only be achievable once we form a distributed neural network via linguistic communication. In fact human level intelligence may have shifted over time as we created new media: a higher level of processing may have been enabled by developing writing, and deepened by the creation of cheap, reliable printing presses. Radio and TV have further affected how we, collectively, process information.

    And the internet appears to have dialed all of the above to 11… causing us to be inundated with signal noise… but assuming we learn how to create better filters on our thinking, it may yet allow us to identify patterns and combine information that might otherwise never have been recognized.

    In fact, when you think about it, very few of the thoughts or ideas we use to think about high-level concepts are original thoughts we as individuals developed: unless you’ve done truly innovative research, virtually every means of understanding the nuance of the world around us was passed down to us by other people (and increasingly, even that original research is done by substantial teams of people).

    Additionally – from the few examples we have – humans raised absent other human beings don’t seem to display capabilities much more sophisticated than other higher functioning animals.

    But… when you put us in groups that are able to communicate with one another… we start producing all sorts of crazy ideas.

    I’d be hard-pressed to say this is all definitively what’s happening, but I think it’s a very attractive theory on cognition.

  54. bachfiend on 18 Oct 2017 at 4:42 pm

    Hardnose,

    ‘When Novella describes AI or neuroscience, he usually says things like…’

    ‘Like’? As has been noted, your reading comprehension is extremely lacking. I am forced to ask for specific quotes, with the links, to Steve Novella’s previous comments on AI.

    It’s been a great amusement for your critics to take the links you’ve previously provided and to demonstrate clearly that they don’t support your assertions.

    Another point is that you still don’t understand the difference between ‘worldview’ and ‘ideology’. It’s a worldview that AI might eventually be possible, not an ideology. ‘Worldview’ describes how the world came to be as it is, including the nature of ‘intelligence’ and whether AI is possible or not. ‘Ideology’ prescribes how the world should develop, including whether AI should be created, if it’s possible, that is.

  55. MosBen on 18 Oct 2017 at 4:45 pm

    Willy, I agree that he exemplifies all too common flaws in critical thinking, but I think that we can take those lessons without engaging with him directly. I think that it’s worthwhile for us to dissect his posts and talk about why their logic fails, but without presenting arguments directly to him or trying to engage him, especially in areas like politics. It was clear a long time ago that Egnor will not engage in reasonable debate. He will not ever change his positions, even in the face of mountains of evidence. The best that can be expected is that he’ll just disappear from a thread. Unfortunately, when he inevitably comes back he just spews the same arguments that were demolished the last time he made them. Trying to debate with him is a waste of digital breath.

  56. Paul Parnell on 18 Oct 2017 at 5:01 pm

    Pete A,

    That is because consciousness is an ongoing temporal post-hoc rationalization process; it is most definitely not an output dataset of the brain.

    But exactly how does this explain consciousness? How do I make a computer post-hoc rationalize and how does that cause it to experience the color red?

    Our conscious awareness of reality lags actual reality by at least 150 milliseconds; frequently, the time lag is orders of magnitude greater than this.

    A great deal was made of this but it isn’t surprising and should have been predicted beforehand. Probably was. From a programming point of view it is just game lag from a slow game loop. Worse, it can produce cognitive illusions as the brain attempts to make sense of what the loop was too slow to catch. Reality can get backfilled with a false explanation.

    I can see how this can happen in a program. I cannot see how this has anything to do with consciousness. It is just a quirk of an algorithm that happens with or without consciousness.

  57. Willy on 18 Oct 2017 at 5:21 pm

    MosBen I hear you and I don’t mind the (infrequent) time it takes to post to or about him. I especially want to see his (unlikely to occur) response regarding Trump. I have no expectations of a reasonable conversation nor of changing his mind, just as I am sure he realizes he won’t likely change any minds here.

  58. Pete A on 18 Oct 2017 at 6:36 pm

    Paul Parnell,

    How do I make a computer post-hock rationalize and how does that cause it to experience the color red?

    Tell us how you learnt to experience the colour red. You didn’t pop out of the womb with the ability to recognize any named colours! Neither did you have the ability to differentiate between your arse and your elbow. From our previous discussions on this website, I’m guessing that you long-ago gave up trying to master that feat.

    … From a programming point of view it is just game lag from a slow game loop. Worse, it can produce cognitive illusions as the brain attempts to make sense of what the loop was to slow to catch. Reality can get backfilled with a false explanation.

    It certainly can and does produce cognitive illusions and false explanations. By far the most powerful illusion and false explanation it produces in most, but by no means all, people is the illusion of the “self”!

    I’ve discussed this before and provided references so it would be futile to argue with you again.

  59. hardnose on 18 Oct 2017 at 7:24 pm

    ‘What is it about AI that requires “materialism” to be true or vice versa? Conversely, what is it about “non-materialist” positions that preclude the possibility of AI?’

    Materialism says that mind is created by matter, through an unguided process involving chance and natural selection.

    If that were true, then the machinery created by that process should not be impossible for us to understand. Reverse engineering the brain would be possible, and should not even be terribly difficult. Something created by a haphazard process cannot be an ingenious invention.

    So materialists can’t imagine that modern science can’t figure out how the brain works, or that modern technology can’t build something like it.

    The continual and ongoing failures suggest that materialism is wrong.

    If, on the other hand, nature is intelligent, and much smarter than we are, then we would expect to have trouble understanding it.

  60. MosBen on 18 Oct 2017 at 7:33 pm

    Why would something arising through natural selection be relatively easy to understand? There’s nothing about systems arising naturally through time that would necessitate them being simplistic. Nor does it follow that something being created by an intelligence, even a greater intelligence, must necessarily be complex to the point of indecipherability. We create all kinds of devices that animals can figure out.

    You’re also assuming that, because we don’t fully understand the brain, this counts as a failure suggesting that materialism is wrong. But we are learning more and more all the time about the brain. It’s just a complex thing that will take us a long time to figure out. Making slow but steady progress isn’t failure, it’s science in action.

  61. Macam14on 18 Oct 2017 at 8:02 pm

    Body awareness and movement, in my view, are too often ignored in discussions of AI. I’m no neuroscientist but from what I’ve read it seems there is strong evidence that our intelligence depends crucially on not just computing power but also emotions, which in turn depend on bodily experiences and desires. Also that perception develops with the help of movement. For example, apparently a kitten, always held and prevented from exploring the world through movement and touch, will never learn to see: that is, to make sense of the sensory impressions (light and color) their eyes receive. Perhaps in many ways, perception, reasoning, judgment, and other aspects of intelligence depend, for their maturity and/or operation, on inhabiting a body (not to mention social environment, etc.).

    (The 2007 book, The Body Has a Mind of its Own by Sandra and Matthew Blakeslee, may be a good starting point.)

  62. chikoppion 18 Oct 2017 at 8:15 pm

    [hardnose] Materialism says that mind is created by matter, through an unguided process involving chance and natural selection.

    “Materialism” has nothing to do with natural selection, though you are generally correct if saying that physicalism asserts that mental states are produced by the interaction of matter/energy and are not themselves a fundamentally different and separate substance.

    So materialists can’t imagine that modern science can’t figure out how the brain works, or that modern technology can’t build something like it.

    Is there some reason that precludes the possibility? There are many, many things that were once not understood and/or beyond our capability that we now routinely utilize. Most of those things were once considered mysterious and attributed to magical substances or interactions. Turns out basic properties, once understood and the complexities unraveled, were sufficient to explain them in detail with absolutely no magic necessary.

    The continual and ongoing failures suggest that materialism is wrong.

    What?! The march toward understanding the brain is relentless, with mapping of the functioning and structure steadily improving in scope and resolution.

    If, on the other hand, nature is intelligent, and much smarter than we are, then we would expect to have trouble understanding it.

    That’s the limit of your argument? Stuff is hard and takes time, therefore it must be magic?

    I’m curious. If we are successful in producing AI, would that be the death knell for your suppositions and superstitions? What is it that you think precludes the possibility of AI? Why are you threatened by it?

  63. RickKon 18 Oct 2017 at 8:38 pm

    hardnose said: “If that were true, then the machinery created by that process should not be impossible for us to understand. Reverse engineering the brain would be possible, and should not even be terribly difficult. Something created by a haphazard process cannot be an ingenious invention.”

    Ah, by that reasoning, the fact that the best bird feeder designs can’t keep squirrels off them means that squirrels are much smarter than humans. Right?

    It can’t be because squirrels have a lot of time for trial and error, and that trial and error over a long time can achieve amazing results.

    Interestingly – it’s exactly the trial and error of science, operating COMPLETELY within a materialist paradigm, that has so dramatically grown our understanding of our natural world. It’s so sad, hardnose, that you look at what we’ve learned and see only failure. It’s so sad that you can’t grasp the power of time and trial and error guided by fitness for further replication.

    Fortunately there are still plenty of scientists and people like Daniel and TheTentacles above who do not give up the way you do, hardnose. They are happy to continue advancing our knowledge one trial and one error at a time, like that persistent squirrel. And it is they who will reach the prize while people like you scoff, pine for magic, begrudge your own ignorance, and fade.

    Why am I so certain of this? Because that’s how it has worked for centuries. The proponents and purveyors of magic (like you), the dismissers of science (like you), the incurably incurious and the unfathomably unimaginative (like you) have never added one iota to our understanding of our natural world.

    Can’t you see that your whole argument is identical to “If God had wanted us to fly, He would have given us wings”? Do you really know as little about history as you do about science?

  64. Willyon 18 Oct 2017 at 8:40 pm

    “If that were true, then the machinery created by that process should not be impossible for us to understand. Reverse engineering the brain would be possible, and should not even be terribly difficult. Something created by a haphazard process cannot be an ingenious invention.”

    Shirley you jest!?

    “So materialists can’t imagine that modern science can’t figure out how the brain works, or that modern technology can’t build something like it.”

    No one here has said anything even close to that. Can’t you see that you construct a straw man at every turn?

    “The continual and ongoing failures suggest that materialism is wrong.”

    …he says as he lives a life of comfort and convenience provided by scientific inquiry and freedom of thought. Maybe you should have lived two millennia ago when life wasn’t ruined by science. Who knows, you might have enjoyed existence in a leper colony. That is, if you weren’t already killed due to your blasphemy.

    “If, on the other hand, nature is intelligent, and much smarter than we are, then we would expect to have trouble understanding it.”

    Tell us what “nature” is and how it possesses intelligence. Please. Explain why, as MosBen asked, “nature” should be difficult to understand but naturally created things should be simple (there seems to be somewhat of a dichotomy there, no?).

    I think you chose your user name aptly. You enjoy being “different” and you WILL be different no matter the subject. hardnose, wiser on most any topic than millions of dedicated, decent people who spend their lives devoted to those many topics. Stubborn, impenetrable, and proud of it. A monument to pig-headedness.

  65. Paul Parnellon 18 Oct 2017 at 9:32 pm

    Pete A,

    Tell us how you learnt to experience the colour red. You didn’t pop out of the womb with the ability to recognize any named colours! Neither did you have the ability to differentiate between your arse and your elbow. From our previous discussions on this website, I’m guessing that you long-ago gave up trying to master that feat.

    I have no idea how or if I learned to experience color. I have no memory. Experiments suggest that babies are born with an innate but limited ability to categorize color. But do you see that there is a difference between categorizing color and experiencing it?

    This all makes sense from a programming and neural network perspective. It would be trivial to build a neural net that started with little or no ability to categorize colors and learns to do so. Would it experience them? Why would it need to?

    The AlphaGo program starts with little understanding of Go but learns to play. Does it experience the game? Does it need to?
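
    To be concrete about the difference between categorizing and experiencing, here is a minimal sketch (my own toy example using Python and numpy; the handful of labelled RGB triples and the training loop are invented purely for illustration). It starts with random weights, i.e. no ability to categorize colours at all, and learns to map RGB values to colour names:

    import numpy as np

    rng = np.random.default_rng(0)
    names = ["red", "green", "blue"]

    # a few hand-labelled RGB examples, scaled to [0, 1]
    X = np.array([[0.9, 0.1, 0.1], [0.8, 0.2, 0.0],   # reds
                  [0.1, 0.9, 0.2], [0.2, 0.7, 0.1],   # greens
                  [0.1, 0.2, 0.9], [0.0, 0.3, 0.8]])  # blues
    y = np.array([0, 0, 1, 1, 2, 2])

    W = rng.normal(scale=0.1, size=(3, 3))   # starts knowing nothing about colour
    b = np.zeros(3)

    for _ in range(500):                      # plain gradient descent on cross-entropy
        logits = X @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad = p.copy()
        grad[np.arange(len(y)), y] -= 1       # d(loss)/d(logits) for softmax + cross-entropy
        W -= 0.5 * (X.T @ grad) / len(y)
        b -= 0.5 * grad.mean(axis=0)

    print(names[int(np.argmax(np.array([0.7, 0.1, 0.2]) @ W + b))])   # should print "red"

    It ends up labelling things “red” quite reliably, yet there is no reason to think anything in it experiences red.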

    It certainly can and does produce cognitive illusions and false explanations.

    But these algorithmic “glitches in the matrix” do not depend on consciousness nor do they explain it.

    I play a zombie survival game in which the game loop can bog down so that the zombies are chasing me at a point I occupied three seconds ago. If the zombies were conscious entities they would experience this as a cognitive illusion. But they are only philosophical zombies that experience nothing. And how would it help them if they did? Either way they are going to chase me and eat my brains in the exact same way.

  66. Paul Parnellon 18 Oct 2017 at 9:39 pm

    Macam14,

    Body awareness…

    But see right there you are presupposing that which you are trying to produce and understand. Once you have awareness the problem is solved. You can turn that into body awareness, color awareness, sound awareness, heat awareness…

    Awareness itself is the central conundrum.

  67. Paul Parnellon 18 Oct 2017 at 10:02 pm

    DanDanNoodles,

    It seems obvious to me that the true hallmark of intelligence is not the ability to learn, but the desire to do so.

    Again this is just presupposing the thing you are trying to understand and create. Desire is an emotion that you experience. If you can make a program experience things then problem solved.

    Experience itself is the conundrum.

    We can model desire in a program by awarding it points when it achieves a goal. Then we create an algorithm that tries to maximize points. But such a system does not need to experience desire.
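
    A minimal sketch of that point (purely illustrative, my own toy Python; the action names and reward numbers are invented): an agent that ends up “wanting” whatever earns the most points, implemented as nothing more than a table of numbers being nudged toward observed payoffs.

    import random

    actions = ["explore", "rest", "eat"]
    reward = {"explore": 0.1, "rest": 0.0, "eat": 1.0}   # points the world hands out
    value = {a: 0.0 for a in actions}                    # the agent's running estimates

    for step in range(1000):
        if random.random() < 0.1:                        # occasionally try something at random
            a = random.choice(actions)
        else:                                            # otherwise pick the best-looking action
            a = max(value, key=value.get)
        value[a] += 0.1 * (reward[a] - value[a])         # nudge the estimate toward the payoff

    print(max(value, key=value.get))                     # it ends up "preferring" to eat

    Its behaviour looks goal-directed, but it is only arithmetic over a dictionary; nothing in it needs to feel desire.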

  68. Drakeon 19 Oct 2017 at 12:07 am

    “Eventually we should be able to make a human brain in silicon. When we do there is every reason to think that that silicon brain will be self-aware – true general AI.

    What is fascinating to think about is how will it be different from a human brain. We can experiment with turning up, down, on, or off different circuits and seeing how that affects the resulting AI. This, in turn, could be a model for every mental illness.”

    If ‘true general AI’ is as self-aware and intelligent as a biological human, surely it would be unethical to experiment on it as Novella suggests.

    How could inflicting mental illness on a consciousness with human intelligence be morally justified, just because the brain was silicon rather than meat?

  69. Nidwinon 19 Oct 2017 at 5:57 am

    Drake,

    That was my point and the reason true AI isn’t going to happen anytime soon in my opinion, even if hardnose didn’t seem to understand my poor English. (English is only my third language)

    Also, trapping a human brain outside a human body would be unethical. We aren’t talking about someone losing a limb but about someone not having a body by original design.

    There’s also the issue that we have no idea what a silicon-based higher sentient entity would be like, as we have never encountered one. How can we ethically try to create an intelligent being without any reliable information about its possible personality?

  70. TheTentacleson 19 Oct 2017 at 6:25 am

    Probably known by several of you, but wonderful nevertheless and summarises the current flow of the comments here — Philosophy Humans: http://existentialcomics.com/comic/67

    Discussions here bifurcate into so many strands that they are hard to follow. It is safe to say that, as the comic so acutely demonstrates, the discussions surrounding Chalmers’s hard problem continue… I have to admit that I like Galen Strawson’s take: there is nothing strange or difficult to understand about consciousness; it is matter that is too little understood to solve this problem… But along with many other scientists and philosophers, I do not see this as sufficient reason to abandon studying the neural basis of consciousness and solving many fascinating issues along the way, even if we were to accept the phenomenal hardness of the hard problem.

    One topic that is interesting is the point Macam14 raises: body awareness and AI. This is actually a well-discussed area of cognition, and one with a rich and long history. Indeed one of the oldest theories of how we see, extramission, considered sight as a motor act where the “fire within the eye” is projected to actively examine the world around us. The idea that we “palpate” the world with a light ray emitted from the eyes seems absurd to us now, but several of the underlying concepts are still highly relevant today. These ideas can be traced via Bishop Berkeley through to JJ Gibson and others to the idea that sensation cannot be decoupled from action. Animal/human cognition is deeply sensorimotor, requiring a body and sensory receptors actively guided through the world.

    Embodied cognition depends on internal models whose predictions our actions put to the test. Our eye movements are indeed focussed and exploratory; as Karl Friston phrases it, “Perceptions are hypotheses, and eye movements are experiments”. This is why I think that for AI to progress, it will benefit greatly in learning about the world by having a body in which to explore it (real or virtual, but real would be much better). Anyone who has a child knows that when they first start to crawl there is a really nice jump in their cognitive development! AI roboticists I know are keenly aware of this fact.

    And neuroscience is increasingly realising that you need to study adaptive cognition (aka intelligence) in a behaving, navigating subject. Recent advances have shown how the visual cortex in mice is strongly modulated by their locomotion, and that visual cortex in humans is modified by the predicted paths of moving stimuli even when the stimulus itself is absent. This is now starting to drive AI research to build up the current domain-specific overtrained networks into a cognitive-toolkit-inspired modular system. For my own research, I’m interested in how to use ideas inspired by the corticothalamic attention circuitry to build better autonomous agents.

  71. BillyJoe7on 19 Oct 2017 at 6:28 am

    Paul,

    “Awareness itself is the central conundrum”

    In the past, it was assumed that there must be a life-giving something or other (life force, élan vital) that distinguished living from non-living objects. Now nobody thinks so (nobody that matters, that is). There is no reason to think that life doesn’t emerge out of certain collections of molecules arranged into certain complex interacting structures.

    Similarly, there is no reason to think that consciousness could not emerge out of suitably complex arrangements of molecules within certain areas of brains (interestingly, the cerebellum is not conscious).

  72. TheTentacleson 19 Oct 2017 at 6:37 am

    Oh, I forgot: if anyone is interested in the idea that we cannot really sense / understand the world without our ability to interrogatively move through it (with implications for conscious agents), I recommend the work of the philosopher Alva Noë, e.g. http://www.alvanoe.com/action-in-perception/ — and, as this links to the idea that our movement is driven by our internal models of the world, philosopher Andy Clark’s excellent book “Surfing Uncertainty: Prediction, Action, and the Embodied Mind”
    https://global.oup.com/academic/product/surfing-uncertainty-9780190217013

  73. BillyJoe7on 19 Oct 2017 at 6:41 am

    Paul,

    “We can model desire in a program by awarding it points when it achieves a goal. Then we create an algorithm that tries to maximize points. But such a system does not need to experience desire”

    The sequence of nucleic acid bases in DNA is a recipe for producing living creatures. But where in that sequence is “life” encoded? Sounds like a silly question? Well then, algorithms running in brains produce understanding and emotion. But where are understanding and emotion in the algorithm? Maybe that’s a silly question as well.

  74. Pete Aon 19 Oct 2017 at 8:40 am

    Paul,

    You replied to DanDanNoodles: “But see right there you are presupposing that which you are trying to produce and understand.”

    But you are also presupposing that which you are calling the conundrum: awareness/experience.

    It’s no different from saying that a thingamajig is a conundrum — of course a thingamajig is a conundrum because it’s undefined/unspecified. Awareness/experience is not a thing because: there are circa 7.5 billion people on Earth; no two people are the same; therefore awareness/experience is circa 7.5 billion different things.

    When people are awake they are, to a very limited extent, self-aware. I assume that their self-awareness appears to them as being a convincing uninterrupted stream until they fall asleep. I’m one of a small minority whose “self” sporadically vanishes for short periods of time (it’s a neurological dysfunction, not a mental disorder). This condition has enabled me to learn that most people haven’t the faintest clue as to how they actually see with their eyes, hear with their ears, and experience their conscious self. Most people are so convinced by the illusion of the “self” generated by their brain that they cannot believe that it is only an illusion. Yes, it’s an overwhelmingly powerful illusion, and I think our species could not have survived without it.

    A machine isn’t self-aware because it doesn’t have our complex physiology, which includes such things as the autonomic nervous system plus a plethora of many different types of sensors and feedback loops. Think about the process of picking up a mug of tea or a glass of beer, drinking from it, and placing it back on the table. We avoid dropping it due to being able to sense the micro-slippage between our fingers and the mug or glass; and we learnt the consequences of dropping things during our childhood!

  75. hardnoseon 19 Oct 2017 at 11:37 am

    “There is no reason to think that life doesn’t emerge out of a certain collections of molecules arranged into certain complex interacting structures.”

    If it makes you happy to think that, then go ahead. But there is no scientific reason to believe it, and there are very good reasons not to believe it.

    Life certainly does emerge out of “matter” somehow, but no one knows how or why. If scientists ever start creating living organisms out of non-living molecules, then your fantasy would be scientific. Now, it’s just dogmatic materialism.

  76. hardnoseon 19 Oct 2017 at 11:44 am

    “What is it that you think precludes the possibility of AI? Why are you threatened by it?”

    I am not threatened by AI. Science and logic have shown us that, so far, AI is not possible.

    If it ever happens that I have a rational conversation with one of those annoying idiotic phone answering systems, I will change my mind.

  77. RickKon 19 Oct 2017 at 12:04 pm

    hn said: “Now, it’s just dogmatic materialism.”

    Similarly, if we assume it happens without the help of fairies, we’re guilty of “dogmatic anti-fairyanism”.

    We’re also guilty of dogmatic anti-midichlorianism, dogmatic anti-heecheeism, and dogmatic anti-Mbomboism

    Meanwhile, that dogmatic materialism just keeps generating correct answers. See gravitational wave blog post.

  78. chikoppion 19 Oct 2017 at 1:25 pm

    I can’t tell if you’re being intentionally obtuse or if you legitimately don’t understand the implications of the words you use.

    [BillyJoe7] “There is no reason to think that life doesn’t emerge out of a certain collections of molecules arranged into certain complex interacting structures.”

    [hardnose] If it makes you happy to think that, then go ahead. But there is no scientific reason to believe it, and there are very good reasons not to believe it.

    So BJ7 asserts there “is no reason,” to which you counter, “there are very good reasons not to believe it.” You immediately and inexplicably follow with…

    [hardnose] Life certainly does emerge out of “matter” somehow, but no one knows how or why. If scientists ever start creating living organisms out of non-living molecules, then your fantasy would be scientific. Now, it’s just dogmatic materialism.

    If you agree that “life” emerges from “matter” then you are endorsing “materialism.”

    Do you have evidence that “life” requires something other than “matter?” Can you cite evidence of this mystery substance? If “no one knows how or why” then how can you determine it is necessary?

    You are merely comparing what is known (“materialism”) to what is imaginary and then claiming that to NOT include the imaginary in the solution set is somehow “dogmatic.”

    [hardnose] I am not threatened by AI. Science and logic have shown us that, so far, AI is not possible. If it ever happens that I have a rational conversation with one of those annoying idiotic phone answering systems, I will change my mind.

    Neither “science” nor “logic” have shown that AI is “not possible.” That’s not how “possible” works. For a thing to be “not possible” there must exist a reason that it is necessarily “impossible.”

    So…what I can glean from these comments is…

    “Science” is being dogmatic in the pursuit of AI because it isn’t considering imaginary things. Because it isn’t considering imaginary things it must necessarily fail, because those imaginary things are necessary even though no one knows how or why. We know this to be true because we know “life emerges from matter somehow,” but we don’t know how, but even though that would be an endorsement of “materialism” it still must be wrong, but there’s no reason to assume it’s wrong other than you’ve decided you don’t like the sound of word, but it is because reasons.

  79. Paul Parnellon 19 Oct 2017 at 1:29 pm

    But you are also presupposing that which you are calling the conundrum: awareness/experience.

    No, I have experiences. The experiences themselves may be illusions, that is the information content of the experience may be wrong. But the ability to have illusions cannot itself be an illusion. That twisting of a claim back on itself leads to logical gibberish. Like saying “This statement is false”.

    It’s no different from saying that a thingamajig is a conundrum — of course a thingamajig is a conundrum because it’s undefined/unspecified.

    I am not following. That is not what makes a thing a conundrum.

    Awareness/experience is not a thing because: there are circa 7.5 billion people on Earth; no two people are the same; therefore awareness/experience is circa 7.5 billion different things.

    I… just… don’t… follow…

    A star is not a thing because there are a hundred billion in our galaxy alone? No two are alike therefore they are a hundred billion different things?

    When people are awake they are, to a very limited extent, self-aware. I assume that their self-awareness appears to them as being a convincing uninterrupted stream until they fall asleep.

    Well actually the disappearance of consciousness may be at least partly an illusion. The brain continues to function during sleep much like it does while awake. One small difference is that the ability to form long term memory is switched off. Thus any consciousness you had is forgotten. This is why dreams can be so hard to remember.

    I’m one of a small minority whose “self” sporadically vanishes for short periods of time (it’s a neurological dysfunction, not a mental disorder).

    This is fairly rare as a neurological condition. But it is common in LSD trips and has been studied using brain imaging. What appears to be happening is that the neural network associated with self has enhanced connectivity to other networks in the brain. The self isn’t vanishing so much as it is being diluted. The central narrative is only a small part of what is being experienced. In Alzheimer’s the opposite is happening. You have reduced connectivity that slowly shrinks and extinguishes the self.

    There are many neurological conditions that challenge our sense of self. There is the famous case of the guy who insisted that his leg was not part of himself to the point that he had it removed. He was much happier. Then there is Cotard’s syndrome where a person becomes convinced that they do not exist. Most of us experience ourselves vividly but these people experience themselves very thinly almost like a ghost. And then there is the person who was blind but was totally convinced that they could see.

    Most people are so convinced by the illusion of the “self” generated by their brain that they cannot believe that it is only an illusion. Yes, it’s an overwhelmingly powerful illusion, and I think our species could not have survived without it.

    Of course it is an illusion. How could it be otherwise? But what I keep trying to get you to understand is that you are speaking to the data content of experience and not to the fact of experience itself.

    Vision for the blind person who claimed they could see was a total illusion. But the pure fact of their experience was not. Their experience was wrong, bad data, GIGO and such. But it is not the accuracy of the data but the pure fact of experience that is the mystery.

    A machine isn’t self-aware because it doesn’t have our complex physiology, which includes such things as the autonomic nervous system plus a plethora of many different types of sensors and feedback loops.

    A blind assertion. I don’t see how complex objects with feedback loops explain anything. I expect that they will follow the laws of physics with or without consciousness. Consciousness is neither needed for their function nor necessary for an analysis of their actions. And forget about self-aware. Think of aware of anything at all. Even an illusion.

  80. Paul Parnellon 19 Oct 2017 at 2:07 pm

    BillyJoe7,

    Similarly, there is no reason to think that consciousness could not emerge out of suitably complex arrangements of molecules within certain areas of brains (interestingly, the cerebellum is not conscious)

    For god’s sake I’m not arguing for magic. I’m only saying that hand-waving about complexity does not solve the problem.

    Before the discovery of radioactivity people would look at the sun and wonder how it could produce energy beyond any possible chemical reaction. It was a conundrum. Hand-waving about complexity would never solve the problem.

    I am saying that there is no clear way forward in combining the subjective with the objective. All current attempts seem like hand-waving. I don’t have a clue.

  81. Paul Parnellon 19 Oct 2017 at 2:27 pm

    BillyJoe7,

    The sequence of nucleic acid bases in DNA is a recipe for producing living creatures. But where in that sequence is “life” encoded? Sounds like a silly question?

    I can show you. I can show you where the light-sensing pigments are coded in the DNA. I can show details of how the shape of our body is determined. There is much more to be discovered, but there is a productive path forward, or rather many. It isn’t a silly question. It is a profoundly interesting question that is being partly answered every day.

    Well then, algorithms running in brains produce understanding and emotion. But where is understanding and emotion in the algorithm?

    That is almost the question. I can understand how programs can produce understanding and emotions in the algorithmic sense. I do not understand how programs can produce an experience of those things. Your difficulty in understanding this is akin to a fish’s difficulty in understanding water. It cannot imagine a world without it and so reduces water to a Platonic essence or an a priori necessity that cannot be questioned. Then one day the fish discovers the surface. Or very likely it never does.

  82. Pete Aon 19 Oct 2017 at 3:11 pm

    Paul,

    Stating that a machine isn’t self-aware is not a blind assertion; it’s a fact, and it will remain a fact until there is sufficient verifiable empirical evidence to assert that machine X is self-aware. The reason I gave is irrelevant to that simple fact. However, it is a highly relevant part of your ‘experience’ of being you.

    If you didn’t have your complex physiology, autonomic nervous system plus a plethora of many different types of sensors and feedback loops, what would you experience (assuming that your brain was working as per usual)? You’d never experience feeling hot/cold, thirsty, hungry, excited, startled, afraid; physically interacting with objects, people, and animals; and many other things that form part of the experience of being a unique human.

    You wrote: “And forget about self-aware. Think of aware of anything at all. Even an illusion.” An autonomous vehicle is ‘aware’ of its surroundings — otherwise it would not be allowed on the road. An experienced driver is ‘aware’ of their surroundings, but most of their inputs to the vehicle, and their feedback from the vehicle, are processed at a level below conscious awareness. It would be impossible to drive a vehicle safely at medium to high speed if driving involved only conscious awareness because the processing delay is too long. Those who remember their initial driving lessons will know this, as will those who’ve taught people to drive.

    Arguing with me won’t solve your conundrum. You need to clearly define for yourself what your conundrum is and what it is not.

  83. Paul Parnellon 19 Oct 2017 at 3:12 pm

    TheTentacles,

    I googled Galen Strawson and was struck by how his problem with free will mirrors my problem with experience. He points out that in a deterministic universe the future is fixed and allows no room for free will. And even if the universe is nondeterministic there is no mechanism for free will and it just adds noise.

    This mirrors my thesis that algorithms just do what physical law requires and there is no need of or mechanism for experience.

    But then comes the problem. I have experiences. I can easily believe that I have no free will. I don’t even know how to define free will in the context of physical law. But I have experiences. Just like free will I don’t know how to define experience. I cannot ground them in physical law. But I have experiences.

    I think our idea of free will simply derives from the way we experience our thoughts. Solve the problem of experience, then, and the free will issue may resolve itself.

  84. RickKon 19 Oct 2017 at 3:19 pm

    Paul – do you think different species on different levels of the complexity scale have different degrees of “experience”?

    Does a mouse or an earthworm or an amoeba or a virus experience the world in your definition? Do you think there are degrees of experience? And if so, what’s the lowest, most simple level and how does it differ from the most complex?

  85. Paul Parnellon 19 Oct 2017 at 4:06 pm

    RickK,

    I don’t know. More to the point is how can I know? The whole point is that the subjective cannot be measured.

    If we go that way, what if we asked whether a hurricane was conscious? It processes information, has feedback loops, and can be thought of as an algorithmic process. It is connected to its environment and responds to it. I see no algorithmic mechanism for a sense of self but I don’t see why it can’t experience other things. I also don’t see why it would.

  86. BillyJoe7on 19 Oct 2017 at 4:18 pm

    Paul,

    Rick: “Does a mouse or an earthworm or an amoeba or a virus experience the world in your definition? Do you think there are degrees of experience? And if so, what’s the lowest, most simple level and how does it differ from the most complex?”

    And how does the lowest, most simple level of experience differ from no experience at all?

    Are viruses a form of life or non-life? It is generally agreed by biologists that they are non-life, but it’s a close call and many biologists disagree. Therefore, going from non-life to life must be a small incremental step rather than a giant leap. In other words, life must emerge out of a certain complex interacting arrangement of molecules. It’s not actually coded for in the DNA (otherwise show me the sequence of bases that codes for “life”). Life emerges out of what the DNA does code for.

    Similarly, there is no reason to believe that consciousness/awareness/experience/qualia does not emerge out of certain complex interacting arrangement of molecules in certain parts of the brain.

    I’m not sure why this is such a hard concept to understand.

  87. RickKon 19 Oct 2017 at 4:28 pm

    Hmmm… Ok – so your contention is there are no behaviors that are strongly indicative of “experience”?

    For example, do the feelings of fear or pleasure strongly imply experience? If so, then do the behaviors associated with fear and pleasure strongly indicate experience?

    Judging by the behavior of (most) other humans, I can judge that they have a level of experience. Similarly, watching the social interaction and emotional behavior of gorillas, it seems there is strong indication of experience, though they’re not able to describe their experience to me in abstract or analogous terms like humans can.

    Are you saying that those assumptions I’m making based on observed behavior are invalid?

  88. Pete Aon 19 Oct 2017 at 4:32 pm

    “[Paul] The whole point is that the subjective cannot be measured.”

    Good grief! It seems that you are blissfully unaware of the measuring instruments used in clinical psychology and psychiatry.

  89. Paul Parnellon 19 Oct 2017 at 4:44 pm

    Pete A,

    Stating that a machine isn’t self-aware…

    That’s not what I was objecting to. I have no idea if a given machine is aware but generally presume that they are not. My original quote:

    A machine isn’t self-aware because it doesn’t have our complex physiology, which includes such things as the autonomic nervous system plus a plethora of many different types of sensors and feedback loops.

    I see no necessary logical connection between the ability to have experiences and feedback loops or whatever. The thing does what the laws of physics require. Experiences are not necessary for its function. I make no assertion about what does or does not have experiences.

    If you didn’t have your complex physiology, autonomic nervous system plus a plethora of many different types of sensors and feedback loops, what would you experience (assuming that your brain was working as per usual)? You’d never experience feeling hot/cold, thirsty, hungry, excited, startled, afraid; physically interacting with objects, people, and animals; and many other things that form part of the experience of being a unique human.

    Well if I were born in such a state my brain would have never developed the ability to categorize these things. I don’t know what that would be like if it was like anything at all.

    If as an adult I were placed in such a condition it would just be sensory deprivation. After a time I would start to hallucinate. I would go insane but it seems I would have experiences. I’m just not sure what this has to do with anything.

    An autonomous vehicle is ‘aware’ of its surroundings

    In an algorithmic sense yes.

    An experienced driver is ‘aware’ of their surroundings, but most of their inputs to the vehicle, and their feedback from the vehicle, are processed at a level below conscious awareness.

    Yes they are driving in an unconscious state. Most of what your body does is unconscious and most can’t be consciously controlled. Well at least it isn’t connected to my sense of self. Maybe there is a sub-entity down there with its own consciousness. I just don’t see the relevance.

    I think we are doomed to continue to talk past each other.

  90. Pete Aon 19 Oct 2017 at 6:21 pm

    Paul,

    I wrote: “An autonomous vehicle is ‘aware’ of its surroundings”,
    You replied: “In an algorithmic sense yes.”

    Yes, but you missed the far more important point, which is that an autonomous vehicle is ‘aware’ of its surroundings in a behavioural sense. Its safety depends on its specific behaviours in response to changes in its environment. Exactly the same applies to human drivers. What the machine or the driver is ‘experiencing’ at the time is irrelevant. The driver could be enjoying listening to the car radio.

    But drivers are not driving in an unconscious state[1]; they have to be 100% consciously aware they are the entity who is driving the vehicle, each and every second of the journey. The danger of using a mobile phone while driving is that it causes lapses in this awareness, especially with modern comfortable easy-to-drive cars with cruise control activated, which most of the time need little input from the driver and give little feedback to the driver.

    [1] The heuristics they gradually learnt while learning to drive come, later on, to operate at a subconscious level. The same applies to learning to speak words, then sentences: the initial conscious-only formulation of words, by frequent repetition, slowly gets transferred to subconscious areas of the brain that work much faster than conscious control does. Subconscious processing drastically reduces cognitive load.

    I apologise for being terse previously. I’m genuinely interested in what exactly it is that you want to know. Perhaps we are doomed to talk past each other simply because my experience of being me is so very different from your experience of being you.

    I agree with your two examples of what it could be like if we didn’t have our complex physiology. I guess we can’t remember much from before we learnt to understand words: how the heck could we describe our first experience of, say, eating an apple, when we didn’t know at the time what the object was called, what its colour was called, or what our reaction to its taste was called? If we could remember back to that experience we would need to use a great deal of fabrication to describe it.

  91. RickKon 19 Oct 2017 at 6:28 pm

    Paul Parnell, my last post was directed to you in response to your statement:

    “I don’t know. More to the point is how can I know? The whole point is that the subjective cannot be measured.”

    It seems to me that declaring the presence (and even the magnitude) of experience to be undetectable and unmeasurable is an unsupported assertion. It sounds like you’re implying experience/awareness can’t be detected or measured and therefore can’t be studied or replicated.

  92. TheTentacleson 20 Oct 2017 at 5:12 am

    Paul P, I suspect Galen Strawson has moved beyond his earlier work (and very hard position) on free will. I was referring to his recent criticisms of illusionist arguments that basically state consciousness is so mysterious, how can matter be the substrate, ergo, illusion. This annoys many lay people who seem fairly convinced of their own phenomenology; and Strawson simply deflects the mystery-of-consciousness to mystery-of-matter. A deflection (handwaving as you say), but one which antagonises naïve intuition a little less…

    Pragmatically, I do think this approach is more fruitful — if I study perception, I work on the assumption of sensory subjective awareness, investigating it as-a-thing (and can relate it across subjects). Knowing the machinery allows us to manipulate the machinery, which manipulates the phenomenology in consistent ways. Someone will always push the problem one step back, invoking mind-game metaphors of bats, zombies, or droll workers stuck in a room translating Chinese…

    I think skepticism (in the philosophical sense) is a waste of time (“the subjective cannot be measured”). Yes, you can claim that no entity other than yourself is subjectively conscious through direct observation, but the world contains huge amounts of structured sensory experience interacting with others that suggests otherwise. If you want to retreat into skeptical solipsism, no amount of arguing on the internet can change your mind (not saying you personally are arguing this).

    But I will work on the pragmatic assumption that my sensory experiences, my phenomenology, are shared by others, because my core subjective experience (pleasure, fear, wonder, stress etc.) drives behaviours consistent with others (measured through direct observation and communication with others). We measure this daily in the lab, quantify it, use it to make predictions. Drugs have broadly similar effects on it. Anaesthesiologists depend on manipulating it to stop us dying during surgery. We write poems, books, music that depend on it to resonate with others. It is obviously anthropomorphic, relying on our understanding of our own subjectivity. It takes a long time for us to intuit it in others (watch any two two-year-olds playing with each other to instantly understand how difficult empathy and cooperation are to develop), but we do (some more than others)!

    So now we get back to machines. We routinely measure awareness of humans and animals, and use this constructively in so many ways. We are studying the matter which is intimately and causally related to it. So our current ideas are obviously based on this. A complex rock formation does not appear to be conscious no matter how we interrogate it; a complex brain formation clearly is. We causally manipulate parts and observe changes to awareness, to subjective phenomenology. No other spooky theory (God, quantum physics) provides a better bridge or a more useful heuristic at present. Perhaps we will discover a new theory of matter which provides the “missing link”, perhaps not.

    Our anthropomorphic approach may misguide us; can we really know that a highly parallel distributed in-silico phenomenology will be similar to ours? Will it care to communicate with us to help us understand? Will our imagination limit the scope of potential consciousnesses we can conceive of? Perhaps silicon can never support subjective phenomenology (it doesn’t contain spooky XRURON subatomic particles like carbon does), or perhaps we could never prove it didn’t. All fascinating questions with no clear answer. But still, learning about the brain and the parsimonious link of its matter to our awareness, and applying that knowledge, will continue to provide clear, measurable progress in understanding our world, and will fill the gaps of our knowledge one step at a time.

  93. Nidwinon 20 Oct 2017 at 6:09 am

    Pete A

    “[1] The heuristics they gradually learnt while learning to drive come, later on, to operate at a subconscious level. The same applies to learning to speak words, then sentences: the initial conscious-only formulation of words, by frequent repetition, slowly gets transferred to subconscious areas of the brain that work much faster than conscious control does. Subconscious processing drastically reduces cognitive load.”

    Do you think, Pete, that dream states in sleep could be related, and could help in the transfer of learned actions to subconscious areas of the brain, so those actions can also start to operate on a more subconscious level?

    On a side note,
    I never thought about the subconscious processing of learnt actions but this explains certain aspects of the tingles. This would mean that I’ve reached a level where willful tingling isn’t only on a voluntary basis but now on a subconscious level too, for whatever reason.

  94. Pete Aon 20 Oct 2017 at 9:31 am

    Nidwin,

    I’ve been following psychology research for decades and neuroscience for at least ten years so I’ve seen many conflicting findings and have learnt to be skeptical. Part of the problem, if not the main problem, is both lack of replication and replication failures.
    https://en.m.wikipedia.org/wiki/Replication_crisis

    I think it fair to say that lack of sleep / poor sleep quality can seriously impair the learning processes within the brain. But it isn’t the only thing that interferes with learning processes; trauma has been identified as a cause, which seems to affect the learning ability of some victims much more than others, for unknown reasons. It seems to me that some stages of sleep would not help the learning process, e.g. REM sleep, because the brain activity is similar to being awake. Some of the research I’ve read suggested that lack of REM sleep interferes with learning, but it showed only a correlation between the two; it did not provide evidence to support the conclusion that lack of REM sleep causes impaired learning.

    Riding a bicycle is a wonderful example of how an extraordinarily difficult cognitive task can be transferred to subconscious networks, allowing the task to be performed with a very low level of cognitive load.

    The downside of this learning process is when we learn things inadvertently, because it’s nigh on impossible to unlearn them. Most people have acquired at least one annoying little habit, of which they are blissfully unaware, but it’s obvious to observers. One of mine is talking to machines, often asking them questions. Thus far, none of them have answered so I haven’t completely lost the plot, yet!

    Subconsciously-learnt physiological reactions are fascinating. E.g., if someone in a group of people starts talking about fleas, it isn’t long before some of the group starts itching or scratching — even if they’ve never personally experienced flea bites.

    These things amuse me during discussions of AI, consciousness, and self-awareness. It’s irrelevant whether or not we could ever produce a machine which mimics human behaviour because there would be no use for such a machine — we already have circa 7.5 billion ‘machines’ on Earth that perfectly emulate the diversity of human behaviour. Who wants an AI machine that catches the common cold, takes sick leave, demands holiday pay, tells lies, gets grumpy, does embarrassing things when it’s drunk, changes the TV channel while we’re watching a programme, keeps scratching one of its ears for no reason, refuses to wash the dishes after it’s finished eating, and belches while it’s saying goodnight to us?

  95. RickKon 20 Oct 2017 at 10:33 am

    Just a quick note of appreciation for TheTentacles comment above. A sound, practical position presented succinctly and eloquently.

  96. Sarahon 20 Oct 2017 at 11:46 am

    Hahaha man

    I hope Egnor and Hardnose will be alive when the first self-aware AI comes online. Seeing them move the goalposts will be fun.

  97. hardnoseon 20 Oct 2017 at 12:20 pm

    “I hope Egnor and Hardnose will be alive when the first self-aware AI comes online. Seeing them move the goalposts will be fun.”

    Only a dogmatic True Believer has this level of certainty.

    By the way, I am not at all against AI research. They discover lots of useful and interesting things in the process of failing to develop AI.

    In the same way, biology gets more interesting and biologists continue failing to understand life, and physics gets more interesting as physicists fail to understand matter.

    Technology and the human learning experience are interesting. It is our nature to explore and discover. But that does NOT mean we can ever figure out Nature.

  98. Willyon 20 Oct 2017 at 12:29 pm

    hardnose: I don’t see any certainty. Where do you see certainty? You’ve used enough straw over the years to burn down a major city.

  99. chikoppion 20 Oct 2017 at 12:59 pm

    [hardnose] In the same way, biology gets more interesting and biologists continue failing to understand life, and physics gets more interesting as physicists fail to understand matter.

    Continue failing?

    Gene therapy. Synthetic DNA. Minimal engineered organisms. Viral delivery mechanisms. CRISPR. Cellular reprogramming. Creating eggs from stem cells. Immunotherapy. Quantum teleportation. Quantum computing. The Higgs boson. Gravitational wave detection. CERN. Carbon nanotube computing. Etc.

    This doesn’t even scratch the surface of the last decade. These accomplishments require understanding and each milestone represents the establishment of new knowledge.

    In the meanwhile the hardnoses of the world have accomplished nothing and understood nothing.

  100. Willyon 20 Oct 2017 at 1:30 pm

    hardnose: Did you ever consider the utter absurdity of YOU accusing people of certainty????

  101. Pete Aon 20 Oct 2017 at 2:19 pm

    To be fair to hardnose — he repeatedly claims that the universe is intelligent; and he repeatedly demonstrates the stupefying depth of this intelligence.

  102. BillyJoe7on 20 Oct 2017 at 4:16 pm

    He has deluded himself into thinking that he knows better than the world’s AI researchers, biologists, and physicists who’ve made these specialised fields their life’s work, while repeatedly demonstrating his almost complete ignorance of all these fields of study.
    The hubris of Dunning-Kruger.

  103. mumadaddon 20 Oct 2017 at 5:51 pm

    This thread has been really useful for me. I had some intuitions about the likelihood of achieving true A ‘G’ I that have been properly messed with by Tentacles’ (an unknown Greek philosopher) posts, and some frustrations about ‘primary consciousness’ as an illusion that have been lucidly expressed by Paul Parnell.

    It is interesting to think about how a machine intelligence could have anything approaching motivations, and whether primary consciousness could exist without motivations. Life has always been selected for on the basis of fulfilling the purpose of reproduction, and secondary goals that achieve this purpose are baked into all behaviour, then tertiary goals for more complex behaviour. At some point primary consciousness joined the ride, and then ever more developed consciousness. It is probable, I would think, that consciousness itself was selected for. So you have a system that was designed from the ground up, over millions of years, that is very much purpose driven.

    None of this is to say that consciousness isn’t a product of system architecture and networking, just that I would imagine that the architecture and networking are so complex as to be impossible to replicate from the top down, and also we have no way comparable with evolution to build it from the bottom up.

  104. yrdbrdon 21 Oct 2017 at 2:13 am

    I couldn’t get past the first couple of paragraphs because, to be frank, it’s poorly written.

  105. Pete Aon 21 Oct 2017 at 10:07 am

    yrdbrd,

    “I couldn’t get past the first couple of paragraphs because, to be frank, it’s poorly written.”

    Are you referring to Dr. Novella’s article, or to the article to which he linked in his opening paragraph? Either way, I think it is logically impossible to determine whether or not an article is poorly written without reading beyond its first couple of paragraphs. I am, however, fully aware of the fact that the majority of those who read articles posted on the World Wide Web have an attention span of circa five seconds.

  106. BillyJoe7on 21 Oct 2017 at 3:47 pm

    Pete,

    I think he is referring to the article to which SN is responding and to which he linked.
    And I think he is referring to the first three paragraphs after the introduction headed “Preliminaries”.
    If so, I have to agree that it is badly written.
    I don’t however see this as a reason to stop reading the article.

    Here are the bad bits:

    “beacon of controversy”
    “prolific people”
    “Most hypothesis” (could have been a typo)

    In fact that whole sentence is badly written:
    “Most hypothesis, wrapping around AI are grim and are paralleled with the technological singularity”

    “Newage”
    Can’t find that word even in Google.

  107. BillyJoe7on 21 Oct 2017 at 3:48 pm

    …excuse the typo in the last sentence. 😀

  108. Pete Aon 22 Oct 2017 at 1:51 am

    BillyJoe,

    I noticed those things in the article. I’ve read so many articles written by people who, like me, aren’t great at writing good English that I tend to ignore the mistakes.

    I’ve been asked to write articles on one of my pet subjects, but I won’t because I know I’ll make loads of similar mistakes. During my career I wrote dozens (perhaps a few hundred) technical documents and was lucky to work with people who were more than willing to convert my drafts into appropriate English. In return, I’d help them with the technical details in their documents.

  109. BillyJoe7on 22 Oct 2017 at 3:23 am

    Pete,

    I find bad writing jarring, but I will overlook this if the author is promulgating views that he defends with logic and evidence. Not everyone can be a good writer, but that should not preclude them from putting their point of view.

    I’ve just finished reading a speech by University of Chicago president, Robert Zimmer, on “freedom of speech” vs “safe places” and “trigger warnings” (and the recent trend in universities to “disinvite” invited speakers). It’s not very well written (maybe because it is the text of an actual speech he gave), but it is a first class defense of free speech against the pernicious influence of what others have called the “regressive left”, especially amongst the student population.

    https://president.uchicago.edu/page/address-colgate-university

  110. BillyJoe7on 22 Oct 2017 at 3:24 am

    …btw, I haven’t noticed your bad writing so it can’t be too bad. 🙂

  111. TheTentacleson 22 Oct 2017 at 7:29 am

    mumadadd: ha yes, a lost pupil of Antisthenēs; fell out with Diogenēs (jealousy of his originality on my part really), and somehow ended up here…

    As Steven Novella spoke about “loops”, and loops are a central theme of brain anatomy and a current inspiration for new models in AI, here is Douglas Hofstadter on his skepticism of current AI:

    https://qz.com/1088714/qa-douglas-hofstadter-on-why-ai-is-far-from-intelligent/

    He is famous of course for the book “Gödel, Escher, Bach”, but I mention it here because his later book extends the “loop” more completely to take centre stage in explaining the “I” in consciousness: “I Am a Strange Loop”…

  112. Pete Aon 22 Oct 2017 at 8:00 am

    BillyJoe,

    My technical documents included the relevant science and mathematics so my rhetoric was, at least, always thoroughly based in evidence and logic. Therefore, I totally agree with your first paragraph.

    Whenever I read articles or comments which speculate on the future of AI, I’m unable to take them seriously. I view them in the same way I view episodes of Star Trek. I spent my career working with complex systems and apparatus (‘machines’): using them; repairing them; modifying them; designing and constructing new ‘machines’ for the purpose of providing solutions to client-specific problems. My ‘recipes’ were always laced with a good sprinkling of humour 🙂 Even my documentation of the systems contained enough sporadic satire to make reading it fun, instead of boring. And, of course, some of my log files included phrases that are NSFW, because it’s the best way to entice users to monitor their mission-critical log files. Science and mathematics should be fun, not sterile and boring.

    All of my work included parts of me within it: not just my knowledge, but also love, kindness, empathy, and compassion. Machines don’t have such attributes per se, and never will, but they can convey these human attributes and sentiments by proxy due to our innate anthropomorphism.

    Thank you for the link to the speech by Robert Zimmer. It has explained to me perhaps the main reason for my reluctance to write articles.

  113. BillyJoe7on 22 Oct 2017 at 8:44 am

    Pete,

    It sounds like you work in an interesting field in which you have developed some expertise, and which you make even more interesting by your creative personalised approach. That must be very satisfying.

    I don’t have an area of expertise and have had to resign myself to being a sort of “jack of all trades and master of none”, as the saying goes. But I’ve come to think that it’s not such a bad thing – to not lose sight of the forest for the trees.

    Some can do both, of course, but then there are also those sorry individuals who can do neither. Some have been loitering around this blog for years spouting nonsense as if they’re revealing deep insights whilst revealing only that they are clueless – and, as someone commented recently, too clueless to know how clueless they really are.

  114. Pete Aon 22 Oct 2017 at 10:06 am

    BillyJoe,

    The thing I miss most since retiring is the teamwork, not just between us specialists in our own fields, but especially with the support staff. The secretaries, the caterers, and the cleaners all took it upon themselves to provide us with sobering reality checks several times each day! Their banter and their unprintable incessant overt p1ss-taking mockery of us were the very things that motivated us to perform our job with the utmost integrity — as they well knew, because we regularly and sincerely thanked them both for keeping us in touch with reality, and for performing their jobs with not only due diligence, but also far beyond their call of duty.

    Machines that have non-human physiology cannot possibly emulate any of those awesome people.

  115. TheTentaclēs on 22 Oct 2017 at 11:29 am

    Just released from NYU: David Chalmers hosts a debate about the importance of innateness in AI (how much is nature vs. nurture, and what do we need for the future of AI):

    https://www.youtube.com/watch?v=vdWPQ6iAkT4

    !!!Warning: 2 hour timesink!!!

    And I’ve linked to it before, but this BBS article (preprint at arXiv) makes a clear case for the innate mechanisms that are needed for the next steps forward:

    https://arxiv.org/abs/1604.00289

    Their points (a) and (b) are the causal models which I’ve argued above are exactly where we are currently headed.

  116. TheTentaclēs on 22 Oct 2017 at 11:33 am

    PeteA: hearty applause for your gratitude to the support staff, who are essential to balance and whose reality-checks are so under-appreciated in the lofty halls of research and/or academia.

  117. Pete A on 22 Oct 2017 at 11:40 am

    TheTentacles,

    Both you and Douglas Hofstadter have demonstrated your abject failure to address (I shall currently resist my strong temptation to state: your abject failure to understand) communications theory and practice.

    A message instigated by a person or a machine is utterly meaningless per se. Its meaning depends solely upon first establishing that the domain of the sender’s message is sufficiently compatible with the domain(s) of its intended recipient(s). In other words, a message is utterly meaningless unless the sender and its recipient(s) are using the same extensive set of metadata, which is implied in, but not included in, the message.
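
    To make that concrete, here is a toy sketch (Python; the schema and values are made up for illustration): the same bytes only acquire meaning once sender and recipient share the schema, which the message itself never carries.

        import struct

        # Hypothetical shared metadata: both ends have agreed, outside the message,
        # that the payload is (temperature in deci-degrees C, sensor id), big-endian.
        SHARED_SCHEMA = ">hH"

        def send(temperature_decidegrees, sensor_id):
            # The sender packs values into an opaque byte string; the schema travels separately.
            return struct.pack(SHARED_SCHEMA, temperature_decidegrees, sensor_id)

        def receive(payload, schema):
            # The recipient recovers meaning only by supplying the agreed schema.
            return struct.unpack(schema, payload)

        message = send(-52, 7)                    # -5.2 degrees C from sensor 7
        print(receive(message, SHARED_SCHEMA))    # (-52, 7): the intended meaning
        print(receive(message, ">HH"))            # (65484, 7): same bytes, wrong metadata, wrong meaning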

  118. Pete A on 22 Oct 2017 at 12:04 pm

    TheTentacles,

    I wrote the above before I saw your reply on 22 Oct 2017 at 11:33 am. Thank you very much indeed, and I apologise for my comment being derogatory towards you. I’m finding it increasingly difficult to differentiate between genuine commentators and non-genuine commentators.

  119. Sarah on 23 Oct 2017 at 2:27 am

    Hardnose –

    If you’ll have the intellectual honesty to admit that you’re wrong if and when self-aware AI becomes a thing, I might have a modicum of respect for you.

    As for me, I don’t have absolute certainty – just 98% or so.

  120. TheTentaclēs on 23 Oct 2017 at 9:36 am

    Pete A: no worries 🙂 Separating signal from noise, especially with so many professional noise-makers around, can be quite a challenge online…

  121. bsoo on 30 Oct 2017 at 10:06 pm

    Almost anything we can design in hardware can be modeled and simulated with software. There are probably a few exceptions, but the only one that comes to mind right now is true random number generation, which can easily be handled by connecting a hardware random number generator to a general-purpose computer.

    If we know how to build a “brain chip”, we can simulate it in software and the result will be virtually identical. It wouldn’t be as efficient as specialized hardware, but that advantage would likely be trumped by the vast amounts of available general-purpose computing power.

    Unless you are a dualist, there is no reason to believe that a few special-purpose hardware modules connected to a lot of general-purpose computing power couldn’t produce AI that is indistinguishable from biological intelligence in every meaningful way.
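
    As a minimal sketch of that division of labour (Python; /dev/hwrng is a typical Linux location for a hardware RNG and may not exist on a given machine), the dedicated hardware supplies only the true randomness and everything else stays in ordinary software:

        import os

        HWRNG_DEVICE = "/dev/hwrng"  # common Linux path for a hardware RNG; may be absent

        def true_random_bytes(n):
            # Prefer the dedicated hardware source; fall back to the OS entropy pool.
            try:
                with open(HWRNG_DEVICE, "rb") as dev:
                    return dev.read(n)  # reads up to n bytes from the hardware device
            except OSError:
                return os.urandom(n)

        seed = true_random_bytes(16)
        print(seed.hex())  # the rest of the simulation can run as general-purpose software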

  122. TheTentaclēs on 01 Nov 2017 at 12:36 am

    I’m sure Steve Novella isn’t a hardware/software dualist; I think he usually argues from the view that simulation in software is currently untenable in practice, and therefore that faster progress will be made with custom silicon.

    Not sure if this is open-access, but this week’s Science has a provocative review from Stanislas Dehaene (a well-regarded and smart cognitive neuroscientist who has written one of the most recent popular-science books on the neuroscience of consciousness) and colleagues, on what computers will need to become conscious:

    What is consciousness, and could machines have it? Stanislas Dehaene, Hakwan Lau, Sid Kouider
    http://science.sciencemag.org/content/358/6362/486.full

    His basic classification into three levels of cognitive processing is really part of what new research in AI is already doing in terms of multi-modular networks (content sharing, his C1) and self-reference (estimating Bayesian-like self-probability, his C2). There is a Science podcast interview with him this week, where he is very provocative with a hard functionalist stance. They note that the specific computations of C1 and C2 clearly correlate with subjective phenomenology in humans (using blindsight as the example), and:

    “Although centuries of philosophical dualism have led us to consider consciousness as unreducible to physical interactions, the empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations.”
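
    For what it’s worth, here is a toy sketch of that flavour of architecture (Python; this is only loosely inspired by the C1/C2 distinction, not an implementation of Dehaene et al.’s proposal, and the module names and noise figures are invented): each module reports an estimate plus a confidence in its own output (C2-ish self-monitoring), and a shared workspace makes those reports globally available so a downstream decision can weight them (C1-ish content sharing).

        import random

        class Module:
            # A specialist that returns an estimate plus a crude self-assessment of reliability.
            def __init__(self, name, noise):
                self.name = name
                self.noise = noise

            def report(self, true_value):
                estimate = true_value + random.gauss(0, self.noise)
                confidence = 1.0 / (1.0 + self.noise)
                return {"source": self.name, "estimate": estimate, "confidence": confidence}

        def broadcast(workspace, report):
            # C1-ish step: make one module's content available to every other module.
            workspace.append(report)

        def global_decision(workspace):
            # Downstream readers weight the shared reports by each module's confidence.
            total = sum(r["confidence"] for r in workspace)
            return sum(r["estimate"] * r["confidence"] for r in workspace) / total

        workspace = []
        for module in (Module("vision", noise=0.5), Module("audition", noise=2.0)):
            broadcast(workspace, module.report(true_value=10.0))
        print(global_decision(workspace))  # dominated by the more self-confident module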

  123. BillyJoe7 on 01 Nov 2017 at 6:27 am

    The article is not open access at that site, but Google is your friend:

    http://www.pas.va/content/accademia/en/publications/scriptavaria/artificial_intelligence/dehaene.html

  124. TheTentaclēs on 01 Nov 2017 at 10:44 am

    Thanks for the link, BillyJoe, though it is a different article (with the same title; he plagiarised himself!), with similar cognitive evidence but much less detailed links to current AI research. GAH, paywalls are a POX on scientific dissemination and progress…

  125. Pete A on 01 Nov 2017 at 1:33 pm

    TheTentaclēs,

    This has always been my attitude towards scientific research (including my own work):
    1. Keep it a secret and use it to create products/solutions that actually work;
    2. Patent it, which publishes it while at the same time protecting it;
    3. Make it freely available to the whole world for the sake of progress and education;
    4. Publish non-vital (literally: non-essential-to-life) research in subscription-based (money-making) websites/journals, but make this scientific research freely available to the world a few weeks or a few months after the website/journal has published it.

    There isn’t a market for selling a newspaper that is a few days old. So paywalls attempting to sell ‘old research’ are just stifling both progress and education. Scientific research should be fully open to inspection by everyone, because even a nine-year-old can be more than smart enough to scientifically debunk the claims of many adults on planet Earth!
    https://en.m.wikipedia.org/wiki/Emily_Rosa
