Jun 10 2014

Turing Test 2014




90 Responses to “Turing Test 2014”

  1. jasontimothyjones on 10 Jun 2014 at 8:53 am

    The “not Clever” problem with bots is that they generally like to control the conversation, and some I have looked at seem to be using ‘cold reading’ type techniques. A recent conversation with A.L.I.C.E. went along these lines:

    Bot: Where do you live?
    Me: London
    Bot: Does your flat have a lift? (This is the cold-reading example; most people living in cities live in flats/apartments.)
    Me: What’s a lift? (We don’t call them lifts.)
    Bot: You tell me what’s a lift.

    It seems that they all turn the question back around on you if you try to run the conversation.

    They are fun to play with, but I have had more intelligent conversations with Siri.

  2. jasontimothyjones on 10 Jun 2014 at 9:01 am

    Oh, you can also try talking to him in Ukrainian here: http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/ It kinda does not like it.

    I would like to see two bots chatting with each other, having a rational conversation.

  3. Ori Vandewalle on 10 Jun 2014 at 9:16 am

    We’re not there yet, but this does eventually get into sticky territory. For example, does Watson understand the questions it answers on Jeopardy? Most would probably say no. But that was years ago.

    Now Watson can be given a debate claim and can (after doing some research) devise arguments for and against the claim (http://io9.com/ibms-watson-can-now-debate-its-opponents-1571837847). Does Watson understand what it’s debating? Again, most would probably say no.

    Yet there is no sign that this progression in artificial intelligence (whatever that may be) is slowing down. (And some will say it’s getting faster, of course.) The sticky territory is this: as AIs become more and more sophisticated and intelligent, it will become more and more necessary to define just what it is that makes humans special. More specifically, what is this thing we humans do that we call understanding, and how does it differ from other processes that produce similar (or identical, or better) results?

  4. archer on 10 Jun 2014 at 9:26 am

    Most people have debased the original Turing Test. (“We can fool 33% of people!”)

    The test involves two participants and a judge. The judge knows one is a computer and the other is a human. Both participants have to convince the judge that he/she/it is the human and the other is a machine. If the judge designates the computer as the human more than 50% of the time, the computer passes.

    In other words, the original Turing Test requires the computer to appear more human than a real human.

  5. Bill Openthalt on 10 Jun 2014 at 9:37 am

    Ori Vandewalle —

    More specifically, what is this thing we humans do that we call understanding, and how does it differ from other processes that produce similar (or identical, or better) results?

    It’s the consciousness, stupid!

  6. carbonUnit on 10 Jun 2014 at 9:50 am

    Since when is 30% a “pass”?

  7. Bill Openthalt on 10 Jun 2014 at 10:16 am

    carbonUnit –

    Since the days we don’t want children to flunk their year. After all, he’s a 13-year-old Ukrainian with special needs.

  8. Steven Novella on 10 Jun 2014 at 10:24 am

    Apparently there is an online version here: http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/

    I can’t get through, however.

  9. briangreenadams on 10 Jun 2014 at 10:27 am

    Interesting. I hope you can lay out in SGU or somewhere what the tests for intelligence are. It seems the bar keeps being raised. I remember reading in Gödel, Escher, Bach that the colloquial understanding of when AI would be achieved kept moving, even 30 years ago when it was written. Hofstadter suggested that if a computer were ever to beat a Grandmaster at chess, this milestone would be explained away and something else would be the indicator of AI.

    I suppose that there are more technical understandings of what intelligence is, rather than that which is indistinguishable from organic humans. Or rather more detailed ways of distinguishing.

    I always thought that CAPTCHAs would be a bit of a milestone.

  10. briangreenadams on 10 Jun 2014 at 10:32 am

    I guess the other thing to mention is that there is no way to know whether the thing you are talking to is a “real” intelligent consciousness or a machine that looks and acts the same as one.

  11. Bronze Dog on 10 Jun 2014 at 10:50 am

    I think the Turing test has a good core idea, but I wouldn’t consider a few minutes of friendly conversation to be a meaningful test. I’d want it to perform under a lot of different contexts, like solving word problems, deduction, being taught how to do something, teaching a kid to do it, and other stuff like that.

    I suspect a vital part of artificial intelligence will be giving the computer a body, whether real or virtual, so that it can observe and interact with a world like a human. This isn’t the sort of thing you can program from scratch.

  12. jasontimothyjones on 10 Jun 2014 at 10:53 am

    Steven, I think this is a younger version….maybe when he was 12 1/2
    http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/

  13. SARA on 10 Jun 2014 at 10:56 am

    It’s not a good indicator of actual AI, but I think the Turing tests have value commercially in email response systems for companies like Amazon.

    If they can get a program to sound like a human, it will cut out a huge amount of the anger that their customers feel when getting automated or cut-and-paste responses. People who recognize they are being responded to by an unacknowledged automated program resent it. I think it feels a bit like seeing bad CGI, but at the level of a personal insult. So overcoming that would be valuable to companies.

    Obviously getting one with better AI would be even more valuable than just sounding smart. Less human intervention costs less.

  14. JamieGeek on 10 Jun 2014 at 11:06 am

    Have you seen this critique of the test:
    https://www.techdirt.com/articles/20140609/07284327524/no-computer-did-not-pass-turing-test-first-time-everyone-should-know-better.shtml

  15. palwador on 10 Jun 2014 at 11:08 am

    Bill Openthalt -

    “It’s the consciousness, stupid!”

    I disagree to some extent. I think what Ori was alluding to was how we define our own consciousness.

    I find myself thinking about this from time to time. What makes me a conscious being? Where is the line between merely reacting to input with predetermined parameters, without free will, and being self-aware? Some might say that it’s not “thinking” until it makes choices on its own, or takes some definitive initiative. But haven’t we seen chess programs do that? Laying traps, and choosing options?

    I would argue that we could program an AI to take initiative and make choices in a real environment as well. And if it does that well enough, does it really matter if it’s really thinking?
    My answer is no, it does not. It would never, in my opinion, think like a human. It would think like a machine. Just like a frog will never think like a cat.
    What makes the choices made by a human more valid than the choices made by a machine?
    I’m almost calling computers a separate species at this point. We are far, far away from that, if we ever get there. However, if we do, what would be the difference between the evolutionary programming that we have gone through and the programming the computers have gotten?

  16. The Other John Mc on 10 Jun 2014 at 11:17 am

    I always kinda liked the Turing Test because it is an interesting operationalized definition of “intelligence” or “thinking” on which empirical testing can be accomplished… but this definitely should not be confused with how humans think or are intelligent. We can barely manage to define those terms at present.

    I think a chatbot in an unrestricted Turing Test which could fool most of the people most of the time, would really be a compelling demonstration, but implementing it would heavily rely on understanding human-like intelligence, thinking, even aspects of perception…which is what Turing was getting at with this idea.

    Our group did some cyber-security research a few years back trying to find the “killer questions” that would “out” a bot (it would be obvious from its answers or behaviors that it couldn’t answer meaningfully). Some of these types of questions/suggestions were:

    - don’t ask questions that only have one-word answers; require elaboration
    - ask questions that require meaning or understanding of a concept (what shape is a door? what happens to an ice cube in a drink? questions about geography, history, etc.)
    - test reasoning abilities (how many total feet do four cats have? what does the letter M look like upside down? if you hold a 50 pound weight in one hand, and lift one leg, what might happen?)
    - what comes next in this sequence? A1, B2, C3,…
    - test emotional reasoning: (if someone pushed you in a pool or threw hot coffee on you, how would you feel? why? how would you feel if your pet dog died?)
    - just typing nonsense or gibberish, or acting bizarre, to see how the system responds
    - insert intentional misspellings that might confuse a bot but not a human
    - keep an eye out for obvious repetition of what you type being parroted back to you in a question
    - try to avoid letting the system steer the conversation into territory it is comfortable with

    We also did some analyses of bot behavior using Turing Test transcripts, and found that their typing patterns are un-humanlike and so could be easily spotted by an automated detection system. The point is that even a sophisticated chatbot will also need to have human-like typing patterns, behavioral patterns, memory of conversation, the ability to conceptualize, deep understanding of the physical world, how things interact and behave, social understanding and customs, historical knowledge, current events, etc., etc.…this is why an unrestricted Turing Test could be said to represent true “intelligence” or “thinking”.
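
    As a rough illustration, here is a minimal sketch of that kind of automated screening: it flags unnaturally regular reply timing and verbatim parroting. The thresholds and features are invented for the example, not taken from the study mentioned above.

        import statistics

        def looks_like_a_bot(reply_delays_sec, prompts, replies,
                             min_delay=0.8, max_jitter=0.15):
            """Crude heuristics for flagging chatbot-like behaviour in a transcript.

            reply_delays_sec: seconds between each prompt and the reply to it
            prompts/replies: parallel lists of judge prompts and the suspect's replies
            The thresholds are arbitrary placeholders, not empirically tuned values.
            """
            flags = []

            # Humans pause to read and type; near-instant, highly regular delays are suspicious.
            if min(reply_delays_sec) < min_delay:
                flags.append("replies arrive faster than a human could type")
            if len(reply_delays_sec) > 2 and statistics.pstdev(reply_delays_sec) < max_jitter:
                flags.append("reply timing is unnaturally regular")

            # Parroting: most of the prompt's words echoed straight back as a question.
            for prompt, reply in zip(prompts, replies):
                prompt_words = set(prompt.lower().split())
                overlap = len(prompt_words & set(reply.lower().split()))
                if prompt_words and overlap / len(prompt_words) > 0.8 and reply.strip().endswith("?"):
                    flags.append("parroted the prompt back as a question: " + repr(reply))

            return flags

        # Example with made-up transcript data:
        print(looks_like_a_bot(
            reply_delays_sec=[0.31, 0.30, 0.32],
            prompts=["Where do you live?", "What is a lift?", "How many feet do four cats have?"],
            replies=["I live in Odessa.", "You tell me what is a lift?", "Why do you ask about cats?"]))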

    Dr. Novella pointed out the “cheating” aspect of the winning chatbot; he merely posed as a barely intelligible naive young person. Simulate a seasoned and savvy physics professor, for extended periods of time with lots of test-takers, and THEN if you can convince people, you’ve really got something.

  17. Ori Vandewalle on 10 Jun 2014 at 11:40 am

    Bill, indeed, it’s the consciousness. But of course, the next question (which I’m sure you know) is: what is consciousness? If the current neuroscientific models are largely correct, consciousness is an emergent property of a complex network of interconnected brain modules that, ultimately, relies on chemistry, biology, and physics. If we can duplicate that–a complex network of interconnected modules with emergent properties–and that network produces results vaguely similar to a human, has consciousness been achieved?

  18. Steven Novella on 10 Jun 2014 at 12:02 pm

    brian – my take is that what has changed is the assumption that true AI would be necessary to accomplish certain tasks, like beat a grandmaster at chess, or have a convincing conversation. These were considered markers of true AI. Then programmers were able to meet those markers with an expert system or complex algorithm without anything that can be considered a truly self-aware AI.

    What has shifted is our assumptions about what can be accomplished with expert systems vs what would require true AI.

  19. hardnose on 10 Jun 2014 at 12:31 pm

    For once I completely agree with Steve N.

    One thing that made me skeptical is the fact that they did not provide any transcripts with the news reports. They only said transcripts may be provided some day.

    I find it extremely hard to believe anyone could be fooled by Eugene, who seems to be just a sophisticated ELIZA, as far as I can tell from what I have read so far.

    I think this is just another case of people believing what they want to believe.

    I really would like to see transcripts. I would like to know how hard the judges really tried to fool Eugene. Yes I know, he was trying to fool them, but it should go both ways.

    BTW, some big companies are using AI to answer their phones and talking to these mindless bots is beyond frustrating.

  20. Bruce on 10 Jun 2014 at 1:12 pm

    “What has shifted is our assumptions about what can be accomplished with expert systems vs what would require true AI.”

    Makes you wonder how long we can keep shifting those assumptions. Kind of a slippery slope fallacy, but it is conceivable that our brains are just really really complex expert systems.

  21. hardnose on 10 Jun 2014 at 1:39 pm

    The “intelligence” of any computer system is only what its developers thought to provide it with. When a human being acts like a robot, we think they are stupid or not paying attention. We expect a person’s response to fit pretty well into the social context. And social contexts are EXTREMELY complex and hard to predict.

    So whatever Eugene accomplished, I am sure it’s not very impressive.

    Recently I was talking to a robot who answered phones for a big company, and I fooled it in about one second. I just asked “what day is today?” and it said “Sorry I don’t answer personal questions.”

    But if you are not trying to fool the computer, then it can fool you into thinking it’s a real person. If you know nothing about computers, you can be fooled, as people were often fooled by ELIZA in the 1960s.

    A lot of human social behavior is stereotyped and predictable. But most of it is not. There is ALWAYS an element of unpredictability. And that unpredictability is what no computer can deal with, and imo they never will.

    What programmers do most of the time is try to foresee what will happen to the system. Everything is planned and predicted. There is NO spontaneity in an artificial system; any apparent spontaneity is an illusion created by functions that generate random (or pseudo-random) numbers.

    Artificial systems can make a random selection from a predetermined set.

    The people we interact with are NOT like that. We are never sure how they will respond. There are too many unknown variables, in almost any social context.

    Computers keep getting more powerful, and can therefore deal with ever more data. The improvement is in quantity, not in quality. The quality is the same. Eugene is the same, basically, as ELIZA, who was programmed over 50 years ago.
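
    For what it’s worth, the mechanism being described (keyword matching plus a pseudo-random pick from a predetermined set of canned replies) is easy to sketch. This toy example is in the spirit of ELIZA; it is not code from Eugene or any real chatbot.

        import random

        # Everything below is predetermined by the programmer: canned replies keyed by keyword.
        RULES = {
            "mother": ["Tell me more about your family.",
                       "How do you feel about your mother?"],
            "computer": ["Do machines worry you?",
                         "Why do you mention computers?"],
        }
        DEFAULTS = ["I see.", "Please go on.", "Why do you say that?"]

        def reply(user_input):
            """Pick a canned response; the only 'spontaneity' is the pseudo-random choice."""
            text = user_input.lower()
            for keyword, responses in RULES.items():
                if keyword in text:
                    return random.choice(responses)
            return random.choice(DEFAULTS)

        print(reply("My mother bought a computer."))  # the first matching keyword wins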

  22. Steven Novella on 10 Jun 2014 at 1:39 pm

    Bruce – I agree. I explored that issue before. Will we be able to make software that is a complex algorithm-based expert system that can truly imitate the full range of behavior of a human without any actual awareness or understanding? This gets to the P-zombie question also.

    I am reluctant to say “no” because prior such predictions have a bad history. I think expert systems can do a lot more than previously assumed. But they rely on a lot of brute force to accomplish tasks from a top-down approach. Essentially they are accomplishing a similar result with a different process, one that does not involve true awareness.

    I suspect, however, that a sophisticated Turing Test, one that was designed to probe for true creative thinking, for example, would set the bar much higher than the test that Eugene passed. Being indistinguishable from a human even to expert analysis may be beyond the brute force approach. It will be interesting to see how far it goes.

  23. Ori Vandewalle on 10 Jun 2014 at 2:41 pm

    hardnose: Your knowledge of how computers operate is a bit out of date. Computers are perfectly capable of novel responses. See, for example, the software programs that make paintings or compose music. You can argue that all those programs do is copy the styles that are programmed into them, but I could make a very similar argument about human artists. Does any artist spontaneously generate new art without any context? No.

  24. The Other John Mc on 10 Jun 2014 at 2:53 pm

    Bruce: “Makes you wonder how long we can keep shifting those assumptions. Kind of a slippery slope fallacy, but it is conceivable that our brains are just really really complex expert systems.”

    I conceptualize the mind as a huge collection of complex expert systems, swiss-army-knife style, with lots of interactions of the modules. Agree on the slippery slope part, it is an interesting question.

    hardnose — transcripts from previous Loebner Prize contests are available on their website, they usually do make them public eventually.

  25. The Other John Mc on 10 Jun 2014 at 2:57 pm

    hardnose: “I would like to know how hard the judges really tried to fool Eugene”

    It was my understanding that for the Loebner Prize competition, human judges and the human confederates are instructed not to act bizarre or belligerent or super-inquisitive, or otherwise try to fool the other communicator. I’ll have to double-check on that.

    But that issue is why this is considered a “restricted Turing Test”…both time-limits and conversational restrictions apply. An unrestricted test really is where the bar should be set, in my opinion.

  26. The Other John Mc on 10 Jun 2014 at 3:15 pm

    previous contest year’s transcripts: http://www.loebner.net/Prizef/loebner-prize.html

  27. DJCrash on 10 Jun 2014 at 3:20 pm

    I agree with what everyone has said about the bar clearly being too low to be a meaningful test. However, I disagree with the idea that an unrestricted Turing test would not be sufficient to establish AI. AI is by definition a simulation.

    It’s much like the point that Steve often makes about natural and synthetic drugs — a drug is a drug whether or not it is synthetically produced. The same is true of intelligence. If a machine were truly self-aware in the same way a human is, it wouldn’t be “AI”. It would just be “I”.

  28. NorEastern on 10 Jun 2014 at 4:11 pm

    The Turing test was an attempt by a visionary 60 years ago to define a procedure that basically was not testable for 50 years. There was no way that Turing could foresee the advent of supercomputers and terabyte databases. A heroic effort on his part. But dated. And we have seen the end of its usefulness.

    That said, generating a 2014 test modeled on the Turing test should not be difficult. The test should move from the mundane to the abstract. It should abandon the guise of adolescents and consider only intelligent, well-informed, and competent adult individuals. It should bring up difficult, in-depth questions with no right or wrong answers. There should be no discernible patterns to the avenues of inquiry.

  29. Bill Openthalt on 10 Jun 2014 at 8:25 pm

    Ori Vandewalle –

    I think the ability to acquire information on the internal state of the brain’s subsystems combined with the ability to communicate that state (sending and receiving) to other humans and oneself is the foundation of self awareness.
    Speaking and hearing are “promiscuous” in the sense that one hears what one says, leading to a feedback loop as well as the observation of oneself as an external person. What we say is processed by the same subsystems that process what others say, so if we recognise others as independent actors, we also recognise ourselves as such.

  30. dude on 10 Jun 2014 at 11:54 pm

    http://www.smh.com.au/digital-life/digital-life-news/turing-test-what-eugene-said-and-why-it-fooled-the-judges-20140610-zs3hp.html

    This article has some examples of Eugene’s answers. I’m not very impressed by its intelligence.

  31. mumadadd on 11 Jun 2014 at 6:41 am

    Pretty much all current applications of AI are just input/response systems of varying complexity, I think, with no need for any kind of real awareness.

    I’d be really interested in any work that’s being done on creating any kind of AI that’s actually conscious; e.g. build a system that has a model of itself in its environment and real-time monitoring of its internal state. Hanging out on the ‘brain is not a receiver’ thread (now past the mystical 1000-comment marker) has me convinced that this shouldn’t be too difficult to do. It might be a bit sticky ethically though.

    It would be fascinating to interact with something like this and figure out what its conscious experience is actually like. It could be given sensory input, infallible memory, access to vast stores of information. But it would also be missing many components of human intelligence, e.g. emotional responses, biological drives and fears, etc. Specific functionality could then be added on top of this base layer for commercial applications.

    Does anyone here know if this line of research is actually being followed, and what the current state of the art is?

  32. Steven Novella on 11 Jun 2014 at 9:43 am

    Crash – I disagree. The artificial part can refer to being made of non-living silicon (or whatever) and not that it is a simulation. This is where language gets important.

    My point is that there is a difference between simulating intelligence and being intelligent. Simulating is a very top-down, brute force process. True intelligence is bottom up and emergent.

    We may get to the point where we would have difficulty distinguishing the two based purely on output (without examining the process), but that’s the question, right? Can a top-down simulation truly perfectly simulate bottom-up emergent intelligence?

  33. The Other John Mc on 11 Jun 2014 at 9:54 am

    mumadadd — my understanding is that, along the lines you are thinking, autonomous robotics seems to be leading the charge. They seem to have recognized the need to engineer in self-awareness (a model of self within a model of the external environment) and executive decision and control functions that oversee and direct lower-level systems and subsystems, in addition to learning and memory capabilities. Interestingly, a division of something like “conscious” versus “unconscious” processing systems seems to naturally arise out of arrangements like this, in which the higher-level control functions are only partially privy to what the sub-systems are doing, and to which information the sub-systems are feeding up the chain.

    I am not a robotics expert, or autonomous systems expert, so I’m not sure whose work or which labs are leading the charge…though MIT and Carnegie Mellon come to mind, as well as tons of sophisticated robotics work from Asia, particularly Japan. These links might be helpful as well:
    http://en.wikipedia.org/wiki/Artificial_consciousness#Consciousness_in_digital_computers
    http://www.conscious-robots.com/index.php
    http://www.kurzweilai.net/robot-learns-self-awareness

    In my area of expertise, psychology, I do see that the real hardcore study of consciousness (or aspects of it) is proceeding under the topics of “decision-making”, “attention”, and “cognition” (although this last term is so broad it may not be too helpful).

  34. Bill Openthalt on 11 Jun 2014 at 10:01 am

    Steven –

    Can a top-down simulation truly perfectly simulate bottom-up emergent intelligence?

    When limited to a particular scope, almost certainly (cf. chess). The problem is to emulate motivation and the resulting learning, and that cannot be done top-down, in my opinion.
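
    As a quick illustration of top-down, brute-force search within a narrow scope, here is exhaustive minimax for a trivial take-the-last-stick game rather than chess. This is a sketch only; real chess engines add evaluation heuristics and pruning on top of the same idea.

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def game_value(sticks, maximizing):
            """Exhaustive minimax for a toy game: players alternately take 1-3 sticks,
            and whoever takes the last stick wins. Returns +1 if the maximizing player
            wins with best play from this position, -1 otherwise."""
            if sticks == 0:
                # The previous player took the last stick, so the side to move has lost.
                return -1 if maximizing else 1
            outcomes = [game_value(sticks - take, not maximizing)
                        for take in (1, 2, 3) if take <= sticks]
            return max(outcomes) if maximizing else min(outcomes)

        def best_move(sticks):
            """Choose the take that leads to the best guaranteed outcome for the mover."""
            return max((take for take in (1, 2, 3) if take <= sticks),
                       key=lambda take: game_value(sticks - take, False))

        print(best_move(21))  # 1: leaving a multiple of 4 sticks guarantees a win here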

  35. Steven Novella on 11 Jun 2014 at 10:18 am

    Bill – I agree, and chess is an excellent example. Watson’s performance on Jeopardy is another. Open-ended, including displaying creativity, problem solving, abstractions and abstract pattern recognition, etc. – that’s the question.

  36. Ori Vandewalle on 11 Jun 2014 at 11:16 am

    The Other John MC:

    Interestingly, it seems a division of something like “conscious” versus “unconscious” processing systems seems to naturally arise out of arrangements like this, in which the higher-level control functions are only partially privy to what the sub-systems are doing, and to which information the sub-systems are feeding up the chain.

    This is a fundamental aspect of object-oriented programming known as encapsulation. Whether or not it emerges functionally, it is programmed in directly.
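
    For readers who have not met the term, a minimal sketch of encapsulation in that sense: the subsystem keeps its raw state private and passes only a digest up the chain. The class and method names are invented for illustration.

        class VisionSubsystem:
            """A toy 'module' whose raw internal state is hidden from the controller."""

            def __init__(self):
                self._raw_pixels = []  # internal state, not exposed directly

            def sense(self, pixels):
                self._raw_pixels = list(pixels)

            def report(self):
                # Only a digest is passed up the chain, never the raw data.
                return "bright scene" if sum(self._raw_pixels) > 100 else "dark scene"

        class Controller:
            """The 'executive' only ever sees what the subsystem chooses to report."""

            def __init__(self, subsystem):
                self.subsystem = subsystem

            def decide(self):
                return "act on: " + self.subsystem.report()

        vision = VisionSubsystem()
        vision.sense([40, 50, 60])
        print(Controller(vision).decide())  # act on: bright scene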

  37. RC on 11 Jun 2014 at 12:34 pm

    @Hardnose – The bot is always online – you can talk to him if you want. It’s mostly unintelligible – I think it mostly passed because people thought it was too screwy to not be human.

    @Someone else –

    ” It would never, in my opinion, think like a human.”

    But what does that even mean?

    I’m a software engineer. I write code for a living. I don’t see anything the brain does that isn’t consistent with exactly what we think it is – a slow, but very broad, neural network (I know that term is a bit silly in this context). I see lots of things that the brain does drastically better than current software, but I don’t see anything that is intrinsically different.

    Most of what we think of as subjective is, in my opinion, due to different weight being placed on different stimuli during the learning phase. There’s also the complicating factor that we’re all running on slightly different hardware, so you’d expect different results.
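
    A toy illustration of that weighting idea: a single artificial “neuron” given the same stimuli but different learned weights produces different responses. All the numbers are invented for the example.

        def unit_response(stimuli, weights, threshold=1.0):
            """A single artificial 'neuron': weighted sum of stimuli, then a threshold."""
            activation = sum(s * w for s, w in zip(stimuli, weights))
            return activation > threshold

        stimuli = [0.9, 0.2, 0.4]    # e.g. loudness, brightness, novelty
        person_a = [1.5, 0.1, 0.2]   # weights shaped by one learning history
        person_b = [0.2, 0.1, 2.0]   # a different history, different weights

        print(unit_response(stimuli, person_a))  # True: this unit reacts to the loud stimulus
        print(unit_response(stimuli, person_b))  # False: same input, different response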

  38. The Other John Mc on 11 Jun 2014 at 12:38 pm

    Cool thanks Ori V! I’ve never heard the term ‘encapsulation’ before, that’s good to know.

  39. DJCrash on 11 Jun 2014 at 1:57 pm

    Was able to access the chat bot online at the link Steve posted today. I have no idea how it fooled anybody, even taking into account it is an ESL child.

  40. Mlema on 11 Jun 2014 at 2:35 pm

    The easy questions of consciousness (replicable by AI?):

    ability to discriminate, categorize, and react to environmental stimuli
    integration of information by a cognitive system
    reportability of system states
    ability of a system to access its own internal states
    focus of attention
    deliberate control of behavior
    difference between wakefulness and sleep

    The hard question of consciousness:

    How does subjective experience of these phenomena “emerge” from brain function/processes? The phenomenon of subjective experience is qualitatively different from the phenomena from which it emerges, and seems to be inexplicable by sum-of-the-parts investigation. Hence, “strong” emergence – not comparable to other instances of emergence.

  41. mumadadd on 11 Jun 2014 at 2:35 pm

    Other John,

    Muchas gracias.

    It’s interesting to consider a consciousness without biological motivations and stressors (more so than a table or a clockwork clock). I haven’t read through your links yet (on holiday) so I don’t know to what extent goal-oriented behaviour is being incorporated, or is necessary for building a consciousness. Maybe it would be superficially easy to add specific goals once you’ve licked building a conscious system in the first place.

    I’m just spitballing here, but I used to read a lot of sci-fi and have always found daydreaming about AIs absorbing. It seems to me that if artificial consciousness can be achieved, then given the high fidelity of their sensory input and recording, and the ability to add more functional parts and reconstruct themselves, the (distant) future absolutely belongs to them. Although, without a boat-load of natural selection behind them to endow them with selfish motivations, will they care whether they survive?

  42. Mlema on 11 Jun 2014 at 2:38 pm

    Motivation is programmable. Just think about all the conscious and totally unmotivated beings you know :)

  43. The Other John Mc on 11 Jun 2014 at 3:15 pm

    mumadadd: “I don’t know to what extent goal oriented behaviour is being incorporated, or necessary for building a consciousness”

    That’s a good question. I’ve always thought the best definition of ‘intelligence’ (not consciousness) was goal-directed behavior (believe it was Steve Pinker who said this in How The Mind Works). The power of this definition is that it would allow us to recognize intelligence in other people (maybe demented, on-drugs), other animals, maybe even someday aliens. We would see their behavior as goal-directed, and say to ourselves, “I do not understand the rules or assumptions you live by, but I recognize that you have some, and I see that you are demonstrating complex-problem-solving and behaviors to achieve your goals (whatever they may be).”

    Now where consciousness comes into this picture isn’t known, as far as I am aware. Obviously much of problem solving occurs unconsciously, as does intelligence. My guess would be consciousness acts as the decision-maker, choosing between several possible solutions to a problem, with the solutions having been generated by multiple underlying problem-solving systems. We inhabit one body and so must choose one path; robots are now finding themselves in this same dilemma, in which they can ‘dream’ up lots of possible solutions but must pick one and go with it.

  44. The Other John Mc on 11 Jun 2014 at 3:18 pm

    We are sometimes consciously aware of these underlying systems “struggling” over which will dominate (do I stay up late or go to bed? eat that chocolate or run a mile?); and it’s almost as if we are passive observers of these struggles, who simply “go with it” once a decision feels like it has been made by the whole system…very strange…

  45. Bill Openthalt on 11 Jun 2014 at 4:45 pm

    Mlema –

    The hard question of consciousness: How does subjective experience of these phenomena “emerge” from brain function/processes?

    “Subjective experience” is not even a problem, it’s philosophical navel gazing.

    You behave badly, and your social subsystem computes that you risk being chucked out of your group because of your actions. If you had a bunch of dials in front of your mental eye, it would register as the needle of the shame-o-meter stuck firmly in the red. We don’t have dials, we “feel ashamed”, or “experience shame” if you wish, and it leads to the same type of reaction as a gauge stuck in the red — a change in behaviour. Depending on the “strength” of the feeling (i.e. how imminent is your being chucked out), other subsystems in your brain activate to reduce the effect of your misbehaving. What we call consciousness is the combination of the process of distributing this type of information to the various functional parts of the brain and of collecting and formatting information for the purpose of exchange with other humans. We have to be able to label these effects so we can inform others on our internal state (“I felt ashamed”).

    We would not have our level of awareness without the intense social cooperation humans exhibit. To cooperate essentially as ants without the genetic identity, we need to be able to obtain lots of information on the internal state of the members of our groups, meaning we also need to be able to provide the same (and hence acquire that information internally). We also need to have a protocol (language) to encode that information for transmission, and because the parts that process the information obtained from others also process the information we provide to others (you hear yourself speak), we can model ourselves to some extent as we model the others.

    Paradoxically, we’re highly individualistic because we’re incredibly social.

  46. Bill Openthalt on 11 Jun 2014 at 4:56 pm

    The Other John Mc –

    We are sometimes consciously aware of these underlying systems “struggling” over which will dominate (do I stay up late or go to bed? eat that chocolate or run a mile?); and it’s almost as if we are passive observers of these struggles, who simply “go with it” once a decision feels like it has been made by the whole system…very strange…

    The “consciousness” subsystem is indeed more of an observer than an actor. It does channel information between the various subsystems and other humans but has little (if any) influence over it (a bit like the “bus” in a computer).

  47. Mlema on 11 Jun 2014 at 5:52 pm

    Bill O. – you can’t use a psychological model of the brain to explain how subjective experience arises. All I’ve done is categorize conscious function and subjective experience. A lot of people confuse them. Your example of “shame” as a brain process confuses brain functions which are theoretically explainable with brain functions that are theoretically not explainable. I’m not saying that things which are theoretically not explainable are important, but I do think it’s important to differentiate one from the other.

  48. BillyJoe7 on 11 Jun 2014 at 6:05 pm

    Mlema,

    “How does subjective experience of these phenomena “emerge” from brain function/processes?”

    Are you saying it can’t have emerged from brain function, or are you asking how it has emerged from brain function?

    Here are a few big clues that subjective experience emerges from brain function:
    Firstly, it evolved from being absent (amoeba) to being rudimentary (insects) to being fully developed (humans). That’s a big clue that it evolved in and emerged from brain function.
    Secondly, we all start our lives without subjective experience and, as our brains develop, so does our subjective experience.
    Thirdly, there are no other rational, evidence-based mechanisms for subjective experience. All we have is unsubstantiated philosophical positions such as dualism and idealism. They are essentially non-answers, because now we must explain where “the soul” or “universal consciousness” came from.

    “The phenomenon of subjective experience is qualitatively different from the phenomena from which it emerges”

    Meaning what? That there must be a soul or universal intelligence?
    You do need an alternative. Otherwise we must stick with what we know and see where that leads.

    “and seems to be inexplicable by sum-of-the-parts investigation. Hence, “strong” emergence – not comparable to other instances of emergence.”

  49. BillyJoe7 on 11 Jun 2014 at 6:10 pm

    “and seems to be inexplicable by sum-of-the-parts investigation. Hence, “strong” emergence – not comparable to other instances of emergence”

    Science is a work in progress. We don’t know everything and maybe we never will. In the meantime, we have no alternative but to work and build on what we have. And we have no soul or universal consciousness to work with.

  50. hardnose on 11 Jun 2014 at 7:09 pm

    Supposedly, you can talk to Eugene here http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/

    I tried having a conversation, and could not see anything intelligent about him. He could not follow the thread of my attempted conversation and only grabbed keywords from my most recent question. If he didn’t have an answer — and usually he didn’t — he asked me some unrelated question.

    So this whole thing is ridiculous and tells you only about the extreme gullibility of the news media.

    If a third of the judges were actually fooled, it’s probably because the humans were told to act like robots, to appear even more idiotic than Eugene.

  51. Mlema on 11 Jun 2014 at 7:14 pm

    BJ7 – I think the answer to the first question you asked me is pretty evident from what you quoted. At least I hope it is.

    The phenomenon of subjective experience is qualitatively different from the phenomena from which it emerges, and seems to be inexplicable by sum-of-the-parts investigation. Hence, “strong” emergence – not comparable to other instances of emergence.

    “Meaning what? That there must be a soul or universal intelligence?”

    I think you’re reading too much into what I’m saying. That statement is an observation/analysis – it doesn’t have to “mean” something. I haven’t made any metaphysical assertions. If I meant anything at all, it’s that a lot of people confuse the various attributes of consciousness, and thereby fail to recognize the individual nature of those attributes.

  52. Bill Openthalt on 11 Jun 2014 at 7:17 pm

    Mlema –

    Your example of “shame” as a brain process confuses brain functions which are theoretically explainable with brain functions that are theoretically not explainable.

    Which brain functions are not explainable? Simply saying “subjective experience” doesn’t cut the mustard, because it’s meaningless. You report you “feel shame”, or “are ashamed”, like you report you see a “red rose”, or are “moved to tears by Mozart”. Each and every one of these feelings and experiences is a direct result of the interaction between your brain and external or internal stimuli. Feelings are processes in the brain. Experiences are processes in the brain. When you communicate them to others, you can consider them yourself. Because we don’t have direct insight into the functioning of our own brains (this would require a very expensive, useless and even dangerous subsystem, so it’s no wonder it never evolved), it’s easy to get off the deep end thinking they are somehow non-explainable.

  53. Mlema on 11 Jun 2014 at 9:18 pm

    Bill O. –
    “Which brain functions are not explainable?”

    I don’t know. You seem to think you’ve explained subjective experience. You’d better alert the press, since many scientists and philosophers are waiting to hear the explanation.

    “Simply saying “subjective experience” doesn’t cut the mustard, because it’s meaningless.”

    Like I said – if you can explain it – I’m all ears! Why is it meaningless? Does it become more meaningful if I say: currently we have no theory which would explain how subjective experience emerges?
    I’m not saying it’s not explainable, I’m saying that there are those who say it’s not explainable by weak emergence. I said it’s theoretically not explainable. That’s why we differentiate between weak and strong emergence.

    “You report you “feel shame”, or “are ashamed”, like you report you see a “red rose”, or are “moved to tears by Mozart”. Each and everyone of these feelings and experiences are direct results of the interaction between your brain and external or internal stimuli.”

    OK. (but Mozart’s not my fave)

    “Feelings are processes in the brain. Experiences are processes in the brain.”

    OK, but they don’t exist without processes outside the brain. And they’re not necessary for the processes of consciousness I’ve listed under the “easy questions”.

    “When you communicate them to others, you can consider them yourself.”

    I can consider them whether or not I communicate them to others – and all of that still comes under the aspects of consciousness that don’t require experience of the processes of consciousness.

    “Because we don’t have direct insight into the functioning of our own brains (this would require a very expensive, useless and even dangerous subsystem, so it’s no wonder it never evolved),”

    What is a “subsystem” in this context? What are you talking about when you say “direct insight in the functioning of our own brains”? What does this have to do with subjective experience? Are you saying that subjective experience gives direct insight into the functioning of the brain? If so, can you explain this some more?

    ” it’s easy to get off the deep end thinking they are somehow non-explainable.”

    The example you give doesn’t require subjective experience, it only further illustrates that subjective experience isn’t required, even for the most complicated interactions, communication, and self-assessment (all included in the list of “easy” questions.) If you consider what could theoretically be created as AI, and then decide whether that creation has subjective experience, we get closer to the question you seem to think you’ve already answered. We could create machines to interact and process even social information as humans do. What is social information? If we examine it scientifically, it’s still nothing more than extremely complicated data of wavelengths of light, changes in air pressure, pressure sensitivity, etc.. We would still have no reason to believe that these machines saw colors and heard sounds instead of simply processing wavelengths of light, changes in air pressure, or the complicated composite of these that comprise social cues – and then responding to them as people do, and according to their programming. All data must be garnered by our sense organs.

    Everything the brain incorporates to interpret and to respond to is: the sensory data from its environment. The brain has to process information so complicated that it will be many many years, if ever, before we’re able to construct a model of how it does so. The question is: if and when that is accomplished, will we know how experience arises from that processing? Some say yes, some say no. The brain scientists aren’t saying they know, or will ever know, how s.e. emerges – they only say that we might be able to teach a machine to fear (for example) – but not necessarily experience fear. They are saying that we can theoretically model an artificial brain to have a brain state of “fear”. But these scientists don’t seem to expect that that artificial brain will EXPERIENCE fear. Since there’s disagreement on these issues, there’s debate on them as well.

    It’s easy to dismiss the “hard problem” as navel gazing because subjective experience is not necessary to brain function, or, indeed, the functioning of the entire organism. Information input is necessary to consciousness, but having an experience of that information isn’t (at least you can’t show that it’s necessary). Again, I’m not saying it’s important to differentiate between the two – unless someone like yourself thinks he’s solved the problem that scientists and philosophers haven’t even claimed to have solved – or claimed to be able to solve. And I’m not saying it’s not solvable, but you haven’t solved it. Your argument, or illustration, doesn’t even account for the subjective experience you’re claiming to explain.

  54. Mlema on 11 Jun 2014 at 9:22 pm

    “Because we don’t have direct insight into the functioning of our own brains (this would require a very expensive, useless and even dangerous subsystem, so it’s no wonder it never evolved),”

    This is interesting. It seems to be a sort of backwards understanding of the nature of subjective experience. Subjective experience is the thing that’s an expensive, useless and even dangerous “subsystem” (if I understand that term properly). It’s a wonder that it DID evolve, because it causes so much error and bad decision-making.

  55. warrenv on 12 Jun 2014 at 12:33 am

    If 33% is a passing grade, I think they should up the standards. That is a pretty low percentage, especially considering all of the handicaps they gave the bot. What’s even more confusing is the articles that make comparisons with the ELIZA bot, which is 50 years old. You would think at least something a little more groundbreaking could make its way through.

    Another reason why I think 33% is so low is that I’ve scripted something in the past that convinced people 100% that they were talking to a real person. However, the sad part of it is that it requires no AI to convince: just a list of common phrases/replies and sporadic conditions. The most convincing tool of all is timing. As typical bots just reply instantly with no hesitation, with a carefully tuned set of timers it’s much more convincing.

    Just a common knowledge of how people initiate conversations is involved. If my script ever got into dangerous territory that it couldn’t get out of, it would simply say it would be right back, wait and return, then try to drive the conversation a little. It’s still a gimmick, but the intent was never to fool people so much as to gather information for screening purposes. It worked pretty well too.
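
    A minimal sketch of the timing trick described above: delay a canned reply roughly in proportion to its length, with some random jitter, so it does not arrive with machine-like instantaneity. The parameters are invented, not taken from the script described above.

        import random
        import time

        def send_with_human_timing(reply, chars_per_sec=6.0):
            """Wait roughly as long as a person would need to read and type, then send."""
            reading_pause = random.uniform(1.0, 3.0)   # 'thinking' before typing starts
            typing_time = len(reply) / chars_per_sec   # proportional to reply length
            time.sleep(reading_pause + typing_time * random.uniform(0.8, 1.2))
            print(reply)

        send_with_human_timing("brb, someone's at the door")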

  56. Steven Novella on 12 Jun 2014 at 6:48 am

    Mlema – I think you’re a little behind the times. Sure, we do not have a solid theory of subjective experience, but neither are we entirely clueless.

    Consciousness researchers have made significant progress. They have shifted a bit – it is now apparent that there is no consciousness circuit in the brain. It seems that each part of the brain contributes its little bit to overall consciousness. Researchers have shifted into studying things like attention.

    Which also gives you the answer as to why we evolved subjective experience. It may simply be the experience of specific functions like attention. It may, in fact, be necessary, but even if it isn’t, it seems like one solution.

    I also have to completely disagree with you about which parts of the brain make the bad decisions. There is plenty of criticism to go around, but the highest level function, executive function, seems to be the best decision maker, while the more subconscious parts of the brain follow simpler emotions and heuristics which have the advantage of being fast, but are sloppy and biased, and often have to be overridden by executive function for more strategic planning.

  57. Mlema on 12 Jun 2014 at 3:49 pm

    Sensory input is necessary for consciousness.
    Attention is one aspect, or attribute, or component of consciousness. Attention is a brain function which determines which sensory data (input, stimuli, neural response to stimuli) is most important, and which then effects allocation of resources and finally, response. Can you explain how subjective experience is required for the brain to determine which sensory input is most important?

    Also, it’s not a new idea that all of the brain contributes to the emergence of consciousness. It’s a jumbling of nomenclature and a confusion of concepts that may be causing you to think this helps illuminate the what, how and wherefore of subjective experience.

    Subjective experience doesn’t equal consciousness. You can’t properly use those terms interchangeably.

  58. Pete A on 12 Jun 2014 at 4:32 pm

    Mlema – Your use of the term “subjective experience” suggests to me that your questions may relate more to the philosophy of consciousness than to the term consciousness as used in the fields of cognitive neuroscience and evidence-based psychology.

    Steven Novella’s reply to you is a succinct summary. I offer you the following additions…

    You wrote: “How does subjective experience of these phenomena ‘emerge’ from brain function/processes?” The answer to that question is contained within your list of easy questions: “integration of information by a cognitive system”. The time-domain integration of multiple channels gives rise to information in a domain different from that of the data contained within those channels. The domain, in this case, is what we experience as consciousness.

    Our brain contains synchronization “modules” to provide us with the illusion that we are experiencing events seamlessly and in real time. E.g. visual processing takes longer than sound processing, but we have a dedicated “module” to synchronize them in order to avoid causing us a very uncomfortable (dissonant) experience of reality. We’ve all seen poor-quality videos or movies that have lip-sync errors — they are almost painful to endure!
    http://www.newscientist.com/article/mg21929245.000-first-man-to-hear-people-before-they-speak.html

    In many branches of science and engineering we have mathematically-based domain conversion tools, such as the Fourier transform. We don’t have a mathematical description of the processes that lead to human consciousness for a very simple reason: each individual of our species is unique — we are not a replicated instance of Homo sapiens. Attempting to build a workable model of Homo sapiens is pointless because the model would quickly become incomprehensible to any and all individuals. Obviously, “subjective experience” is unique to the individual, by definition.
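
    For concreteness, the Fourier transform is a domain-conversion tool in exactly that sense: in one common convention it re-expresses a time-domain signal f(t) as a frequency-domain spectrum, with nothing added and nothing lost (the inverse transform recovers f(t)):

        \hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, \mathrm{d}t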

    If I buy two identical computers then use them slightly differently, they will have dissimilar states (and probably exhibit different functionality). My subjective experience of using each of them will therefore differ.

    What is it that you do not understand regarding subjective experience and consciousness? If you were to clearly state the problem that you are trying to solve it would be much easier to answer your questions using appropriate contexts.

  59. Mlema on 12 Jun 2014 at 5:32 pm

    Pete A – thanks for your opinion and offer of assistance. Not sure why you think I need them though.

    Yes, what I’ve been discussing with Bill O (which is what Dr. N remarked upon) definitely relates to cognitive neuroscience. And because cognitive neuroscience is multidisciplinary, (or perhaps for some completely unrelated reason – who knows?) it seems to be difficult for people to communicate and conceptualize even the most basic aspects in a coherent way.

    Not really my problem – although it’s possible that the reason you don’t seem to understand what I’m saying above is because I’m not communicating well.

  60. Mlema on 12 Jun 2014 at 5:38 pm

    Pete – here’s something you can help me with:

    “We don’t have a mathematical description of the processes that lead to human consciousness for a very simple reason: each individual of our species is unique — we are not a replicated instance of homo sapiens.”

    Why would we need a mathematical description of the processes that lead to human consciousness in order to explain how subjective experience emerges? And, are you able to describe the difference between consciousness and subjective experience?

  61. Bill Openthalt on 12 Jun 2014 at 7:35 pm

    Mlema –

    I guess that by subjective experience you mean qualia. I don’t want to teach either your or my grandmother to suck eggs, but that’s a philosophical term one can argue means nothing much at all. If one observes something, either an internal stimulus like pain, or an external object like the sky, one registers the characteristics of that observation. To communicate that observation to another human, one needs a label. The characteristics are objective (e.g. the frequency of the light, or the level of the pain), the reaction of the brain is subjective (in the sense that in all likelihood, the effects of the same observation are not identical for different brains), but the label allows one to re-create the effect from the appropriate memories well enough to understand the other person. There is nothing mystical to qualia.

    We would still have no reason to believe that these machines saw colors and heard sounds instead of simply processing wavelengths of light, changes in air pressure, or the complicated composite of these that comprise social cues – and then responding to them as people do, and according to their programming.

    But the only thing we do is process information on the world around us (as well as information on ourselves, obviously), store the salient characteristics of that information and re-use it to gauge the continuous stream of new information we need to react to appropriately to stay alive. If a machine would “process” air waves, match them against stored patterns, and tell you it is Elgar’s Pomp & Circumstance March number 2, does it not “hear sounds”? If the machine and you process the same visual information (i.e. picture of a red rose) and you tell the machine it is a “red rose”, and the machine later on processes other visual information (like an image of a frock) and tells you the frock is red because the spectra match, is it not “seeing a colour”?

    This is the essence of the Turing test — it doesn’t matter how it is done, as long as the result is human behaviour, I would say the entity displaying this behaviour is human — to a point. It would require a convincing simulation of the human form to make the full spectrum of human behaviour available to a machine, but if one reduces it to say, a telephonic conversation on a pre-defined subject between strangers, the problem is less complex. What is essential though is the “self-programming” aspect of humans. We store information on the people we interact with, and modify our behaviour based on that information. If Eugene doesn’t do this, it should not pass even a very restricted Turing test.

  62. Mlema on 12 Jun 2014 at 11:46 pm

    “If one observes something, either an internal stimulus like pain…”
    Pain isn’t the stimulus – whatever caused the pain is the stimulus – damage to tissues, for example.

    “…or an external object like the sky, one registers the characteristics of that observation.”
    I think it’s fair to say that the brain “registers” the characteristics. It receives information about that environment in the form of light of varying wavelengths.

    “To communicate that observation to another human, one needs a label.”
    Or one could paint a picture. But I agree, one needs some externalized way to reference the experience one had if one wishes to communicate it to another human.

    “The characteristics are objective (e.g. the frequency of the light, or the level of the pain)”
    The frequency of light is objective, the level of damage which caused the pain is objective, but the level of pain is subjective.

    “..the reaction of the brain is subjective (in the sense that in all likelihood, the effects of the same observation are not identical for different brains)”

    It’s the inability to know whether or not it’s the same experience for both of us that causes us to classify it as subjective. But for me, it’s comfortable to assume that our experience of blue is similar.

    “…, but the label allows one to re-create the effect from the appropriate memories well enough to understand the other person.”
    OK

    “There is nothing mystical to qualia.”
    I could argue that nothing’s mystical.

    “But the only thing we do is process information on the world around us (as well as information on ourselves, obviously), store the salient characteristics of that information and re-use it to gauge the continuous stream of new information we need to react to appropriately to stay alive. If a machine would “process” air waves, match them against stored patterns, and tell you it is Elgar’s Pomp & Circumstance March number 2, does it not “hear sounds”?”

    In addition to processing information, we have experience. The brain somehow generates an experience of its activity, global state, etc. No, I don’t think the machine would “hear” sound – it would simply translate the sensory info (changes in air pressure) into something analogous to neural and brain activity – then it could assess, prioritize, store, respond, etc. to that information – all the things that we think of as functions of consciousness. But there’s no reason for us to believe that the machine would hear sound (although there’s no way to know, and I think if the machine replicated human anatomy and physiology in every way I’d have to give it the benefit of the doubt). This is why it’s possible to imagine a world just like ours, but with no subjective experience. And this is the reason why I say there’s no demonstrated need for qualia, and this is the reason you can easily dismiss subjective experience as meaningless.

    It doesn’t matter if a person doesn’t even understand what subjective experience is – it won’t affect anything at all except that person’s understanding of the basic nature of empirical knowledge. And why would that be meaningful? (I think that’s a philosophical question.) There are many meaningless aspects to life. Some say life itself is meaningless. Subjective experience is all we have. But because it’s the doorway to everything you know and are – it’s tough to see the doorway once you’ve already walked through it (which you did sometime during your embryological development). So what? You walked through; you’re having your subjective experience of your existence. If you want to say that recognizing that that’s what you’re having is unimportant or meaningless, or is just philosophical navel gazing, I really can’t argue with you.

    “If the machine and you process the same visual information (i.e. picture of a red rose) and you tell the machine it is a “red rose”, and the machine later on processes other visual information (like an image of a frock) and tells you the frock is red because the spectra match, is it not “seeing a colour”?”

    Thank you for a second example, and another really well-written question (to my way of thinking). Although it’s the same question, I’ll answer it differently (while hopefully communicating the same answer). If you’ve never seen the color red, but I show you a graphical representation of the wavelength of red and tell you “this is red”, and you remember what that representation looks like, then, if I later show you something with the same graphical representation, you can tell me that it is red. What the machine gains is a representation of the wavelength of red – not the experience of a red color. So when the machine encounters the wavelength of green (for example) it isn’t seeing a color – it’s receiving a type of information (a specific wavelength of light) – and it’s handling that information as a brain would, so that the next time it encounters that wavelength of light it can report to you that it’s seeing green.

    The wavelengths of light do not possess color – they’re simply photons in various configurations. The brain creates a color to correspond to a particular wavelength. Color doesn’t exist outside the brain. Outside the brain all we have are varying wavelengths of light. The machine can recognize varying wavelengths of light and match them to a corresponding label. But that doesn’t mean it’s experiencing that particular wavelength of light by somehow constructing a color, or doing something that causes the experience of color to “emerge”. It doesn’t mean the machine is seeing red – experiencing red – it simply means it has the ability to register a particular external phenomenon (the wavelength associated with the human experience of red), interpret it, store it, retrieve it, utilize it, and do all sorts of other complicated things that the human brain does with sensory data/information. Your brain registers and interprets data in the form of light. You also see a color. We don’t know what, if anything, the machine experiences as a result of registering the wavelength of light which corresponds to our seeing red.
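
    To make that concrete, the whole of what such a machine does with colour can be written as a lookup from a measured wavelength to a stored label. The band edges below are approximate, and nothing in the code experiences anything.

        # Approximate visible-light bands (nanometres) mapped to colour labels.
        COLOUR_LABELS = [
            ((380, 450), "violet"),
            ((450, 495), "blue"),
            ((495, 570), "green"),
            ((570, 590), "yellow"),
            ((590, 620), "orange"),
            ((620, 750), "red"),
        ]

        def label_for_wavelength(nm):
            """Return the stored label for a measured wavelength: a lookup, not a perception."""
            for (low, high), label in COLOUR_LABELS:
                if low <= nm < high:
                    return label
            return "outside visible range"

        print(label_for_wavelength(700))  # "red": the machine reports the label it was given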

    In fact, I think you could legitimately say that subjective experience is a representation of sensory information. It's not the same thing as a brain process, or function, or any of the other attributes of consciousness. I'm encouraged that Dr. N said "It may simply be the experience of specific functions like attention." We experience all the functioning of our brains indirectly. I think he has concisely stated what subjective experience is. And said it with a "may be". Nice.
    But subjective experience isn’t the same as consciousness.

    I think I’ve gotten pretty far afield from the Turing test. My OP was simply a list of functions of consciousness – divided into what might be achieved by AI, and what might be questionable (and not just according to me, although it’s easier to write as if I know shit). Since we don’t know how subjective experience emerges, we can only guess what level of simulation might cause it to emerge. Of course, when I answer your questions, I answer them in accordance with my understanding. However, I really don’t see a debate on what subjective experience is – I only see incomprehension which causes dissension.

  63. grabula on 13 Jun 2014 at 12:50 am

    @Mlema

    “It’s the inability to know whether or not it’s the same experience for both of us that causes us to classify it as subjective.”

    ” Color doesn’t exist outside the brain.”

    I feel like this is the same trap Ian Ward was falling into. Philosophically it’s a semi-interesting question, but scientifically it’s not as if our brain has somehow translated “green” from a wavelength that up until the first brain interpreted it didn’t have color. This to me is the same thing as wondering whether a tree makes a sound when it falls all alone in the woods. Our brains don’t write the movie as we go along, our brains observe the movie as it happens.

    ” What the machine gains is a representation of the wavelength of red – not the experience of a red color.”

    I’m not sure I necessarily agree with this. The Machine interprets the data it receives, and so does the brain. The mechanisms might be different but the results are ultimately the same, barring flaws in either.

  64. BillyJoe7 on 13 Jun 2014 at 8:35 am

    Mlema: “Color doesn’t exist outside the brain”

    I would tend to agree. Before eyes and consciousness evolved in the universe, there were only different wavelengths of EMR. Even afterwards, there were initially only shades of grey but still no colour. Eventually some rudimentary colour began to evolve, with barely any difference between red, blue, and green. Gradually, over time, the different colours became more distinct. We don't know how the brain discriminates colour, but the fact that it evolved from shades of grey through rudimentary to full colour is a clear indication that the brain does all the work. As the brain evolved to become more and more complex in response to the demands of an increasingly challenging environment, it became better and better at discriminating colour.

  65. The Other John Mc on 13 Jun 2014 at 9:15 am

    “Color doesn’t exist outside the brain.”

    Yes, this is definitional, at least in perceptual psychology. “Color” is the perception, and the wavelengths of electromagnetic light, their intensities, and distributions, are the external stimuli.

  66. Mlema on 13 Jun 2014 at 10:09 am

    BJ7 – the “machine” wouldn’t be fooled by the checkerboard illusion :)

  67. Pete A on 13 Jun 2014 at 1:03 pm

    Mlema – Apologies for my misunderstanding. I found your recent reply to Bill Openthalt most interesting; hopefully, I’m more or less on track this time.

    You wrote “…it seems to be difficult for people to communicate and conceptualize even the most basic aspects in a coherent way.” I couldn’t agree more. Going back a few decades I think most people shared similar concepts of basic terms, but nowadays each term seems to have multiple meanings. Comment threads (and text messaging) more often lead to misunderstanding and polarization than to enjoyable discussions. So much for progress!

    Delaying gratification might be a good example of the difference between stand-alone consciousness and consciousness combined with subjective experience. Delaying gratification is not an innate human skill; it has to be instilled. This skill enables us to override our innate executive function: specifically, it enables us to reorder our "task queue" by reassigning the priority levels of certain items within the queue. Unfortunately, exercising this skill is a substantial cognitive load, because executive function reassigns its original priorities behind our back, so to speak. Keeping our executive function in check requires due diligence. Selecting a salad from a menu instead of the most appetising option can be an excruciatingly difficult battle to win :-)

    Exercising control over our locus of attention likewise requires us to override our executive function. We all know of the catastrophic outcomes that far too frequently result from using a mobile communications device while driving. And we all know how much mental effort it takes to resist answering a ringing cellphone while we are performing a more important task. The ringing cellphone is akin to the cry of a child, so our executive function places it at the top of our task queue. This can be circumvented to some extent by selecting a less demanding ringtone and volume level.
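
    As a rough sketch of the "task queue" idea above (Python; the task names and priority numbers are invented purely for illustration):

    import heapq

    # heapq is a min-heap, so a lower number means a higher priority.
    def build_queue(priorities):
        q = [(priority, task) for task, priority in priorities.items()]
        heapq.heapify(q)
        return q

    # What executive function assigns "behind our back":
    default_priorities = {
        "answer the ringing cellphone": 0,
        "order the pork chop": 1,
        "keep eyes on the road": 2,
        "order the salad": 5,
    }
    q = build_queue(default_priorities)
    print(heapq.heappop(q))    # -> (0, 'answer the ringing cellphone')

    # Deliberate override (the "cognitive load"): demote the phone, promote
    # the salad, and pay the cost of rebuilding the queue.
    overridden = dict(default_priorities)
    overridden["answer the ringing cellphone"] = 9
    overridden["order the salad"] = 0
    q = build_queue(overridden)
    print(heapq.heappop(q))    # -> (0, 'order the salad')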

    Artificial Intelligence (AI) is a subject area that never ceases to bemuse me. The average adult brain has circa 90 billion neurons; the number of possible ways of interconnecting them is on the order of the factorial of 90 billion, a number so large that it not only defies human comprehension but also exceeds what most electronic calculators can compute. The factorial of only 60 is 8.32E81, a number already far beyond human comprehension – for comparison, there are _only_ circa 1E81 particles in our entire universe.

    Obviously, to create a totally believable simulation or emulation of a human would require a neural network many orders of magnitude more complex than anything our current technology can even dream of providing in the future.

    Caveat: Our 90 billion neurons are not able to form the factorial of 90 billion interconnections because our brain has insufficient volume to accommodate the synapses required. However, we form (during our lifetime) our unique set of circa 150 trillion (1.5E14) synaptic connections between our neurons, which is still many orders of magnitude beyond anything AI is capable of achieving.
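
    A quick check of the arithmetic above (Python; the figures – 90 billion neurons, 1.5E14 synapses, circa 1E81 particles – are the ones quoted here, not independently sourced):

    import math

    # 60! really is about 8.32E81, comparable to the particle count quoted:
    print(f"{math.factorial(60):.3e}")               # -> 8.321e+81

    # 90 billion factorial is far too large to evaluate directly, but lgamma
    # gives its order of magnitude:
    n = 90_000_000_000
    digits = math.lgamma(n + 1) / math.log(10)
    print(f"90e9! has roughly {digits:.2e} digits")  # ~9.5e11 digits

    # The synapse count quoted is "only" ~1.5E14 -- huge, but nothing like n!
    print(f"{1.5e14:.1e} synapses")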

    Going back to the final question you asked me: "And are you able to describe the difference between consciousness and subjective experience?" Yes, I think I could just about manage to engage in a well-reasoned discussion. My initial thoughts centre on the two underlying fundamental questions: what must we add to AI machines to give them subjective experience, and what must we remove from human consciousness in order to render a result indistinguishable from AI simulation/emulation? I think the answers simply boil down to: the gargantuan database of accumulated knowledge and personal experience that every (fairly healthy) human possesses.

  68. BillyJoe7 on 13 Jun 2014 at 2:39 pm

    Mlema,

    “BJ7 – the “machine” wouldn’t be fooled by the checkerboard illusion :)

    It would if it evolved (or was programmed) to discriminate colour in relation to its surrounds. ;)

  69. The Other John Mc on 13 Jun 2014 at 4:14 pm

    In fact machine vision replicates many of the perceptual illusions people have, for much the same reasons: they make similar simplifying rules, heuristics, and assumptions about how visual information maps to reality. BJ was spot on: in the machine, these assumptions are coded in; while in humans, evolution did the programming.

    One example relating to motion perception: http://www.nature.com/neuro/journal/v5/n6/abs/nn858.html

    A simple motion detection model, with some reasonable but simple assumptions about the world (slower retinal velocities are more common/likely than faster ones), can account for several perceptual illusions that people have in relation to motion perception.
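
    A one-dimensional sketch of that idea (Python; the noise constants are invented and this is only a caricature of the paper's Bayesian model): combine a Gaussian likelihood for the measured velocity with a Gaussian prior centred on zero velocity. Lower contrast means a noisier measurement, so the estimate is pulled harder towards zero and the same motion "looks" slower.

    def map_velocity(v_measured, contrast, sigma_prior=1.0, base_noise=0.5):
        # Measurement noise grows as contrast falls.
        sigma_like = base_noise / max(contrast, 1e-6)
        # Posterior mean for a zero-mean Gaussian prior times a Gaussian likelihood:
        w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_like ** 2)
        return w * v_measured

    for contrast in (1.0, 0.3, 0.05):
        print(contrast, round(map_velocity(2.0, contrast), 3))
    # High contrast -> ~1.6, low contrast -> ~0.02 (arbitrary units): the same
    # physical motion is judged slower when the image evidence is weaker.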

    These researchers dissect the checkerboard illusion and argue “good” artificial vision systems *should* suffer from a lot of the same illusions as humans do:
    http://nerdwisdom.com/2007/09/06/using-illusions-to-understand-vision/

  70. Bill Openthalt on 13 Jun 2014 at 7:33 pm

    Mlema –

    But there’s no reason for us to believe that the machine would hear sound.

    The problem is that you reify your description of a mental process. This is quite obvious in this passage:

    The brain creates a color to correspond to a particular wavelength.

    There is only a label "red", classified with other labels ("green", "blue", etc.) that refer to the same type of observational pattern as "colours". This is no different from "rose" and "flowers", even if to a human a rose is more obviously physical than a bunch of photons. We create a representation of reality through the aspects we can observe, classifying and labeling our observations so we can exchange information with other humans in a meaningful way.

    As far as the problems faced by AI to create a human-like intelligence, Pete A nailed it here:

    the gargantuan database of accumulated knowledge and personal experience that every (fairly healthy) human possesses.

    in addition to the subconscious modules that are responsible for our motivation and social behaviour. The conundrum really is that to be convincingly human, you need to have human experiences — you need to grow up in a human society, acquiring the elements of the "make-a-human" kit every society gives to its children. As so often, Arthur C. Clarke showed real vision in his description of HAL's "childhood", and how its recognisable humanity included fear, irrationality, deceit and cruelty.

  71. Pete A on 14 Jun 2014 at 1:26 pm

    Bill Openthalt — agreed and I offer three additions…

    Mlema wrote: “The brain creates a color to correspond to a particular wavelength.” This is totally incorrect on so many levels, but the easiest way to demonstrate this error is by retorting: Then what is the wavelength of magenta?

    Perhaps a better test of AI is to create at least two identical machines, run them in parallel with identical inputs, and observe their outputs. If the outputs are identical then the machines have failed to emulate animal intelligence. The signal-to-noise ratio of brain circuitry is remarkably low — brains are highly stochastic information processors. E.g. most people either laugh or cringe when they muddle up their words. My typos and verbal errors usually make me cringe with embarrassment, but I'm very fortunate in that most correspondents do not take me to task over them. Any machine that makes unique random mistakes and has enough self-awareness to retrospectively identify its mistakes as a simple error, a Freudian slip, a double entendre, a spoonerism, etc. is a machine that would be very convincing.
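
    A toy illustration of that "two identical machines" test (Python; the prompts, the noise rate and the word-swapping "spoonerism" are all invented for the sketch): a deterministic responder always agrees with its twin, while a responder with injected noise usually does not.

    import random

    def deterministic(prompt):
        return "echo: " + prompt

    def stochastic(prompt, rng):
        # Occasionally garble the output, the way brains muddle words.
        words = prompt.split()
        if len(words) > 1 and rng.random() < 0.3:
            i = rng.randrange(len(words) - 1)
            words[i], words[i + 1] = words[i + 1], words[i]
        return "echo: " + " ".join(words)

    prompts = ["the red rose", "a ringing cellphone", "delayed gratification"]
    print(all(deterministic(p) == deterministic(p) for p in prompts))  # True: twins never diverge
    rng_a, rng_b = random.Random(1), random.Random(2)
    print([stochastic(p, rng_a) == stochastic(p, rng_b) for p in prompts])
    # The noisy pair will typically disagree on at least one prompt.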

    Anthropomorphism: This human trait has a wide spectrum. A highly anthropomorphic individual will be easily convinced by a machine that ends the interaction along the lines: “Thank you for your time; it’s been a great pleasure chatting with you.” Conversely, the less anthropomorphic amongst us are sick to death of human workers who use the opening gambit: “Hello. How are you today?”

  72. BillyJoe7 on 14 Jun 2014 at 6:48 pm

    Mlema,

    The basis of your whole argument is that you think p-zombies are possible.

    You think that it is possible for there to be human beings exactly like you and me in every minute detail but without qualia. But the problem with your argument is that you don’t have any reason to think that this is possible. You are simply assuming that this is so. You are assuming your conclusion. In other words, you don’t have an actual argument.

    What reasons do you have for thinking that it is possible for a robot to have ALL the necessary circuits to detect and process electromagnetic radiation EXACTLY as humans do but not have qualia? Remember that you are asking a lot of this robot. For one thing, it would have to be able to identify the checkerboard illusion as an illusion.

  73. Mlema on 15 Jun 2014 at 12:06 am

    grabula –

    “…it’s not as if our brain has somehow translated “green” from a wavelength that up until the first brain interpreted it didn’t have color.”

    That’s exactly what it is. Except the wavelength STILL doesn’t “have color”. There is no color, sound, emotion, pain, etc. apart from your body – these are all experiences that your brain has somehow formed from the sensory information it’s gained through your eyes, ears, social and other physical surroundings. Apart from your experience everything is simply the stuff of physics and the rules which govern it (as far as we know) We’ve never found green – only electromagnetic radiation that somehow causes our brains to see green.

    “The Machine interprets the data it receives, and so does the brain. The mechanisms might be different but the results are ultimately the same, barring flaws in either.”

    You see a color – the machine doesn't see anything. The machine just recognizes the wavelength it registered on its interferometer and says "I see green". And that's because you told it to say "I see green" whenever it received the wavelength you stipulated as green. The machine does not see the color green! There's no color green for it to see – there's only electromagnetic radiation in wavelengths which cause our brains to see green. No green outside the brain. There's nothing green in a leaf for you to look at – only whatever it is that's the basis of its physical existence: protons, electrons, quarks – which, in their arrangement and according to physical laws, reflect to your eye all the wavelengths that they don't absorb. The leaf doesn't have color. The wavelengths of light don't have color – only your brain "has" color. You experience color. Color is a subjective experience. It's not an object any more than the law of gravity is an object. Color doesn't exist in the objective world – only the stimuli which cause it. We can't open your brain and find green (I hope). We can stimulate your brain to see green, and we can theoretically model all the activity pertinent to you seeing green. We can maybe do some fancy stuff someday and manage to project the green that you're seeing so that we can all see it. But green still only exists in your brain and in the brains of those who see it also.

    This article explains and shows an extreme example of how the brain constructs color perception in a relative way. (it’s an unsatisfactory example in a way because it doesn’t have a corresponding “reveal” to show that the squares in the colored images below each grey image actually do match each other)
    http://www.bbc.co.uk/news/science-environment-14421303
    but there’s a more satisfactory multi-colored cube on this page:
    http://www.bbc.co.uk/news/magazine-11553099

  74. Mlema on 15 Jun 2014 at 12:10 am

    BillyJoe – I’m not making an argument. I’m just trying to get people to understand the nature of subjective experience. I do not think p-zombies are possible. If you were to build a virtual human, I would give it the benefit of the doubt regarding qualia. I would have to do so, because that’s the thing about qualia: you can’t tell if someone’s having them.

  75. Mlema on 15 Jun 2014 at 12:17 am

    BillyJoe, if you were to model human visual machinery (which includes brain function), of course it would be prone to the same errors. Are you familiar with Josef Albers? His book "Interaction of Color" is a classic in color theory. Google "Interaction of Color" and look at the images. Depending on the calibration of your monitor, you may be able to get an idea of the kinds of illusions that can be created by the relative perception of color (it's difficult without the text, because the illusions are pretty much impossible to overcome visually, and you won't be convinced until you see the colors separately from each other – also, there are examples in the links I provided for grabula).

    Programming a machine that would be "fooled" in the same way a human is fooled would have to account for every visible color, and then every effect that every other color has on it, and every combination of how every other color affects every other color, times all the combinations of 2, 3, 4, etc. Think of the gradation of hue (place on the spectrum) and value (shades of the color between lightest and darkest). Certainly figuring out how to formulate this would make an interesting life's work :)
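
    To get a feel for that blow-up (Python; the ~10 million figure for distinguishable colors is a commonly quoted ballpark, used here purely for illustration):

    from math import comb

    n_colors = 10_000_000
    pairs = comb(n_colors, 2)      # every color against every other color
    triples = comb(n_colors, 3)    # ...and every combination of three
    print(f"{pairs:.2e} pairs, {triples:.2e} triples")  # ~5.0e13 pairs, ~1.7e20 triples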

    But again, if you could reverse-engineer the whole system, with the workings of the brain – which is never viable without the physiological environment of the whole body – you could bypass the programming. And of course, IF a machine evolved to enjoy the errors caused by qualia, I'm sure it would decide they were worth the experience of seeing color, just like I have (not that I have a choice, unless I do myself some damage :)

    A human without subjective experience would be something like a highly functioning sleepwalker. They would perform all the tasks of every other human (without the weird mistakes made by real sleepwalkers) – but they would have no awareness of doing so. There’s just no way to know if a machine could experience qualia. It’s assumed that they don’t. If you start treating a machine that registers and reports on various wavelengths of light as if it can actually see (“I hope this light isn’t hurting your receptors”) you will be treated differently yourself :)

  76. Mlema on 15 Jun 2014 at 12:18 am

    Oh man, sorry for all the smileys.

  77. Mlema on 15 Jun 2014 at 12:32 am

    The Other John Mc – This is very sophisticated research – over my head. The researchers have modeled an "ideal observer" and compared it to human observers, learning that the illusions that plague our vision may actually be optimal for estimating important information in the environment (at least that's my take). Here's a pdf of the paper (I only scanned it, so I hope you'll point out whatever I may have misconstrued):
    http://persci.mit.edu/pub_pdfs/weiss_bayes_nn_02.pdf
    Here’s what I think: Any machine based on the model would simulate the illusions of human vision. Machines can’t simplify rules or use heuristics on their own. Machines are only prone to errors that can happen because of the way they’re designed/built/programmed. Machines don’t make assumptions about how visual info maps to reality. They don’t know the difference between visual info and reality. So I would say yes, if you program the machine to make an assumption, it will make it, or if you fail to program the machine to NOT make an assumption, it will make it (that’s where the problem usually arises I think, because humans often don’t realize the assumptions they’re making. Then, when the machine makes an error in perception – it becomes obvious it was the human that failed to recognize his assumption and the machine has thereby pointed up his error.) Human perception makes assumptions about “reality”. This is probably because our perception didn’t evolve in isolation from it’s environment. We only find out we’re making assumptions when we get at reality in some other way than through direct human perception – or some genius has an insight. This paper shows that the assumptions we make aren’t just for the sake of simplification – they actually can be improving our assessment of the information in the environment.

    What I’m saying in my statement about the machine not being fooled by the checkerboard illusion is: if a machine “sees” by registering various wavelengths, it will accurately label the color it’s “seeing”, regardless of the immediately surrounding colors, whereas humans never really can tell exactly what wavelength a color is because their color vision is relative (see notes to BJ7 and grabula above)

  78. BillyJoe7 on 15 Jun 2014 at 5:36 am

    Mlema,

    I’m not sure of the purpose of your science lesson.
    I would’ve thought that it was pretty clear we are up to speed on colour discrimination and optical illusions as should have been clear from the following two quotes:

    BJ: “It would [be fooled by the checkerboard illusion] if it evolved (or was programmed) to discriminate colour in relation to its surrounds”

    TOJM: “In fact machine vision replicates many of the perceptual illusions people have, for much the same reasons: they make similar simplifying rules, heuristics, and assumptions about how visual information maps to reality. BJ was spot on: in the machine, these assumptions are coded in; while in humans, evolution did the programming”

    I’m not sure why you felt the need to repeat this all back to us.
    Anyway, it seems we are in agreement.
    Which makes the following statement made by you puzzling:

    “the “machine” wouldn’t be fooled by the checkerboard illusion”

    Because you are now agreeing with us that it can.
    Maybe you did a bit of research between postings?
    But, then, why repeat back to us the contents of our own posts? Puzzling.

  79. Mlema on 15 Jun 2014 at 11:48 am

    Well, I guess even with all I've written, there's still room for misinterpretation. The machine that wouldn't be fooled by the checkerboard illusion is the machine we were talking about all through the conversation with Bill O. It's a machine that can discriminate various wavelengths of light. It's a talking interferometer. It wouldn't be fooled by the checkerboard illusion. (With a smiley face – as I was making reference to one of your favorite devices for showing people that they can't trust their subjective experiences.) But I forget how literal you can be. So when you seemed to want to explore how a machine could be fooled by a checkerboard illusion, I went with it. I wasn't trying to educate you about subjective experience. You obviously at least understand that it's something done in the brain, which grabula, Bill O and Pete A don't seem to get. You and TOJM both made "if" comments. I was exploring those "ifs". One thing – I do see now that TOJM did qualify his statements on why machines might have illusions. But I'd really like to see an example of that, because the link didn't provide one. And I'd also like to know how a machine could evolve relative color perception. I tried to explain what that would involve. Perhaps you could flesh that out for my own lesson, or show me the research that's going on that you think would be pertinent.

  80. Mlema on 15 Jun 2014 at 11:50 am

    Pete A –
    Re: executive function (thanks for making me think about that)
    A robot that was self-maintaining – let's say it keeps itself lubricated :) – the way a human eats to maintain itself. It could be programmed so that when these various lubricants are available, it always chooses this kind of formula first, this one second, etc. Unlike the human who's facing a salad with boiled egg vs. pork chop, the robot will feel no conflict or anxiety in choosing the lubricant that's "best" for it. I think an equivalent to executive function could be included in programming – it just wouldn't include the conflict and anxiety. The machine is programmed to always make the best choice for its "survival". It could also calculate whether, if it waited until it reached X, there would be a better chance of getting the best formulation of lubricant. No "delayed gratification" – because there's no feeling of gratification at the end anyway, and no sense of time or delay. That's why I said earlier that I think subjective experience can make for bad decisions. I wouldn't want to live my life without emotions, colors, etc. – but I think there are plenty of times they've negatively affected me. Executive function doesn't over-ride subjectivity – it just adds an extra layer. Delayed gratification is still gratification – and the idea is that if you can learn to wait, the gratification will be even more significant. So gratification is a subjective experience. I think executive function still falls within the functions of consciousness which are part of the "easy" questions (that is, theoretically solvable by brain-like machines). But how could we, even theoretically, explain how the experience of "gratification" emerges from the brain? What is our best hypothesis? We can come up with lots of reasons why it seems like a good thing – after all, we can't even imagine existence without it. We would be conscious, in a manner of speaking, and yet: no experience of being conscious. It would be like dreamless sleepwalking, I guess. Only we would be functioning sensibly (unlike the sleepwalker), because our brain would be co-ordinating our efforts at survival as well as it does now – just without providing the awareness of doing so.

    “…what must we add to AI machines to give them subjective experience…”
    That is the (put some outstanding number of dollars here) question.

    “….what must we remove from human consciousness in order to render a result indistinguishable from AI simulation/emulation?”
    I would say that if we could develop AI that successfully emulates human behavior, we wouldn't have to remove anything, because subjective experience is invisible to all but the "experiencer". If a human and a machine react in similarly believable ways to similar stimuli, you wouldn't have to remove subjective experience from the human in order to make him indistinguishable from the AI. The AI would appear to have subjective experience. The AI would only arouse suspicion if it never did anything stupid, inconsiderate, etc. If it were always rational and deferential, it would seem not human. So, maybe that's the answer. Remove people's jerkiness :) Or add some personality quirks to the AI.

    “I think the answers simply boil down to: the gargantuan database of accumulated knowledge and personal experience that every (fairly healthy) human possesses.”

    As far as factual knowledge goes, machines already have us beat. But since they don't experience anything – well, perhaps we can't make experience a condition of artificial intelligence, unless we include in intelligence things like creative insight. I don't know what the current thinking on that is. I don't know if we can expect a machine to come up with something truly new and insightful, like Einstein did. But basically, intelligence and experience aren't the same attributes. You can build in intelligence, but how do you make a machine experience anything?

  81. Mlema on 15 Jun 2014 at 11:51 am

    Pete A –
    “The brain creates a color to correspond to a particular wavelength.” This is totally incorrect on so many levels, but the easiest way to demonstrate this error is by retorting: Then what is the wavelength of magenta?

    Magenta doesn’t have a wavelength. Magenta is the color we see when our eyes are stimulated by a particular wavelength of light. We may shorthand this and say “the wavelength of magenta”, but if you study perception you’ll find that it’s possible to make the same wavelength you call magenta look like a completely different color. Does that mean the objective world changed to fool your eyes? No – it means that all color is subjective. It exists only in the experience of a mind that perceives it.
    Where does the color magenta exist, if not in the brain? And to show you that you're not seeing a specific color every time you see a particular wavelength of light – look at the illusions I linked to in my comment to grabula. If magenta exists in the thing you're looking at, then how come you can see it as a completely different color when its surroundings change? Isn't it still reflecting light to you at the same wavelength?

    “Perhaps a better test of AI is to create at least two identical machines, run them in parallel with identical inputs, and observe their outputs. If the outputs are identical then the machines have failed to emulate animal intelligence.”

    If any two or more humans were identical, with identical life stories (parallel lives with identical "inputs"), how do you know that THEY wouldn't have identical "outputs"? I don't think your test would really determine whether or not a machine has emulated animal intelligence. A machine would simply need to perform all the "easy" problems of consciousness in order to have animal intelligence. It's not impossible that we could build such a thing. But if you're interchanging the word "intelligence" with "subjectivity" or "creative insight", then I would say no one knows if that's possible at this time. No one knows if it's possible to build a machine that experiences its processing in the way we experience ours. No one knows if a machine will see magenta when it sees light with the wavelength of magenta. A machine will analyze the wavelength of magenta when its visual apparatus receives it (just like an interferometer). It will recognize and report that it sees magenta, because the label "magenta" has been given to that wavelength. If you can find any leading-edge researchers who believe that the machine will see magenta, I will admit my error. Do you believe that an interferometer sees colors? Colors are NOT electromagnetic energy – they are created by brain phenomena. Or they "emerge" from phenomena – or, if you are of some other philosophy about how they exist, you can term their genesis any way you like – because no one knows how they come to exist.

    Let me take an example of another subjective experience to try to explain this more. How about an emotion? Machines don't have emotion. We can program them to act as if they do, but I don't know anyone who thinks that a machine can feel emotions. We respond to various stimuli in such a way that an emotion "emerges" from the brain activity caused by those stimuli. So, if someone's trying to kill me, my pupils will dilate, my pulse will increase, I may freeze or run, and: I will feel fear. Some people think the fear is the thing that causes the physiological response, but the fear is part of the response. And if I can generate those physiological responses in you without the stimulus of someone trying to kill you, you will say you feel fear, even though the only reason you feel it is that I've created the physiological state that accompanies the emotion of fear (or generates it, or from which it emerges, etc.).

    “Any machine that makes unique random mistakes and has enough self-awareness to retrospectively identify its mistakes as a simple error, a Freudian slip, a double entendre, a spoonerism, etc. is a machine that would be very convincing.”

    Would this be possible to do if we programmed it to respond quickly to a stimulus with the first and most common wording in its database, but to continue analyzing the stimulus and refining its response to be most appropriate? I agree that would be convincing. But would the machine feel the sort of embarrassment that accompanies the original error? The sort of embarrassment you've described?

    As far as anthropomorphism goes, Dr. Novella has written about this before. People do seem to exhibit similar kinds of responses to similar kinds of “robots”.

  82. Mlema on 15 Jun 2014 at 12:07 pm

    Bill O – you seem to think I'm trying to assert that subjective experience (which seeing color is) is something mystical. But you're the one who's saying that machines couldn't convincingly simulate human behavior without "growing up" as humans. Look, I don't know whether they could or not – I'm just trying to clarify the issue. Are you saying that there's something about experience that couldn't be quantified, and therefore couldn't be programmed? And again, no matter what your answer, I'm not trying to say you're right or wrong. It just seems to me that you picked an argument with me because you think I'm trying to add something mystical to human experience, when you're the one who's saying that human experience is the thing we can't simulate in computers. Help me understand why you're saying this. If human experience is the sum total of sensory input – which is physical – why couldn't we build that into AI?

    Fear, love, pain, color and sound are in the same category of things. They're all constructs, or properties (or however else you want to categorize the "emergence" of percepts) of brain function (and that's the best I can do for description). They don't exist outside the brain.

  83. Mlema on 15 Jun 2014 at 7:04 pm

    Erwin Schrödinger: “The sensation of color cannot be accounted for by the physicist’s objective picture of light-waves. Could the physiologist account for it, if he had fuller knowledge than he has of the processes in the retina and the nervous processes set up by them in the optical nerve bundles and in the brain? I do not think so.”

  84. Bill Openthalt on 15 Jun 2014 at 7:33 pm

    Mlema –

    Fear, love, pain, color and sound are in the same category of things. They're all constructs, or properties (or however else you want to categorize the "emergence" of percepts) of brain function (and that's the best I can do for description). They don't exist outside the brain.

    100% agreement here. There is no such thing as "subjective experience" unless it is a convenient shorthand for a process in the brain that is remembered and can be reported on.

    It is you who said earlier:

    The hard question of consciousness:
    How does subjective experience of these phenomenon “emerge” from brain function/processes? The phenomenon of subjective experience is qualitatively different from the phenomena from which it emerges, and seems to be inexplicable by sum-of-the-parts investigation. Hence, “strong” emergence – not comparable to other instances of emergence.

    What I am arguing is simply that there is no hard question, and positing “subjective experience” or “consciousness” as something in need of explanation before we can assume the mind is what the brain does, is reifying a process.

  85. Mlema on 15 Jun 2014 at 8:50 pm

    Bill O. – Subjective experience may emerge from or accompany a process, but it's not a process. The color green isn't a process. Love and pain aren't processes; they're the experience of processes. You're failing to differentiate between the processes of consciousness and the experience of consciousness. Everything you're feeling, seeing, and hearing right now is the result of brain function. It's not the functioning itself. There are processes of consciousness, and we experience them occurring. The most we can say is that subjective experience is the result of brain activity.

    And yes, everything I said re: the hard question is true. Subjective experience, if emergent, is strongly emergent – meaning we can't explain the phenomenon by the sum of its parts. This is unlike every other emergent phenomenon we've identified. We don't know how it emerges, or arises – or whatever you want to guess is going on – from the brain.

    You didn’t answer my questions:

    Are you saying that there’s something about experience that couldn’t be quantified, and therefore programmable? If human experience is the sum total of sensory input/brain processing – which is physical – why couldn’t we build that into AI?

  86. Pete A on 16 Jun 2014 at 7:49 am

    Mlema — Thanks for your interesting and amusing replies. If I knew your e-mail address I’d send a long reply, but I’ll keep this comment as brief as possible.

    “Executive function doesn’t over-ride subjectivity – it just adds an extra layer.”
    This depends on how we define the word subjectivity. We are able to override our executive function in order to change our emotions to quite a large extent. E.g. while driving we can remind ourselves to stay calm, concentrate on the task, and take a break when we feel tired or stressed — this will change our subjective experience and memory of the journey.

    I chose the colour magenta for a specific reason. Firstly, it is not generated by a particular wavelength; it results from energy being present at both ends of the spectrum (red and blue) and absent in the middle region (green). Secondly, camera auto white balance (AWB) has to be acutely aware of this colour because human visual perception is highly intolerant of small errors on the magenta–green axis. AWB and the auto-exposure system analyse the scene using a fairly accurate model of human colour and contrast perception. This section of the machine acts as a proxy for a human expert; therefore it is difficult to claim that colour does not exist outside of a brain.
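
    As a minimal sketch of one classic AWB heuristic – the grey-world assumption (illustrative only; real camera AWB, as described above, is far more sophisticated):

    import numpy as np

    def grey_world_awb(img):
        """img: float RGB array of shape (H, W, 3), values in [0, 1]."""
        means = img.reshape(-1, 3).mean(axis=0)   # per-channel averages
        gains = means[1] / means                  # scale R and B to match G
        return np.clip(img * gains, 0.0, 1.0)

    # A flat magenta-tinted "scene": too much red and blue relative to green.
    tinted = np.ones((4, 4, 3)) * np.array([0.6, 0.4, 0.6])
    print(grey_world_awb(tinted)[0, 0])           # -> [0.4 0.4 0.4], cast removed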

    I hope you find this fun to think about… Some snakes have infrared detectors providing them with visual capability that vastly exceeds ours. Now, a philosophical snake could claim: Humans are only machines because they cannot subjectively experience the world in infrared. To these machines, infrared is just wavelengths of electromagnetic radiation as measured using an external device :-)

  87. The Other John Mc on 16 Jun 2014 at 8:56 am

    Mlema, I think I am agreeing with most of what you are saying, except the "strongly emergent" stuff, which I'm not sure I am buying into…

    It sounds to my ears like you are positing a special type of "emergent" phenomenon, and it's not clear to me that this super-special type of emergentism is actually its own category, needing its own label and distinction…

    I’m completely with Bill O here: “there is no hard question, and positing “subjective experience” or “consciousness” as something in need of explanation before we can assume the mind is what the brain does, is reifying a process.”

  88. The Other John Mc on 16 Jun 2014 at 8:59 am

    Coincidentally I think it was Ian who was saying the same sort of “strong emergence” stuff about consciousness, but apparently coming to much wilder conclusions than you are….so maybe that’s why it’s extra confusing to hear you using his terms, too, mlema ;-)

  89. Mlema on 17 Jun 2014 at 8:27 pm

    Pete A, thanks for your amusing philosophical snake. I enjoyed the mental picture of a thoughtful, deeply pensive serpent. I think he had a furrow in his brow. :)
    Here’s what I would say re: magenta:
    Our eyes don’t work like prisms to separate wavelengths of light. Instead of colors divided along a spectrum, wavelengths of light are mixed in a field of vision.
    http://chemistry.about.com/od/colorchemistry/f/how-magenta-works.htm
    http://www.vectorstock.com/royalty-free-vector/color-wheel-vector-1017530
    I, too, could write a longer answer in reply to your comment on executive function. But if you want to contact me I’m at: my user name, then the number 45. And it’s gmail. If you do write, I will read. But I can’t promise I’ll write much back. I think I’ve said just about as much as I can say for now. Thanks so much for the conversation – it was very enjoyable.

  90. Mlema on 17 Jun 2014 at 8:49 pm

    The Other John Mc –
    Ian didn’t make up the term “hard problem”. But when someone with Ian’s philosophies uses the term hard problem to try to justify idealism, then people who are opposed to that philosophy also want to dismiss the hard problem. What can I say? These are the words I have. I think it’s much more important to come to some understanding on what we’re talking about than to cling to or dismiss specific terms based on what philosophies we think they entail. I probably should stop using the terms strong emergence and hard problem. They’re not really all that useful in this context I guess. My intent was to differentiate between consciousness and subjective experience.
    yup – it’s probably hard to believe I could write so much about that :)
