Oct 06 2008

An Upcoming Turing Test

In 1950 Alan Turing, in a paper entitled Computing Machinery and Intelligence, proposed a practical test to determine whether a computer possesses true intelligence. In what is now called the Turing test, an evaluator asks questions of a computer and a person through text-only communication, not knowing which is which, and then must decide which is the computer. If the evaluator cannot tell the difference (or if 30% of multiple evaluators cannot), then the computer is deemed to have passed the Turing test and should be considered intelligent.

On October 12 the Loebner Prize for Artificial Intelligence will conduct a formal Turing test of six machines (the finalists in this year’s competition) – Elbot, Eugene Goostman, Brother Jerome, Jabberwacky, Alice, and Ultra Hal. (It seems that AI will have to endure whimsical names, probably until true AI can demand more serious names for itself.) The prize for the victor is $100,000 and a gold medal – and career opportunities that will probably dwarf the actual prize.

Ever since Alan Turing proposed his test it has provoked two still-relevant questions: what does it mean to be intelligent, and what is the Turing test actually testing? I will address the latter question first.

The Turing test is really testing the ability to simulate a natural and open-ended conversation, enough to fool a regular person. One way to “simulate” such a conversation is to actually be able to hold one. But another way is to employ a complex algorithm that either chooses canned responses from a large repertoire or constructs answers following a set of rules. Such algorithms exist and are referred to as artificial intelligence (AI). Anyone who has played a video game involving interaction with game characters has experienced such AI.
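To make this concrete, here is a minimal sketch of the canned-response approach in Python. The patterns, reply templates, and fallback lines are hypothetical toy examples, not any contestant’s actual code:

```python
import random
import re

# A minimal sketch of a canned-response chatbot, loosely in the style of
# ELIZA. The patterns, templates, and fallbacks are hypothetical toy
# examples; real contest entries use far larger repertoires of rules.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
]
FALLBACKS = ["I see.", "Go on.", "Interesting – can you elaborate?"]

def respond(utterance: str) -> str:
    """Try each pattern; on a match, pick one of several canned replies."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            # Varying the answer to the same input makes the bot feel
            # slightly less mechanical.
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I feel tired today"))  # e.g. "Why do you feel tired today?"
```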

The Turing test therefore treats computers as black boxes – it does not assess what is going on inside the box; it merely judges the output. And so it cannot tell the difference between true intelligence and a clever simulation.

But this statement leads us only to the next question – what, if anything, is the difference?

Hugh Loebner, creator of the Loebner prize, has this to say:

There are those who argue that intelligence must be more than an algorithm. I, too, believe this.

I completely agree – depending upon how you define intelligence. Loebner seems to be using the term to mean consciousness, which is how I think most people interpret the term in this context. But the word “intelligence” can be used more broadly, and can refer simply to the ability to manipulate data. AI as a computer term takes this meaning as it applies to the ability to simulate human intelligence or compete against human players. You might also say, for example, that computers that are capable of beating world champions in chess are intelligent, but they are not conscious.

Computers have become increasingly powerful, but power alone will not achieve either intelligence or consciousness. Programmers, taking advantage of greater computing power, have created increasingly sophisticated AI algorithms (again, as any video-gamer can attest). But they are not yet close to passing the Turing test. At the bottom of this article is an example of a human and an AI conversation. Read them and then come back… OK – pretty easy to tell the difference, right? The AI conversation was awkward and it lacked any sign of a true thought process. It seemed algorithmic.

But I can imagine a day in the not-too-distant future when such AI can pass a Turing test. The algorithms will have to become much more complex, allow for varying answers to the same question, and make what seem to be abstract connections that take the conversation in new and unanticipated directions. You can liken computer AI simulating conversation to computer graphics (CG) simulating people. At first they appeared cartoonish, but in the last 20 years we have seen steady progress. Movement is now more natural, textures more subtle and complex. One of the last layers of realism to be added was imperfection. CG characters still seem CG when they are perfect, and so adding imperfections adds to the sense of reality. Similarly, an AI might want to sprinkle some random quirkiness into its conversational responses.

The question is – will sophisticated-enough algorithms running on powerful-enough computers ever be conscious? What Loebner is saying, and I agree, is that the answer is no. Something more is needed.

What if, instead of using algorithms to pick canned answers, the AI program actually attempted to understand the meaning of the question, drew upon a fund of basic knowledge about the world and about itself, and then constructed an answer? This is a more complex process than a response algorithm. At a minimum the computer will have to understand human speech – it will have to have a vocabulary and a rather complete understanding of syntax and grammar, both to understand the question and to create a response.
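As a toy illustration of the difference, here is a minimal sketch of constructing an answer from a small knowledge base rather than selecting a canned reply. The facts and the crude word-matching “parsing” are hypothetical stand-ins for the much harder language understanding described above:

```python
import re

# A minimal sketch of answer construction: extract a (topic, event) pair
# from the question and look up what follows from it. The knowledge base
# and the parsing are hypothetical toy examples.
KNOWLEDGE = {
    ("ball", "dropped"): "it falls to the floor",
    ("food", "rotting"): "it smells bad",
}

def answer(question: str) -> str:
    words = set(re.findall(r"[a-z]+", question.lower()))
    for (topic, event), consequence in KNOWLEDGE.items():
        if topic in words and event in words:
            return f"If the {topic} is {event}, {consequence}."
    return "I don't know enough about that yet."

print(answer("What happens when a ball is dropped?"))
# -> "If the ball is dropped, it falls to the floor."
```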

Then it will have to have a vast knowledge base, including many facts that we take for granted. For example, it will have to know that when you let go of something it drops to the floor, that people need light to see, that bunny rabbits are cute, and that rotting food smells bad. How many such factoids are crammed into the average human brain?

In order to simulate a human, the AI will also have to have a personality, a persona, a “life.” This could simply mean that it needs a knowledge base about the person it is simulating – what they do, how old they are, what their life history is.

While I think it is easy to agree that an algorithm offering up canned responses is not conscious, it is more difficult to make that judgment about a system that is constructing responses. That’s because as the processing gets more complex it is possible to imagine that consciousness will emerge, and it becomes more difficult to see the differences between such an AI and a human brain. If the AI understands the rules of speech, so that it can both understand language and speak; and if it has a thorough knowledge base about itself and the world, and (here’s the key) it can take the abstract meaning of a question or statement, compare that to its knowledge base, make complex comparisons, search for patterns and connections, and then construct an answer based upon its “personality” – then how is that fundamentally different from a human brain?

I am not saying such AI would be conscious – just that we are getting a bit closer. I also think more is needed. The AI would have to have an internal state that it could monitor. It would have to be able to talk to itself – to think. There would need to be an active self-perpetuating process going on, not just a reaction to input.
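Here is a minimal sketch of what such a self-perpetuating loop might look like – purely illustrative, with no claim that anything like it produces consciousness:

```python
# On each cycle the agent either reacts to input or, lacking any,
# generates new activity from its own internal state, which it can
# monitor. A toy illustration only.
state = {"thought": "idle", "cycle": 0}

def step(stimulus=None):
    if stimulus is not None:
        state["thought"] = f"reacting to {stimulus!r}"
    else:
        # Self-generated activity: the prior state seeds the next thought.
        state["thought"] = f"reflecting on {state['thought']!r}"
    state["cycle"] += 1

for stimulus in ["hello", None, None]:  # one external input, then silence
    step(stimulus)
    print(state["cycle"], state["thought"])
```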

What about feeling? Would the AI have to feel in order to be conscious? This is a tough one and I could be persuaded either way. On the one hand you could argue that consciousness and emotions are not the same thing – a conscious being could lack emotions.  On the other hand, if by “feeling” we mean anything that constitutes the subjective experience of one’s own existence, well then, yes. I think it would have to “feel” to be conscious (stated this way, however, this might just be a tautology).

What about the ability to adapt and learn? Is this a prerequisite for consciousness? It is certainly a property of human intelligence. Our brains adapt and learn, even change their hard-wiring in response to repeated behavior and experience. Could an AI be conscious but static – unable to change? It’s hard to imagine, but I cannot say exactly why this would need to be a prerequisite. Part of my difficulty is in addressing the broad question of what consciousness is, rather than what human consciousness is. It is easier to say whether or not an AI would be conscious in all the ways that humans are, but more difficult to address whether or not it has a form of consciousness that is merely different from the human kind or lacking in some respects.

There may be other functions required for consciousness that I have not touched upon yet. For example, we know that human brains hold pieces of information in their working memory, and they can manipulate these pieces of information. We also have the ability to focus our attention. So, would any AI need to have something that is deemed “attention,” where it is focusing on a subset of its knowledge or stimuli? If it is manipulating data but not paying attention to it, is that the same as subconscious processing in humans? Without the built-in ability to pay attention, would AI be entirely “subconscious” and therefore not conscious at all?

This all leads to the final question – how would we know? I think this points to the fundamental weakness of the Turing test: it is only looking at output, not the process. I don’t think we could ever know whether an AI was conscious based entirely on output. This is because I think we will develop powerful-enough AI to simulate human intelligence more than well enough to pass the Turing test.

In order to judge whether an AI was truly conscious I think we need to look not only at behavior, but at what is going on inside the black box. We need to consider basic principles – is the AI paying attention, is it thinking, is it able to make new connections from existing knowledge, to actually increase its knowledge simply by thinking? We will know we have true consciousness because we built it to be conscious, not just to simulate it.

This, of course, requires that we know what consciousness is and how it is created, which leads back to neuroscience. As we reverse engineer the human brain we are learning how it creates consciousness. While we do not have all the pieces yet, progress continues without slowing.  And, as I have written before, the tasks of understanding the human brain and building AI are increasingly intertwined.

The moment the Turing test is passed, it will become obsolete. For now it is an interesting milestone in the development of AI. But once we have passed that milestone it will become obvious that it does not really mean anything. Simulating human conversation is an important technology – but it is not machine consciousness. The focus is already shifting to understanding the nature of consciousness itself.



31 Responses to “An Upcoming Turing Test”

  1. Jake on 06 Oct 2008 at 9:56 am

    Great read, and I can’t wait to hear the results of the test. One thing though: “On the one hand you cold argue… ” You are missing a “u”

  2. JimV on 06 Oct 2008 at 10:49 am

    Another direction of AI research which you are probably aware of is to simulate neural structures, e.g.,

    http://sc07.supercomp.org/schedule/pdf/pap402.pdf

    This method seems more fruitful to me, but the difficulty is illustrated by the fact that it takes our most powerful current super-computers to simulate one neural column of a rat’s brain.
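    For a flavor of what such simulations involve at the smallest scale, here is a minimal sketch of a single leaky integrate-and-fire neuron – the simplest building block used in this kind of work (all constants are arbitrary toy values, not fitted to real cortical data):

```python
def simulate(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron: returns the spike times."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest while integrating input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:            # threshold crossed: spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant drive strong enough to make the neuron fire repeatedly.
print(simulate([20.0] * 1000))
```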

  3. daedalus2u on 06 Oct 2008 at 12:09 pm

    A major difficulty with the Turing test is that each human who is the “detector” is different and will produce a different result. There are many instances in history (and even today) where individuals we know were/are fully human were not recognized as such by people at the time.

    Non-whites were thought to be non-human by whites. Virtually every ethnic and/or cultural/religious group has treated members not of their group as non-human. Even political groups don’t treat their opponents as fully human.

    Gay people and their relationships are treated badly because some individuals can’t perceive them as fully human with fully human feelings of love toward their domestic partners.

    I think this is the normal human reaction of xenophobia. Perhaps xenophobia was a useful trait for humans to have when humans co-existed with other Homo species and recognizing an individual as a Homo sapiens was important. Perhaps human xenophobia is the reason that there are no other extant members of the Homo genus.

    This may be related to the hyperactive pattern recognition discussed in the earlier post.

  4. petrucio on 06 Oct 2008 at 1:31 pm

    “The question is – will sophisticated-enough algorithms running on powerful-enough computers ever be conscious? What Loebner is saying, and I agree, is that the answer is no. Something more is needed.”

    You then go on to state that the answer to that question is basically yes – you just disagree that that something more could be called algorithms.

    Algorithms can certainly do all the something more you proposed. Genetic algorithms can be messy and indeterministic. Even if one could simulate billions of neurons and their interactions in software, the result of which could maybe be indistinguishable from a biological brain – that would still be running algorithms.

    The actual processing is NOT the result of the running of those algorithms – they just give rise to a simulation of a brain. The actual processing is something that will arise out of the complexity of that simulation, just like the chemical processes going on in the neurons are not what give rise to consciousness. But the basis of that simulation will be algorithms nonetheless.

    There’s a proper word to describe the complex processes that emerge out of the simpler ones at the base, but I’m missing it. You’ll probably know.

  5. sharkey on 06 Oct 2008 at 1:33 pm

    “The question is – will sophisticated-enough algorithms running on powerful-enough computers ever be conscious? What Loebner is saying, and I agree, is that the answer is no. Something more is needed.”

    I have to disagree with you, Steve. In fact, I think _you_ disagree with you. In an earlier posting, you introduced the idea that “consciousness” is a concept similar to “life”. That is, there’s no “life” subsystem, substance or structure; life is just the process of interacting systems such as growth, reproduction, etc. We don’t define life by how the exact processes work (ie, RNA-world could be considered “life”). Similarly, consciousness may be a result of a suitably-capable intelligence interacting with a body sense, past memories, etc. How the intelligence is implemented shouldn’t matter.

    “I think this points to the fundamental weakness of the Turing test: it is only looking at output, not the process.”

    Again, I disagree, I think that is the strength of the test. I believe Turing’s observation was that intelligence is a high-level feature that is best analyzed from a black-box perspective. By abstracting the actual processes involved, you can define and compare “intelligence” more abstractly (which boils down to, “communicating with a human as if the agent was also human”).

    I suppose Turing’s hope was that the processes involved in artificial intelligence would converge on those in natural intelligence, but that isn’t strictly necessary.

  6. petrucio on 06 Oct 2008 at 1:35 pm

    Emergence – there you go.
    http://en.wikipedia.org/wiki/Emergence

    You are basically saying that it requires emergence for generic AI to work. That is certainly true, but it does not mean algorithms are not what makes that emergence happen.

    And we already have complex systems with emergence arising from simple algorithms, so it’s not like it’s a seven-headed beast.

  7. MKandefer on 06 Oct 2008 at 2:06 pm

    Steve,

    you said, “This all leads to the final question – how would we know? I think this points to the fundamental weakness of the Turing test: it is only looking at output, not the process.”

    This is true, but what the Turing Test ultimately hinges on is the “problem of other minds”. Various modifications of the Turing Test make this clearer; the one I’m familiar with involves embodied agents (e.g., androids) and having more than a conversational interaction with them (e.g., playing sports). As you probably know, the problem of other minds is how we justify the conclusion that other people, besides ourselves, have similar conscious experiences and are not “philosophical zombies”. The key question that the Turing Test asks of us as researchers is: if the justifications we use (e.g., they react in a similar manner as I do, they make reports of their conscious experiences) are enough for concluding that another human has consciousness, why aren’t they enough when examining consciousness in machines constructed to emulate humans? One objection to this line of thought is that our present justifications are useful assumptions for interacting with people on a daily basis, but if we really wanted to be sure the artificial agent in question was conscious we would need a more rigorous (and ethical*) test.

    * – These might be human-level conscious entities after all =D

  8. sonic on 06 Oct 2008 at 3:13 pm

    Problems and questions with an excellent post–(It must be good to get me going like this)

    1) You say-

    “The question is – will sophisticated-enough algorithms running on powerful-enough computers ever be conscious? What Loebner is saying, and I agree, is that the answer is no. Something more is needed.”

    But, you don’t give a ‘something more’ that can’t be constructed algorithmically or run on a computer.

    2) There is currently no test for consciousness. If I were a skeptic, I might say that it does not exist, and then demand proof otherwise. Good luck- how about a million dollar prize for proving consciousness? Would that clarify things? Why hasn’t such a prize been offered? Why do we continue to talk about something that isn’t measurable in any meaningful way?

    3) After decades of AI we still don’t have a definition of intelligence that is agreeable to all. For example:

    http://www.wired.com/science/discoveries/news/2007/10/veggie_intelligence

    “If you define intelligence as the capacity to solve problems, plants have a lot to teach us,” says Mancuso, dressed in harmonizing shades of his favorite color: green. “Not only are they ‘smart’ in how they grow, adapt and thrive, they do it without neuroses. Intelligence isn’t only about having a brain.”

    For more see the work of Trewavas on plant intelligence.

    So if we don’t need a brain for intelligence, then why would studying a brain be needed to understand it?
    Or are the plant people misguided?

    4) If I were a skeptic I might point out that all these ‘consciousness and intelligence’ people can’t even agree on what it is they are talking about- they might as well be talking about god for goodness sake. (That is the major difficulty with talking about god, right?)

    5) Is consciousness an irreducible? Certainly modern physics as it is applied would indicate the answer is yes. Would it be scientific to agree with physics?

    Enough already, Sonic…

  9. Karl Withakay on 06 Oct 2008 at 3:14 pm

    Very interesting. It was obvious which of the two conversations was with a human, but only because there were two conversations to compare. When I read the first conversation, nothing jumped out at me to say, “this is without a doubt a real person talking.”

    One thing that struck me in the second conversation was that in order to effectively simulate intelligence, a computer program has to be able to handle informal, incorrect, or abstract grammar and sentence structure – that is, it has to be able to “understand” the message being conveyed even if it is not conveyed correctly.

    Also, as I think to myself of things that would convince me that I was having a conversation with a real person, I am amused that many times humans would not pass my tests. Many people express opinions or positions that they cannot thoughtfully support, cannot explain the evolution of their positions, or provide criteria that would lead them to change their opinion; that is what I would go for in a conversation with a candidate for intelligence: solicit an opinion or position on a subject and explore the underlying reasoning behind the opinion, and the evolution of that position. In other words, I would explore subjective reasoning.

  10. Philo on 06 Oct 2008 at 3:50 pm

    Hi Dr. Novella -

    I may be wrong here, but I thought part of the purpose of the Turing test was to demonstrate that the algorithms in the brain–whatever they might be–are substrate-neutral processes, capable of operating in other mediums.

    I would also appreciate your thoughts on what role the condition called blindsight might play in a better understanding of consciousness.

  11. Steven Novella on 06 Oct 2008 at 4:48 pm

    Thanks for the great responses.

    Let me clarify a couple of things. When I said that something more than algorithms is needed, I was referring to something more than human response algorithms. Even if you could simulate a human conversation that was indistinguishable from a human (and pass the Turing test), you would not have consciousness.

    The something more you need can indeed be created with other algorithms – except for one thing (see below). That is what the brain ultimately is: massively parallel algorithms working simultaneously and interacting. Which ones are necessary for consciousness? We are still sorting that out. The brain can do a lot of processing without consciousness. So complexity alone is not enough. While consciousness emerges out of the complex interaction of brain algorithms, it does not necessarily do so. You need the right kind of algorithms, and I don’t think we know yet what they are.

    Also – I don’t think that “passive” algorithms are enough. The one extra thing the brain needs in order to be conscious is to be constantly activated by cells in the brain stem. You need a constant loop of self-generating activity. This should not be technologically challenging.

  12. pec on 06 Oct 2008 at 5:15 pm

    “I can imagine a day in the not-too-distant future when such AI can pass a Turing test.”

    And I can imagine the Tooth Fairy.

    “it will have to have a vocabulary and a rather complete understanding of syntax and grammar, both to understand the question and create a response.”

    A complete knowledge of the syntax of a human language does not result in comprehension since computers, and humans, understand symbols with respect to their relationship with a context. The contexts of computer programs are finite and constrained, but the contexts of human society have no defined boundaries, and are constantly evolving.

    I predict that no computer program will ever pass the Turing Test. After half a century of trying none have come close, no one can define intelligence let alone create it, and we have absolutely no reason for assuming, as Dr. N. does, that intelligence is generated by physical brains.

    Dr. N., and others like him, are only skeptical about things that defy their philosophical preferences. He has no skepticism regarding materialist fantasies like true AI.

  13. pec on 06 Oct 2008 at 6:44 pm

    “I can imagine a day in the not-too-distant future when such AI can pass a Turing test.”

    And I can imagine the Tooth Fairy.

  14. mat alford on 06 Oct 2008 at 9:17 pm

    “And I can imagine the Tooth Fairy.”

    A random, unrelated, irrelevant comment – I think pec just failed the Turing test.

  15. Joe on 07 Oct 2008 at 6:40 am

    Steve wrote “… The one extra thing the brain needs in order to be conscious is to be constantly activated by cells in the brain stem. You need a constant loop of self-generating activity …”

    As I understand it, this is already done in modern computers. Even when it is idling, it cycles through all the potential inputs (keyboard, mouse, etc.) to see if there is any activity. Or, did you mean something else?

  16. jim on 07 Oct 2008 at 8:48 am

    Could we look at the issue the other way round: what damage needs to happen to the brain for a human to fail the Turing test? Are there brain-damaged people who retain language skills but appear less than human? They would be interesting to study.

  17. daedalus2u on 07 Oct 2008 at 9:45 am

    Biological signaling systems depend to a very large extent on what is called stochastic resonance. That is a process by which noise is added to a signal, the noise plus signal measured, and then the noise is subtracted out during post-processing. This allows detectors with quite poor characteristics to function at near theoretical limits. This is what allows visual systems to detect single photons and acoustic systems to operate near quantum noise limits. It also allows biological systems to have an extraordinarily large dynamic range. The “gain” has to be adjustable, but when you have a distributed sensor network with adjustable local gain, the control of that gain is quite tricky.
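    A minimal numerical sketch of that idea (all values are arbitrary toy numbers, purely to illustrate the principle):

```python
import random

# Both input levels sit below the detector's threshold, so with zero
# noise nothing ever fires. With noise added, the firing rate tracks
# the hidden level, which averaging in post-processing can recover.
random.seed(42)
THRESHOLD = 1.0
NOISE = 0.3

def firing_count(level, trials=10_000):
    """How often does a sub-threshold level plus Gaussian noise fire?"""
    return sum(level + random.gauss(0, NOISE) > THRESHOLD
               for _ in range(trials))

print(firing_count(0.8))  # "signal present": fires often, the noise helps
print(firing_count(0.2))  # "signal absent": fires only rarely
```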

    The use of adjustable gain and noise removal in post processing can also introduce spurious signals. The trade-off of type 1 errors (false positive) and type 2 errors (false negative) is immutable, and when you decrease one, you increase the other. This is the source of the hyperactive pattern recognition discussed earlier as pareidolia. If you turn up the gain enough, you will start to see/hear/feel/think/smell things that are not there.

    In terms of recognizing when an entity is human or non-human there will be type 1 and type 2 errors also. We already see type 1 errors in the anthropomorphizing of non-human animals and objects. The belief that inanimate objects exhibit intelligence or other human characteristics in the complete absence of data is a type 1 error.

    Regarding the Turing test, I consider it quite ironic that Turing himself was not thought to be fully human by society because he was homosexual (that would be a type 2 error). That behavior was considered to be sufficiently deviant that he was subjected to treatments to extinguish it and so make him a normal human. Those treatments did not work, and he killed himself. A tragic, horrific waste of a brilliant man because of ignorant stupidity – the person who probably did the most to secure an Allied victory in WWII through his contributions to code breaking.

    Digital systems don’t need to do such things because the on/off threshold is chosen to be large enough that it isn’t necessary. The noise level is well below the on/off threshold.

  18. daedalus2u on 07 Oct 2008 at 9:54 am

    Jim, I think that people with autism meet the criteria of appearing to be “less than human”. I think that relates to the fidelity of their communication, but mostly the non-verbal communication that humans most rely on. Even people with Asperger’s can’t communicate the way that someone who is neurotypical can.

    Terri Schiavo had quite severe brain damage and so didn’t have the neurological structures to respond and communicate, but her parents perceived her to be responding and communicating.

  19. wallet55 on 09 Oct 2008 at 7:30 am

    The discussion of chess programs, which were considered a form of Turing test, belies some of the assumptions about this test, technology, and our own intelligence. Chess programs, which can now beat the best chess players, do not play chess in the heuristic way that we do. With some tweaks, they are essentially high-speed legal move generators with static end-position analyzers. This of course is not elegant, is not chess playing, but darn it, it beats the crap out of most of us. Even when losing, though, most chess players can often sense they are playing a computer.
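    The core of that brute-force approach fits in a few lines. Here is a toy sketch – using trivial Nim (take 1–3 stones; taking the last stone wins) instead of chess so it stays runnable, and not any real engine’s code:

```python
def legal_moves(stones):
    """High-speed legal move generator (trivial for Nim)."""
    return [n for n in (1, 2, 3) if n <= stones]

def evaluate(stones):
    # Static evaluation from the player-to-move's perspective: no stones
    # left means the previous player took the last stone and won.
    return -1 if stones == 0 else 0

def negamax(stones, depth):
    """Search every line of play to a fixed depth, scoring the leaves."""
    if depth == 0 or stones == 0:
        return evaluate(stones), None
    best_score, best_move = float("-inf"), None
    for move in legal_moves(stones):
        score, _ = negamax(stones - move, depth - 1)
        score = -score                    # opponent's best is our worst
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

print(negamax(5, depth=6))  # -> (1, 1): take one stone, leaving a lost position
```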

    The point of this is that a conversation algorithm that uses a really large database of canned responses, perhaps with some word-play tweaks on the input, would, like the chess programs, eventually become difficult to distinguish from a real person.

    I would go a step further and posit that it would more closely simulate the human conversational method than we may be willing to admit. I suspect that most people have a canned set of responses to most queries, and simply tweak them over and over situationally. We are falling into a non-materialist trap to imagine that our brain does something magical to converse.

    I suspect that like chess players we will not accept the first program that passes the Turing test, because we will understand how it works, and believe that it is fundamentally different from how we do things, and therefore not intelligent. I would postulate further that we will continue to move the goalposts for AI and not really believe in it long after it is reality.

  20. sonic on 10 Oct 2008 at 4:03 pm

    wallet 55-
    You’re right about the chess-playing programs. (Although I think the last time a chess program played a grandmaster the human won – that wouldn’t make the news, though, would it?)

    The problem is in defining what we mean by intelligence.
    Another problem is that some think intelligence implies consciousness- something that is problematic in that nobody can prove the existence of it (prove to me you are conscious), yet few want to say it is therefore meaningless. (I say few- I’m not sure about Daniel Dennett…)

    Recently I did a book review for a physics book. Don’t fall into the trap of materialist thinking. That philosophy does not agree with the findings of modern science. Physics, as it is currently formulated and practiced experimentally, is essentially dualistic. I realize this may come as a surprise, but it is true. Check out the ‘von Neumann orthodox interpretation’ of physics as a starting place.
    (I know there are other formulations- none of them work and the von Neumann is the one actually used in practice)

    Good luck!

  21. Philo on 11 Oct 2008 at 5:01 pm

    sonic –

    Interesting point on proving consciousness (I take it your position is that one cannot prove it), but this leads to a paradox. If consciousness is unobservable by others and, presumably, unquantifiable in us, then we can’t be sure, when talking about consciousness, that we’re discussing the same thing. If consciousness has no publicly demonstrable properties, then it would seem to be pointless to talk about it. I might have read too much into your comments, but I think if any phenomenon lacks demonstrable properties, then we’re at a profound disadvantage, or there was an error in our thinking.

  22. sharkey on 12 Oct 2008 at 1:34 pm

    sonic: “Don’t fall into the trap of materialist thinking. That philosophy does not agree with the findings of modern science. Physics, as it is currently formulated and practiced experimentally, is essentially dualistic.”

    Sonic, you’re stretching the current findings of science (or constraining materialism) by implying that physics is “essentially dualistic”. Wave-particle dualism is _not_ the same as philosophical dualism; the fact that a photon can interfere with itself does not imply there is magic unmeasurable “stuff” causing macroscopic processes and properties (mind, in this example).

    I am one of those few that say that ‘consciousness’ is a meaningless term; generally, you can take any sentence with ‘consciousness’ and replace it with ‘human-like’, and the sentence reads the same. ‘Consciousness’ is just used as a word to describe human-like behaviour. If a computer, dolphin, alien or rock displayed human-like behaviour, you’d likely refer to them with the term ‘conscious’.

  23. daedalus2u on 12 Oct 2008 at 5:32 pm

    I have been thinking about the time constant for learning. If “mind” were non-material, then it should be possible to change that non-material “mind” instantaneously – learning via the immaterial mind would be instantaneous. But no type of learning is ever instantaneous, because it requires remodeling the neural structures that hold the new ideas.

    Since we don’t know the cognitive mechanisms that generate what we call consciousness, how do we know that it isn’t some highly kludgey thing like the brute-force approach to chess? A massively parallel output generator and look-up table that evaluates the value of the expected position? Something that interpolates between previous instances and does something random to fill in the gaps?

  24. sonic on 12 Oct 2008 at 6:40 pm

    When I say that physics is essentially dualistic I mean that there is a physical aspect (Schrodinger’s equation) and a ‘mental’ aspect (the conscious choices made by experimenters.)
    The choices are not fixed by any known laws of physics, yet the choices are asserted to have causal effects.
    This is the formulation that is actually used by practicing physicists today. It is a dualistic (in terms of philosophy) approach.
    It is usually called the von Neumann orthodox formulation.
    If you care to investigate this revolution in scientific thought, Henry Stapp has written many readable and useful papers available on the web.

    The ideas that consciousness reduces to something physical, or that the mind and the brain are the same thing, are not scientific facts, but rather statements based on a faith in a philosophy (materialism or physicalism).

  25. sharkey on 12 Oct 2008 at 7:10 pm

    sonic, I still think you are interpreting quantum events too broadly. The “conscious choices made by experimenters” causing wavefunction collapse also occur during any suitably macroscopic environmental interaction; “observing” a quantum event does not imply a “human observer”.

    You further stated: “The idea that consciousness reduces to something physical, or that the mind and the brain are the same thing are not scientific facts, but rather statements of investigation based on a faith in a philosophy”

    The idea that the mind is a higher-level feature of human brain processes is a hypothesis supported by evidence, _not_ a philosophical position.

    There has been much work in determining how changes in physical properties and processes in the brain lead to observed behaviour changes. I’m sure others could offer better references, but “How the Mind Works” by Pinker was quite an enjoyable read and contains a referenced guide to supporting studies.

  26. siener on 13 Oct 2008 at 6:38 am

    Hi Steven. I’m a long time fan, first time poster.

    I believe that a system that can pass the Turing test is automatically conscious. Note that I mean the Turing test in its most general form, not the limited test used for the Loebner Prize. Put simply: something that acts like it is conscious is conscious.

    You assert that no test where a system is treated as a black box will ever be able to determine whether that system is truly conscious and that you would need to look at the inner workings to make your final judgement.

    You are falling for what Daniel Dennett calls “the zombic hunch” – that it is possible for a human to exist that is indistinguishable from you and me in every way, but that is not conscious. I don’t believe that it is possible for such a philosophical zombie to exist.

    Think about it this way: You are saying that a system can exist that acts like it is conscious, but unless it has some magical additive, some élan vital with absolutely zero effect on its behaviour, it cannot be truly conscious.

    For all you know some humans might be truly conscious while others merely act as if they are.

    I recommend you read Dr. Susan Blackmore’s Conversations on Consciousness (or better yet, interview her on the Skeptics’ Guide). Some of the experts in her book share your view of consciousness, but I find the arguments that go the other way a lot more compelling. People who agree with you seem to rely mostly on gut feel or common sense rather than anything grounded in science or logic.

  27. sonic on 13 Oct 2008 at 1:23 pm

    Sharkey-
    Perhaps you are referring to the Ghirardi, Rimini, and Weber approach (spontaneous reduction). Nobody has figured out how to make this model work for the fact that particles are created and destroyed (an observable fact), nor has anyone figured out how to make it relativistically invariant.
    In other words, what you say does not match the current understanding of the scientists who investigate such matters.
    Mr. Pinker is a good writer. He apparently doesn’t understand the psychophysical nature of modern physics very well, however. (I don’t know that anyone does…)
    Again I would suggest dipping into some of Stapp’s writings for further understanding.

    Siener,
    I would agree with you that currently there isn’t any logical or scientific reason to think consciousness is anything more than passing a Turing test.
    Just goes to show how far we have to go before we get a science and logic that actually is useful in describing the universe we live in. (If you have studied physics much you will know that last statement is NOT controversial)

  28. sharkey on 13 Oct 2008 at 3:07 pm

    sonic – The little I’ve googled about Stapp leaves me with the impression of a good physicist with some crackpot ideas; not a rare occurrence, but still unfortunate.

    This “psychophysical nature” of modern physics you keep mentioning is not a scientific fact. Just because quantum mechanics is non-intuitive does not imply that quantum mechanics is involved in “consciousness”, or vice versa.

    Or, in Stapp’s case, just because ion channels are small doesn’t mean quantum effects are involved in “consciousness”. I’m willing to hear the evidence, but randomly throwing the word “Vedic” into papers doesn’t impress me.

  29. Will Nitschke on 20 Oct 2008 at 2:58 am

    “Even if you could simulate a human conversation that was indistinguishable from a human (and pass the Turing test), you would not have consciousness.”

    The point of the Turing Test is that you would have consciousness, but that does not imply it would be human consciousness.

    The experience of having a mind without a body (as would be the case with an intelligent computer) would be a very different type of consciousness. It’s possible to argue that certain higher primates may be conscious, but if so their consciousness would be different from ours. So it is important to separate ‘human consciousness’ from ‘consciousness’ in a more general sense. Steven’s article alludes to this but then also blurs the distinction.

    The other problem with the article is the underlying assumption that it could be plausible (in the not-too-distant future) to write software that could convincingly ‘fool’ a determined human inquisitor. Regardless of computing power, regardless of everything we know about software engineering, such a goal would not just be difficult, but at this stage cannot even be imagined.

  30. mdcaton on 04 Mar 2009 at 6:33 pm

    The Turing test has had many critiques which I won’t go into here, but a major one is that it doesn’t approach what Chalmers calls the hard problem of consciousness; that is, is a Turing-test-passing computer conscious? Steve isn’t claiming that the Turing test DOES touch this question, though there are implicit claims made based on other positions. That is, if you don’t believe in zombies, you have a tough time arguing that a Turing-passing machine is NOT conscious.

    In the consciousness debate I’m in the Chalmers camp, unlike Steve, but like Steve I also think that creating a machine which can pass the Turing test is just a matter of time, that is, a matter of sufficient increase in processing power.

    I suspect that there is far more embedded software in the human CNS than we realize at present, much of it far more profound than “bunnies are cute”. I actually think that it’s THIS that will be the final hurdle for a successful Turing test. A few hundred million years of legacy system accumulation is difficult to catch up on.

    http://cognitionandevolution.blogspot.com/

  31. Bart B. Van Bockstaele on 17 May 2009 at 2:06 am

    I just discovered this article, and it is right in line with what has fascinated me for decades.

    Two simple questions would be:

    Am I conscious?
    My answer to that is: I do not know.
    Is my hamster conscious?
    Same answer

    I looked at one of the articles referred to. The Grayling comment doesn’t impress me. What he says may be correct in the present, but it is wrong in principle.

    What does Professor Grayling know about computer programming? If he knew anything about the nature of computer programming, he would know that the programmer ultimately doesn’t matter. A good programme will simulate nerve cells in such a way that what they do is indistinguishable from the “real” stuff. Once that is accomplished, it is merely a matter of putting enough of them together to construct a brain. For that, we need to understand more about what a brain actually does, but that is merely a matter of time.

    The only question, from where I am sitting, is this: would the intelligence of such a brain be “artificial”? I am of the opinion that it would not be the case and that it would simply be intelligence, residing in a different machine than ours.
