Mar 28 2017

Is AI Going to Save or Destroy Us?

Futurists have a love-hate relationship with artificial intelligence (AI). Elon Musk represents the fear side of this relationship, and two recent articles show two facets of that fear. In a Vanity Fair piece we learn:

He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

We also learn from The Verge:

SpaceX and Tesla CEO Elon Musk is backing a brain-computer interface venture called Neuralink, according to The Wall Street Journal. The company, which is still in the earliest stages of existence and has no public presence whatsoever, is centered on creating devices that can be implanted in the human brain, with the eventual purpose of helping human beings merge with software and keep pace with advancements in artificial intelligence. These enhancements could improve memory or allow for more direct interfacing with computing devices.

So Musk thinks we need to enhance our own intelligence digitally in order to compete with the AI we are also creating, so that it doesn’t destroy us. He is joined by Bill Gates and Stephen Hawking in raising alarm bells about the dangers of AI.

On the other end of the spectrum are Ray Kurzweil, Mark Zuckerberg, and Larry Page. They think AI will bring about the next revolution for humanity, and that we have nothing to worry about.

So who is right?

I am much closer to the Kurzweil-Zuckerberg end of the spectrum. First, I don’t think we are on the brink of creating the kind of AI that Musk and the others worry about.

How Close Are We To AI?

Seventy years ago, when it became clear that computer technology was taking off exponentially and that these machines were powerful information processors, it seemed inevitable that computers would soon exceed the capacity of the human brain, and that AI would emerge out of this technology. This belief was reflected in the science fiction of the time.

In 2001: A Space Odyssey (the 1968 film) we thought nothing of HAL being an AI computer (and one that goes a little funny and kills his crew). That time frame seemed about right. Even more telling were Star Trek: The Motion Picture and The Terminator. In both films computers awaken and become fully aware AI simply by crossing some threshold of information and computing power. That plot element reflects the belief that AI was all about computing power – an assumption that turned out to be false.

Here we are in 2017, almost 50 years after the Kubrick film, and Moore’s Law has held up fairly well. We have cheap, powerful computers, and supercomputers reaching for the exaflop level – a billion billion (one quintillion) calculations per second. The current fastest supercomputers are getting close to the raw computing power of the human brain, and we will soon exceed it.

I have no fear that, when we finally turn on that first exaflop computer, it will awaken and become self-aware. That notion now seems quaint and misguided, for two reasons.

The first is that standard computer architecture is simply different from that of vertebrate brains. Computers are digital and largely serial. The brain is analogue and massively parallel. This means they are good at different things.

The fact is that standard computer hardware is simply not on a course to become artificially intelligent, because it does not function that way. You could theoretically run a virtual human brain on a standard computer architecture, but such a computer would have to be orders of magnitude more powerful than anything we have now. We are likely still decades away from it, and it would likely be the size of a building and require the power of a small city to operate.

We are, however, just beginning to develop neuromorphic chips. As the name implies, these are computer chips designed to function more like neurons – analogue and massively parallel. These kinds of chips are simply much more efficient at, and much better suited to, the kinds of things our brains do. I strongly suspect that if we ever do develop self-aware AI it will be with something like neuromorphic technology, not standard computer technology.

This brings me to the second reason I am not worried – computers (regardless of their architecture) are not simply going to wake up. We have learned how naive this idea was. Computers will have to be designed to be self-aware. It won’t happen by accident.

In fact, I have been using the term ‘AI’ to refer to self-aware general artificial intelligence. However, we already have AI of the softer variety. There is AI in your smartphone, and in your video games. We already have software that can learn and adapt, and it can do this without the slightest self-awareness. We are even using neuromorphic chips to perform tasks, like pattern recognition, that this type of computing does much better – again, without anything on the path to awareness.
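To make “software that can learn” concrete, here is a toy sketch (purely illustrative, not any real product’s code): a single artificial neuron that learns the logical OR pattern from labeled examples by nudging a few numbers. Nothing remotely like awareness is involved.

    # Minimal illustrative sketch: a single perceptron "learns" a pattern
    # (here, logical OR) from labeled examples by adjusting its weights.
    examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    def predict(x):
        s = sum(w * xi for w, xi in zip(weights, x)) + bias
        return 1 if s > 0 else 0

    for _ in range(20):                      # a few passes over the data
        for x, target in examples:
            error = target - predict(x)      # compare the guess to the label...
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error             # ...and nudge the weights a little

    print([predict(x) for x, _ in examples])  # -> [0, 1, 1, 1]

That is all “learning” means at this level: error-driven adjustment of parameters, with no one home.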

I am not afraid of AI because it seems to me that our computers will do what we want them to do, as long as we continue on the path of top-down engineering. AI will be able to do everything we need and want it to do without self-awareness. Our self-driving cars are not one day going to revolt against us.

I do think it is possible to develop a fully self-aware general AI that matches and then exceeds human intelligence. In fact, I think we will do this. Neuromorphic technology is the beginning. With computers designed to function like the brain there is the potential of reproducing the processes that produce awareness in humans. This, however, will not be easy. It will require a dedicated research and development program with self-aware AI as the goal.

The other possible path is that we model the human brain, even before we fully understand it. We are working to model the connectome – a diagram of all the connections in the human brain. We are also modeling the basic components of the brain, such as the cortical column. Once we have an accurate enough map of the brain, and the neuromorphic technology to reproduce its function, we could theoretically just build (virtually or in hardware) an artificial human brain. I believe a functional model of the human brain would be, in fact, a self-aware human brain.

I do think it would be a mistake to put such an artificial brain into a fully autonomous super robot, at least before we fully understand and control the technology. Think about why we would want to do this.

We would not do this to make robot slaves. Robot slaves should not be self-aware, and they don’t have to be. They can do everything we need them to do without the burden or risk of self-awareness. If we want a self-aware AI to think for us, it would not need to be in a robot body. It can sit safely on a desktop.

We would have to be willfully careless, like the scientist in Caprica who built the Cylons (yeah, we should not build self-aware killer robots). Creating self-aware robots that get out of our control would require a program that is both deliberate and careless.

I am trying to envision an application that requires both self-awareness and autonomy. The only thing I can think of is space exploration, because in space a non-biological body is a huge advantage, and distance requires autonomy.

Further, by the time we can develop self-aware AI we will also be able, using the same technology, to enhance our own intellect and physical capabilities. This brings us back to Musk’s Neuralink – he obviously thinks the same thing, and wants to make sure that computer-brain interfaces are up to the task of allowing us to compete with our own AI.

For the foreseeable future self-aware AI will need to be the result of a massive and deliberate program, giving us the time to be careful. I don’t see any immediate need for creating the kind of AI that haunts our sci-fi nightmares.

Of course, if you go far enough into the future, all bets are off. But at the same time, we cannot predict what our own human capabilities will evolve into. There is no point in worrying about the distant future because we simply cannot predict what will happen.

So, I would say that I am not worried for the next 50 or even 100 years. We should continue to develop AI for the benefits it will bring us, and just not invest millions of dollars and years of research in building self-aware killer robots.

304 Responses to “Is AI Going to Save or Destroy Us?”

  1. Tiff on 28 Mar 2017 at 8:40 am

    Thanks for the article Steve.

    I agree totally with your summary of how a self-aware AI could and could not be developed. However I am not as sanguine about the safety of a desktop AI, particularly if connected to the internet (which I imagine you would usually want to do now and would be even more essential to a hypothetical AI role in the future). Given current trends around the Internet of Things, cyberwarfare and, allegedly, the interference possible in elections and public opinion, an internet-connected super-intelligent AI would not need a robot body to effect massive changes in society.

    At this point, it is tempting to go off into a sci-fi plot involving it persuading a government to build an army of robots that it controls because of a deteriorating global security situation that it itself secretly created…

  2. tvadakia on 28 Mar 2017 at 10:26 am

    How would you view/refute Sam Harris’ latest take?

  3. Nareed on 28 Mar 2017 at 10:50 am

    War is another good reason for self-awareness and autonomy. That way a drone can protect itself and look for targets on the move. It doesn’t require awareness or even much intelligence to bomb a fixed target. But a directive like “find Osama and shoot him” does require both.

    But there’s another point for this debate: had someone warned humanity about the internal combustion engine when, say, Henry Ford was rolling the first Model Ts off his factory line, would people have listened? Consider how many people have died since then in traffic accidents. How about the effects of widespread air pollution, greenhouse gasses, traffic jams, freeway construction, etc.?

    Granted, automobiles won’t destroy the world, and certainly won’t take it over. And granted, some of the things I mention above have existed since humans first gathered in cities (like traffic jams). The fact remains that there are many negative consequences, as well as positive ones, from any technological development.

  4. TheGorilla on 28 Mar 2017 at 11:10 am

    The biggest reason not to worry is one you take for granted as false: you’re not going to have some self-aware brain without a body. The idea that we could is simply a relic of the Cartesian mind-body split, in the form proper to a culture of computer fetishism, and it’s only recently that non-representational and embodied theories of cognition have started receiving due attention.

    Aside from that, you’re spot on with how we employ AI.

  5. Atlantean Idol on 28 Mar 2017 at 11:17 am

    Ideally we would protect ourselves from the risks of self-aware AI by first fully developing our understanding of how self-awareness evolves in biological brains. The big question is: do we need to actually create a self-aware AI to experimentally verify our model, or could we verify it by identifying the various component processes and structures of self-awareness and simulating them independently of one another? The latter implies a highly reductionist view of consciousness.

  6. mumadadd on 28 Mar 2017 at 11:31 am

    “you’re not going to have some self-aware brain without a body”

    I don’t see why not. I get that brains evolved to be embodied (their function is to convert information about the environment into behaviour), and that a brain without any input stimuli would miss huge swathes of its development, BUT – why couldn’t a meat body and information about the physical world be replaced with digital data inputs and a set of things the AI can manipulate in its virtual space?

  7. TheGorilla on 28 Mar 2017 at 11:56 am

    Mumadadd,

    I’m going to give the quick and lazy answer and elaborate later if necessary – by the time you’re designing those sorts of inputs to a meaningful degree, you’ve just added a body to your simulation without calling it such.

  8. yrdbrd on 28 Mar 2017 at 11:58 am

    Dr. Novella, have you had the chance to read Nick Bostrom’s book Superintelligence? Musk is effectively popularising Bostrom’s ideas. It’s a great read, and Bostrom covers the topic in depth, including historical background, paths to superintelligence, containment strategies, and many interesting doomsday scenarios. And your comment to the effect that we don’t have much to worry about for 50-100 years isn’t far off from Bostrom.

    (Btw, I love how you snuck in a reference to Dr. Strangelove.)

  9. DanDanNoodles on 28 Mar 2017 at 12:16 pm

    I’m not worried about AI, but for a different reason. The very aspect of our intelligence that makes us self-aware is also what limits us: curiosity, AKA the ability to formulate original questions and seek answers to them. I see no reason why an “artificial” intelligence would not also be subject to the same stricture.

    Without curiosity, we could not learn new things independently. But curiosity is also a rathole: there are, in a practical sense, an infinite number of things to learn, including things that we create ourselves, e.g. part of the reason why people post in comment sections is to see what other people think of their ideas. Curiosity absorbs our resources. The more curious we are, the more we can learn, but the more we are limited in what we can do.

    Information and knowledge are different things; it isn’t learning facts that makes us intelligent, it is establishing relationships between those facts. Our brains are wired to look for patterns, which leads to phenomena like pareidolia, where we see patterns that aren’t there. An AI can learn faster than we can, but that just means it has more connections to make. Who’s to say it won’t get infinitely distracted just trying to find the deeper meanings in everything it knows, or fixating on some specific thing? An AI might end up spending all its time posting cat videos.

    There’s another reason why AI doesn’t frighten me: higher intelligence generally corresponds to a better ability to understand others and thus empathize with them. Maybe if an AI with a higher order of intelligence were to happen, it would actually be a good thing. Dystopian predictions show AIs taking over and destroying humanity, but why wouldn’t they decide that humanity needs to be protected and preserved?

  10. TheTentacles on 28 Mar 2017 at 12:38 pm

    I strongly recommend watching some of the Future of Life Institute videos, especially one on superintelligence: https://www.youtube.com/watch?v=h0962biiZa4 — this features Nick Bostrom (whose book is great, as yrdbrd mentions!) and lots of other smart minds. In fact, Elon Musk isn’t an intellectual in this space at all, and I’m quite disappointed he seems to generate most of the public discussion on this fascinating topic.

    As a neuroscientist I am trying to integrate deep nets into my research on sensory transformations. I went to a conference last week on the “Future of Neuroscience and AI” held at NYU Shanghai. The current state of the art is actually quite disappointing. Most current neural network models are “inspired” by neuroscience, but only very superficially. Just modelling the connectome won’t do it either — the wonderful David Marr reminds us that computation, algorithm and implementation (what a connectome is) are 3 parallel ways to understand brain function. All 3 are necessary to replicate brain function.

    And even very advanced deep nets that can outperform humans in limited testing are incredibly fragile to input transformations. Shimon Ullman has a great 2016 paper in PNAS showing how deep nets fail in ways different from humans, suggesting the underlying computations on the same tasks are completely different. They are all bottom-up / feedforward hierarchical models — yet the brain is dominated by intrinsic and feedback connectivity. Visual illusions are the canonical example of how perception is inference, not simple sensory processing. Our brains make models of the world, and our senses validate or update our models. This is not captured AT ALL in any current leading AI system; they are mostly glorified pattern recognition systems.

    For anyone interested in this more deeply, I can recommend a review article by Lake and colleagues:

    Lake BM, Ullman TD, Tenenbaum JB, & Gershman SJ (2016), “Building Machines That Learn and Think Like People,” Behavioral and Brain Sciences, pp. 1–101.

    And as others have mentioned, I also think perception-cognition-action is a loop, and that can only be fully realised with a physical embodiment of some sort. The philosopher Alva Noë has a great book on this topic (Action in Perception), and some neuroscientists have made the strong claim that only embodied agents will be able to generate superintelligence. Of course a networked hybrid of computer ⬄ embodied robot AI is entirely plausible: embodiment+

  11. mumadadd on 28 Mar 2017 at 1:28 pm

    Does the embodiment need to be physical though, in principle? I’m not saying this is wrong, just pondering it. What is it about a physical body/environment that couldn’t, in principle, be replicated in virtual space?

  12. Kabbor on 28 Mar 2017 at 2:21 pm

    An artificial intelligence that operates using mechanisms similar to our own seems to me to be the perfect recipe for mental illness. Our biology has had ample time to evolve and thus reduce the array of mental illnesses that we could hypothetically be subject to. I don’t know how this would manifest, and it might not even be an immediate effect, but it seems like we’d have to be incredibly lucky to land on a recipe for self-aware thinking that does not come with some pretty serious immediate or downstream mental issues.

    These could be worked out as we advance our understanding of the systems at play, but I don’t envy those early iterations of AI that are our experiments. As noted in the article, there isn’t any particularly good reason to have self-aware AI in control of anything of consequence, but I don’t know if self-aware AI is even a justifiable area of research if we don’t have an explicit need to make use of it.

  13. chikoppi on 28 Mar 2017 at 3:49 pm

    [mumadadd] Does the embodiment need to be physical though, in principle? I’m not saying this is wrong, just pondering it. What is it about a physical body/environment that couldn’t, in principle, be replicated in virtual space?

    Steven Pinker, who was recently name-checked in another thread, makes some interesting observations about how embodiment relates to the composition of mind. He approaches it as a linguist, noting how spatial concepts (above, before, within, etc.) relate to the construction and expression of abstract thought. Humans are objects. Our minds have evolved to comprehend and function in that capacity, extending the metaphors of the physical world into the abstract space of organized thought.

    For an AI modeled on human intelligence, but without an innate understanding of embodied space, all those otherwise functional cognitive heuristics might become useless (or worse, an alien impediment). The question might be whether it isn’t far easier to create an embodied AI than to try to unravel and model all the ways the human mind leverages embodiment for the construction and processing of thought.

  14. TheTentacles on 28 Mar 2017 at 11:07 pm

    Lots of nice concepts being raised here.

    Regarding embodiment, I think the orders-of-magnitude greater quantity of richly structured information in the real world means physical embodiment will be far more efficient than a virtual one for years to come. I study perception, and even at this “simple” cognitive level the benefits of embodied processing are clear. There is a great paper from Karl Friston with the wonderful title: “Perception as hypothesis, saccades as experiments” — the brain generates Bayesian perceptual models, but it is the movement of the eye that tests these models against the sensory baseline. This is one example where, even at the first stage of cognition, movements of the eye and body are used to validate the current gist of the world around us. Visual neuroscientists are only beginning to study this, and are surprised at how, even in simple models like the mouse, primary visual cortex is modulated by self-generated movements through the world in a way no feedforward view of the brain hierarchy would have predicted (and current AIs are all feedforward).

    The current AIs that have the spotlight of fame on them don’t have any of these layered functional systems, and actually the AI teams dealing with physical robots are the ones who are “forced” to think about this. I’ve spoken with some AI startups that want to make rescue robots and that are integrating neuroscience in a much deeper way than the current popular AI models. So I think the AI groups who deal with embodied agents (even virtual ones) are the ones who will ultimately get to a general AI first.

    Re: curiosity — a fascinating point, but I do wonder whether, although wet brains can process internally in parallel, our acquisition of knowledge is limited by a serial mode of input processing (we process things as a “Gestalt”) and severely limited by late cognitive tricks like reading. General AIs would not necessarily have this constraint.

  15. loncemon on 29 Mar 2017 at 11:56 am

    AI can’t run away all by itself. It is entirely up to us – or some of us – how beneficial or detrimental to humanity it is. If you trust the people who determine how it gets funded and developed, and if you trust the people who own the means of its production and deployment, then, like any other technology, there is nothing to worry about.

  16. chikoppi on 29 Mar 2017 at 2:24 pm

    FYI

    (Via Pinker on Twitter) Louis Liebenberg has made his books on the evolution of human cognition, The Origin of Science and The Art of Tracking, available as free eBooks:

    http://www.cybertracker.org/science/books

  17. Karl Withakay on 29 Mar 2017 at 4:05 pm

    I’m more realistically afraid of extremely sophisticated computer viruses, with or without (non-self-aware) AI. Think a super Stuxnet on the loose in the wild and out of control. That’s a more realistic fear than Skynet.

    However, I’m even more frightened by “devices that can be implanted in the human brain, with the eventual purpose of helping human beings merge with software and keep pace with advancements in artificial intelligence,” because then you are vulnerable to hacking, computer viruses, malware, etc., via the implanted technology.

    History tells us that good security is an afterthought for almost any new technology, and pretty much anything can and will be cracked & hacked given sufficient motivation, resources, and time.

    It’s hard to believe that once upon a time we executed commercial transactions via the internet in plain text and never thought twice about it.

    I’d still like to go back in time and stop the people that created SMTP without any security or sender verification built in. #ThanksFortheSpamScams

  18. Fair Persuasion on 30 Mar 2017 at 3:57 pm

    Computers are rapid computational decision-tree machines. Unlike the human brain, they cannot adapt by shifting knowledge and function to undamaged areas. Computers need to be programmed in a linear sequence.

  19. mlstrmrp on 30 Mar 2017 at 9:08 pm

    Self-awareness = mental illness.

  20. hardnose on 30 Mar 2017 at 10:36 pm

    Everyone who is talking about this, whether they are worried about it, or looking forward to it — all of them, are confusing science fiction with reality.

    Just because something happens in a science fiction film does not mean it will happen in reality some day. Science fiction is not science, it is FICTION.

    But lots of people do get them mixed up.

    SN tells us the brain is not fully understood. So we are expected to think the understanding is not quite there yet, right on the edge, any moment now, the brain will be fully understood.

    But no one actually has any idea to what extent the brain is understood.

    Another obstacle is the fact that we don’t know what intelligence actually is. How can you create something when you don’t know what it is?

    Computer technology has made great progress, but computers are not getting any more intelligent. They are doing the same things they have done for decades, but faster.

    Whatever opinion you have on this, it is only a guess. There is no evidence either way.

    I happen to think the materialist theory of the brain is wrong. But even if it were right, you can’t show evidence of any real progress towards AI.

    There was no HAL in 2001 and there is no HAL in 2017, and there is no reason to think there ever will be a HAL.

    Neural networks are good for certain things, but we have no reason to think the brain is simply a neural network.

  21. Anathem on 31 Mar 2017 at 12:59 pm

    I would argue that it isn’t self-awareness (in the way that we consider self-awareness) that is the concern; it’s self-modification/learning. For any given weak AI that has goals (which we have presumably put in through top-down engineering) and that can do simple learning or self-modify in some other way, there will absolutely be cases in which it will try to accomplish those goals in ways we can’t predict. The stamp collector thought experiment on Computerphile is a good example of this. https://youtu.be/tcdVC4e6EV4.

    That is an unbounded, perfectly learning AI, which I agree is far off, but it shows the extreme version of this example. This channel has another video which I think is much more realistic in its consideration of learning AI. https://youtu.be/4l7Is6vOAOA. The summary of the video is that a logical system which has a goal, and which is able to assess and accurately determine obstacles to that goal, will naturally not want someone to tune it in a way that makes it more difficult to achieve that goal. So the machine would actually fight you trying to fix it.

    In one sense, I agree with the post: I don’t think it likely any doomsday situation will happen, and when learning AI comes it will be directed in a way that works with people rather than against them. Also, general AI is a long way off. That said, learning AI is happening right now: self-driving cars by Google, facial recognition by Microsoft, Watson by IBM. The major difference between them and the kind of AI that can be dangerous is that they are currently bounded in their ability to act and change themselves.

    I’ll leave you with one final thought experiment that is somewhat realistic in the shorter term from a programming perspective. Suppose you’re a hacker and you have a simple AI that generally understands language, like the chat bots that exist. First, you take that code and modify it so that instead of English, it semantically understands its own programming language. Second, you give it a simple goal, something like: with 50% of your resources, send as much data across a network as you can. Third, with the other 50% of your resources, modify your heuristics using real grammar at random, keeping the base rules in effect, disregarding but remembering the failures so as not to try them again, and keeping the successes, whereby you improve at rule number two. To get this machine to do something malicious is at this point only a matter of having enough resources to allow it to track its failures and successes.

    Lose/win tracking is the most basic type of learning algorithm that exists (that I know of). It sucks because the number of successes for any given trial will be minuscule relative to the number of failures, but it does work. And there are better learning models that already exist, like the one used with AlphaGo. Turn the best ones to a malicious use, and you suddenly have the Anarchist Cookbook times 1000.
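    To make “win/lose tracking” concrete, here is a purely illustrative toy sketch (made-up scoring function, nothing to do with any real system): try random variants, remember the failures so they are never tried again, and keep anything that improves the score.

        import random

        # Toy win/lose tracker: random trial and error with a memory of failures.
        def score(candidate):            # stand-in for "did this variant help?"
            return -abs(candidate - 42)  # secretly, 42 is the best variant

        best, best_score = 0, score(0)
        failures = set()

        for _ in range(1000):
            candidate = random.randint(0, 100)
            if candidate in failures:    # never repeat a known failure
                continue
            if score(candidate) > best_score:
                best, best_score = candidate, score(candidate)  # keep the win
            else:
                failures.add(candidate)  # remember the loss
        print(best)                      # almost always 42 after enough trials

    The wins are tiny in number compared to the losses, which is exactly the inefficiency described above, but it does converge.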

  22. morris39 on 31 Mar 2017 at 9:59 pm

    I have only a very superficial knowledge about AI from media/blogs such as this one. I do not understand how humans would communicate with some AI, assuming that it is orders of magnitude more intelligent than humans. Do humans have to pose extraordinarily intelligent queries to AI to obtain practical super intelligent answers? Is the human/dog intelligence difference a possible analog? Dogs are unable to pose intelligent questions in human terms. This question strikes me as fundamental. Am I the only one who does not get it? Dr. Novella?

  23. BillyJoe7 on 01 Apr 2017 at 12:21 am

    morris….Man created gods, and in their own image did they create them.

  24. TheTentacles on 01 Apr 2017 at 4:58 am

    hardnose: Computer technology has made great progress, but computers are not getting any more intelligent. They are doing the same things they have done for decades, but faster.

    I would argue you don’t understand what current AI systems are doing then. Ultimately everything is bits, and processors work in the same way, but if you’ve read David Marr you will know that to understand an intelligent system (wet brains or silicon AI), you can decompose it into [computation], [algorithm], and [implementation]. In the brain, how synapses work is [implementation]; this does not tell you what algorithm the neurons may be processing information with, and the algorithm does not necessarily tell you what the final computational aim is.

    So yes, [implementation] hasn’t changed for decades, but [implementation] in biology is also basically identical between a horse and a human. The important differences are at the other levels. This is where the growth of AI has radically transformed the “intelligent behaviour” of machines. The fact that you can train a deep network to deconstruct semantic meanings, or to categorize images usefully, and implement that network on cheap throwaway hardware, tells me at least that the “new” way is a quantum leap compared to the traditional brittle expert systems.

    Your camera can simultaneously track 8 human faces and choose the one with the most appealing structure in real time without using much battery. This is because it uses algorithms and computations that no one thought about 40 years ago, and even if they had, the same hardware could not have run them using the old techniques. That is intelligent behaviour.

    It is wrong to state that because we don’t have a full omniscient understanding of the brain or what intelligence is, we effectively know nothing! We have very clear ideas of what intelligent behaviour is, and how that behaviour is beneficial — any biologist can tell you that instantly (and has been intensively studied across multiple disciplines over the last hundred years or so). We certainly are some way away from artificial general intelligence (as Steve also suggests above), and we don’t know how and if that will be “conscious”, but to dismiss all of this as pure fiction is plain wrong.

  25. hardnose on 01 Apr 2017 at 8:45 am

    “The fact that you can train a deep network to deconstruct semantic meanings”

    Then explain why computers are still not capable of intelligent communication.

    “It is wrong to state that because we don’t have a full omniscient understanding of the brain or what intelligence is, we effectively know nothing!”

    The same old thing repeated endlessly by materialists. You can have some understanding about how the brain works, yet still have a fundamentally wrong theory about what kind of machine it is.

  26. Pete A on 01 Apr 2017 at 8:46 am

    [Fair Persuasion] Computers need to be programmed in a linear sequence.

    My introduction to electronics was with analog systems that used vacuum tubes. During my teenage years, I was delighted that transistors and some simple integrated circuits became widely available and affordable. When I started college, multifunction pocket calculators were available, but they were far too expensive for our parents to afford, let alone us students: we had to use either a large book of mathematical tables or a slide rule for our calculations.

    The CPU in modern computers still has extremely limited functionality, which generally means that it has to be programmed in a linear sequence. Computers are a “jack of all trades, master of none”; their level of intelligence is zero, zilch, nada!

    Computers are awesome devices because they allow us to produce computer simulations and emulations of both the real world and our tentative hypotheses. Of course, they are also a wonderful tool for testing our scientific theories, and for providing us with computer-aided design and manufacturing.

    The real world, at our particular macroscopic scale of human existence, is mostly analog/continuous; it is not digital/discrete and sampled. So perhaps the future of AI will be best served by complex parallel-processing analog hardware rather than by digital computers.

    At the end of the day, what is actually inside the ‘black box’ that is providing its high-level functionality is completely irrelevant to anything other than its manufacturing costs and its running costs. E.g., when we choose a loving partner, we do not base our choice on a careful inspection of their CAT scan, fMRI scan, and blood type!

  27. mumadadd on 01 Apr 2017 at 9:24 am

    TheTentacles,

    Thanks for your response to hardnose — it was informative and well written. You also exactly have his number here:

    It is wrong to state that because we don’t have a full omniscient understanding of the brain or what intelligence is, we effectively know nothing!

    So thanks for the chuckle too.

    🙂

  28. BillyJoe7 on 01 Apr 2017 at 10:24 am

    A chuckle here as well. 🙂

    I saw the troll’s post a couple of days ago and chuckled at the obvious response. But I’ve stopped responding to the troll because it’s just too easy and because it’s the same old crap over and over again. I’m happy to have someone new respond and see how long it takes them to get his measure. In this case it looks like it took about three seconds. 🙂

    “don’t know everything so don’t know anything”
    ^he never gets that this is exactly what he is saying. 😀
    “materialist”
    ^he keeps saying this but he has no idea what it means.

    Yeah, so thanks TT for the demolition job…and the chuckle.

  29. hardnose on 01 Apr 2017 at 10:45 am

    You don’t know everything so you don’t know anything.

  30. hardnose on 01 Apr 2017 at 10:48 am

    “I have only a very superficial knowledge about AI from media/blogs such as this one. I do not understand how humans would communicate with some AI, assuming that it is orders of magnitude more intelligent than humans.”

    Yes your knowledge obviously comes from blogs such as this one. It is pure mythology. There are no computers that are orders of magnitude more intelligent than humans. There are no computers that are as intelligent as humans. There are no computers that are as intelligent as dogs. There are no computers that are as intelligent as bacteria.

    I am not against computers, I am a computer programmer. I know how they work, I know they are not intelligent.

  31. arnie on 01 Apr 2017 at 11:17 am

    TT,

    I second (or third) mumadadd’s and BillyJoe7’s comments. In time you will see that HN is the only commenter who knows anything because, given that he is the only one who knows that since the rest of us, Steven included, don’t know everything, we therefore know nothing. That leaves him, alone, with any knowledge. Get it? That pretty much sums up his contributions and why some of us have chosen to ignore his unending arguments reiterating that theme.

  32. RickK on 01 Apr 2017 at 11:44 am

    hn: “Then explain why computers are still not capable of intelligent communication.”

    Another hardnose classic, and Egnor does this as well: “If it hasn’t been done, then it can’t be done.”

    Look at any point in human technological development, and you’ll find people like hardnose and Egnor – claiming “if scientists know so much why haven’t we already solved every mystery.” There were hardnoses and Egnors speaking out against powered flight, against understanding electricity, against curing diseases, against space travel, against harnessing fission, against harnessing fusion, and on and on and on.

    Hardnose validates himself by thinking he is wiser and smarter by embracing the things we don’t yet know or can’t yet do. As long as he’s able to point to something we haven’t discovered, then he can feel good about himself. He feels inferior in areas of knowledge, but feels superior in areas of ignorance. So he always tries to “play up”, to highlight and to maximize areas of ignorance.

    What is so sad about hardnose and Egnor is that their attitudes make them fundamentally dislike the advancement of human knowledge, because every new idea or discovery chips away at the zone of ignorance in which they find personal validation.

  33. TheTentacles on 01 Apr 2017 at 12:20 pm

    Hm, OK, so it appears hardnose is more accurately termed hardheaded, and it is best not to feed the troll (though isn’t calorie restriction beneficial for an organism’s survival… 😉)

    OK, for those of us poor souls who do consider that the accumulation of knowledge drives the progressive betterment of our understanding — this article about the internal upgrade of Google’s translation engine from an expert system to a deep network is quite interesting:

    https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html

  34. hardnose on 01 Apr 2017 at 12:38 pm

    “There were hardnoses and Egnors speaking out against powered flight, against understanding electricity, against curing diseases, against space travel, against harnessing fission, against harnessing fusion”

    Some things have been invented that were predicted. Some people thought they were impossible, and others thought they were possible.

    Other things have been predicted, but so far have not been accomplished. Some predict they will be, others predict they won’t be.

    Your logical error is the following:

    Some inventions were predicted, and they were accomplished, therefore everything that is predicted will be accomplished.

    I predict AI will continue to fail, because I don’t agree with the materialist theory that the brain generates the mind.

    So far, AI has failed, relentlessly and repeatedly.

    But your faith in the materialist theory says AI HAS TO SUCCEED. That is not a scientific reason for believing something.

    So your faith in AI is illogical and unscientific.

  35. Pete A on 01 Apr 2017 at 1:04 pm

    “So your faith in AI is illogical and unscientific.”

    Faith in anything is illogical and unscientific.

    faith [noun]:
    1. Complete trust or confidence in someone or something.
    2. Strong belief in the doctrines of a religion, based on spiritual conviction rather than proof.
    https://en.oxforddictionaries.com/definition/faith

    “I predict AI will continue to fail, because I don’t agree with the materialist theory that the brain generates the mind.”

    Yes, you have incessantly informed us of your nonmaterialistic faith and your idealism.

    In philosophy, idealism is the group of philosophies which assert that reality, or reality as we can know it, is fundamentally mental, mentally constructed, or otherwise immaterial.

    As an ontological doctrine, idealism goes further, asserting that all entities are composed of mind or spirit.[2]
    https://en.wikipedia.org/wiki/Idealism

  36. hardnose on 01 Apr 2017 at 1:07 pm

    Materialism is a faith. Not believing in materialism is skepticism.

  37. Pete A on 01 Apr 2017 at 1:27 pm

    Materialism is your persistent straw man logical fallacy.

  38. mumadadd on 01 Apr 2017 at 2:40 pm

    “Materialism is your persistent straw man logical fallacy.”

    Accuracy/word ratio = 100%.

    By which I mean there are no fewer words one could use to perfectly explain hn’s working concept of ‘materialism’.

  39. mumadadd on 01 Apr 2017 at 2:42 pm

    It is basically whatever he disagrees with.

    Not quite as eloquent though. 🙂

  40. bachfiend on 01 Apr 2017 at 5:46 pm

    Hardnose,

    ‘I don’t agree with the materialist theory that the brain generates the mind’.

    No – the brain doesn’t generate the mind. The brain IS the mind, and the mind IS the brain.

    Your non-materialist view of the mind is just incoherent. How do you explain the split-brain phenomenon from a non-materialist viewpoint? If the corpus callosum (which connects the two cerebral hemispheres) is divided (for the treatment of intractable epilepsy), producing two brains anatomically and structurally (a right brain and a left brain), it also produces two minds functionally, a right mind and a left mind, and the left mind doesn’t know what the right mind is doing and thinking, and vice versa.

    If there’s some sort of non-materialist something or another which produces the mind, then how is it divided by a materialist surgical procedure? Or how does the non-materialist something or another know to connect to one brain half or the other, or someone else’s brain?

    ‘So far, AI has failed, relentlessly and repeatedly’.

    So you’re claiming – as usual – future vindication for your delusional viewpoints? Your claim (and your hope) is that you’re right, and you won’t be proven wrong, because you’ll be dead and gone before AI is developed?

  41. mumadadd on 01 Apr 2017 at 6:05 pm

    bachfiend,

    “No – the brain doesn’t generate the mind. The brain IS the mind, and the mind IS the brain. ”

    Do you really stand by that statement? Does the mind regulate the autonomic nervous system? If it doesn’t, it can’t be equivalent to the brain. Surely the mind is one of (or a subset of) the things a brain does?

  42. bachfiend on 01 Apr 2017 at 6:32 pm

    mumadadd,

    The mind consists of the conscious mind and the subconscious mind. The conscious mind is by far the smaller part, the figurative monkey on the back of the elephant which has the delusion that it’s directing where the elephant is going by pulling on the elephant’s ears – whereas the elephant is going where the elephant wants to go.

    Neuroscience has demonstrated that in most cases the unconscious mind makes the decisions and the conscious mind rationalises the decisions to preserve the delusion that it’s making the decisions and is in charge.

    The centres regulating autonomic function are just part of the unconscious mind (and there’s nothing stopping the conscious mind affecting autonomic function anyway).

  43. mumadadd on 01 Apr 2017 at 6:40 pm

    bachfiends,

    So do lizards and insects have minds?

  44. mumadadd on 01 Apr 2017 at 6:41 pm

    Yes, just unconscious ones.

    Never mind, I see you are just defining it differently to me…

  45. bachfiend on 01 Apr 2017 at 7:11 pm

    mumadadd,

    I’m defining the mind as Daniel Dennett defines it (at least based on my reading and understanding of ‘From Bacteria to Bach’).

    Anything with a brain has a mind. Whether insects or lizards have conscious minds is unknown (and almost certainly unknowable). We can’t even be certain that other people have conscious minds, but it’s by far the simplest assumption.

  46. mumadadd on 01 Apr 2017 at 7:28 pm

    bachfiend,

    I’ve downloaded ‘From Bacteria to Bach’ but not yet listened to it. However, I think most people hold to a definition of ‘mind’ that doesn’t include the autonomic nervous system (just the conscious part AND all the automated subroutines that contribute to perception, narrative and decision making), but YMMV. That isn’t to say it’s the correct definition, just that I’ve never heard a definition that encompassed the autonomic nervous system.

    On the topic of ” Knowing whether insects or lizards have conscious minds”, I found this episode of The Brain Science Podcast really interesting:

    http://brainsciencepodcast.com/bsp/2016/128-jonmallatt

  47. bachfiend on 01 Apr 2017 at 9:04 pm

    mumadadd,

    I suppose it’s a matter of semantics whether you include the autonomic nervous system as being part of the mind, whether conscious or unconscious. I personally would include it on the basis that there are occasions when the autonomic nervous system is doing stuff that becomes consciously apparent, such as when a person has a narrow miss from a possibly fatal accident and has the subjective manifestations of the fight or flight response.

    I think that there are grey areas, and that there are no rigid dividing lines putting the autonomic nervous system definitely outside the mind.

    What was the conclusion, if there was one, of the podcast, regarding whether insects or lizards have conscious minds? I don’t have the time to listen to it (my ambition today – besides finishing listening to this week’s SGU – is to listen to all of Shostakovich’s symphonies – I’m on number 1)

  48. mumadadd on 01 Apr 2017 at 9:26 pm

    “What was the conclusion, if there was one, of the podcast, regarding whether insects or lizards have conscious minds?”

    It was that primary consciousness (or subjectivity) probably developed way back down the chain of complexity. Can’t remember about insects or lizards specifically, but I think lizards: yes; insects: ?

  49. BillyJoe7 on 02 Apr 2017 at 2:00 am

    bachfiend,

    I tend to agree with mumadadd here regarding definitions.

    The brain produces the mind (which is the conscious part of the brain). And, besides producing a mind, the brain does other things which are, by contrast, subconscious or unconscious. And, yes, by a large margin, most of the brain’s functioning is subconscious or unconscious. I guess it does get down to definitions and semantics, but I think most neuroscientists, when they talk about the mind would be referring to what you call “the conscious mind”.

    Perhaps Daniel Dennett uses the phrase “the subconscious mind” as a convenient shorthand for “the things the brain does below the level of consciousness”, and therefore uses the phrase “the conscious mind” in order to clearly differentiate it from “the subconscious mind”?

    Maybe Steven Novella could chime in here to clarify.
    Maybe it doesn’t matter a lot as long as we know what we are talking about.

    “Neuroscience has demonstrated that in most cases the unconscious mind makes the decisions and the conscious mind rationalises the decisions to preserve the delusion that it’s making the decisions and is in charge”

    A good succinct summary.
    (I would leave off the “in most cases” – unless there is evidence that the conscious mind feeds back on the subconscious mind, affecting its decisions, which would equate to dualism in my opinion).
    (I’m using your terminology here)

  50. bachfiend on 02 Apr 2017 at 3:30 am

    BillyJoe,

    It’s a matter of definitions. Daniel Dennett does write (I can’t give a page number, it’s location 1740 out of 9328 in the Kindle edition in chapter 5 – the evolution of understanding):

    “An unconscious mind is no longer seen as a ‘contradiction in terms’; it’s the conscious minds that apparently raise all the problems”.

    I used to think that the brain produces the mind. The conscious mind. I used to be a property dualist. Now, I’m no longer any sort of dualist. It’s much simpler to regard the brain to be the mind, and the mind to be the brain. If the brain is damaged, then the mind is damaged, because they’re equivalent, they’re the same thing, they’re just different names for the same object.

    I don’t have any great objections to limiting the mind to just the conscious mind. What I really dislike is hardnose’s version of vitalism in his assertion that computers will never achieve intelligence (however intelligence is defined), while he also asserts that bacteria are intelligent.

  51. BillyJoe7 on 02 Apr 2017 at 3:42 am

    Rick,

    “Hardnose validates himself by thinking he is wiser and smarter by embracing the things we don’t yet know or can’t yet do. As long as he’s able to point to something we haven’t discovered, then he can feel good about himself. He feels inferior in areas of knowledge, but feels superior in areas of ignorance. So he always tries to “play up”, to highlight and to maximize areas of ignorance”

    Spot on.
    The Troll dismisses out of hand all of the hard-earned, evidence-based conclusions of mainstream science that do not fit in with his ideology (which he just knows is true and which he just knows will be vindicated by future research!); and he accepts without question every conceivable non-evidence-based, wildly speculative individual opinion, as long as it is contrary to the mainstream and in line with his own ideology.

    “What is so sad about hardnose and Egnor is that their attitudes make them fundamentally dislike the advancement of human knowledge, because every new idea or discovery chips away at the zone of ignorance in which they find personal validation”

    I just came across a quote from Neil deGrasse Tyson:
    “god is an ever-receding pocket of scientific ignorance”*
    With The Troll, replace “god” with “the universal mind/the intelligent universe/cosmic consciousness”.

    ———————-

    *Full quote: “If that’s how you want to invoke your evidence for God [as the cause of inexplicable events], then God is an ever-receding pocket of scientific ignorance that’s getting smaller and smaller and smaller as time moves on. So, just be ready for that to happen, if that’s how you want to come at the problem. So that’s just simply the God of the gaps argument.”

  52. BillyJoe7 on 02 Apr 2017 at 3:50 am

    And a relevant quote from H. L. Mencken:
    (To correct The Troll’s caricature of science)

    “The essence of science is that it is always willing to abandon a given idea for a better one; the essence of theology [and ideology] is that it holds its truths to be eternal and immutable”

  53. BillyJoe7 on 02 Apr 2017 at 3:55 am

    bachfiend: you seem to be equating an organ with its function. That is fine as metaphor but…

  54. bachfiend on 02 Apr 2017 at 4:37 am

    BillyJoe,

    I’m not equating an organ with its function in stating that the brain is the mind and the mind is the brain. I’m not a property dualist.

    And anyway. The brain has more functions than just producing consciousness. Or a conscious mind.

  55. Ian Wardell on 02 Apr 2017 at 6:51 am

    bachfiend

    “If the brain is damaged, then the mind is damaged, because they’re equivalent, they’re the same thing, they’re just different names for the same object”.

    My Response:
    In order for objects X and Y to be one and the same object, X cannot have any properties that Y does not have, and Y cannot have any properties that X does not have.

    In the case of the brain and everything contained therein on the one hand, and consciousness on the other, there are no properties in common whatsoever. The former is characterised by the quantitative and is observable from a third person perspective. The latter is characterised by the qualitative and is not observable from the third person perspective — it is only known through the experiencing subject.

    We can say the brain causes, or perhaps elicits, consciousness. Or perhaps consciousness supervenes on brain processes. Nevertheless, *by definition*, they are not the same.

  56. bachfiend on 02 Apr 2017 at 7:51 am

    Ian,

    Quite frankly, I don’t care a flying f*uck what you think.

    But anyway, to quote Daniel Dennett in ‘From Bacteria to Bach and Back: The Evolution of Minds’ when he’s discussing Francis Crick’s book ‘The Astonishing Hypothesis: The Scientific Search for the Soul’:

    ‘…in which he argued that dualism is false; the mind is just the brain, a material organ with no mysterious extra properties not found in other living organisms. He was by no means the first to put forward this denial of dualism; it has been the prevailing – but not unanimous – opinion of both scientists and philosophers for the better part of a century. In fact, many of us in the field objected to his title. There was nothing astonishing about this hypothesis; it had been our working assumption for decades! Its denial would be astonishing, like being told that gold was not composed of atoms or that the law of gravity didn’t hold on Mars. Why should anyone expect that consciousness would bifurcate the universe, when even Life and reproduction could be accounted for in physicochemical terms?’

    It’s as I said – the mind is the brain and the brain is the mind. There’s nothing idiosyncratic or unusual about my assertion. It’s a position which is supported by the evidence, including the split brain phenomenon. Mind-dualism is just wrong.

  57. TheTentacles on 02 Apr 2017 at 7:52 am

    I’ve just finished listening to the last SGU podcast where this topic was quite hotly debated 🙂

    To frame my feedback, as both Cara and Steve used an appeal to (neuroscientific) authority ;-P, I may as well put my Equidae in the race. I am a working research neuroscientist studying visual processing at the systems level, from neurons to networks, in both human and non-human models.[1]

    First off, most of the conversation was sidetracked by a false dichotomy. “Awareness” is an incredibly tangential phenomenon for AI (and most neuroscientists would feel exactly the same about biological consciousness). The core questions do not critically hinge on this dichotomy of aware vs. unaware AIs. Complex decision making in biological or artificial systems really does not depend on awareness. It does depend on probabilistic inference, something our previous systems failed miserably at, but which our current approaches to both brains and AI are adopting with relish (Bayesian probabilistic reasoning).
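    (To make “probabilistic inference” concrete, here is a toy Bayesian update – all numbers made up, purely illustrative – of an agent’s belief that an obstacle is present, given noisy sensor readings:)

        # Toy Bayesian update (illustrative, made-up numbers): belief that an
        # obstacle is present, revised after each noisy sensor reading.
        prior = 0.5                  # initial belief P(obstacle)
        p_hit_if_obstacle = 0.8      # sensor usually fires if an obstacle is there
        p_hit_if_clear = 0.1         # false-positive rate

        for hit in [True, True, False, True]:
            like_obst = p_hit_if_obstacle if hit else 1 - p_hit_if_obstacle
            like_clear = p_hit_if_clear if hit else 1 - p_hit_if_clear
            # Bayes' rule: posterior is proportional to likelihood times prior
            num = like_obst * prior
            prior = num / (num + like_clear * (1 - prior))
            print(round(prior, 3))   # belief rises with hits, falls with misses

        # A decision rule like "brake if belief > 0.9" needs no awareness at all.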

    The evolution in our understanding of decision making in the brain in the last 5 years has been astounding, especially with the “causal” toolset like opto/chemo/magnetogenetics making major strides forward in behaving subjects. Neuroeconomics as a field has really blossomed[2] and high-level abstract cognitive ideas or vague fMRI studies are now being causally interrogated from the network, circuit and neuron down to the synapse. The coming together of decision making theory, emotion/arousal and working memory is a great example of how distinct fields are strongly validating each other. Really, look at research in the last 5 years from e.g. Tobias Bonhoeffer’s lab using two-photon imaging, showing how you can track spine formation during working memory tasks in individual behaving mice learning something for the first time, and how by ablating just those individual spines you lose the memory. This was pure science fiction 10 years ago for us neuroscientists, and it is now routine work in hundreds or thousands of labs!

    Steve said he was more optimistic / open 10-15 years ago than he is now. In general about understanding the brain, and much more recently about AI, I am much more optimistic now than I was 15 years ago!

    When I used to tell people I studied the brain they’d go, “oh, must be complicated,” and I’d reply: not really, because, well, we’re just one rung up a very long ladder of understanding. Diverse fields in systems neuroscience each progressed independently; papers would be published in big journals that were heavy on data and light on theory. BUT this IS changing!!! Working at the raw coal face of neuroscience research, I am much more excited about the cohesion of the last 5 years compared to my previous 15 years of experience (of course the last 5 years depend on the previous). My long-standing caution and pessimism about my field is slowly yielding to both how our technology now causally interrogates cognition, and how our background theoretical models are blossoming (including potential grand “unifying” theories like hierarchical predictive coding).

    Of course it is hard to make predictions about the future. But I would say the last few years have been VERY supra-linear in terms of understanding brain function (look at brain-machine interfaces and other decoding experiments), and that trend is more likely to evolve exponentially than linearly.

    And so back to AI.

    Steve is incorrect when he suggests that there is some categorical distinction between deep networks and neuromorphic chips. I understood him to suggest that neuromorphic chips are a required conceptual “leap” forward, that they contain some secret sauce our software models running on von Neumann architectures do not. BUT they really don’t. They merely implement in silicon what deep networks already do in software. TrueNorth contains integrating “neurons” each connecting via 256 “synapses” – around a million neurons and 268 million connections. Actually, neuromorphic chips are a validation of software neural networks! They will make the same computations & algorithms (remember David Marr) MUCH more efficient, but they will not do something that is not being done now. Indeed, at the moment the types of networks neuromorphic chips implement are much more basic than our current leading deep networks.
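    (For illustration only – a toy leaky integrate-and-fire unit with arbitrary constants, not TrueNorth’s actual design – to show what “integrating neurons” means here:)

        # Toy leaky integrate-and-fire neuron: neuromorphic hardware implements
        # something of this general shape directly in silicon.
        def simulate(inputs, leak=0.9, threshold=1.0):
            v, spikes = 0.0, []
            for x in inputs:
                v = leak * v + x      # integrate the input, leaking a little each step
                if v >= threshold:    # crossing threshold emits a spike...
                    spikes.append(1)
                    v = 0.0           # ...and resets the membrane potential
                else:
                    spikes.append(0)
            return spikes

        print(simulate([0.3, 0.3, 0.3, 0.6, 0.1, 1.0]))  # -> [0, 0, 0, 1, 0, 1]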

    Steve is also wrong in the false dichotomy that if an AI is not somehow aware, it cannot be dangerous. To reiterate, decisions do not require awareness. We know more about how decision making occurs in real brains, and how we learn the stuff (Bayesian priors) that makes decisions possible. We are teasing apart the cognitive circuits that integrate perception, prediction, memory and action. And we are working hard on implementing these ideas into working AIs: an autonomous decision maker that can process input probabilistically to generate beneficial actions, with its own internal states and goals. It is utterly irrelevant whether that agent is aware; if it can control some physical embodiment or computational system that interacts with the world around it, it has the potential to be dangerous (trivially an autonomous car, but I really do mean the systems we will have in 5 years). We already have robotic control systems[1] in which multiple cognitive modules like attentional systems, prediction, working-memory-like buffers and probabilistic decision making systems run in parallel. As we add in neuroscience-inspired recurrent connectivity and expand the Bayesian priors these systems have about the world, their capabilities will grow quickly.
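    (For readers who want a picture of what such a modular controller might look like, here is a bare-bones skeleton of my own; the module names and the toy logic are placeholders, not a description of any specific published robotic architecture, including the systems referenced in [1]. The point is simply that attention, prediction, memory and probabilistic action selection can all run without anything resembling awareness.)

    ```python
    # Skeleton of a modular control loop: attention, prediction, a working-memory-
    # like buffer and probabilistic action selection all contribute each cycle.
    # Names and numbers are illustrative placeholders only.
    import random
    from collections import deque

    class ToyAgent:
        def __init__(self):
            self.memory = deque(maxlen=10)     # working-memory-like buffer
            self.hazard_belief = 0.5           # probabilistic internal state

        def attend(self, observation):
            """Attention: keep only the most salient (largest-magnitude) input."""
            return max(observation, key=abs)

        def predict(self):
            """Prediction: crude expectation based on recent history."""
            return sum(self.memory) / len(self.memory) if self.memory else 0.0

        def update_belief(self, salient, predicted):
            """Move the hazard belief toward surprising (unpredicted) input."""
            surprise = abs(salient - predicted)
            self.hazard_belief = min(1.0, 0.8 * self.hazard_belief + 0.2 * surprise)

        def act(self):
            """Probabilistic action selection driven by the internal belief."""
            return "avoid" if random.random() < self.hazard_belief else "proceed"

        def step(self, observation):
            salient = self.attend(observation)
            predicted = self.predict()
            self.update_belief(salient, predicted)
            self.memory.append(salient)
            return self.act()

    agent = ToyAgent()
    for obs in [[0.1, 0.0], [0.2, 0.1], [0.9, 0.3], [1.5, 0.2]]:
        print(agent.step(obs))
    ```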

    I’ve followed Kurzweil’s blog/site for many years because I was actually a skeptical anti-futurist. I would find their proclamations of the singularity largely absurd (I still get that reaction emotionally). But if you watch the recent Future of Life Institute talks, where philosophers and researchers as well as commercial bigwigs talk about this stuff, I think the futurists are slowly starting to look less absurd.

    I also want to make a general point about this post and that SGU conversation. Sadly we seem to frame these discussions around what CEOs say, people like Larry Page, Elon Musk, Zuckerberg. Fine, they are successful, but we should really be listening to the scientists, philosophers and researchers working on AI. Ignore Zuckerberg, but DO listen to what Yann LeCun says (both a professor at NYU, one of the founders of CNNs and also head of AI research at Facebook). As I and yrdbrd said before, Elon Musk is largely popularising what the philosopher Nick Bostrom talks and writes about, but Nick Bostrom does it much more interestingly!!! 😛

    —-
    [1] I’ve mostly been interested in what the neuroscientist Marcus Raichle terms the “Dark Energy” of the brain, the intrinsic activity and connectivity, in particular the dense recurrent networks within the cortex and back to the thalamus. I am currently collaborating with computational neuroscience groups, and will be writing grants in the near future on trying to bring some modern neuroscientific ideas about recurrent feedback and Bayesian-inspired predictive models into a deep neural network model of some kind. I work in China, where the big Chinese brain project is kicking off this year — bridging the gaps between neuroscience and computational neuroscience is a major aim. And I’m also a sci-fi geek 😉

    [2] I was originally very snobbish about Neuro-economics, it seemed more fashionable than it was “deep”. But although perhaps the more sensational fluffy fMRI studies make the news, the solid underlying “Wet” systems neuroscience advances have been simply remarkable, so I very humbly eat my hat…

  58. mumadaddon 02 Apr 2017 at 8:16 am

    TheTentacles,

    Again thanks for your post — interesting stuff. I’m reading the NYT article you linked to yesterday and will also check out Nick Bostrom on your (and yrdbrd’s) recommendation.

    bachfiend,

    “Ian,

    Quite frankly, I don’t care a flying f*uck what you think.”

    That gave me a real belly laugh for some reason — thanks to you too. 🙂

  59. arnieon 02 Apr 2017 at 8:34 am

    Another way to look at it is that “the mind” is an abstract linguistic construction for talking about some of the activity of the brain. IOW, one of the things the brain does (both within and outside our awareness) that we can’t yet describe well in physiologic/chemical/electrical terms because we don’t yet know and understand enough to do so (no, HN, that doesn’t mean we know nothing). So, no, mind and brain are not synonymous, because one is the physical-physiological-biochemical organ in total and the other is an abstract concept enabling us to talk about some of what that organ does, such as think. We use consciousness as a semantic way to indicate those activities of the brain that “rise” to awareness (subjectivity), in contrast to those that don’t.

    This is not dualism. Mind is not an “entity”, either on a conceptual-linguistic level of talk about the brain or in the sense that dualists speak of “the soul”. Mind-speak can become less vague and abstract as detailed knowledge of the brain and its various activities increases. (In that sense mind-speak could be considered language-of-the-gaps also).

    Open to critiques of my attempt at formulating the complex issues in this thread.

  60. arnieon 02 Apr 2017 at 9:05 am

    The Tentacles,

    Thanks for all that information and for your work on “the gaps” of knowledge about the brain and how it works. I hadn’t seen it when I was composing my little offering above. Though I don’t have the neuroscientific sophistication you bring to your work and explanations, from my neuropsychiatric framework I certainly agree that decision making does not primarily happen in awareness (the conscious state of subjectivity). In fact, apparently the research indicates the brain generally does the conscious awareness stuff only shortly after the actual decision making stuff.

  61. hardnoseon 02 Apr 2017 at 12:07 pm

    “If the brain is damaged, then the mind is damaged, because they’re equivalent, they’re the same thing, they’re just different names for the same object”.

    If the brain is damaged, the mind’s ability to interact with the world is damaged.

    You may have decided they are the same thing, but the mind and the brain don’t care what you decide.

  62. arnieon 02 Apr 2017 at 1:00 pm

    Addendum to my last comments:

    Bachfiend,
    The usual way the word “mind” is used does not imply dualism in its philosophical sense, nor is it literally synonymous with the word “brain”, as I explained above. E.g., if my patient’s brain is minimally damaged due to a very small stroke in the motor area, resulting only in very slight weakness in the left hand with no change whatsoever in the mental status, I wouldn’t conclude that the person’s mind has been damaged, nor that “the mind’s ability to interact with the world is damaged”, would you? If so, then we’re just talking semantics and definitions, philosophy or metaphysics.

  63. bachfiendon 02 Apr 2017 at 4:49 pm

    Arnie,

    The assertion that the mind and the brain are equivalent (as Daniel Dennett seems to be claiming in ‘From Bacteria to Bach and Back’) doesn’t mean that a minimal stroke causing just weakness (not paralysis) of the left hand wouldn’t also cause a minimal change in the person’s mind – one which would also be difficult or even impossible to detect clinically.

    ‘If the brain is damaged, the mind’s ability to interact with the world is damaged’.

    For once, I actually agree with hardnose. He seems to have abandoned his non-materialism, in stating that damage to the brain also causes damage to the mind, which wouldn’t be possible if the mind was non-material (if he’s attempting to claim that the brain and mind are different things, and the mind is non-material, then he ought to be claiming that damage to the brain causes a reduction of the mind’s ability to interact with the brain and indirectly with the world – which is an incoherent claim anyway).

  64. BillyJoe7on 02 Apr 2017 at 5:31 pm

    bachfiend,

    “For once, I actually agree with hardnose”

    Actually, you don’t, you just misunderstood what he meant.
    He means that, when the brain is damaged, the mind actually remains intact. But, although the mind remains intact, its ability to interact with the world is damaged, because it can only interact with the world through the brain, and the brain is damaged.

    “if he’s attempting to claim that the brain and mind are different things, and the mind is non-material, then he ought to be claiming that damage to the brain causes a reduction of the mind’s ability to interact with the brain and indirectly with the world

    If you read the quote carefully, that’s exactly what he is saying: “If the brain is damaged, the mind’s ability to interact with the world is damaged”

    As for the brain being the mind and vice versa. Consider a brain preserved in formalin. Or a comatose brain. It’s still a brain. It’s just a non-functioning brain. There’s no mind there. That’s all I meant. Nothing to do with property dualism.

  65. bachfiendon 02 Apr 2017 at 6:01 pm

    BillyJoe,

    Gawd only knows what hardnose ever means in any of his comments. You must have a higher degree than I do in hardnosese (or is hardnose’s unique form of English actually hardnosian?).

    Your clarification of what a mind is only applies if you restrict ‘mind’ to a conscious mind. I’ve abandoned property dualism in asserting that the brain and the mind are the same thing, and the mind consists of the conscious mind (which is a free floating function, capable of expanding or contracting according to the moment to moment needs and conditions of the organism) and the unconscious mind, which is everything else.

    The brain in a vat of formalin doesn’t have a mind because the brain and mind are dead. The brain in a coma still has a mind, because at least some of its vegetative functions are still functioning.

    Everything with a brain has a mind, even insects, although not necessarily to human level. It’s unknown, and unknowable, whether insects have conscious minds, or whether everything they do is just instincts put into action by their unconscious minds.

  66. arnieon 02 Apr 2017 at 7:05 pm

    Bachfiend,

    “Gawd only knows what hardnose ever means in any of his comments.”

    Now that, I suspect, we can all three agree with! However, interestingly, we have three different takes on what HN meant. I debated each of your takes when I first read it, and then decided he meant to paraphrase what he thought Bachfiend inferred or meant (and indeed did mean) and with which he didn’t agree. I think he believes that the mind remains intact even though the brain is damaged, as BJ7 said, but I’m not sure he thinks the mind is 100% dependent on the brain to interact with the world. I think he believes we can’t know that and therefore we essentially don’t know anything about the brain and mind connection.

    I understand your point, Bachfiend, and I’ve read a fair amount of Dennett (although not yet “From Bacteria to Bach and Back”), and I think sometimes he sees mind as meaning some, but not necessarily all, of what the brain does, and at other times sees them as totally synonymous concepts, literally. So in your view, having two different words is totally redundant? The anatomical organ, in total, and all the phenomena resulting from its activities, can equally and legitimately be called “the brain” or “the mind”? That seems pretty closely analogous to saying “the GI tract” could equally be called, in total, “the digestive system”, when it also does other things. What about the fact that we talk about the brain of microcephalic children but think of many of them as essentially incapable of doing “mind” phenomena? What I’m getting at is that, in fact, at this point in our knowledge, we have one literal-level term for the brain and another more abstract-level term for some, but not all, of what it does. Maybe that will change someday, I don’t know, and I will read Dennett’s new book with an open mind for his arguments and the logical and evidentiary basis of it.

  67. Ian Wardellon 02 Apr 2017 at 8:07 pm

    Bachfiend, it doesn’t matter if the denial of dualism “has been the prevailing-but not unanimous-opinion of both scientists and philosophers for the better part of a century”. If, instead, they are maintaining materialism, then they are all believing in a position that is incoherent.

    And this is separate from the fact that the brain cannot be the mind since they have zero properties in common.

  68. Ian Wardellon 02 Apr 2017 at 8:25 pm

    TheTentacles:
    “Complex decision making in biological or artificial systems really does not depend on awareness”.

    I really find it hard to comprehend how a functioning human being could believe in such an obvious transparent falsehood.

    Consciousness *necessarily* plays a causal role in the brain. It is incoherent to suppose otherwise. Read 2 of my blog entries:

    http://ian-wardell.blogspot.co.uk/2015/06/can-consciousness-be-causally.html

    http://ian-wardell.blogspot.co.uk/2016/03/materialismphysicalism-is-incompatible.html

    TheTentacles:
    “Steve is also wrong in the false dichotomy that if an AI is not somehow aware, it cannot be dangerous”.

    I think people have this laughable notion that computers will become conscious, and hence have plans and purposes opposed to the best interests of humankind. Of course computers can be programmed to be dangerous.

    TheTentacles:
    “We know more about how decision making occurs in real brains, and how we learn the stuff (bayesian priors) that makes decisions possible”.

    Any activity in the brain is merely the neural *correlates* of decisions, not the decisions themselves. Sighs. Why is it that seemingly all scientists are so philosophically clueless?? :O

  69. yrdbrdon 02 Apr 2017 at 8:32 pm

    TheTentacles, thanks for your latest reply. After listening to the podcast yesterday, I began to write a post making several similar observations, but without your level of expertise and clarity! I’d love to hear more about your work and hope you continue to post here.

    Cheers!

  70. hardnoseon 02 Apr 2017 at 9:19 pm

    We don’t know what “mind” is. We know that a person must have a functioning brain in order to interact with the world. If a person’s brain is not functioning, we don’t know what state their mind is in. There is no way to find out.

    Materialists make the assumption that a person without a functioning brain does not have a mind. But you can’t know.

    If a person can’t talk or move, we don’t know what they are thinking. Their brain might be functioning very well, but they can’t communicate. We assume they still have a mind.

    If a person can’t talk or move, and also has no measurable brain activity, is that a completely different scenario? We don’t know.

    Materialism assumes the two scenarios are completely different.

  71. chikoppion 02 Apr 2017 at 9:21 pm

    [Ian Wardell] And this is separate from the fact that the brain cannot be the mind since they have zero properties in common.

    That’s because the terms refer to two different aspects of the same thing.

    The brain is the object and the mind is the action. Where we see the object significantly impaired, such that its action is degraded, so too do we see the mind impaired.

    There is a long and well-documented history of brain injuries, chemical interactions, electro-magnetic stimulation, etc. to indicate that the mind is synonymous with brain activity. There is no evidence to suggest that the mind is itself a separate object, independent of the brain.

  72. hardnoseon 02 Apr 2017 at 9:27 pm

    I think the worrying over whether AI can be dangerous should happen after there is AI. Currently we have no reason to think there ever will be AI.

    At the beginning of computer science, intelligent machines were expected to be developed within a few years. The prediction keeps on being extended.

    We’re always getting closer, and the researchers are always getting all excited. And then nothing happens.

  73. chikoppion 02 Apr 2017 at 10:00 pm

    [hardnose] If a person can’t talk or move, and also has no measurable brain activity, is that a completely different scenario? We don’t know.

    Materialism assumes the two scenarios are completely different.

    “Materialism” has nothing to do with it.

    Severe brain injuries can have drastic impact on a person’s personality, irrevocably altering it. Now, there are two options here. The first is that the “mind” has been altered because the activity of the brain has been altered and the mind IS that activity.

    The second option involves the invention of a magic realm of existence, for which there is no evidence, where the unaltered mind resides. Despite the fact that the mind remains unaltered in this realm the person displays behaviors, motivations, and characteristics that are inconsistent with the mind’s “true” personality. Is the mind unable to direct the brain? If so, where are these new personality traits coming from? Is the brain suddenly producing them on its own? If that’s the case, then personality is not a characteristic of the mind.

    Where was the mind before the physical brain developed? Was it waiting around in a line of minds to receive its assignment? Was it waiting for a brain with the right “frequency” to appear? Was it conceived with the advent of the physical brain?

    Not only is this conjecture unwarranted and nonsensical, it is irrelevant. If the interactions of this magic realm of existence can’t be observed or detected then for all intents and purposes it doesn’t exist. When a brain ceases to function that person ceases to exist in the observable world.

    Either there is evidence for a thing or you are making it up to explain away something you don’t understand. In order to sustain belief in dualism you have to invent a raft of nonsense claims and excuses to deny what is actually observable. Knowledge based on observable evidence isn’t “materialism,” it’s just knowledge. Claims and denials based on evidence-free conjecture are a waste of time.

  74. bachfiendon 02 Apr 2017 at 10:12 pm

    Ian,

    “‘Complex decision making in biological or artificial systems really does not depend on awareness.’

    I really find it hard to comprehend how a functioning human being could believe in such an obvious transparent falsehood.”

    The obvious example is that of the social insects (bees, ants and termites), which engage in complex behaviours involving making decisions without the colonies being aware of the stimuli eliciting the response, or of the reason for the decisions being made. Intelligent decisions don’t need to be made by intelligent, or even particularly aware, actors.

  75. bachfiendon 03 Apr 2017 at 12:51 am

    Or perhaps a better example of complex decision making in the absence of (conscious) awareness is the phenomenon of blindsight. Individuals with neurological damage in the right area of the brain can have no perception of vision, yet be able to perform complex visual tasks involving making decisions, including finding their way through an obstacle course of furniture in a room or posting letters into letterbox slots of varying height and orientation.

    The conscious mind in these individuals isn’t aware of visual stimuli. The unconscious mind must be aware of the visual stimuli (obviously) otherwise it wouldn’t be able to perform the task.

  76. Lightnotheaton 03 Apr 2017 at 2:53 am

    chikoppi-
    Re the mind as being independent of the brain, I would say “there is no evidence that mainstream scientists regard as scientific” rather than “there is no evidence”. There are mystical experiences, there is scientific data that people like Dean Radin point to, there is lots of anecdotal evidence, etc. You and I would argue that there are good reasons to say this is not “good” evidence, but I think skeptics are somewhat vulnerable to the argument that they are being denialists in this general area of metaphysical concepts about things like mind and consciousness. More later..

  77. Ian Wardellon 03 Apr 2017 at 5:55 am

    chikoppi
    “That’s because the terms refer to two different aspects of the same thing”.

    I was pointing out that the notion that the brain and mind are the same thing is false. Brain and mind being aspects of something else is a completely different position.

    chikoppi
    “The brain is the object and the mind is the action”.

    No; mind, or more specifically consciousness, consists of qualia in the broadest sense, plus what philosophers term intentionality.

    chikoppi
    “There is a long and well-documented history of brain injuries, chemical interactions, electro-magnetic stimulation, etc. to indicate that the mind is synonymous with brain activity”.

    You can’t appeal to correlations to overturn a conceptual impossibility. Not only do such correlations not prove they are one and the very same, they don’t even show one causes the other. Correlations and causation are not the same thing.

    chikoppi
    “The second option involves the invention of a magic realm of existence, for which there is no evidence, where the unaltered mind resides. Despite the fact that the mind remains unaltered in this realm the person displays behaviors, motivations, and characteristics that are inconsistent with the mind’s “true” personality. Is the mind unable to direct the brain? If so, where are these new personality traits coming from? Is the brain suddenly producing them on its own? If that’s the case, then personality is not a characteristic of the mind”.

    You really need to try and think about this sort of stuff. I wouldn’t know where to start to address all this. You’re saying that if Y influences X, then X must be wholly brought into being by Y. This simply is not true.

    And it’s not the mind, but the *self* that remains unaltered. The mind is the result of the self and the brain.

  78. Ian Wardellon 03 Apr 2017 at 6:02 am

    bachfiend, a decision *by definition* involves awareness or consciousness. If we do something on “autopilot”, then that is not a decision. But thinking about whether to do a degree in physics or philosophy, then choosing, is a decision. Bees, ants and termites might not make decisions (or they might, I have no idea), but I sure as hell know I do.

  79. mumadaddon 03 Apr 2017 at 6:36 am

    Ian,

    “a decision *by definition* involves awareness or consciousness. If we do something on “autopilot”, then that is not a decision. But thinking about whether to do a degree in physics or philosophy, then choosing, is a decision.”

    The point that you’re missing is that the subjective experience of ‘deliberation’ isn’t actually reflective of how decisions are made by your brain; the conscious deliberation has little to no effect on the outcome. Decisions are made by automated subroutines that you have no conscious access to, and then fed to the conscious part of your mind for post hoc justification.

  80. bachfiendon 03 Apr 2017 at 6:59 am

    Ian,

    Making decisions is making choices. A person with blindsight navigating a room containing furniture forming obstacles is making choices, such as whether to go around an occasional table on the right or left side for example, and is not aware of the choices being made. Not consciously. The decisions are being made subconsciously, below conscious awareness.

    Bees are making choices too, so they’re also making decisions. I know that for a fact. I don’t know whether they are aware – that’s something that’s unknown and unknowable.

  81. Pete Aon 03 Apr 2017 at 8:11 am

    Qualia: a term invented by philosophers for the purpose of obscuring their circular reasoning.

  82. arnieon 03 Apr 2017 at 8:32 am

    Ian,
    You have a right, of course, to share your ideologically based opinions, but mumadadd’s and Bachfiend’s comments at 6:36 am and 6:59 am are, in contrast to yours, based on actual scientific research. There’s a world of difference.

  83. chikoppion 03 Apr 2017 at 9:32 am

    [Ian Wardell] I was pointing out that the notion that the brain and mind are the same thing is false. Brain and mind being aspects of something else is a completely different position.

    Good, I think we agree. The terms “brain” and “mind” do not refer to the same thing. That doesn’t mean, however, that they are two separate ontological entities.

    The terms “engine” and “KPH/MPH” don’t refer to the same thing. One refers to a physical object and the other refers to the object’s relative speed at a given moment. The speed of the engine is not an independent entity, but a description of the state of the object.

    No, mind, or more specifically consciousness, consists of qualia in its broadest sense, and the philosophical term intentionality.

    That doesn’t mean qualia aren’t appropriately understood as a product of (synonymous with) brain function.

    You can’t appeal to correlations to overturn a conceptual impossibility. Not only do such correlations not prove they are one and the very same, they don’t even show one causes the other. Correlations and causation are not the same thing.

    Right. Maybe it’s the light switch fairies that cause the lightbulb to turn on. Every time we flip the switch we see the electrical state of the circuit change. We know the circuit connects to the lightbulb. We know an amount of voltage is consumed that can be accounted for by the impedance of the wire plus the wattage of the bulb. The correlation of these phenomena is consistent. But you’re right, it’s only correlation. We can’t prove it’s not the light switch fairies.

    But the burden of proof is on the proponents of the fairies to first prove they exist and are not merely a non-falsifiable notion invented in an attempt to explain-away the behavior of lightbulbs.

    You really need to try and think about this sort of stuff. I wouldn’t know where to start to address all this. You’re saying that if Y influences X, then X must be wholly brought into being by Y. This simply is not true.

    No, Ian. I’m pointing out the assumptions, complications, and absurdities that come with a solution proposed from a position of dualism. Specifically, that dualism creates far more questions than it purports to answer.

    And it’s not the mind, but the *self* that remains unaltered. The mind is the result of the self and the brain.

    The “self?” Can you show me a self, or can you merely point to your body and describe the subjective conscious experience of your brain?

    You are trying to define things into existence by appealing to conceptual terminology. Self, mind, and brain. These are merely different terms to describe the same object in different states of function and domains of discussion.

  84. Ian Wardellon 03 Apr 2017 at 9:49 am

    mumadadd
    “The point that you’re missing is that the subjective experience of ‘deliberation’ isn’t actually reflective of how decisions are made by your brain”.

    As I keep saying and linking to my essays, the notion that the brain makes decisions all by itself is conceptually incoherent. If people are denying this, then they need to state what is wrong with my philosophical demonstration. Good luck with that ..

  85. Ian Wardellon 03 Apr 2017 at 9:53 am

    bachfiend
    “A person with blindsight navigating a room containing furniture forming obstacles is making choices, such as whether to go around an occasional table on the right or left side for example, and is not aware of the choices being made. Not consciously”.

    Yes, it’s his subconscious making the decisions. I’m not sure why you think this is interesting or what it proves. I’m merely saying that on occasions our consciousness is causally efficacious — most notably when we think something through and gain an understanding of something.

  86. mumadaddon 03 Apr 2017 at 10:23 am

    “If people are denying this, then they need to state what is wrong with my philosophical demonstration. Good luck with that ..”

    No, Ian, you can’t pit your personal incredulity and armchair conjecture against reams of experimental data and then claim that the burden is on us to prove you wrong! (Well, actually that’s exactly what you have done — my point is that it’s unreasonable).

  87. TheTentacleson 03 Apr 2017 at 12:41 pm

    Ian Wardell: I really find it hard to comprehend how a functioning human being could believe in such an obvious transparent falsehood.

    Are you suggesting a functional problem with my mind, or my brain? 😛

    Ian Wardell: Consciousness necessarily plays a causal role in the brain. It is incoherent to suppose otherwise. Read 2 of my blog entries: …

    Thanks for the links. With all due respect, there isn’t anything there that I haven’t read at length in the Stanford Encyclopaedia of Philosophy or elsewhere, nor anything that any number of working philosophers haven’t grappled with at length over the last few thousand years. And while I think philosophy of mind is an interesting field, their discussions are deep, subtle and extensive (I think the many diverging schools of dualists and materialists have interesting points to make, mostly on the epistemology specific to the field of philosophy). And sadly this gets filtered down into people on the internet insulting other people who don’t belong to their tribe. I actually have soft spots for several dualists, and even the idealist Bishop Berkeley (mostly for his wonderful work on vision). But honestly, David Hume gets a lot of this right: if your metaphysics gets in the way of your ability to utilise empirical tools for understanding (which generates new knowledge as science does), time to dump those metaphysics.

    Ian Wardell: Sighs. Why is it that seemingly all scientists are so philosophically clueless?? :O

    Or they just may not share the view/belief of your preferred branch of philosophy… You have no idea what my [meta]physical beliefs may or may not be. And you simultaneously dismiss a large body of philosophers who also don’t share your view; I’m sure you can come up with some rationalisation of why they are all mistaken while you have found the enlightenment of reason in a blog post of a priori propositional arguments.

    Ian Wardell: Any activity in the brain is merely the neural correlates of decisions, not the decisions themselves.

    My turn to play categorical. Your superficial name-checking of Benjamin Libet totally underplays a huge body of literature you don’t appear to know about (or your metaphysical priors colour your ability to reason over it). You use a tedious "merely" correlation≠causation as some shining sword of logical inviolability. First off, "mere" correlation has worked spectacularly well for scientists (i.e. using the weight of evidence to infer correctly, but I really don’t want to waste my life there). Neuroscientists are not just charting correlations, they are causally manipulating detailed aspects of brain function. I suggested above that people should go and read what a neuroscientist like Tobias Bonhoeffer actually does. These neuroscientists image hundreds of dendritic spines using two-photon imaging in the brains of awake mice performing tasks. They track those spines over time, and see how latent spine production predicts future learning (i.e. the correlation strongly suggests causation because it is predictive of a future behavioural observation). But then they physically ablate just those spines, and the mouse loses that latently learned task. This sort of experiment is becoming routine, and is applied across multiple domains that involve decisions, emotion/reward, memory, perception etc.
    Perhaps you will just keep pulling back your definition of causation (well, those ablated spines just correlated with the lost ability to remember), and if so, honestly, you are just lost in the dualism of the gaps. I was astounded by your blog post where you dismissed >100 years of detailed and fascinating research on colour perception as meaningless because you assumed the researchers were all materialists (or because you think only philosophy should study colour perception!!?!!). That is a flag to me that you are the one with the clear problem with metaphysics.
    Anyway, I will pull an alibi ad infans: my two-year-old is aware he needs playing with (mind, brain, whatever!) and that is more important than arguing on the internet. 😉

  88. TheTentacleson 03 Apr 2017 at 12:46 pm

    Oh, and for Steve and others, an interesting (if overlong) AI and medicine article published today:

    A.I. Versus M.D. – by Siddhartha Mukherjee

    http://www.newyorker.com/magazine/2017/04/03/ai-versus-md

  89. TheGorillaon 03 Apr 2017 at 3:45 pm

    I mean the idea that experimental evidence is specific evidence for materialism is just plain question begging. It’s not like (educated) non-materialists reject experimental data. The actual battle has to be philosophical, period — it’s metaphysics.

    The only reason a person could watch substances change consciousness and see it as some powerful evidence for materialism is by holding certain philosophical views — trotting that sort of thing out as an argument against dualism just demonstrates that those philosophical views are unexamined.

    Certainly various theories of mind can be tossed out based on research, but that’s a different animal.

  90. mumadaddon 03 Apr 2017 at 4:20 pm

    Gorilla,

    All the experimental evidence fits with mind = brain function, and anything in addition to this is sliced away by Occam’s razor.

    What functions of mind have been demonstrated in the absence of brain function? You have to look to low grade and unreliable ‘evidence’ to find any.

    My metaphysical position is a conclusion based on the evidence, not a prior assumption, and I won’t let difficulties conceptualising consciousness sway me away from what all of the reliable evidence suggests.

  91. Ian Wardellon 03 Apr 2017 at 5:13 pm

    TheTentacles
    “[N]euroscientists image hundreds of dendritic spines using two-photon imaging in the brains of awake mice performing tasks. They track those spines over time, and see how latent spine production predicts future learning (i.e. the correlation strongly suggests causation because it is predictive of a future behavioural observation). But then they physically ablate just those spines, and the mouse loses that latently learned task. This sort of experiment is becoming routine, and is applied across multiple domains that involve decisions, emotion/reward, memory, perception etc.”

    Dendritic spines? No idea what they are. But you think they therefore must somehow *produce* learning ability, memories and whatever? All we know is that they might be necessary for the *expression* of various mental capacities. That is to say a learning capacity, perceptions, memories might all still exist but require a functioning brain in the relevant areas for their expression.

    Consider the following fictional cautionary tale:

    Once upon a time, there was an investigator who wished to find the locus of the
    organs of hearing of fleas. He laboriously trained a flea to jump whenever he
    uttered the word “jump.” He then carefully analyzed his flea’s anatomy to find
    where its ears might be located. He would say “jump,” and observe a jump as an
    indicator that the flea had, indeed, heard him. He removed flea leg after flea leg,
    and the flea continued to jump whenever he commanded. When, finally, the flea
    did not jump, once he had removed the flea’s final leg, he concluded that the flea’s
    ears were located on that last leg, because, obviously, the flea had not heard his
    last jump command.

    This is the mistake you and other materialists make.

    TheTentacles
    “I was astounded on your blog post where you dismissed >100 years of detailed and fascinating research on colour perception as meaningless because you assumed they were all materialists”.

    I’m not sure where you’re getting that from? I never dismiss any scientific research that has been repeated by other researchers; I merely question what people might surmise from such research. If you’re suggesting that colours don’t really exist in the external world, then I submit this is nonsense (as well as depressing!). Research on colour perception is wholly irrelevant in this regard. You have to go back to the birth of modern science in the 17th Century. They *stipulated* material reality (i.e. the external world) as being wholly devoid of any qualitative aspects, and placed such aspects into the mind. That’s handy for the success of science, but don’t make the mistake of supposing that such science describes the whole of reality! Nobody has ever discovered that colours do not exist in the external world; it was simply stipulated (same for sounds and smells).

    I note you haven’t addressed my previous post at all with my point that consciousness is necessarily causally efficacious. I want to stay focussed on that topic.

  92. BillyJoe7on 03 Apr 2017 at 5:46 pm

    TG,

    “I mean the idea that experimental evidence is specific evidence for materialism is just plain question begging”

    Only if you think the scientific method is incapable of discovering supernatural/immaterial/nonphysical phenomena. And if you think 400 years of natural explanations replacing supernatural “explanations” is not evidence in support of naturalism/materialism/physicalism…

    “It’s not like (educated) non-materialists reject experimental data”

    Actually most do, and they do so for ideological/philosophical reasons. And most of the rest are dragged kicking and screaming to accept the data, but are still motivated by their ideological/philosophical positions to twist, turn, and torture the data to somehow still support those positions.

    “The actual battle has to be philosophical, period — it’s metaphysics”

    Philosophy not based on the scientific evidence is dead weight. Period.

    “The only reason a person could watch substances change consciousness and see it as some powerful evidence for materialism is by holding certain philophical views — trotting that sort of thing out as an argument against dualism just demondtrates that those philosophical views are unexamined”

    Of course it’s evidence for naturalism/materialism/physicalism, or, if you prefer, evidence for monism.
    But, no, it’s not evidence against dualism, just 400 years of complete lack of evidence for dualism. That you don’t think so just demonstrates that you have twisted, turned, and tortured the scientific evidence to support your particular evidence-free philosophical views.

  93. RickJohn57on 03 Apr 2017 at 6:07 pm

    The brain is just a place, … a place where the mind hangs out, mostly. The mind extends further, to the influences of the body and the perceived environment. The mind is the sum total of all its influences, past, present, and future (if you think about it).

    A machine’s self-awareness is not important; it will be whatever and whenever we want it to be. Humans will always strive to create a more human-like machine, and are succeeding today in part. There will be no point of “singularity”. We will develop and refine mind-like behaviors in machines, constantly making small improvements. We will be forced to continually reform our definition of what “alive” is.

    But as the SGU points out, we have to survive this century intact. Our ability to re-engineer ourselves with existing technologies like CRISPR, and global warming, are far more immediate situations. The problems that will arise could threaten the stability needed to sustain rational advanced science.

  94. Pete Aon 03 Apr 2017 at 6:27 pm

    “Philosophy not based on the scientific evidence is dead weight. Period.”

    As clearly demonstrated over the years by Ian Wardell, both in his comments on NeuroLogica Blog and in his incessant circular references to his own blog, which never seems to be updated, or to have articles redacted, in the light of new evidence — especially the new evidence provided by 21st Century philosophers and/or cognitive scientists!

  95. Pete Aon 03 Apr 2017 at 6:30 pm

    Apologies for my typo: “21st Century” should’ve been “21st-century”.

  96. bachfiendon 03 Apr 2017 at 6:45 pm

    Ian,

    Colours undoubtedly exist in the real external world (or at least there are photons of varying energy or waves of varying wavelength corresponding to different colours in the real world).

    But it’s well known that the perception of colour is an illusion produced by brains. The human brain produces the illusion of colour in its manufactured picture of the external world. That the brain produces a manufactured, not real, picture of the external world is due to the fact that the human retina has really good image-acquiring abilities only in the central foveola (with its high concentration of cones), which corresponds to the size of a thumbnail with the thumb held at arm’s length in the visual field. The rest is just pasted in by the brain.

    That colour is an illusion produced by brains was demonstrated by the blue and black or gold and white dress illusion. Well, what was the colour of the dress in the real world, if the perception of colour is real?

    Can you really claim that it’s definite that ‘when we think something through’ that ‘consciousness is necessarily causally efficacious’? It’s possible that whenever you appear to be trying to decide between two choices and come up with conscious reasons for picking one over the other, all your conscious mind is doing is rationalising a decision part of your unconscious mind has already made for different unconscious reasons.

    For your example of trying to decide whether to study philosophy or physics – you might decide to study philosophy for the very good conscious reasons that you enjoy philosophy more than physics and that you have extreme difficulty counting above 10 if you have your shoes and socks on (so your mathematics skills really suck), but the real unconscious reason is that you want the glamour, big bucks and adoration of the hot philosophy groupies in a future philosophy career (and just because the unconscious reasons are wrong doesn’t stop them being the ones employed to decide).

  97. hardnoseon 03 Apr 2017 at 7:03 pm

    “Severe brain injuries can have drastic impact on a person’s personality, irrevocably altering it.”

    Alcohol can have a drastic impact on a person’s personality, temporarily altering it.

    We know that the brain’s condition affects the mind. We also know that the body’s condition affects the mind. A person who is in pain, for example, will have a different personality than someone who feels good.

    The mind affects the brain, and the body. The brain, and the body, affect the mind.

    A severe brain injury would interfere with how a person experiences the world. How we experience the world influences how we feel and how we act.

    It is a typical materialist over-simplification to say that the condition of the brain determines the condition of the mind. It is a two-way interaction.

  98. hardnoseon 03 Apr 2017 at 7:09 pm

    https://www.theatlantic.com/science/archive/2016/11/quantum-brain/506768/

  99. bachfiendon 03 Apr 2017 at 7:28 pm

    Hardnose,

    You’re begging the question in assuming that the mind is something real that can affect the brain. And vice versa.

    It’s a very respectable philosophical viewpoint that the mind doesn’t exist (or rather that it’s just the brain). I personally agree – the brain is the mind and the mind is the brain. There’s a conscious brain (a conscious mind) and an unconscious brain (and an unconscious mind).

    You still haven’t addressed the question as to what the split brain phenomenon means for a non-materialist view of the mind.

    I don’t have any difficulty with disorders of the brain affecting the mind because they’re the same thing.

  100. chikoppion 03 Apr 2017 at 8:08 pm

    [hardnose] We know that the brain’s condition affects the mind. We also know that the body’s condition affects the mind. A person who is in pain, for example, will have a different personality than someone who feels good.

    Here’s a good example of what I’m talking about. There is an inherent assumption that the “mind” is a distinct entity from the “brain.” There is no basis for this premise (again, “materialism” has nothing to do with it).

    I can make the same assertions as above by stating, “the condition of the brain and body IS THE MIND.” There is no need to invent a separate entity. Doing so only introduces more unknowns without any need or evidence.

    I agree that invasively altering the physical condition of the brain alters brain function, including cognitive processing, memory, emotional status, etc., all those subjective awareness categories that are usually assigned under the term “mind.”

    I see no evidence that “mind” alters the brain. How? Isn’t the “mind” supposedly a non-physical and non-observable entity? If the “mind” can physically alter the brain then this “mind” must interact with the physical world and should be objectively detectable. Where is it? All we see is the brain, the function of which consistently correlates with subjective and reported awareness – even when that functioning is externally and arbitrarily manipulated.

    “Mind” and “brain” are two terms that refer to a subset of concepts about the observable world. Can anyone demonstrate that there are in fact two entities at play and that the two terms don’t merely refer to different qualitative aspects of the same entity? Because so far that’s what all the objective evidence indicates.

  101. hardnoseon 03 Apr 2017 at 8:18 pm

    https://www.theatlantic.com/science/archive/2016/11/quantum-brain/506768/

    Read the article. Quantum woo is the future of neuroscience.

  102. Ian Wardellon 03 Apr 2017 at 8:28 pm

    chikoppi
    “If the “mind” can physically alter the brain then this “mind” must interact with the physical world and should be objectively detectable”.

    I have no idea why. Anyway, you’re wrong. It’s not objectively detectable, yet we know it influences physical reality.

  103. chikoppion 03 Apr 2017 at 9:04 pm

    [Ian Wardell] I have no idea why. Anyway, you’re wrong. It’s not objectively detectable, yet we know it influences physical reality.

    Interaction with the physical world is what makes something detectable. We can’t “see” the electromagnetic field, but because it interacts with things we can see, it can be detected.

    If the “mind,” as a supposed non-physical entity, interacts with the physical and chemical components of the brain, then it can be detected in the same way electromagnetic fields can be detected.

    In other words, you can’t have it both ways. Either the “mind,” as a supposed non-physical entity, does interact with the physical world or it doesn’t. If it does, that interaction can be objectively observed.

  104. TheGorillaon 03 Apr 2017 at 9:21 pm

    Wow! My point is proven. Let’s see:

    (1) no serious dualist denies the mind brain relationship or thinks we’ve observed a mind without a brain. Bringing this up is arguing against a strawman.

    (2) the razor only matters if materialism is in fact capable of accounting for qualia, but that’s literally the issue at play: the whole point is that materialism has challenges with subjective experience. So to invoke it is, as I suggested, question begging.

    BillyJoe,

    I’m trying to follow a rule of ignoring you, but I figured I’d tell you to save your time. Don’t bother reading or responding to me. The same advice I always give you: I would recommend reading about a field before holding such strong opinions, but you have no willingness. This isn’t a matter of disagreement — I’m an atheist and not a dualist — but a matter of you being objectively uninformed about the topic itself. Anyways, same rule I apply to hardnose.

  105. bachfiendon 03 Apr 2017 at 9:39 pm

    Hardnose,

    ‘Quantum woo is the future of neuroscience’.

    No, it isn’t. Just because a single physicist publishes a paper suggesting that quantum effects are important in the functioning of the brain, it doesn’t mean that neuroscientists are going to accept it without considerable argument. And it certainly doesn’t explain consciousness (as the title of the article states).

  106. BillyJoe7on 04 Apr 2017 at 12:24 am

    TG,

    “I’m trying to follow a rule of ignoring you”
    As with everything else, you’re failing.

    “but I figured id tell you to save your time”
    Thanks, but I’ll decide how to spend my time.

    “Don’t bother reading or responding to me”
    Thanks again, but I’ll decide what to read and respond to.

    “I would recommend reading about a field before holding such strong opinions”
    I would recommend answering the challenges to your hardnosian pontifications.

    “This isn’t a matter of disagreement”
    I disagree.

    “I’m atheist and not a dualist”
    Then stop talking like one.

  107. mumadaddon 04 Apr 2017 at 4:07 am

    Gorilla,

    Let me elaborate. Take the two propositions:

    1. Consciousness arises from brain function
    2. Consciousness arises from brain function PLUS something else

    Brains are made of the same ‘stuff’ as everything else that we understand.
    Brains can be understood physiologically / anatomically (not invented specifically to explain brains).
    Brain function can be understood electro-chemically (not invented to explain brain function).
    Brain function does not break the laws of physics.
    Brains came about by known processes (evolution).
    We can manipulate the brain to produce effects in consciousness, reliably and predictably.

    Now add the something else…

    What is it – a force, a field, a substance?
    What is it made of?
    Where is it located?
    What does this stuff do except for interact with brains to cause consciousness?
    How did it arise?
    How does it interact with the brain or vice versa?

    It seems the answer to all these questions is a big shrug — but it must be there because subjectivity is conceptually tricky. When you have to invoke something which is undetectable, would require new physics or a complete redefinition of reality, doesn’t do anything except where consciousness is concerned, has no process of origin, and has no mechanism of action, you are effectively saying, “magic!”.

    So, if you look at the number and magnitude of new entities required for the ‘something else’, vs the idea that consciousness arises from brain function (though we don’t yet understand how), Occam’s razor clearly favours the latter explanation.

  108. mumadaddon 04 Apr 2017 at 5:56 am

    I realise that I haven’t addressed property dualism. I’m not sure I understand it properly — to me, stating that mental properties are separate from physical properties doesn’t seem incompatible with subjective experience being produced by brain function only. There is probably some deeper distinction between ‘physical’ and ‘mental’ properties that I’m missing.

  109. TheTentacleson 04 Apr 2017 at 6:50 am

    Dendritic spines? No idea what they are. But you think they therefore must somehow produce learning ability, memories and whatever?

    As expected you just keep moving your goalposts as you defend smaller and smaller gaps. This just demonstrates the pointlessness of following a metaphysics at all costs. Working scientists fruitfully use their understanding of dendrites to develop useful drugs that will help millions of people with cognitive decline, while armchair philosophers ignorant of the world they live in buff their egos pushing ever more absurd toy epistemologies.

    Consider the following fictional cautionary tale

    I’ll just invoke Dennett’s hammer on this trivial tale. Again, you keep attributing to me things that I simply have not stated.

    I’m not sure where you’re getting that from?

    From the comments section of your post "Are Perceptual illusions always necessarily illusions?", where a cognitive neuroscientist who studies vision was trying to educate you. You can submit all you want; you are wrong. Colour is a perceptual phenomenon in which a limited number of wavelength-bandpass sensors are combined in complex antagonistic circuits, together with lots of Bayesian prior experience, within wet visual systems. Bringing up Bishop Berkeley is tangential to this point (and I don’t know why you think scientists support his view, though I am a fan of his two treatises on vision, even if he was wrong). No one has said this describes the whole of reality; not sure why you are tilting at this windmill, to be honest. Prove that mauve surrounded by an orange background illuminated with moonlight exists uniquely in the world and you will win a Nobel prize. Actually, scratch that: I predict your response will be practically irrelevant.
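    (For anyone curious what “antagonistic circuits” means mechanically, here is a textbook-style toy of my own, with made-up numbers: classic opponent channels computed from long/medium/short-wavelength cone responses. Real visual systems then combine such signals with surround context and prior experience downstream, which is where percepts like the dress come from.)

    ```python
    # Toy opponent-channel computation from cone responses (illustrative only;
    # the weights and inputs are made up, not physiological measurements).

    def opponent_channels(L, M, S):
        """Combine L/M/S cone responses antagonistically, textbook-style."""
        red_green   = L - M                # L vs M opponency
        blue_yellow = S - (L + M) / 2.0    # S vs (L+M) opponency
        luminance   = L + M                # non-opponent luminance signal
        return red_green, blue_yellow, luminance

    print(opponent_channels(0.8, 0.4, 0.1))   # long-wavelength-dominated input
    print(opponent_channels(0.3, 0.3, 0.7))   # short-wavelength-dominated input
    ```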

    I note you haven’t addressed my previous post …

    I don’t know what to say. Your thesis depends on your own simplistic understanding of consciousness and presupposes logical conditions I don’t agree with. Your points are much better made by working philosophers, and their refutations by other working philosophers, and I get more out of reading philosophy papers than arguing metaphysics on the internet. Your position is unconvincing. This post and discussion was actually about AI, not your reductive toy epistemologies, which seem to overcome any other ability to reflect on this topic.

  110. BillyJoe7on 04 Apr 2017 at 8:31 am

    TT,

    “Are Perceptual illusions always necessarily illusions?”

    Yeah his take on the checkerboard illusion. 😀

    You need go no further than this post on his blog to see how completely clueless this guy is. He has been slaughtered time and again on this blog about that same entry on his blog, but he simply refuses to correct even a single word, even though the whole post deserves to be thrown in the trash can.

  111. Ian Wardellon 04 Apr 2017 at 9:48 am

    TheTentacles, you’re not addressing any of the questions I’m asking you, but simply going off on a tangent. And why the heck you’re talking about Berkeley is beyond me. I’ve never mentioned him.

    Regarding perceptual illusions. Are you talking about Steve who alleges he is a “cognitive neuroscientist”? The one who says “Try digging into a textbook on visual color perception”, when the issue has nothing to do with science but is a philosophical one? In other words, someone who has as much difficulty comprehending as you do? If you look at the comments below that blog entry, I respond to that particular “anonymous” in full.

    He says that I make a wild unsubstantiated speculation when I say “Indeed if someone claimed to see the squares as being precisely the same colour, then it is doubtful that he could proficiently visually apprehend his environment.” Er… you either get this or you don’t. I suppose that if someone were blind from birth, but regained his vision, that would be one way of finding out. Steven the neuroscientist would maintain he would be able to see perfectly. I maintain that Steven the neuroscientist is utterly clueless. He might have perfect vision, but initially he’d have a great deal of difficulty apprehending what he is actually seeing.

    And I’m not sure what this “Steven” is maintaining. That the checker-shadow illusion is a real illusion? Fine, but there again this would then entail that everything we ever perceive through the 5 senses is also an illusion. The problem here is that the word “illusion” just becomes redundant. Moreover, we then have no way of distinguishing between when we see truly, and when our perceptions are mistaken.

    I’m sick to death of arguing about this checker-shadow illusion on here. It’ll only start Billie-Joe off with his asinine clueless comments on this subject. And I note you keep deliberately changing the subject so you don’t have to address my proof that consciousness is necessarily causally efficacious.

  112. Ian Wardellon 04 Apr 2017 at 9:52 am

    BillyJoe:
    “I’m sick to death of arguing about this checker-shadow illusion on here. It’ll only start Billie-Joe off with his asinine clueless comments on this subject”.

    Yep, I just noticed that he has. No matter how much I explain this to him, he is simply incapable of understanding. He even keeps linking to this youtube video which looks like this “checkerboard illusion”, but is just trickery with the shadow painted on! I just completely give up with the guy, and I haven’t come on here to discuss that so-called “illusion” again. I’m off.

  113. mumadaddon 04 Apr 2017 at 9:56 am

    “I’m off”

    Again!

    Never to return… until next time.

    That was Ian repeating his classic storming off in an indignation huff, everyone.

  114. Ian Wardellon 04 Apr 2017 at 11:02 am

    I think you’ll find that I normally go when the subject turns to this “checker-shadow illusion”, which it persistently does on this blog regardless of the previous topic of conversation. There are certain people on here who seemingly have an obsession with it. I now have been arguing with BillyJoe on whether this is correctly called an “illusion” for around 15 years now, starting on the James Randi board. He, and they, and you lot have this entrenched position that it is correctly labelled an illusion, and I think that is definitely a misuse of the word “illusion”. I have been patiently explaining for around 15 years *and still not one damn materialist gets it*!! It’s not even as if it has anything to do with materialism per se. It stems from the fact they tend to be scientists and hence misuse the word “colour” to refer to wavelengths of light reflected rather than the proper use of the word colour to label a particular experience.

    I wouldn’t go if the argument was about psi, life after death, materialism etc. But I’ve had my damn fill of this perceptual illusion thing.

  115. Pete Aon 04 Apr 2017 at 11:20 am

    “It stems from the fact they tend to be scientists and hence misuse the word “colour” to refer to wavelengths of light reflected rather than the proper use of the word colour to label a particular experience.”

    I’ve told you more than once that scientists do not misuse the word colour in the way you claim they do. I’ve asked you before: What is the wavelength of magenta?

  116. TheTentacleson 04 Apr 2017 at 11:32 am

    I’m obviously naïve to the ebb and flow of this comments section, but I’m learning quickly.

    @mumadadd (& Gorilla): thank you for what have been some balanced and thoughtful posts. Don’t forget that there is a distinction between property dualism and substance dualism too 😉

    I don’t know whether anyone has read the work by the philosopher Galen Strawson? He wrote a wonderful book (I had trouble following it all the way through, but it was stimulating nevertheless) called Selves where he arrives at a quite radical conception of the self. I bought it because of the review by Thomas Nagel (anti-materialist yet atheist and radical skeptical philosopher, famed for “What Is It Like to Be a Bat?”, which crops up in almost every collection of essays about consciousness) which was very complimentary in a way only a philosopher could be 🙂

    https://www.lrb.co.uk/v31/n21/thomas-nagel/the-i-in-me

    Anyway, Strawson wrote a nice pithy provocation for materialists and dualists in the NY Times last year:

    https://www.nytimes.com/2016/05/16/opinion/consciousness-isnt-a-mystery-its-matter.html

    He inverts the usual set of predicates everyone squabbles over, and I am quite sympathetic to this line of provocation. Consciousness is not hard, or mysterious, it is matter that is mysterious. He uses this inversion to beat the eliminativist fringe of materialists and dualists (kind unspecified) with the same stick. Anyway food for thought.

  117. TheTentacleson 04 Apr 2017 at 11:50 am

    Actually, thinking about Strawson’s Matter Mystery and Nagel’s anti-materialism, they are somewhat related. As I understand it, Nagel’s argument against materialism is that it is not a complete description, not that it can never be a complete description. I suspect Nagel would buy into Strawson’s perspective: matter and subjectivity don’t match *yet*, but that is due to weaknesses in our understanding of matter.

  118. chikoppion 04 Apr 2017 at 12:33 pm

    [TheTentacles] Anyway, Strawson wrote a nice pithy provocation for materialists and dualists in the NY Times last year:

    Good stuff!

  119. Ian Wardellon 04 Apr 2017 at 1:02 pm

    Tentacles, here is a relevant article about free will and Libet’s experiments. The problem is that materialists and scientists seem to have an asinine conception of free will — namely that one has free will if their behaviour is not predictable.

    http://www.irishtimes.com/culture/can-science-ever-tell-us-whether-free-will-exists-1.3029041

  120. Ian Wardellon 04 Apr 2017 at 1:22 pm

    What a load of cr@p! It won’t allow me to comment unless I pay £10 a month for a subscription! I tried to post the following:

    Contrary to Markus Schlosser’s position, it seems to me that free will *does* require a non-physical self, or at least non-physical consciousness. If materialism is correct, then everything can, in principle, be explained in terms of physical chains of causes and effects. Our conscious decisions would then be causally inert.

    Of course some materialists maintain that our mental activity, including our reasoning processes, are *literally identical* to physical processes in the brain. If a train of thought is *literally identical* to some physical processes, and these physical processes have causal powers, then it necessarily follows that the train of thought itself has causal powers too. And that might seem to suggest we have free will (if by free will we’re simply invoking a causally efficacious consciousness).

    I beg to differ.

    Let’s suppose that in the brain we have a physical causal chain:

    1. A → B → C → D → E

    And we have a mental chain representing a chain of reasoning:

    2. a → b → c → d → e

    Now, of course, the materialist claims that “A” is identical to “a”, “B” is identical to “b”, etc.

    But nevertheless, we have *2 different accounts* of how A/a progresses to E/e. In “1” we have the interactions of molecules as mathematically described by the laws of physics. In “2” we have a train of reasoning which, when we attain an understanding of something, will have involved rational connections between thoughts.

    Now, if materialism/physicalism is true, then everything can, in principle, be explained in terms of the physical as exemplified in “1”. Account “2” is simply not required, since physical laws, which describe physical processes, make no reference to reasoning, nor indeed do they make any reference to intentions, desires, plans, or any other aspect of consciousness. Indeed, reasoning only comes into the picture for a vanishingly small part of the world; namely brain processes, and a minority of brain processes at that. And it is held by materialists that physical laws provide a sufficient explanation for this minority of brain processes just as much as they provide a sufficient explanation for the rest of the Universe.

    But it then follows that reasoning something through is causally irrelevant. Hence identifying reasoning, and the rest of our mental life, with physical processes, doesn’t allow us to escape an epiphenomenalist position. Oh yes, and I also regard this as a reductio ad absurdum of materialism.

  121. chikoppion 04 Apr 2017 at 2:22 pm

    “I don’t like the idea of cognition being deterministic. Therefore, the mind must be a non-physical entity not subject to the laws of physics – because that’s the only way I can think of to preserve free will. Further, I declare by fiat that the interaction of the non-physical mind with the physical world somehow cannot be detected or observed (thereby guarding this invention from falsifiability).”

    How is this not an argument from incredulity or an argument from consequence?

    We don’t yet know whether sufficiently complex yet wholly “physical” systems are capable of non-deterministic processing. The question of determinism does not necessitate that the mind exist as a distinct entity separate from the brain. Even if it did, you’d still have to prove non-deterministic free will exists before you could use it as a premise for an argument.

  122. Ian Wardellon 04 Apr 2017 at 2:43 pm

    Ah, I jus’ luv these intelligent thoughtful responses . .

  123. chikoppion 04 Apr 2017 at 3:10 pm

    @Ian Wardell

    Your premises are false:

    1) We don’t know if non-deterministic outcomes require a non-physical system.
    2) We don’t know if thought is non-deterministic.

    Your reasoning is flawed:

    1) Argument from consequence. The desire to escape the potential that thought is deterministic is not sufficient cause to assert the existence of properties or entities without evidence.

    2) Argument from incredulity. The fact you can’t think of any other way to make your preferred understanding of consciousness work is not evidence that your assertions have any basis in reality.

  124. Ian Wardellon 04 Apr 2017 at 3:41 pm

    chikoppi, I don’t know what either “deterministic” or “non-deterministic” means. Nor indeed have I ever heard a satisfactory definition. It hinges on whether physical laws *coerce* or merely describe reality. So I certainly wouldn’t use such a term.

  125. Ian Wardellon 04 Apr 2017 at 3:44 pm

    “Your premises are false:

    1) We don’t know if non-deterministic outcomes require a non-physical system.
    2) We don’t know if thought is non-deterministic”.

    Is anyone able to make any sense of this at all?? All these “non”s.

  126. Ian Wardellon 04 Apr 2017 at 3:47 pm

    It’s not remotely related to anything I’ve said, that’s for sure.

  127. mumadaddon 04 Apr 2017 at 4:03 pm

    Ian,

    “I don’t know what either “deterministic” or “non-deterministic” means.”

    Seriously?!

    “But it then follows that reasoning something through is causally irrelevant.” Are your thoughts part of an unbroken causal chain of events (deterministic), or is your ability to ‘reason’ somehow free from this deterministic causation (non-deterministic)?

    “Is anyone able to make any sense of this at all?? All these none’s.”

    Yes.

  128. Pete Aon 04 Apr 2017 at 4:09 pm

    Ian Wardell,

    Endlessly regurgitating the screeds on your blog does not increase the validity of your screeds. Your comments demonstrate only that — for at least 15 years — it is you who abjectly refuses to change your beliefs in the light of new evidence.

    You wrote above: “I now have been arguing with BillyJoe on whether this is correctly called an ‘illusion’ for around 15 years now, starting on the James Randi board.”

    I’m guessing that it will take you far longer than 15 years to answer my question: What is the wavelength of magenta?

  129. Ian Wardellon 04 Apr 2017 at 4:12 pm

    mumadadd, does determinism = an unbroken causal chain of events refer to physical events only, or non-physical events too?

    If determinism includes the latter, so that my thoughts feelings, decisions etc, determine my behaviour, how is this opposed to free will?

  130. mumadaddon 04 Apr 2017 at 4:12 pm

    Ian,

    Physical systems are deterministic* (at the macroscopic level). Therefore if there is no non-physical component to thought, thoughts are deterministic and so is your ‘reasoning’ your way through a problem to a conclusion. You appear to be arguing that because reasoning is clearly non-deterministic, there must be a non-physical component to thought.

    * For the sake of argument

  131. mumadaddon 04 Apr 2017 at 4:14 pm

    “If determinism includes the latter, so that my thoughts feelings, decisions etc, determine my behaviour, how is this opposed to free will?”

    Because you could never do anything other than what you do.

  132. Ian Wardellon 04 Apr 2017 at 4:14 pm

    I wrote my most recent reply simultaneously with mumadadd’s response, so he’s not responding to it.

  133. Ian Wardellon 04 Apr 2017 at 4:16 pm

    mumadadd
    “Because you could never do anything other than what you do”.

    That in no way contradicts free will. See my blog entry:

    http://ian-wardell.blogspot.co.uk/2014/05/free-will-and-notion-of-could-have.html

  134. mumadaddon 04 Apr 2017 at 4:20 pm

    Ian,

    I accept that there are other definitions of free will but have never found any of them particularly compelling. I shall decline your offer to see your blog. 🙂

  135. Pete Aon 04 Apr 2017 at 4:35 pm

    Ian Wardell,

    Your endless repetition, on this website, of “see my blog” has a very long history of not working in your favour — to put it as mildly as possible!

  136. chikoppion 04 Apr 2017 at 5:11 pm

    @Ian Wardell

    I’m happy to amend my comment to facilitate your eventual substantive reply.

    Your premises are false:

    1) We don’t know that a physical system is incapable of producing free will.
    2) We don’t know that the phenomena of consciousness are incompatible with the absence of free will.

    Your reasoning is flawed:

    1) Argument from consequence. The desire to escape the potential that consciousness is strictly determined according to physical laws is not sufficient cause to assert the existence of properties or entities without evidence.

    2) Argument from incredulity. The fact you can’t think of any other way to make your preferred understanding of consciousness work is not evidence that your assertions have any basis in reality.

    So yeah, “A → B → C” might be all there is. Given that there’s a lot we don’t know about physical laws, that may very well be sufficient for both the phenomena of consciousness and the potential for free will. Even if it isn’t, facts about reality require independent and objective evidence, not just assertion based on our limited understanding or imagination.

    There’s a parallel here with Dark Matter.

    Galaxies don’t behave according to our understanding of physical laws. Astronomers and physicists put a pin in that observation and started looking for an answer. There are many potential hypotheses that could explain the phenomenon, but the pin remains because none have been verified.

    What no one did was to assert that the answer must be an inscrutable force that exists outside of physical reality for the benefit of excusing our limited understanding.

  137. bachfiendon 04 Apr 2017 at 5:14 pm

    Ian,

    The brain is in the business of producing illusions. Almost always very useful illusions, and often reliable illusions, but they’re still illusions.

    The prime example is vision. The eyes don’t provide full information regarding the surroundings to the brain. Fine vision is restricted to the central foveola of the retina with its high concentration of colour sensitive cones, which corresponds to the area of the nail of the thumb with the hand held at arm’s length in the visual field.

    The rest of the retina (and visual field) is served by colour blind rods which are sensitive to little more than movement.

    And the brain takes this very poor input and turns it into an illusion of high quality vision, also filling in the blind spot. It’s an incredible illusion, very close to a hallucination (because the brain is putting in details that it’s not getting from the eyes). And if the brain isn’t getting anything from part of the retina, then it will produce hallucinations (the Charles Bonnet Syndrome – visual release hallucinations).

    The brain produces other illusions too. Such as the one of a conscious mind with free will, which is making all the decisions.

    The brain is the mind, and the mind is the brain. They’re equivalent. Thoughts in the mind are just physical processes within the brain. There’s no evidence that they’re anything else.

    What is free will? There’s 4 possibilities. Is it making decisions that are conscious and caused? Conscious and uncaused? Unconscious and uncaused? Or unconscious and caused?

    Obviously free will would have to be conscious uncaused decision making. Conscious so the person is deliberately making a choice. Uncaused (not determined by the person’s genetics, previous experiences and current circumstances, otherwise an observer with perfect knowledge would be able to predict with 100% accuracy the person’s choice and as a result he couldn’t have done anything else).

    But the evidence is that decision making is unconscious and caused, and there’s no free will.
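    The four combinations listed above can be spelled out mechanically. A minimal sketch in Python that simply enumerates them and flags the one combination that, on the criterion stated in this comment, could count as free will (this encodes that stated criterion only, not a settled definition):

        # Enumerate conscious/unconscious x caused/uncaused decision making and
        # mark the combination said above to be required for free will:
        # a decision that is both conscious and uncaused.
        from itertools import product

        for conscious, caused in product((True, False), repeat=2):
            free_will_possible = conscious and not caused
            print(f"conscious={conscious!s:<5}  caused={caused!s:<5}  "
                  f"free will possible: {free_will_possible}")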

  138. BillyJoe7on 04 Apr 2017 at 5:28 pm

    Regarding Ian Wardell and the checkerboard illusion.

    Just for the record, IW has never responded to my criticism of his account of this illusion.

  139. hardnoseon 04 Apr 2017 at 6:15 pm

    https://phys.org/news/2014-01-discovery-quantum-vibrations-microtubules-corroborates.html

  140. Ian Wardellon 04 Apr 2017 at 6:27 pm

    chikoppi, sorry, but I don’t agree with “my” premises.

  141. bachfiendon 04 Apr 2017 at 6:41 pm

    Hardnose,

    No, it doesn’t corroborate that microtubule quantum vibrations cause consciousness. All it means is that microtubule vibrations may (MAY) be involved in brain function. Nothing more. It’s all just conjecture.

  142. Pete Aon 04 Apr 2017 at 6:42 pm

    BillyJoe7,

    “Just for the record, IW has never responded to my criticism of his account of this illusion.”

    Ditto.

    Also, just for the record, Ian Wardell has never responded to my repeated question: What is the wavelength of magenta?

  143. chikoppion 04 Apr 2017 at 6:46 pm

    [Ian Wardell] chikoppi, sorry, but I don’t agree with “my” premises.

    Then you should take it up with this guy…

    [Ian Wardell] Contrary to Markus Schlosser’s position, it seems to me that free will *does* require a non-physical self, or at least non-physical consciousness. If materialism is correct, then everything can, in principle, be explained in terms of physical chains of causes and effects. Our conscious decisions would then be causally inert.

  144. Ian Wardellon 04 Apr 2017 at 6:57 pm

    The utter stupidities that people can be persuaded to believe in the name of science never ceases to amaze me.

  145. bachfiendon 04 Apr 2017 at 7:04 pm

    The utter stupidities that people (such as Ian Wardell and hardnose) can be persuaded to believe in, in contradiction to and in the absence of evidence, never ceases to amaze me.

  146. Pete Aon 04 Apr 2017 at 7:25 pm

    QUOTE [Wardell, Ian, Can consciousness be causally inefficacious?, retrieved 2017-04-04 UTC from his website]

    My very first blog entry “a logical proof that we all have free will” wasn’t greeted with a great deal of comprehension.

    When we maintain something has no causal efficacy, what we are saying is that it has no causal impact on its environment whatsoever. If some object — let’s say a rock — has no causal efficacy whatsoever this means that we wouldn’t be able to see it since no light could be reflected off it to enter our eyes.

    http://ian-wardell.blogspot.co.uk/2015/06/can-consciousness-be-causally.html
    END QUOTE

    I shall leave it to the readers to decide the bar that Ian Wardell set in the above for the definition of causal efficacy.

  147. bachfiendon 04 Apr 2017 at 8:54 pm

    Pete A,

    I liked it when Ian wrote in this thread: ‘Thus we have an apparent chain of thought a > b > c > d > e, but this is in fact an illusion’ (where a, b, c, d and e are mental events and > is a right-pointing arrow).

    Exactly correct. The brain is in the business of manufacturing illusions, usually very useful ones, and often reliable, but still illusions. Such as the illusion of having perfect highly detailed colour vision over the entire visual field. And the illusion of having free will. And the illusion of consciousness.

  148. mumadaddon 05 Apr 2017 at 5:11 am

    “The utter stupidities that people can be persuaded to believe in the name of science never ceases to amaze me.”

    Well, as we’re at this point in the Ian Wardell behaviour loop, I’ll what I’ve said before at this point:

    Ian, you could at this point acknowledge that your argument rests on unfounded premises, and either work to justify those premises or drop the argument. Just sayin’.

  149. mumadaddon 05 Apr 2017 at 5:13 am

    Typo: I’ll *repeat* what I’ve said before at this point:

  150. BillyJoe7on 05 Apr 2017 at 8:08 am

    Ian see-my-blog Wardell, who hilariously thinks he has proven that the 2D checkerboard illusion is not really an illusion by creating a 3D version using an ACTUAL checkerboard which….wait for it….has squares which ARE different colours! Amazing! 😀

  151. Pete Aon 05 Apr 2017 at 12:42 pm

    bachfiend,

    I shall add an item to your list: “Such as the illusion of having perfect highly detailed colour vision over the entire visual field. And the illusion of having free will. And the illusion of consciousness.”

    And the illusion of the self: me; myself; I.

    Bruce Hood (2012). The Self Illusion: Why there is no ‘you’ inside your head.

  152. Ian Wardellon 05 Apr 2017 at 2:36 pm

    Pete A
    “Bruce Hood (2012). The Self Illusion: Why there is no ‘you’ inside your head”.

    Materialists can’t believe in a self. Non-materialists have absolutely no reason to reject such a self.

    A blog entry by me might be of interest:

    http://ian-wardell.blogspot.co.uk/2014/02/does-self-as-opposed-to-mere-sense-of.html

  153. mumadaddon 05 Apr 2017 at 2:58 pm

    Ian,

    See my blog: https://en.m.wikipedia.org/wiki/Narcissistic_personality_disorder

  154. bachfiendon 05 Apr 2017 at 4:20 pm

    Ian,

    The brain is in the business of producing illusions by using materialist means with known physical structures and physicochemical processes. The illusions are very convincing. If they weren’t, they wouldn’t be illusions.

    What is the non-materialist explanation for the illusion that humans have rich high definition fully coloured vision right out to the edge of the visual fields, whereas the best information that the eyes can provide to the brain is a little high definition coloured vision in the centre of the visual fields, the rest being a monochromatic blur?
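    The acuity drop-off described here can be visualised with a toy “foveation” filter: keep a small central patch sharp and blur everything else, which roughly mimics what the retinal input offers before the brain fills it in. A minimal sketch in Python (the input file name, the blur radius, and the sizes of the sharp and transition regions are illustrative assumptions, not physiological values):

        # Blur everything outside a small central region to mimic the drop-off
        # in acuity away from the fovea. A visual aid only, not a retina model.
        import numpy as np
        from PIL import Image, ImageFilter

        img = Image.open("scene.jpg").convert("RGB")   # placeholder file name
        w, h = img.size
        yy, xx = np.mgrid[0:h, 0:w]
        # Radial distance from the image centre, normalised to roughly 0..1.
        r = np.hypot(xx - w / 2, yy - h / 2) / (0.5 * np.hypot(w, h))

        blurred = img.filter(ImageFilter.GaussianBlur(radius=8))
        # 0 inside the central "fovea" (keep the original), 1 = fully blurred.
        alpha = np.clip((r - 0.05) / 0.30, 0.0, 1.0)
        mask = Image.fromarray((alpha * 255).astype(np.uint8), mode="L")
        Image.composite(blurred, img, mask).save("scene_foveated.jpg")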

  155. Ian Wardellon 05 Apr 2017 at 5:12 pm

    Same as the materialist explanation I would guess?

  156. edamameon 05 Apr 2017 at 5:17 pm

    bachfiend, the view that consciousness is an illusion is nutty. See: binocular rivalry, anesthesia, etc. It’s pretty important for your anesthesiologist to know if you are conscious or not.

  157. bachfiendon 05 Apr 2017 at 5:49 pm

    Ian,

    Illusions are convincing. If they weren’t, they wouldn’t be illusions.

    Your non-materialist explanation for the illusion of perfect vision is lacking a little detail. You can’t deny the checkerboard illusion as an illusion, while accepting the much bigger illusion of perfect vision.

    Edamame,

    Anyway, what does binocular rivalry have to do with the illusion of consciousness? And anaesthetists are more concerned with whether their patients are storing memories that they can later retrieve consciously, and whether they’re feeling the illusion of pain, causing unwanted movement that interferes with the procedure.

  158. hardnoseon 05 Apr 2017 at 5:51 pm

    “The brain is in the business of producing illusions by using materialist means with known physical structures and physicochemical processes.”

    Plus quantum effects that no one understands.

  159. BillyJoe7on 05 Apr 2017 at 5:56 pm

    edamame,

    The “illusion of consciousness” does not mean “consciousness does not exist”. It means our intuition about consciousness is wrong. Bachfiend’s account is based on scientific evidence (as opposed to the evidence-free dead-weight philosophising of IW – had to get that in 😀 )

    Here’s Daniel Dennett on the “illusion of consciousness”:

    https://www.ted.com/talks/dan_dennett_on_our_consciousness/transcript?language=en

  160. hardnoseon 05 Apr 2017 at 6:03 pm

    https://en.wikipedia.org/wiki/Quantum_mind

  161. Ian Wardellon 05 Apr 2017 at 7:39 pm

    Daniel Dennett doesn’t believe in the existence of consciousness.

  162. morris39on 05 Apr 2017 at 8:17 pm

    Six days ago (about 120 posts prior) I asked a simple question (see below). There has not been any response. I am not presenting an argument, just asking. If the question can be easily answered or dismissed, why has it not been? I am inclined to conclude that the topic is not really serious, maybe just a game. If so, that does not speak well for this blog.

    “I have only a very superficial knowledge about AI from media/blogs such as this one. I do not understand how humans would communicate with some AI, assuming that it is orders of magnitude more intelligent than humans. Do humans have to pose extraordinarily intelligent queries to AI to obtain practical super intelligent answers? Is the human/dog intelligence difference a possible analog? Dogs are unable to pose intelligent questions in human terms. This question strikes me as fundamental. Am I the only one who does not get it? Dr. Novella?”

  163. bachfiendon 05 Apr 2017 at 8:57 pm

    Ian,

    I don’t know what Daniel Dennett believes in, but I believe that he (and I) believe in the illusion of consciousness.

    Hardnose,

    Quantum effects may be important in the function of the brain (the entire universe is quantum from top to bottom), but it’s entirely conjectural to claim that it explains consciousness.

    It’s not that no one understands quantum effects. The problem is that we lack a natural language which expresses them adequately.

  164. BillyJoe7on 05 Apr 2017 at 11:06 pm

    morris,
    I answered you.

    Ian Wardell,
    Stop lying about what Daniel Dennett believes.

  165. morris39on 05 Apr 2017 at 11:15 pm

    @billyJoe7
    Thanks. I took it as irony.

  166. chikoppion 06 Apr 2017 at 1:24 am

    @morris39

    I think the answer is likely “we don’t know.” Perhaps a super-intelligence would be more capable of interpreting and communicating comprehension than we can imagine. We might not be able to comprehend as it would, but that doesn’t mean we wouldn’t benefit from its ability to provide what is, for us, maximally comprehensible dialog.

  167. Pete Aon 06 Apr 2017 at 4:03 am

    My favourite illusion is the graphical user interface. It’s difficult to believe that there are no buttons on the screen; and very easy to believe that all of the controls physically exist.

  168. Ian Wardellon 06 Apr 2017 at 5:32 am

    Daniel Dennett wrote an article called “Quining Qualia”. He’s stated he’s a p-zombie. He’s been involved in furious arguments with John Searle about this very issue.

    I really am not interested in him though, or in what he believes. The guy is just a complete loon of the highest order. Same as the other eliminative materialists.

  169. Pete Aon 06 Apr 2017 at 5:48 am

    “The guy is just a complete loon of the highest order.”

    And you aren’t?

  170. BillyJoe7on 06 Apr 2017 at 8:22 am

    This guy’s just an ignorant fool….

    Ian: “Daniel Dennett wrote an article called “Quining Qualia”. He’s stated he’s a p-zombie.”

    He can’t stop lying about Daniel Dennett.
    Not only does Daniel Dennett not believe he is a p-zombie, he does not even believe that p-zombies exist or that it is possible for them to exist.
    I don’t always agree with him (perhaps so much the worse for me), but he at least bases his philosophy in science, and this always makes him worth reading and listening to – as opposed to the complete loon IW sees every time he looks in the mirror.

    https://plato.stanford.edu/entries/zombies/ (Stanford Encyclopedia of Philosophy)

    “Daniel Dennett thinks those who accept the conceivability of zombies have failed to imagine them thoroughly enough: ‘they invariably underestimate the task of conception (or imagination), and end up imagining something that violates their own definition’.”

    https://en.m.wikipedia.org/wiki/Philosophical_zombie

    “Daniel Dennett counter[s] that Chalmers’s physiological zombies are logically incoherent and thus impossible”

    http://consc.net/zombies.html (David Chalmers)

    “Of course even the logical possibility of zombies is controversial to some (e.g. Dennett [1995]), as conceivability intuitions are notoriously elusive”

    So that just gives you the measure of this guy…he is totally unfamiliar with Daniel Dennett and his ideas, but he is quite happy to dismiss him as a “complete loon of the highest order”.
    I think I’ve established who that epithet belongs to.

  171. edamameon 06 Apr 2017 at 10:13 am

    When you say consciousness is an illusion, you are at best being misleading, because by any reasonable interpretation you are saying it isn’t real (rather than the uncontroversial claim that consciousness is real but some of our intuitions about it are wrong).

    bachfiend I don’t want to be conscious during surgery because I don’t want to experience the pain of surgery while conscious, whether I remember experiencing said pain or not. You are pushing a lunatic fringe view, even among materialists, that consciousness doesn’t exist.

    Dennett is tricky, and frankly I’m not sure why we should get into hand-wringing matches about Dennett exegesis as if he is some voice of the materialist: he is not. He is very idiosyncratic as a philosopher, and not very neuroscientific in his thinking. Any dualist treating him as some necessary voice of materialism is full of it, and any materialist who thinks you have to think like Dennett has been seriously misled.

    There is a lot of good neuroscience of binocular rivalry and anesthesia, sleep/wake, illusions, hallucinations, etc.. E.g., Koch’s first book. Llinas, Edelman, Damasio, Crick, Gazzaniga, etc etc etc..

    For some reason Dr Novella and many people here take Dennett as some kind of hero of materialistic thinking about consciousness, which is frankly weird. This is a neurology blog, not philosophy. Where’s the bloody science? Dennett has said some really silly things, for instance about the blind spot, led by his ideology about consciousness, things that have been directly shown false by neuroscience. His book Consciousness Explained has some baldly weird claims that should make people with their thinking caps on, even with no neuroscience training, wonder WTF he is talking about (e.g., that there is no difference between the Stalinesque and Orwellian scenarios of misperception: this cornerstone of the book is ridiculous).

    Also see his articles “Quining Qualia” and heterophenomenology for the more eliminativist aspects of his work. At any rate, he has also written some good stuff. He is very inconsistent, not someone I would recommend as a folk hero of materialism.

    Paul Churchland is much better: he actually reads and understands the neuroscience and presents a neuroscientific theory of consciousness based on the data, not philosophical quasi-behavioristic perambulations. For instance, Engine of Reason, Seat of the Soul is much more grounded in neuroscience than anything Dennett has ever written.

  172. Pete Aon 06 Apr 2017 at 11:19 am

    edamame,

    The burden of proof for your assertions rests entirely with you.

    When I’m presented with robust evidence that something exists then I carefully consider that evidence and I’m willing to change my opinion. The existence of such things as qualia and the self is thus far based only on centuries of semantic filibustering combined with wilful obscurantism and logical fallacies.

  173. Ian Wardellon 06 Apr 2017 at 11:30 am

    BillyJoe7 he doesn’t believe in the existence of consciousness. Consciousness as in the proper definition of the word, not as materialists redefine it. No one ever experiences pain, the taste of coffee, greenness etc. If that doesn’t make him a loon, then I have no idea what would.

  174. Pete Aon 06 Apr 2017 at 11:45 am

    Ian see-my-blog Wardell,

    Either provide citations for your claims of what Dan Dennett has said, and what he believes, or kindly desist from your very tiresome, endless, p1ssing into the wind on this website.

    BillyJoe7 has made it blindingly obvious to the readers that you are, or you are rapidly becoming, a pathological liar.
    https://en.wikipedia.org/wiki/Pathological_lying

    I’m still waiting for your answer to my question: What is the wavelength of magenta?

  175. edamameon 06 Apr 2017 at 12:07 pm

    Pete A no, the burden rests on people making outlandish claims. The claim that we are conscious, that we have sensory experiences like feeling pains or seeing colors and the like, is not outlandish in the least. It would be absolutely outlandish to deny it. If you doubt that you are conscious, I recommend getting a surgery without local or general anesthetic to test your commitment to that claim.

    If you really want a serious discussion, I recommend starting with some research, for instance Koch’s The Quest for Consciousness, Churchland’s Engine of Reason, Seat of the Soul, or Baars’ Cognition, Brain, and Consciousness. Read up on blindsight, binocular rivalry, anesthesia, the neuropsychological underpinnings of hallucination, dreaming and wakefulness, etc. The lack of detailed discussion of such research at this blog is strange, as is the weird focus on Dennett, a philosopher with an ax to grind.

    If you are saying consciousness doesn’t exist, you are making an outlandish claim. The burden is on you to fight not just common sense but psychology and neuroscience. The levels of Dunning-Kruger on display are ridic. Reading Dennett doesn’t make one an expert on consciousness. I have seen it make people think they are experts. Dunning-Kruger.

  176. Pete Aon 06 Apr 2017 at 12:23 pm

    edamame,

    “I have seen it make people think they are experts. Dunning-Kruger.”

    Are you an expert in this field, or are you just someone who thinks that they are an expert?

  177. chikoppion 06 Apr 2017 at 12:26 pm

    I think language may be getting in the way of communication.

    There is “consciousness” as a more or less specific collection of phenomena related to perception. Then there is “consciousness” as a supposed non-physical entity that is not the product of brain function. It isn’t always clear how the term is being utilized.

    I’m not familiar enough with Dennett to know, but is it possible that he acknowledges the first usage while rejecting the second? In other words, that the perception of an “I” existing separate from the body is an illusory sensation?

  178. Pete Aon 06 Apr 2017 at 12:57 pm

    chikoppi,

    I think that you are fundamentally correct in all of the points in your comment.

    However, I would like to address your opening statement: “I think language may be getting in the way of communication.”

    I totally agree, but have you considered the possibility that proponents of philosophies that are neither based in solid empirical evidence, nor in falsifiable hypotheses, are simply bullshitting for the purposes of attempting to bolster their ideology, which they frequently and pathetically attempt by straw-manning the scientific method?

  179. edamameon 06 Apr 2017 at 1:48 pm

    Pete A I have expertise, yes. Though I am not one of the authors I cited I did work directly with one of them as a member of my thesis committee. I won’t say more than that because I like to remain semi-anonymous here, and I don’t want this to be a D swinging competition.

    I just want to point out that people are saying silly things in this thread, which materialism does not commit you to, like consciousness does not exist. I gave good references above. I strongly suggest them for anyone who thinks that consciousness is a brain process. This seems right, and there is great science behind it. The alternative view that consciousness doesn’t exist? Nope. I haven’t seen any good reasons to buy it.

    Do more research, guys. You are Dunning-Krugering all over the place.

  180. chikoppion 06 Apr 2017 at 2:40 pm

    [Pete A] I totally agree, but have you considered the possibility that proponents of philosophies that are neither based in solid empirical evidence, nor in falsifiable hypotheses, are simply bullshitting for the purposes of attempting to bolster their ideology, which they frequently and pathetically attempt by straw-manning the scientific method?

    Something I tried to address earlier, but perhaps failed to articulate well, is the conflation of concepts and actual things. By concept I mean here a mental construct, a collection of thoughts that represents or describes an abstract collection of phenomena.

    The chair I sit on exists. “Chairs,” as objects that share common form or function as conceived in the abstract, exist. “Fuzzy chairs” and “folding chairs” are also concepts and real things, differentiated by the phenomena included within each concept.

    However, I can have a concept of a thing without that thing actually existing. A parallel universe(s) is also a concept. The concept itself exists, but the entity, the thing that is supposedly associated with the collection of phenomena included in the concept, may or may not exist.

    I think sometimes concepts are (mistakenly) treated as real things, when there is no evidence the thing described by the concept actually exists. This might be a form of domain error, wherein ontological existence is granted to a purely conceptual object and then a rational argument is constructed on that unproven premise. I don’t think it’s necessarily done intentionally.

  181. edamameon 06 Apr 2017 at 3:14 pm

    You are making good points that ‘consciousness’ is an accordion concept that can be stretched and contracted to meet the needs of different interlocutors. People use it to refer to self-consciousness, language, high-level conceptual thought, and for some reason it draws out crazy people, wu-pushers, mystics of all stripes. Dualists.

    But the study of perceptual awareness and its neuronal basis is vanilla and amenable to empirical scrutiny. It’s also the kind of thing you definitely don’t want during surgery, and has clear neurophysiological hallmarks.

    I am working a bit on automating detection of these things because the software anesthesiologists use is proprietary and very expensive. Yesterday I had an animal plugged in and we looked at its brain under different levels of isoflurane anesthesia (simple experiment: hold it at level 1, record for 5 minutes, level 2, record for five minutes, etc). It is truly amazing–you don’t need to look at the animal to know when it is about to wake up from anesthesia: you literally see it in its brain in well-defined electrical signatures that propagate globally across the brain. This is not news to anyone that works with animals (including humans) under anesthesia–the loss and regaining of consciousness is something that bursts forth in unmistakable ways that even my grandma could see when I point out the bursts of fireworks on the oscilloscope.

    The effects of anesthesia are not subtle in the brain, and they are not subtle in terms of their effects on conscious perceptual awareness. Obviously that is just one crude measure: giving anesthesia to the brain is a bit like pulling the power cord on your computer: it won’t tell you how your CPU works. But it is one important aspect of the emerging story of perceptual awareness and its neuronal basis.

    (In the literature there is a sometimes controversial distinction between research into so-called ‘creature consciousness’, which is the state of generally being awake and aware and able to move about, and ‘state consciousness’ which studies states of conscious awareness of things like illusory contours or binocular rivalry etc. Study of anesthesia would be considered the study of ‘creature’ consciousness, which is more linked to reticular activating systems involved in sleep/wake cycles etc.. For more on this see: http://philosophyofbrains.com/2007/01/06/consciousness-and-the-brainstem.aspx).
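    For anyone curious what “automating detection” of this kind of state change can look like, one of the simplest markers of deep anaesthesia is the burst-suppression ratio: the fraction of time the EEG stays nearly flat. A minimal sketch in Python of that single metric; this is not the pipeline described in the comment above, and the sampling rate, band limits, amplitude threshold and minimum run length are all illustrative assumptions:

        # Estimate a crude burst-suppression ratio (BSR) from a one-channel
        # EEG/LFP trace: band-pass, take the amplitude envelope, and measure
        # the fraction of time it stays below a threshold for long enough.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def burst_suppression_ratio(x, fs, threshold=5.0, min_run_s=0.5):
            b, a = butter(4, (0.5, 30.0), btype="bandpass", fs=fs)
            envelope = np.abs(hilbert(filtfilt(b, a, x)))
            below = envelope < threshold            # candidate suppression samples
            min_run = int(min_run_s * fs)
            mask = np.zeros(x.size, dtype=bool)
            start = None
            for i, flag in enumerate(np.append(below, False)):
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    if i - start >= min_run:        # keep only long-enough runs
                        mask[start:i] = True
                    start = None
            return mask.mean()                      # fraction of time suppressed

        # Synthetic demo: low-amplitude background with a 2 s burst every 10 s.
        fs = 250
        x = np.random.normal(0.0, 2.0, 60 * fs)
        for s in range(0, x.size, 10 * fs):
            x[s:s + 2 * fs] += np.random.normal(0.0, 20.0, 2 * fs)
        print("BSR:", round(burst_suppression_ratio(x, fs), 2))

    Real depth-of-anaesthesia monitors combine several spectral and entropy features; this single number is only meant to make the “well-defined electrical signatures” remark concrete.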

  182. Pete Aon 06 Apr 2017 at 3:38 pm

    edamame,

    Thank you for your explanation and I fully appreciate that you, me, and other commentators have valid reasons to “remain semi-anonymous here”.

    I’ve stated previously in my comments that I was indoctrinated with a plethora of things during my childhood, and that it has taken me decades to go through the long, deeply humiliating, and excruciatingly painful process of slowly de-programming myself from my former belief systems. This task is, for me, worthwhile even though I know that I’ll never be able to complete it during the remainder of my lifetime.

    Dr. Novella’s articles, and the very knowledgeable commentators who freely share their expertise, have helped me immensely: I am extremely grateful to them for their time, effort, and due diligence.

    So, I hope you will begin to understand why it seriously pisses me off when commentators resort to: misquotes; logical fallacies (especially the straw-manning of 21st-century science and epistemology); wilful obscurantism; and/or semantic filibustering.

    You wrote “I just want to point out that people are saying silly things in this thread, which materialism does not commit you to, like consciousness does not exist.”

    Obviously, consciousness does exist: it exists in the form of a word that describes by far the most convincing and the most awesome illusion that most, but not all (including myself), people will ever experience.

    A tiny percentage of people who have suffered neurological damage have been rendered fully capable of spotting the magician, the trickster, the all-time greatest illusionist who fools everyone else into truly believing that they possess both a self and free will.

    Over the years, I’ve had various friends and various colleagues who’ve implored me to write about my very unusual insight into the deeply profound topics of consciousness and self-identity. After very careful consideration, based on feedback from some experts in clinical psychology, I’ve come to the conclusion that busting these bubbles of illusion would do far more harm than good.

  183. Pete Aon 06 Apr 2017 at 3:52 pm

    “[chikoppi] This might be a form of domain error, wherein ontological existence is granted to a purely conceptual object and then a rational argument is constructed on that unproven premise. I don’t think it’s necessarily done intentionally.”

    Bingo! It is indeed a form of domain error. And likewise, I don’t think it’s necessarily done intentionally; but it is always the result of either excusable ignorance [a lack of knowledge or information] or wilful ignorance.

  184. mumadaddon 06 Apr 2017 at 4:05 pm

    edamame,

    Thanks for the sources. I just read In which I argue that consciousness is a fundamental property of complex things…* by Koch — interesting read so thanks for that. Some interesting allusions in that piece to the other topics you mentioned. I’ve been wanting to read Paul Churchland for a while, but since I don’t really read books anymore but listen to audiobooks, I’m subject to the vagaries of Audible’s limited selection.

    * Free e-book available here: https://mitpress.mit.edu/books/which-i-argue-consciousness-fundamental-property-complex-things%E2%80%A6

  185. edamameon 06 Apr 2017 at 4:55 pm

    mumadadd: Koch gets a little weird when he gets philosophical, but his neuroscience is excellent. Churchland is fun and an extremely clear writer, and forceful critic of dualism, advocate of neuroscience.

  186. BillyJoe7on 06 Apr 2017 at 5:46 pm

    edamame,

    “If you are saying consciousness doesn’t exist, you are making an outlandish claim”

    Nobody is saying that.
    You’ve just fallen for Ian Wardell’s lies.

    For the record:

    I do not believe consciousness does not exist.
    Steven Novella does not believe consciousness does not exist.
    Daniel Dennett does not believe consciousness does not exist.

    To say consciousness is an illusion is not equivalent to saying consciousness does not exist. It’s simply a lie perpetrated by IW that you’ve fallen victim to. When we say that consciousness is an illusion, we simply mean that our common intuitions about consciousness are unreliable and often wrong. This is not controversial (e.g. the blind spot, saccadic vision).

    Also Daniel Dennett is not my “hero”.

    For example he believes in free will. And he believes in free will mostly for the reason that he thinks we need to believe in free will, which, in my opinion, is not a good reason to believe in anything.

  187. BillyJoe7on 06 Apr 2017 at 5:53 pm

    chikoppi,

    “There is “consciousness” as a more or less specific collection of phenomena related to perception. Then there is “consciousness” as a supposed non-physical entity that is not the product of brain function…I’m not familiar enough with Dennett to know, but is it possible that he acknowledges the first usage while rejecting the second? In other words, that the perception of an “I” existing separate from the body is an illusory sensation?”

    That is correct.

  188. BillyJoe7on 06 Apr 2017 at 5:58 pm

    Ian Wardell,

    “BillyJoe7 he [Daniel Dennett] doesn’t believe in the existence of consciousness. Consciousness as in the proper definition of the word, not as materialists redefine it.”

    How long did it take you to come up with that excuse?
    It would have been easier to admit you were wrong…or lied!

    “No one ever experiences pain, the taste of coffee, greenness etc. If that doesn’t make him a loon, then I have no idea what would.”

    Well, either you’re an ignorant fool or a liar.
    Probably both.

  189. edamameon 06 Apr 2017 at 6:21 pm

    No I’m ignoring Wardell and just reading what you wrote. And Bach especially. Dennett is actually not that clear either. But it seems we largely agree so I will drop it. I suggest don’t just say it is an illusion bc that is misleading.

  190. Pete Aon 06 Apr 2017 at 6:30 pm

    “I suggest don’t just say it is an illusion bc that is misleading.”

    I suggest that you are easily misled.

  191. hardnoseon 06 Apr 2017 at 6:45 pm

    “It’s not that no one understands quantum effects. The problem is that we lack a natural language which expresses them adequately.”

    bachfiend understands it all.

  192. hardnoseon 06 Apr 2017 at 6:50 pm

    “Quantum effects may be important in the function of the brain (the entire universe is quantum from top to bottom), but it’s entirely conjectural to claim that it explains consciousness.”

    How could quantum effects explain consciousness when no one can explain quantum effects?

    But we can be pretty sure that your Sean Carroll style materialism is wrong.

  193. bachfiendon 06 Apr 2017 at 8:54 pm

    Hardnose,

    I know that there are many things I don’t understand, including quantum physics.

    But I do understand that your particular brand of non-materialism is just absolute bullsh*t: your delusions that the Universe is intelligent, that evolution has goals and aims, and that there’s an inherent tendency towards increasing intelligence and complexity within biological systems.

    They’re all just nonsense.

    I’ll take Sean Carroll’s materialism over your non-materialism any day. Although I concede that Sean Carroll may be wrong in some aspects, perhaps in many.

    ‘How could quantum effects explain consciousness when no one can explain quantum effects?’ That’s just as silly as claiming ‘How could Newton’s law of gravity (or Einstein’s General Relativity for that matter) explain objects falling to Earth when no one can explain gravity?’

    Agreed. Quantum effects don’t explain consciousness. But if they did, then not understanding them wouldn’t stop them from causing consciousness (in the same way that not understanding gravity doesn’t stop objects falling to Earth) – if it was real and something discrete and not just an illusion manufactured by physical brains.

  194. edamameon 07 Apr 2017 at 12:19 am

    BillyJoe7, so you are saying free will is an illusion? 🙂

  195. bachfiendon 07 Apr 2017 at 4:24 am

    Edamame,

    Decisions can be either conscious and caused (by the individual’s genetics and previous experiences, and the circumstances at the time the decision is being made), conscious and uncaused, unconscious and caused, or unconscious and uncaused.

    Free will, if it exists, can only happen if the choice being made in decision making is both conscious and uncaused, otherwise an observer, who has perfect knowledge of the individual making the decision, would be able to predict with 100% accuracy the decision that was going to be made. In which case, the individual can’t have free will, because no other decision was possible.

    The evidence is that decision making is unconscious and caused, for reasons which aren’t consciously apparent to the person making the decision. And the conscious mind (another illusion) rationalises the decision already made with other reasons.

    Free will is an illusion manufactured by the brain, the same as the illusion that an individual has perfect colour vision to the edge of the visual fields. Like vision, the illusion of free will is very useful, and society couldn’t exist without it, because otherwise there wouldn’t be any justification for punishment for crimes, besides putting the criminal away in gaol, preventing further crimes.

  196. Ian Wardellon 07 Apr 2017 at 8:33 am

    edamame
    “Churchland is fun and an extremely clear writer, and forceful critic of dualism, advocate of neuroscience”.

    He completely fails to understand dualism. His criticisms are asinine in the extreme. He’s under the delusion that dualism is a scientific hypothesis. Neither dualism nor materialism are scientific hypotheses, they are metaphysical hypotheses.

  197. Ian Wardellon 07 Apr 2017 at 8:37 am

    edamame
    “You are making good points that ‘consciousness’ is an accordion concept that can be stretched and contracted to meet the needs of different interlocutors. People use it to refer to self-consciousness, language, high-level conceptual thought, and for some reason it draws out crazy people, wu-pushers, mystics of all stripes. Dualists”.

    No it isn’t. It refers essentially to qualia in its broadest sense, and intentionality. Least of all is it a function or physical process. If people use it for these other things then they are using the word incorrectly.

  198. Ian Wardellon 07 Apr 2017 at 8:41 am

    bachfiend
    “Decisions can be either conscious and caused (by the individual’s genetics and previous experiences, and the circumstances at the time the decision is being made), conscious and uncaused, unconscious and caused, or unconscious and uncaused”.

    No, a decision is defined as a conscious choice and one’s consciousness is causally efficacious in bringing about that choice.

  199. Ian Wardellon 07 Apr 2017 at 8:42 am

    bachfiend
    “Free will, if it exists, can only happen if the choice being made in decision making is both conscious and uncaused, otherwise an observer, who has perfect knowledge of the individual making the decision, would be able to predict with 100% accuracy the decision that was going to be made. In which case, the individual can’t have free will, because no other decision was possible”.

    You fail to understand what free will means. Even if someone can predict another’s actions 100%, that does not have any implications for his free will.

  200. mumadaddon 07 Apr 2017 at 8:43 am

    Ian,

    “If people use it for these other things then they are using the word incorrectly.”

    I think the dictionary would beg to differ. But really, as long as you make it clear how you are defining it, what does it matter?

  201. mumadaddon 07 Apr 2017 at 8:45 am

    “You fail to understand what free will means. Even if someone can predict another’s actions 100%, that does not have any implications for his free will.”

    In the same vein as my comment above… If you have another definition of free will that you think is defensible, please elucidate it.

  202. mumadaddon 07 Apr 2017 at 8:48 am

    Ian,

    “No, a decision is defined as a conscious choice and one’s consciousness is causally efficacious in bringing about that choice.”

    Oh my. In every one of your last three comments you have somehow assumed the role of arbiter of definitions, apparently exempt from having to defend your non standard word usage.

  203. edamameon 07 Apr 2017 at 8:55 am

    Bach you missed my point

  204. Ian Wardellon 07 Apr 2017 at 9:05 am

    mumadadd, which dictionary definition?

    Re free will

    Imagine you could travel backwards in time to some famous historical event. Imagine also that you don’t reveal your presence to anyone and you have no impact whatsoever on the environment. In this case these historical figures will say and behave as the history books tell us. We will know their future lives in their entirety. And however many times we revisit a specific time and place, these historical figures will say and behave precisely as they did on all previous occasions.

    So in that case these people’s behaviour is entirely predictable. Now, would that entail they don’t have free will? Why should it? Why would they make a different decision if everything else is unchanged?

  205. Ian Wardellon 07 Apr 2017 at 9:07 am

    Of course this brings me back to the ridiculous definitions of words that materialists employ. They seem to think that to have free will means to act *randomly*. Little wonder they reject free will!

  206. Ian Wardellon 07 Apr 2017 at 9:08 am

    Some of them reject free will I should say. Not all of them, although as I argued above I do not believe that materialists can believe in free will.

  207. edamameon 07 Apr 2017 at 9:20 am

    Just to explain.

    bj7 has said fw (free will) is an illusion, and has been clear he thinks fw does not exist. Yet now he is saying consciousness is an illusion, but balking at my suggestion that this is misleading, that this might make people think this is saying it doesn’t exist (because this is a not uncommon eliminativist position among more fringe materialists who cannot think of how to explain consciousness). It is certainly provocative to say that consciousness is an illusion, but it is provocative precisely because it will be interpreted as you saying it isn’t real. It’s not what you actually mean. Of course you are free to say it, and then to clarify you are not saying the more provocative, controversial thing, but this other trivial thing that nobody would ever balk at (i.e., some of our intuitions are wrong, e.g., about our visual field). But that frankly just seems disingenuous; it isn’t how anyone studying consciousness would ever talk (e.g., your anesthesiologist, etc), and it comes off as a provocation without a point.

    Ian Wardell, if you think the word ‘consciousness’ is not one of the more slippery words, semantically, in the English language, then you need to get out more. Churchland doesn’t treat dualism as a scientific hypothesis in some mindless way; he takes each argument on its merits and evaluates it as such. Sometimes science is relevant for his attacks on dualism, just like biology is sometimes relevant for attacks on creationism. But most of his time (e.g., Engine of Reason, Neurophilosophy at Work) is not spent parrying ghosts, but constructing a positive story, trying to knit together neuroscience, computational theories of brain function, and philosophy. Note he is a philosopher, not a neuroscientist, so there is a certain flavor that this brings to his work. Just like Dennett, caveat emptor.

  208. Ian Wardellon 07 Apr 2017 at 9:22 am

    edamame
    “If you are saying consciousness doesn’t exist, you are making an outlandish claim. The burden is on you to fight not just common sense but psychology and neuroscience”.

    No, science as currently conceived *completely* leaves out consciousness in its description of reality. As far as science is concerned we are all p-zombies. This is why materialists have to either deny the existence of consciousness, or *identify* it with some function or physical process, or suppose it supervenes on some physical process.

  209. Ian Wardellon 07 Apr 2017 at 9:25 am

    @edamame. Edward Feser takes a look at Churchland’s attack on dualism.

    http://edwardfeser.blogspot.co.uk/2009/12/churchland-on-dualism-part-i.html

  210. mumadaddon 07 Apr 2017 at 9:40 am

    Ian,

    “So in that case these people’s behaviour is entirely predictable. Now, would that entail they don’t have free will? Why should it? Why would they make a different decision if everything else is unchanged?”

    As I said before, if you have some definition of free will that you think is defensible, you need to give us that definition. So far you aren’t actually adding anything beyond the fact that you think free will exists and is compatible with determinism. You insist, repeat, and use exclamation mark as though that somehow represents an argument — it doesn’t.

  211. Ian Wardellon 07 Apr 2017 at 9:42 am

    Also:
    http://edwardfeser.blogspot.co.uk/2009/12/churchland-on-dualism-part-ii.html

    http://edwardfeser.blogspot.co.uk/2009/12/churchland-on-dualism-part-iii.html

    http://edwardfeser.blogspot.co.uk/2010/06/churchland-on-dualism-part-iv.html

    http://edwardfeser.blogspot.co.uk/2013/09/churchland-on-dualism-part-v.html

  212. Ian Wardellon 07 Apr 2017 at 9:46 am

    @mumadadd I think of free will as being where consciousness is sometimes causally efficacious in bringing about our chain of thoughts and our voluntary behaviour.

    @edamame Feser’s critique of Churchland’s analysis of dualism has 5 parts, but when I linked to them it says my post is awaiting moderation.

  213. mumadaddon 07 Apr 2017 at 9:50 am

    Ian,

    “@mumadadd I think of free will as being where consciousness is sometimes causally efficacious in bringing about our chain of thoughts and our voluntary behaviour.”

    How does adding another process to decision making change the definition of free will? Is this process non deterministic and non random? If not, it’s the same (in this context) as finding that another region if the brain, previously thought to not be involved, is involved in decision making.

  214. mumadaddon 07 Apr 2017 at 9:50 am

    Region *of* the brain…

  215. Ian Wardellon 07 Apr 2017 at 10:11 am

    My actions are determined by what I want to do. Is this determinism, even if merely psychological determinism?

    I don’t know what you mean by adding another process. You asked me what I mean by free will, and I told you. If you want to dispute that we have free will in my sense, then look further up the page, where I link to the two relevant blog entries in which I give my arguments.

  216. mumadaddon 07 Apr 2017 at 10:52 am

    Ian,

    “I don’t know what you mean by adding another process.”

    I’m defining free will as the capacity to have acted differently if a given situation is replayed exactly (including you and your brain state, and down to the Planck scale). I am saying that determinism is a defeater of free will; that it would be impossible to have done anything other than what it is you end up doing.

    You appear to be saying that:

    a.) There is a non-material component to decision making
    b.) We DO have free will
    c.) That behaviour IS deterministic (IW: “You fail to understand what free will means. Even if someone can predict another’s actions 100%, that does not have any implications for his free will.”)

    In order for the non-material component to introduce free will (by my definition) to a system, it would have to be non-deterministic and non-random. But you have clearly said that behaviour can be 100% predictable and free will still exist, so you MUST be using a different definition of free will.

    Except it appears that you aren’t using a definition, but have simply misled yourself into believing that adding a non-material component to decision making magically rescues free will.

  217. mumadaddon 07 Apr 2017 at 10:56 am

    You are trapped by your own logic in an untenable position. But that’s a pattern. 😉

  218. Ian Wardellon 07 Apr 2017 at 11:48 am

    @mumadadd I don’t agree with your definition of free will. Or at least your definition is ambiguous. I explore this issue in the following blog entry:

    http://ian-wardell.blogspot.co.uk/2014/05/free-will-and-notion-of-could-have.html

  219. edamameon 07 Apr 2017 at 12:16 pm

    Ian, your post about science leaving out consciousness contradicts itself. I leave it as an exercise for the reader.

    On Churchland, it seems you regurgitated a criticism from a blog and you haven’t actually read Churchland.

    I recommend Engine of Reason, or Neurophilosophy at Work (or if you like abstract concepts, then Plato’s Camera). I don’t endorse him wholesale; he is a philosopher, after all. But I just recommend it as a more neuronally-oriented positive story than Dennett, who takes a more behavioral approach.

    I’m not gonna go tit-for-tat with you because it will be a time sink and I am at work, but let me say that I do agree that consciousness is particularly thorny, and there are no knock-down arguments at this time against property dualism. Substance dualism is pretty much dead in the water, but property dualism is harder to refute (the fact that in most forms it implies epiphenomenalism and panpsychism makes it seem pretty crappy to me, however, close to self-refuting).

    Right now the arguments in its favor are of about the same flavor as the arguments for vitalism were in the late 1800s (and yes, I know the usual response that vitalism was about functional/causal facts while consciousness pertains to other types of facts, but that is actually a historically dubious statement about vitalism, and it also begs the question). It is an empirical/historical question whether the arguments for property dualism will have the same fate as those for vitalism.

    I strongly lean toward materialism, but I wouldn’t try to prove, in any mathematical philosophical sense, that the other side is wrong. That’s not how science works. Philosophy has proven useless for settling substantive issues about how the world is structured.

    Ultimately until I can convince the likes of Koch, Block, Chalmers, Nagel and other people who are extremely reasonable, smart, and without a religious ax to grind, then I will consider this an ongoing debate where reasonable people can disagree. I would suggest you could use a similar dose of humility in your writings instead of pretending people who disagree with you are deluded or ignorant. That’s asinine. After all, you are the one peddling the strange position that there is this magic property superadded to the brain.

    Now the people that would deny the existence of consciousness? Yes, they are deluded. I don’t waste my time with that crap.

    With that, I’m gonna have to bow out of this thread because as I said I’m at work, and if there’s one thing I learned over the years is that discussing the metaphysics of consciousness is the Great Grandmother of all time sinks.

  220. chikoppion 07 Apr 2017 at 12:18 pm

    [Ian Wardell] My actions are determined by what I want to do. Is this determinism even if mere psychological determinism?

    What do you think determines (causes) your ‘wants?’

    Also, far less complex organisms behave with intent. What determines their actions? At what degree of complexity do biological motivations require a disembodied mind?

  221. hardnoseon 07 Apr 2017 at 1:27 pm

    “Quantum effects don’t explain consciousness. But if they did, then not understanding them wouldn’t stop them from causing consciousness (in the same way that not understanding gravity doesn’t stop objects falling to Earth) – if it was real and something discrete and not just an illusion manufactured by physical brains.”

    Quantum effects might cause consciousness — you don’t know, you only think you know. And if they cause consciousness, then materialism can’t explain consciousness.

    Materialists like Sean Carroll have been insisting that quantum effects are not relevant to our macro level. Well they already have been found in photosynthesis and bird navigation. We don’t know where else they will be found, probably everywhere, including the brain.

    Matter is not understood or explained, consciousness is not understood or explained. Saying matter causes consciousness is just more stupid BS.

  222. mumadaddon 07 Apr 2017 at 1:47 pm

    “Quantum effects might cause consciousness — you don’t know, you only think you know. And if they cause consciousness, then materialism can’t explain consciousness.”

    Pahaha! So… matter might explain consciousness, but if it does then it can’t! Yep, obviously we’re a bunch of saps.

  223. Ian Wardellon 07 Apr 2017 at 2:25 pm

    edamame
    “Ultimately until I can convince the likes of Koch, Block, Chalmers, Nagel and other people who are extremely reasonable, smart, and without a religious ax to grind, then I will consider this an ongoing debate where reasonable people can disagree. I would suggest you could use a similar dose of humility in your writings instead of pretending people who disagree with you are deluded or ignorant. That’s asinine. After all, you are the one peddling the strange position that there is this magic property superadded to the brain”.

    Magic property? “Superadded”?

    Consciousness exists. Why imagine it is magic if it is some essential ingredient of reality? Indeed what is meant by labelling it “magical”?

    If the existents that come under physics are insufficient to explain consciousness, then why not imagine consciousness might be one of the basic existents in *addition* to electrons, quarks, and the space-time continuum? I.e., consciousness is fundamental. I see no reason why some expanded physics could not accommodate consciousness. It may be that the brain produces consciousness, or that consciousness is somehow entangled with the brain and cannot exist without it; nevertheless it would still be fundamental, i.e., it cannot be explained by reducing it to anything else.

    No, I haven’t read Churchland. However, I do read a lot of what skeptics/materialists say, all over the net for example. And I read the 700-odd-page volume “The Myth of an Afterlife”, written by various authors.

    I know the arguments they employ. And the ones that Feser alleges Churchland uses are pretty much universal. And they are simply irrelevant, as Feser says and as I had *independently* concluded for some of them.

    So! I should use humility?? Even in the face of obviously false or ridiculous arguments? If someone says 2+2 = 5, should I use some humility? Or should I just call a spade a spade?

    There can be too much humility. Sometimes it’s right to be forthright.

  224. chikoppion 07 Apr 2017 at 2:34 pm

    Yeah, that was schizophrenic even for hardnose.

    “Quantum effects” simply describe the behavior of matter and energy at a particular scale. A quantum-scale cause for consciousness would be 100% compatible with monism, physicalism, and “materialism.”

    I think he’s confusing “materialism” with “particles with mass.” “Materialism” encompasses all the particles, fields, and forces that manifest in observable reality. It is a form of physicalism, which contrasts with idealism.

    If quantum effects were shown to cause consciousness, that would be a “materialist” solution, as the phenomenon of consciousness supervenes on a “physical” state and not the other way ’round.

  225. chikoppion 07 Apr 2017 at 2:45 pm

    [Ian Wardell] If the existents that come under physics are insufficient to explain consciousness, then why not imagine consciousness might be one of the basic existents in *addition* to electrons, quarks, and the space-time continuum? I.e., consciousness is fundamental. I see no reason why some expanded physics could not accommodate consciousness. It may be that the brain produces consciousness, or that consciousness is somehow entangled with the brain and cannot exist without it; nevertheless it would still be fundamental, i.e., it cannot be explained by reducing it to anything else.

    This is a reasonable position, understood as a hypothesis.

    The friction I have is when the fact that there is no present explanation for consciousness is cited as proof that consciousness is not compatible with physicalism (and subsequently used as a premise for dualism).

  226. Pete Aon 07 Apr 2017 at 4:08 pm

    Ian,

    Suppose we attempted to explain how a television system works in terms of only fundamental particles and fields. Such an explanation must be possible because the system doesn’t use anything other than the known fundamentals.

    However, the explanation would be so long-winded that it couldn’t be read from start to finish in a lifetime.

    While we’re watching a really interesting programme, we become so engrossed that we lose awareness of the fact that all we’re looking at is a rectangular screen that can display nothing other than red, green, and blue dots.

    There is nothing in fundamental physics that can, in and of itself, explain the illusion that the screen is inducing in our brain. Does this mean there is a yet-to-be-discovered fundamental particle, field, or theory? Does it imply that the television system somehow quantumly entangles our brain with the original scenes that were being recorded by the cameras and microphones?

    A television system is a highly-complex system. Few people understand it well enough to thoroughly explain some of its sub-systems, let alone all of them. But those who do understand the whole system can fully explain how it induces this very useful illusion — without resorting to arguments from ignorance! The whole purpose of a television system is to induce this illusion.

    NB: The system does not create the illusion; the illusion is created by the brain. This is just one example of the illusory nature of consciousness.

  227. hardnoseon 07 Apr 2017 at 4:13 pm

    [If quantum effects were shown to cause consciousness that would be a “materialist” solution, as the phenomena of conscious supervenes on a “physical” state and not the other way ’round.]

    You have no idea what you’re talking about, chikoppi:

    https://www.elsevier.com/about/press-releases/research-and-journals/discovery-of-quantum-vibrations-in-microtubules-inside-brain-neurons-corroborates-controversial-20-year-old-theory-of-consciousness

    “The origin of consciousness reflects our place in the universe, the nature of our existence. Did consciousness evolve from complex computations among brain neurons, as most scientists assert? Or has consciousness, in some sense, been here all along, as spiritual approaches maintain?” ask Hameroff and Penrose in the current review. “This opens a potential Pandora’s Box, but our theory accommodates both these views, suggesting consciousness derives from quantum vibrations in microtubules, protein polymers inside brain neurons, which both govern neuronal and synaptic function, and connect brain processes to self-organizing processes in the fine scale, ‘proto-conscious’ quantum structure of reality.”

  228. chikoppion 07 Apr 2017 at 4:55 pm

    [hardnose] “The origin of consciousness reflects our place in the universe, the nature of our existence. Did consciousness evolve from complex computations among brain neurons, as most scientists assert? Or has consciousness, in some sense, been here all along, as spiritual approaches maintain?” ask Hameroff and Penrose in the current review. “This opens a potential Pandora’s Box, but our theory accommodates both these views, suggesting consciousness derives from quantum vibrations in microtubules, protein polymers inside brain neurons, which both govern neuronal and synaptic function, and connect brain processes to self-organizing processes in the fine scale, ‘proto-conscious’ quantum structure of reality.”

    Quick primer for you…

    IDEALISM: Consciousness is the fundamental reality and phenomena of the physical world emerges from it.

    PHYSICALISM: The “physical” world is the fundamental reality and phenomena of consciousness emerges from it.

    MONISM: There is a single ontological category of existence, including both mind and matter.

    DUALISM: There are two INDEPENDENT ontological categories of existence, mind and matter.

    MATERIALISM: Materialism is a form of monistic physicalism in which consciousness is a subset of phenomena that emerges from physical existence.

    The quantum scale is PART OF THE SINGLE-ONTOLOGY (monism) PHYSICAL EXISTENCE (physicalism). If consciousness EMERGES FROM interactions of matter and energy as governed by physical laws, that is MATERIALISM.

    A review and update of a controversial 20-year-old theory of consciousness published in Physics of Life Reviews claims that consciousness derives from deeper level, finer scale activities inside brain neurons. The recent discovery of quantum vibrations in “microtubules” inside brain neurons corroborates this theory, according to review authors Stuart Hameroff and Sir Roger Penrose. They suggest that EEG rhythms (brain waves) also derive from deeper level microtubule vibrations, and that from a practical standpoint, treating brain microtubule vibrations could benefit a host of mental, neurological, and cognitive conditions.

  229. chikoppion 07 Apr 2017 at 5:39 pm

    In the author’s words, from the paper linked…

    Three general possibilities regarding the origin and place of consciousness in the universe have been commonly expressed.

    (A) Consciousness is not an independent quality but arose, in terms of conventional physical processes, as a natural evolutionary consequence of the biological adaptation of brains and nervous systems. …

    (B) Consciousness is a separate quality, distinct from physical actions and not controlled by physical laws, that has always been in the universe. Descartes’ ‘dualism’, religious viewpoints, and other spiritual approaches …

    (C) Consciousness results from discrete physical events; such events have always existed in the universe as non-cognitive, proto-conscious events, these acting as part of precise physical laws not yet fully understood. …

    In summary, we have:

    (A) Science/Materialism, with consciousness having no distinctive role.
    (B) Dualism/Spirituality, with consciousness (etc.) being outside science.
    (C) Science, with consciousness as an essential ingredient of physical laws not yet fully understood.

    Nonetheless, in the Orch OR scheme, these events are taken to have a rudimentary subjective experience, which is undifferentiated and lacking in cognition, perhaps providing the constitutive ingredients of what philosophers call qualia. We term such un-orchestrated, ubiquitous OR events, lacking information and cognition, ‘proto-conscious’. In this regard, Orch OR has some points in common with the viewpoint (B) of Section 1, which incorporates spiritualist, idealist and panpsychist elements, these being argued to be essential precursors of consciousness that are intrinsic to the universe. It should be stressed, however, that Orch OR is strongly supportive of the scientific attitude that is expressed by (A), and it incorporates that viewpoint’s picture of neural electrochemical activity, accepting that non-quantum neural network membrane-level functions might provide an adequate explanation of much of the brain’s unconscious activity. Orch OR in microtubules inside neuronal dendrites and soma adds a deeper level for conscious processes.

  230. mumadaddon 07 Apr 2017 at 5:57 pm

    chikoppi,

    If it ain’t made of tiny little ball-bearings then it ain’t matter. I don’t care what you say — it’s ball-bearings all the way down.

  231. Ian Wardellon 07 Apr 2017 at 6:08 pm

    @Pete A As I keep banging on about, consciousness cannot *in principle* be explained by materialism as currently conceived.

    http://ian-wardell.blogspot.co.uk/2016/04/neither-modern-materialism-nor-science.html

    Now, if I am wrong about this, if there’s something I’m not understanding, something I’m not getting — or whatever, then it needs to be explained to me where my error is.

    Simply repeating the reductivist mantra ain’t gonna work!

    *Please* explain to me **what the bloody hell I am not understanding**???

  232. mumadaddon 07 Apr 2017 at 6:14 pm

    “*Please* explain to me **what the bloody hell I am not understanding**???”

    Neuroscience, philosophy of mind, and logic.

  233. chikoppion 07 Apr 2017 at 7:04 pm

    [Ian Wardell] *Please* explain to me **what the bloody hell I am not understanding**???

    Yeah, you have a number of misconceptions in that post. I don’t want to go point by point, but here are a few notes.

    Science and Materialism are not synonymous. Science is a process and Materialism is a philosophy. The fact that there are some things that science cannot investigate or (currently) explain does not undermine the verity of Materialism/Monism, nor does it necessitate Idealism/Dualism.

    Science does not demand reductionism. In fact, scientific theories such as thermodynamics are overtly non-reductive. It would be possible to know the state of every particle in a system and not be able to predict the qualities of the system as a whole without measuring the system itself. The reference barrier between quantum and macro states is another example of this non-mechanistic state transition. Therefore, when you use the term “quantitative” you are ignoring some of the fundamental theories in physics.

    You are relying on the fact that Science (a process) cannot produce a mechanistic theory for consciousness consistent with Materialism (a philosophical position). This is a misconception because 1) Science natively addresses emergent properties and 2) the absence of a scientific theory of consciousness is not evidence that consciousness cannot emerge from the laws that govern the interaction of matter and energy.

  234. Pete Aon 07 Apr 2017 at 7:23 pm

    Ian,

    If we watch a really good illusionist performing a trick, the audience can experience only the illusion; the illusionist can experience only the trick, not the illusion.

    IOW: It takes enormous effort and a great deal of knowledge to see the trick itself rather than the illusion.

    If we start with the assumption that consciousness is an illusion [or mainly illusory] then it is very easy to understand my example.

    If instead we start from the assumption that consciousness is not an illusion, then we have to jump through endless hoops, and engage in special pleading, to explain each and every illusion that we experience — e.g., the plethora of optical and auditory illusions, including watching television!

    Currently, consciousness cannot *in principle* be fully explained by materialism. It can’t be explained at all by non-materialism: a philosophy of mind cannot provide a testable explanation — a falsifiable hypothesis — it has to invoke magic.

    To me, if it can be shown that the mind really is the result of the material brain and nothing else, this would be far more awesome than finding out that it is supernatural. I remember the day I learnt that nearly all of my atoms are remnants of stars that exploded eons ago; we are literally made of stardust. It’s by far the most awe-inspiring fact I’ve learnt thus far. The truth is often stranger and more wonderful than fiction and make-believe.

  235. mumadaddon 07 Apr 2017 at 7:39 pm

    chikoppi (and perhaps edamame if you’re still around), maybe you can help me to understand something:

    I said way back in this thread that I don’t understand property dualism — this is because I intuitively categorise any form of ‘dualism’ as woo, and yet can’t see how subjectivity could not be a property of reality if it exists within it. Is wetness a property of the universe? Is heat? Gravity? Is there some distinction between properties that emerge from more ‘fundamental’ interactions of stuff vs stuff way down the chain of causality? Does scale make a difference? It seems to me that every property of anything anywhere in the universe could be said to be ’emergent’ if it wasn’t present at the exact beginning of the universe, so I can’t see any clear way to separate emergent properties from fundamental properties (save the distinction I just made).

    Am I missing something? Specifically in the context of subjectivity?

  236. Ian Wardellon 07 Apr 2017 at 7:45 pm

    I note AI chatbots are developing in leaps and bounds! {coughs}

    https://www.youtube.com/watch?v=8478kLLQEG8&t=0s

  237. mumadaddon 07 Apr 2017 at 7:53 pm

    To add to that — subjectivity being purely a product of brain function is perfectly compatible with consciousness as a property of brains, therefore matter, therefore the universe; in fact I can’t find a way to allow for subjective experience and it not be a property of the universe. And I don’t mean that any property present in a part is present throughout the whole, just that anything contained within a subset is contained within the whole.

  238. mumadaddon 07 Apr 2017 at 8:21 pm

    Repetition — Lose 3 points. I’m definitely out…

  239. mumadaddon 07 Apr 2017 at 8:32 pm

    Ian,

    “I note AI chatbots are developing in leaps and bounds! {coughs}”

    I will give you some serious props for that link if you can provide a link to some non-material entities designed by non-material entities communicating in a more fluid and articulate manner. Then, by Jove, you will have made a spectacular point.

  240. chikoppion 07 Apr 2017 at 9:21 pm

    @mumadadd

    Eh…it’s messy.

    Property dualism is easy enough. That’s just the position that there is a fundamental difference between some of the properties of some entities. These properties might be reducible or irreducible, but the entity itself is the thing that exists ontologically and the properties either arise from or supervene upon it. In other words, the properties are concepts that ultimately describe the thing.

    This is distinct from Dualism, which would claim that one or more of the properties is an ontological entity in and of itself, not arising from or supervening upon the physical entity it is associated with and not merely a concept that describes it.

    E = entity
    P = physical property
    M = mental property
    [ ] = ontological (independent) existence

    Property dualism: [E] → P, M
    Mental properties are conceptually distinct from physical properties, but both exist because the physical entity exists.

    Dualism: [M] + ([E] → P)
    The physical entity exists and its physical properties arise as a result, but the mental properties exist independently, are non-conceptual, and are not dependent upon the physical entity.

    I don’t know if that’s helpful. It really comes down to the question of what is conceptual and what actually exists and to what does it owe that existence.

  241. chikoppion 07 Apr 2017 at 9:33 pm

    @Ian Wardell

    [Ian Wardell] I note AI chatbots are developing in leaps and bounds! {coughs}

    You’ll also appreciate this bit:

    A Neural Network Generated These Can’t-Fail Pickup Lines
    http://boingboing.net/2017/04/07/a-neural-network-generated-the.html

    I have a cenver? Because I just stowe must your worms.
    Hey baby, I’m swirked to gave ever to say it for drive.
    You must be a tringle? Cause you’re the only thing here.
    I’m not on your wears, but I want to see your start.
    Hey baby, you’re to be a key? Because I can bear your toot?
    I don’t know you.
    If I had a rose for every time I thought of you, I have a price tighting.
    Etc.

    😉

  242. BillyJoe7on 08 Apr 2017 at 3:17 am

    edamame,

    Regarding “illusion of consciousness” vis-a-vis “illusion of free will”.

    Fair enough.

    Perhaps DD could have said “illusions about the nature of consciousness”. But he did make it clear exactly what he was talking about and not, as IW claimed, that “consciousness does not exist”.

    Granted that free will is, lock stock and barrel, an illusion, but, in the checkerboard illusion, we are not saying that the rhomboids are an illusion, only that thinking they are squares and perceiving them as different in colour is an illusion, so that’s perhaps analogous to “consciousness is an illusion”.

  243. BillyJoe7on 08 Apr 2017 at 5:06 am

    BTW,

    I’ve found the source of IW’s misconception about Daniel Dennett and p-zombies:

    Daniel Dennett, “Consciousness Explained” 1991, p. 406.
    “Are zombies possible? They’re not just possible, they’re actual. We’re all zombies. Nobody is conscious — not in the systematically mysterious way that supports such doctrines as epiphenomenalism.”

    In a footnote Dennett states:
    “It would be an act of desperate intellectual dishonesty to quote this assertion out of context!”

    Perhaps IW is not necessarily intellectually dishonest in this particular instance; perhaps he was just blindly repeating someone else who was intellectually dishonest.

    🙂

  244. Ian Wardellon 08 Apr 2017 at 5:38 am

    @BillyJoe
    Not all statements derive their meaning from context. The statement “we’re all zombies” is clear and unambiguous (and as false a statement as anything could possibly be). It cannot be ameliorated by sentences surrounding that key statement.

    @chikoppi

    No, one can be a substance dualist and still think the mind cannot exist independently of the body. Being an interactionist substance dualist doesn’t entail that we survive our deaths.

  245. Ian Wardellon 08 Apr 2017 at 5:40 am

    Dennett says:
    “Are zombies possible? They’re not just possible, they’re actual. We’re all zombies”.

    BillyJoe, didn’t you say above somewhere that Dennett said that zombies are metaphysically impossible?

  246. bachfiendon 08 Apr 2017 at 7:24 am

    Ian,

    The argument that materialism is false because philosophical zombies (physically identical beings to humans but incapable of consciously feeling sensations) are logically possible is just an incoherent argument.

    Actually, it’s a really stupid argument. It’s a circular argument. It’s arguing that since materialism is false, it’s possible to have a being which is physically identical to a human and yet does not consciously feel sensations; therefore materialism is false.

    There are two possible counter arguments. Firstly, philosophical zombies are impossible. If you were to create a being physically identical to a human, then it would also be able to consciously feel sensation.

    Or secondly, we’re all philosophical zombies, not physically able to feel sensation, but physically perfectly able to feel the illusion of sensation. The brain doesn’t feel pain (there are no pain receptors in the brain), but it does feel the illusion of pain.

    I suspect Daniel Dennett is using both counter arguments. The philosophical zombie argument is so stupid I wonder why it needs to be taken seriously.

  247. mumadaddon 08 Apr 2017 at 8:07 am

    chikoppi — thanks. 🙂

  248. mumadaddon 08 Apr 2017 at 8:25 am

    “The brain doesn’t feel pain (there are no pain receptors in the brain), but it does feel the illusion of pain.”

    On the topic of the illusory nature of consciousness (I know a lot of this has been covered already, so forgive the repetition): I recall one practical example from something or other I read — if you imagine a pirate, you may subjectively feel that you have a full-colour, detailed visual image in your mind, but when asked to provide details such as what the pirate is wearing, which way he is facing, etc., most people come up blank; the point being that your subjective sense of your own subjective experience is misleading. I’ve noticed many times, when recalling the faces of people I know intimately, that the more scrutiny I give the ‘image’ conjured up in my mind, the more vague and non-image-like it seems.

    However, I definitely can’t buy the idea of subjectivity itself being an illusion; you would after all need a subject to experience the illusion.

    On pain specifically, I’m struggling to see how it could be illusory; it seems to me to be a raw, indivisible sensation. I am aware that the ‘remembered self’ can have its perception of pain manipulated by psychological factors (e.g. peak-end rule / duration neglect), and the ‘experiencing self’ can also have its perception manipulated (e.g. VS Ramachandran’s mirror treatment for phantom limbs), but I don’t see how this could make the sensation of pain in any way illusory.

  249. BillyJoe7on 08 Apr 2017 at 8:42 am

    Ian,

    Are you really that dense!!!

    BJ:
    “In a footnote Dennett states:
    “It would be an act of desperate intellectual dishonesty to quote this assertion out of context!”

    So what do you do?
    You quote him out of context:

    Ian:
    “Dennett says:
    “Are zombies possible? They’re not just possible, they’re actual. We’re all zombies”.”

    You quote him out of context even though he specifically warns you not to, labelling you as desperately intellectually dishonest if you do!!!
    But, no, you insist:

    Ian:
    “Not all statements derive their meaning from context. The statement “we’re all zombies” is clear and unambiguous (and as false a statement as anything could possibly be). It cannot be ameliorated by sentences surrounding that key statement.”

    Incredible!
    And what utter BS.
    Anyway, here’s the context:

    “Daniel Dennett, “Consciousness Explained” 1991, p. 406.
    “Are zombies possible? They’re not just possible, they’re actual. We’re all zombies. Nobody is conscious — not in the systematically mysterious way that supports such doctrines as epiphenomenalism.””

    Highlighted and all.
    If I was to say “oh my god”, you’d probably conclude I’m no longer an atheist!!!

    Unbelievable!!!

    (And what bachfiend said)

  250. BillyJoe7on 08 Apr 2017 at 8:45 am

    Anyway, I now understand how you can call him a “loon of the highest order” – you are simply incapable of understanding what he says.

  251. Pete Aon 08 Apr 2017 at 10:37 am

    BJ7, He’s clearly demonstrated that he hasn’t read the works from which he quotes.

  252. Pete Aon 08 Apr 2017 at 10:57 am

    mumadadd,

    Your examples of visualizing (imagining) a pirate and “recalling the faces of people I know intimately” are really good examples of the illusion of explanatory depth:

    Leonid Rozenblit and Frank Keil (2002). “The misunderstood limits of folk science: An illusion of explanatory depth”. Cognitive Science, 26, 521-562.

    See also:
    http://scienceblogs.com/mixingmemory/2006/11/16/the-illusion-of-explanatory-de/

  253. Ian Wardellon 08 Apr 2017 at 3:34 pm

    BillyJoe said

    “In a footnote Dennett states:
    “It would be an act of desperate intellectual dishonesty to quote this assertion out of context!””

    So what do you do?
    You quote him out of context:

    My Response:

    Well yes, that’s cos I don’t idolise Dennett and suppose everything he says must be correct.

    BillyJoe said

    “Daniel Dennett, “Consciousness Explained” 1991, p. 406.
    “Are zombies possible? They’re not just possible, they’re actual. We’re all zombies. Nobody is conscious — not in the systematically mysterious way that supports such doctrines as epiphenomenalism.””

    Highlighted and all.
    If I was to say “oh my god”, you’d probably conclude I’m no longer an atheist!!!

    Unbelievable!!!

    My Response:

    He’s saying that consciousness is “systematically mysterious”. So what? This doesn’t somehow mean that he’s not being serious in describing everyone as a p-zombie.

    And I should also add that you were claiming that his position isn’t that consciousness doesn’t exist. You’ve managed to find a quote by him that refutes your claim.

  254. bachfiendon 08 Apr 2017 at 4:08 pm

    Ian,

    Stating that the illusion of consciousness exists isn’t claiming that consciousness doesn’t exist.

    A brain which is capable of creating the illusion of perfect colour vision right out to the edge of the visual fields is also capable of creating the illusion of a conscious mind, with free will, making all the decisions.

    And the philosophical zombie argument is a really stupid one. It should have been laughed out of existence the first time it was proposed.

  255. Ian Wardellon 08 Apr 2017 at 4:12 pm

    @bachfiend Dennett has stated that we are all zombies. That means he rejects the existence of consciousness.

    But I’m really not interested! He and his ilk, the Churchlands and other eliminativists and metaphysical behaviourists, are just loons. I have no time for them.

  256. mumadaddon 08 Apr 2017 at 4:51 pm

    Pete A,

    Thanks for the link. Funnily enough, I actually said to a client on Friday, “You know when you have a sense that you understand something really well, but try to explain it to somebody and it turns out there’s nothing there…” Now I have a name for that phenomenon (which has dogged my life).

    Having read that blog post, I don’t think my examples really fall into this category though — primarily because I don’t think the three factors “that play a role in the illusion of explanatory depth” are a fit:

    – Confusing environmental support with representation
    – Levels of analysis confusion
    – Indeterminate end state

  257. bachfiendon 08 Apr 2017 at 5:54 pm

    Ian,

    No one else (except perhaps hardnose and Michael Egnor, and they’re loons too) is interested in your views. Daniel Dennett doesn’t deny consciousness; he says it is an illusion, and calling something an illusion doesn’t mean it doesn’t exist. Illusions can be very convincing, otherwise they wouldn’t be illusions.

    You need to open your eyes. The illusion of perfect colour vision right out to the edge of the visual fields is an incredible illusion, demonstrating just what the human brain is capable of doing.

    Your non-materialistic views, whatever they are, lack something. Such as explanatory power. A mechanism. Evidence. They lack a lot of things in fact.

  258. BillyJoe7on 08 Apr 2017 at 6:10 pm

    Ian,

    For the record:

    Daniel Dennett does not believe we are all p-zombies.
    He believes that p-zombies cannot exist.
    And he does not believe that consciousness does not exist.

    That you deny this shows how ignorant you are about someone you blithely dismissed as a loon.

    Even though Daniel Dennett specifically warned you in his footnote not to take his statement out of context and thereby misunderstand the point he was making, that is exactly what you did, which is, of course, why you misunderstood his point. Even though I reiterated that advice by highlighting his footnote, and even though I pointed you in the direction of the point he was making by highlighting the relevant part of his statement, and even though I strongly hinted at the tenor of his statement by pointing out that I am not a theist simply because I say “oh my god”, you still continue to ignore his footnote, take him out of context, and thereby completely misunderstand the point he was making.

    It’s like your misunderstanding of the checkerboard illusion.

    You are incapable of understanding that it is simply a more elaborate version of the basic illusion where you take two squares of the SAME COLOUR and place one on a white background and the other on a black background and observe that the two identically coloured squares now look completely different in colour. And you are completely incapable of understanding that the 3D version you conjured up is not an illusion because those squares are NOT THE SAME COLOUR and therefore not analogous to the basic illusion described above where the squares are the SAME COLOUR. That is why I showed you a 3D version that IS analogous to the basic illusion – because the two squares in that 3D version are the SAME COLOUR, as in the case of the basic illusion. But you still don’t get it.

    It’s pretty sad how your mind has deteriorated over the years.
    I remember a time when you could actually put together a reasonable argument.

  259. mumadaddon 08 Apr 2017 at 6:18 pm

    BJ7,

    “Even though Daniel Dennett specifically warned you in his footnote not to take his statement out of context and thereby misunderstand the point he was making,”

    What was the point he was making? I Googled the text string but only found what you quoted (including the injunction not to take the quote out of context). [I have read, and have a copy of, Consciousness Explained, but I’m not about to try to dig the context out of that bastard with its graphene-thin pages and tiny print.]

  260. Ian Wardellon 08 Apr 2017 at 6:30 pm

    BillyJoe7
    “Daniel Dennett does not believe we are all p-zombies.
    He believes that p-zombies cannot exist.
    And he does not believe that consciousness does not exist”.

    Dennett says:

    “Are zombies possible? They’re not just possible, they’re actual. We’re all zombies. Nobody is conscious”

    Again, the fact he follows that with “not in the systematically mysterious way that supports such doctrines as epiphenomenalism”, is completely irrelevant. Epiphenomenalists, like virtually everyone else that has ever existed, believe in the existence of consciousness — that we experience toothache, the smell of roses, redness, experience hope, contentment, despair etc. Saying that such conscious experiences are mysterious doesn’t alter the fact that we are completely certain, and justified in our certainty, that we do in fact have such experiences.

    So! He says he and everyone else is a zombie. This directly contradicts what you claim he says…

  261. mumadaddon 08 Apr 2017 at 6:47 pm

    Jesus, Ian…

    “Saying that such conscious experiences are mysterious doesn’t alter the fact that we are completely certain, and justified in our certainty, that we do in fact have such experiences.”

    Dennett: “not in the systematically mysterious way that supports such doctrines as epiphenomenalism”

    NOT!. Do you know what NOT means??

    I would still like to see the quote in context though.

  262. bachfiendon 08 Apr 2017 at 6:59 pm

    I’m currently reading ‘Consciousness Explained’. It’s a very long argument for a viewpoint as to what Daniel Dennett regards consciousness to be. It’s misleading, and dishonest, to take something out of context (as creationists do with Darwin’s comment about the eye) as reflecting what the author actually believes.

  263. mumadaddon 08 Apr 2017 at 7:19 pm

    bach,

    I think he’s revised his view since 1991. And if he hasn’t, his view should be discarded anyway: given the advances in neuroscience, he’d have to have been one prescient genius of a philosopher to have nailed the 2017 state of the evidence back in 1991.

  264. mumadaddon 08 Apr 2017 at 8:09 pm

    ; even…

  265. mumadaddon 08 Apr 2017 at 8:56 pm

    I will be frank and admit that I have never ‘grokked’ Dennett’s explanation(s) of consciousness (multiple drafts). I read Kinds of Minds and totally got it — a useful primer on intentionality; read Consciousness Explained and was left baffled — thought it maybe too dense for me to comprehend (10 years ago); listened to Darwin’s Dangerous Idea and found it a useful supplement to my understanding of evolution, but not really altering it; now listening to Bacteria to Bach and feeling proud whenever I can relate what he’s saying to concepts I’ve heard elsewhere (e.g. Shannon information, memes, etc.).

    But if you read that one free e-book by Koch: https://mitpress.mit.edu/books/which-i-argue-consciousness-fundamental-property-complex-things%E2%80%A6 you get a much clearer exposition of the current state of the art and the contenders for a working theory of consciousness. Sure — it’s sandwiched between waffle and crazy, but it’s there, and conveyed in a way you’ll actually understand.

  266. bachfiendon 08 Apr 2017 at 8:56 pm

    mumadadd,

    I realise that.

  268. mumadaddon 08 Apr 2017 at 9:12 pm

    “you get a much clearer exposition of the current state of the art”

    Actually 2014. I have no idea what’s happened to the state of the art since then.

  269. Pete Aon 09 Apr 2017 at 5:33 am

    mumadadd,

    I agree with you that those three factors often don’t seem to fit our personal experiences of the illusion of explanatory depth. I can’t find a link to the paper that I read several years ago, and my recent searches have located only papers that are behind a paywall.

    However, I’ve often found that the “levels of analysis confusion” factor seems to fit (if we take a loose general meaning of the phrase). E.g., think of an electric kettle. Do you know how it works? I’m sure most people would answer “Yes”. Their affirmative answer is based on our subconscious heuristic that if we are able to recall the high-level description of something then we are very likely able to recall the next level down, when requested to do so. In other words, our experiences, since early childhood, have ‘programmed’ our heuristic criteria for proficiency and sufficiency — especially in terms of explanatory power and explanatory depth. Children learn this quickly by responding to each increase in the depth of explanation with “Why?”, until they are told “You don’t need to know that!” or “Stop asking silly questions!”. Such rebukes are issued by explainers who become irritated when their lack of explanatory depth is exposed.

    I’m sure most of us have become aware that the commentators on this blog who attack their contrived straw-man depictions — ‘materialism’ and ‘materialists’ — have a dire lack of both explanatory power and explanatory depth for their alternatives to science.

  270. BillyJoe7on 09 Apr 2017 at 6:36 am

    mumadadd,

    Thanks for asking.

    To provide the immediate context, I will give the extended quote at the end of this post.
    But here is the extended context:

    The quote comes from his 1991 book, “Consciousness Explained” (yes, Ian, he admits he is hyperbolising here!); Part 3, titled “The Philosophical Problems of Consciousness”; chapter 12, titled “Qualia Disqualified” (no, Ian, he is not saying that subjective experience or qualia do not exist, he is saying that the common intuitions about qualia are false); the subsection of chapter 12 titled “‘Epiphenomenal’ Qualia”, where he deconstructs the epiphenomenalist’s version of qualia. If you have the book, the actual quote is right at the end of the subsection on page 406, but you may need to read the whole subsection (about 8 pages) to get a good feel for what he is saying.

    He distinguishes two versions of “epiphenomenalism”, the cognitive scientist’s version and the philosopher’s version. Cognitive scientists define epiphenomena as non-functional by-products that have actual, but unintended, physical effects (i.e., the heat and hum of your computer). Philosophers define epiphenomena as non-functional by-products that have no physical effects. But, he says, the two definitions are often confused, especially by philosophers. In Dennett’s opinion, philosophers bait you with the philosopher’s definition but (presumably unintentionally) switch to the cognitive scientist’s definition in order to foster support for the philosopher’s version of “epiphenomenalism”. In his opinion, that is the only way the philosopher’s version of “epiphenomenalism” can be supported (unless you live solipsistically in your own world of ideas and epiphenomenal qualia, separated from the rest of the universe – because that is the only way your qualia can have no physical effects on the universe).

    So, Dennett has no problem with the cognitive scientist’s version (it fits in easily with materialism), but he has no truck with the philosopher’s version of epiphenomenalism (because a universe with epiphenomenalism that has no physical effects is indistinguishable from one without it) and, of course, he is critical of philosophers who bait and switch from one definition to the other in order to support the philosopher’s version of epiphenomenalism.

    In the quote, he is saying that IF the philosopher’s version of epiphenomenalism is true, THEN we are all p-zombies:

    The short quote:

    “Are zombies possible? They’re not just possible, they’re actual. We’re all zombies. Nobody is conscious — NOT in the systematically mysterious way that supports such doctrines as epiphenomenalism.”

    (You were right to put further emphasis on the word “not” – though only necessary for those who just don’t get it…because they don’t want to get it…because they don’t want their statement that Dennett is a loon to be false…because then the accusation would apply more aptly to them.)

    The extended quote makes it clearer:

    “There is another way to address the possibilities of zombies and, in some regards I think it is more satisfying. Are zombies possible? They’re not just possible, they’re actual. We’re all zombies. Nobody is conscious — not in the systematically mysterious way that supports such doctrines as epiphenomenalism. I can’t prove that no such sort of consciousness exists. I also cannot prove that gremlins don’t exist. The best I can do is to show that there is no respectable motivation for believing in it.”

    (His reference to gremlins is the silly idea that there could be invisible, undetectable gremlins inside the pistons of your car.)

  271. BillyJoe7on 09 Apr 2017 at 6:43 am

    Ian,

    Read mumadadd’s response and the extended quote above…revisited below with relevant emphasis…in the hope that you finally get it….but not holding my breath:

    “There is another way to address the possibilities of zombies and, in some regards I think it is more satisfying. Are zombies possible? They’re not just possible, they’re actual. We’re all zombies. Nobody is conscious — NOT in the systematically mysterious way that supports such doctrines as epiphenomenalism. I can’t prove that no such sort of consciousness exists. I also cannot prove that gremlins don’t exist. The best I can do is to show that there is no respectable motivation for believing in it.”


  272. Pete Aon 09 Apr 2017 at 7:45 am

    “[Dennett] is saying that IF the philosopher’s version of epiphenomenalism is true, THEN we are all p-zombies”

    Precisely!

  273. edamameon 09 Apr 2017 at 11:02 am

    mumadadd, I wouldn’t fret. His multiple drafts model isn’t hard to understand, but you probably feel like it is leaving something out, which is because it is. It is an old-fashioned view of the mind (not a shock: it is a quarter-century-old book by a verificationist philosopher who got his doctorate with Gilbert Ryle, and who thinks his primary job is to explain behavior). What surprises me is how many people genuflect to this relic written in the age of pre-neuroscientific philosophy.

    People who act patronizing, as if you just need to be a little smarter or read Dennett a few more times, are basically providing the courtier’s reply. If you have tried a few times and don’t see how it is supposed to work, maybe the problem isn’t you. Just move on. If he is right, it will come out in other ways in other authors. It’s not like he has a corner on how to express the truth, if he is indeed right. I’m always wary of a focus on one particular philosopher (e.g., Heidegger, Wittgenstein, Aquinas); this is typically a bad sign.

  274. edamameon 09 Apr 2017 at 11:05 am

    This is a good summary of the multiple drafts model. One thing I’d be wary of is any theory where someone says you need to read one particular author’s entire book and understand it in its entirety to really get what they are saying. That’s what the Heideggerians and Wittgensteinians say.

    http://www.scholarpedia.org/article/Multiple_drafts_model

    This might be a better point of reference than an entire bloody book, for discussion of his specific theory of consciousness. Which isn’t actually very complicated.

  275. hardnoseon 09 Apr 2017 at 12:13 pm

    “the commentators on this blog who attack their contrived straw-man depictions — ‘materialism’ and ‘materialists’ — have a dire lack of both explantory power and explantory depth for their alternatives to science.”

    There are alternatives to materialism, such as quantum consciousness. These are NOT alternatives to science; they are science. Materialism is an ideology; it is NOT science.

  276. edamameon 09 Apr 2017 at 12:25 pm

    One thing you will notice is that Dennett has trouble moving beyond metaphors (‘multiple drafts’, ‘cartesian theatre’, ‘fame in the brain’, etc), and his old-fashioned linguicentrism where he is still left perplexed about whether animals that do not use language even have consciousness. “What we can talk about in public if we choose is, ipso facto, what we are conscious of.” Seriously?!

    This is basically a 1950s quasi-behaviorist type of verificationist theorizing, where what you have to explain is behavior (verbal reports) and everything else is just an imponderable mystery. This is the kind of relic thinking I was talking about.

    For a pretty decent overview of multiple neuropsychological theories of consciousness, see:
    http://www.scholarpedia.org/article/Models_of_consciousness

  277. Pete Aon 09 Apr 2017 at 4:47 pm

    “[edamame] One thing you will notice is that Dennett has trouble moving beyond metaphors (‘multiple drafts’, ‘cartesian theatre’, ‘fame in the brain’, etc), and his old-fashioned linguicentrism where he is still left perplexed about whether animals that do not use language even have consciousness. ‘What we can talk about in public if we choose is, ipso facto, what we are conscious of.’ Seriously?!”

    Whereas the following is what Daniel Dennett and Kathleen Akins actually stated [my emphasis]:

    “What, then, is the importance of (verbalizable) recollectability? Intuitively, the ability to report a content is conclusive evidence of consciousness. Why should this be? Not because it is an infallible sign of what is going on in one’s mind–after all, a subject’s later recollections can fade or become distorted–but because our interpersonal communications, our discussions and comparisons, generate both the terms and the topics of consciousness. The personal level of explanation is defined by the limits of our abilities to respond to queries about what we are doing and why. What is off-limits to such inquiries, however cognitive or sophisticated, is sub-personal and unconscious. A reported episode or nuance, current or recollected, has left the privacy of the subpersonal brain and entered the interpersonal public forum of consideration and comparison. How are we able to do this, when we cannot similarly ‘introspect’ the private processes occurring in our kidneys or immune systems? The details of the answer are still to be worked out, but the main point to make is simply that any evolved and matured capacity to frame and utter speech acts must identify a domain of topics or contents about which such speech acts can be controllably formed. What we can talk about in public if we choose is, ipso facto, what we are conscious of. In species that lack anything functionally analogous to this ‘publication’ competence, it remains an open question whether anything like the personal/subpersonal distinction can be drawn, and that is why the attempts to extrapolate claims anchored in human consciousness to claims about the consciousness or lack thereof in other species are currently so imponderable. Until we have developed accounts of the specific sorts of competitions that sort out the use of resources in the brains of these species, we won’t have any leverage for settling such questions.”
    http://www.scholarpedia.org/article/Multiple_drafts_model#Probes.2C_the_ability_to_recollect.2C_and_the_personal_level_of_explanation

  278. bachfiendon 09 Apr 2017 at 5:01 pm

    Hardnose,

    The various quantum consciousness theories (and there are many) are all materialist theories. Just because they’re vague and incoherent, incapable of experimental testing, with their proponents grasping at any straw – claiming that it supports their highly improbable speculations (when it does nothing of the sort) – doesn’t make the theories non-materialist. Although they show many of the signs of being just woo.

    Materialist theories of the nature of reality aren’t just confined to classical mechanics. Quantum effects may be important in the functioning of the brain, but that’s highly conjectural. Quantum effects are important in the functioning of computers. If you’re claiming that brains employing quantum effects to function makes their functioning non-materialistic, then you’d also be claiming that the functioning of computers is non-materialistic. Yeah, right.

  279. Pete A on 09 Apr 2017 at 5:32 pm

    “Quantum effects are important in the functioning of computers.” Yes indeed! They are important in the functioning of many things, such as photographic film.

  280. edamame on 09 Apr 2017 at 8:29 pm

    PeteA: Your quote exemplifies exactly what I said. For one, we are conscious of much more than we can talk about in public (e.g., color shades I have no words for but can easily discriminate perceptually). Two, he gets perplexed about consciousness in animals without language, talking about this question (in classic philosopher style) as an imponderable.

  281. Pete A on 09 Apr 2017 at 8:39 pm

    edamame, Have you considered the possibility that “we are conscious of much more than we can talk about in public” can easily be explained by the fact that you haven’t bothered to study the existing science that adequately explains that which you are vaguely conscious of?

    It seems that you have yet to grasp your own illusion of explanatory depth!

  282. Pete A on 09 Apr 2017 at 9:08 pm

    imponderable [noun]: A factor that is difficult or impossible to estimate or assess.
    ‘there are too many imponderables for an overall prediction’
    https://en.oxforddictionaries.com/definition/imponderable

  283. bachfiend on 09 Apr 2017 at 9:36 pm

    Edamame,

    Or have you considered that the reason that ‘we are conscious of much more than we can talk about in public’ is because we just lack the words to do so? If you don’t have the relevant words, then you can’t talk about them.

    There are plenty of words for colours, often highly specific ones, with fine nuances, but there’s no word for skin colour, despite it being highly important as an indication of emotional state (for example blushing).

    The conscious life of animals without language is perhaps unknown and may be unknowable, but it's certainly something that can be pondered, so it's not imponderable. I recently read 'Other Minds: The Octopus and the Evolution of Intelligent Life' by Peter Godfrey-Smith. The author notes he's often observed octopuses in their dens undergoing rapid and elaborate skin colour changes without any need for camouflage (they're still and hidden away from potential predators) or signalling to other octopuses, and wondered whether it's a sign of dreaming. Or a rich inner conscious life.

    Octopuses with their rapid and elaborate skin colour changes have the material for a sophisticated language. Do they have one? No one knows. Octopuses with extremely few exceptions are solitary and not social animals, although they’re quite inquisitive.

    Baboons, with just 4 apparent vocalisations, manage to have a very rich social life. A baboon, hearing a lower-ranked individual giving a submissive call to a higher-ranked one, displays very little interest. But a higher-ranked individual giving the same call to a lower-ranked one elicits a lot of interest, as an indication that something significant has happened. And it indicates that animals with very little language, if any, are aware (?conscious) of social relationships.

  284. Pete A on 09 Apr 2017 at 10:30 pm

    bachfiend,

    Thanks for your well-illustrated comment.

    You have reminded me of the philosophical notion of how deeply interesting it would be for us to talk to a lion that had learnt human language. Only to realize after the fascinating learning experience that a lion that has learnt human language isn’t actually a lion!

  285. edamame on 10 Apr 2017 at 12:49 am

    Given that monkeys show the same kinds of sensitivity to similar illusions, with similar underlying representational machinery in the brain, there is no reason to posit language as necessary or sufficient for consciousness. That’s a relic of an anachronistic pre-evolutionary and pre-neuroscientific way of thinking.

    So, while animal communication systems are interesting and worthy of study in their own right, they aren’t really relevant to elucidate the basic neuronal mechanisms of perceptual consciousness, for instance in your dog’s brain.

  286. bachfiend on 10 Apr 2017 at 3:09 am

    Edamame,

    I never said that language was necessary for consciousness. I wasn't even stating that it was necessary for an animal to be conscious of a concept.

    ‘Given that monkeys show the same kinds of sensitivity to similar illusions, with similar underlying representational machinery in the brain, there is no reason to posit language as necessary or sufficient for consciousness’.

    Evidence please for the first two statements. They’re probably true, but I wonder how you would go about showing that they’re true. The third statement is also probably true, but it’s a non sequitur.

  287. Ian Wardell on 10 Apr 2017 at 9:27 am

    edamame
    “Given that monkeys show the same kinds of sensitivity to similar illusions, with similar underlying representational machinery in the brain, there is no reason to posit language as necessary or sufficient for consciousness”.

    When I was at uni my fellow students all seemed to express the view that consciousness requires language. The thing is, some leading scholar spouts forth the most ridiculous absurdity imaginable, and they just passively soak it up! The complete stupidities believed by many so-called "educated" people are mind-blowing.

    A similar pattern occurs on this blog, with many people here idolising Dennett.

  288. edamame on 10 Apr 2017 at 10:39 am

    bachfiend.
    Some evidence:
    There is a ton of psychophysics and neuroscience on the sensory side. To name just a few:
    Blindsight
    https://www.ncbi.nlm.nih.gov/pubmed/7816139

    Binocular rivalry
    https://www.ncbi.nlm.nih.gov/pubmed/8596635

    Illusory contours
    https://www.ncbi.nlm.nih.gov/pubmed/19046395

    More generally, there is no break in evolution in the underlying computational mechanisms that support sensory representation, attention, working memory, decision-making: all the processes that seem clearly important in consciousness are there in full bloom in nonhumans (pubmed will show you the huge literature on this quickly). None of these processes popped on the scene with language (which is an idiosyncratic and weird system of communication that only 1 out of 40,000 vertebrate species evolved).

    Further, the brain-stem core that supports the waking conscious state is common to all vertebrates: the thalamocortical processing stream and reticular activating system for arousal/wakefulness:
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2701283/
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2962410/

    This seems to be part of the machinery that is shut down when you are anesthetized:
    https://www.ncbi.nlm.nih.gov/pubmed/25172271

    The nice thing about animals is that you can manipulate this; recently, you can even wake mice from anesthesia by optogenetically stimulating the arousal system:
    http://www.pnas.org/content/113/45/12826.abstract

    Comments
    I am not saying we can prove that animals have subjective experience, this is the wrong standard. In science, we don’t prove, we find what is reasonable to conclude based on the cumulative evidence and our best thinking.

    Note also I realize that Dennett doesn’t think that language is necessary/sufficient for consciousness. However, his human-first, language-first, heterophenomenology-first approach is not biological, and leads him to say very strange things (like how much of a mystery it is whether nonhumans are conscious because they can’t talk). One thing to notice is that philosophers who are fond of language as a model for human brain processes tend to ignore sleep, anesthesia, dreaming, and other “mere physiological” processes. It’s because they don’t have the tools to handle them from within their linguocentric perspective.

    Neuroscientists and psychologists have been tackling this stuff for decades in nonhuman model systems, looking on at Dennett with bemusement as he uses a lot of words and metaphors to say a few interesting things while contributing…not so much, yet remaining extremely popular among nonspecialists. This is typical of philosophers: how much have philosophers contributed to the first-order development of physics, math, geology, etc. over the past 50 years?

  289. edamame on 10 Apr 2017 at 10:40 am

    bachfiend, there is a long post with lots of links awaiting moderation, probably because it has so many links. :O

  290. LKM on 10 Apr 2017 at 12:05 pm

    I think the premise that an AI needs to be conscious or self-aware in order to be dangerous is wrong. All that really needs to happen for an AI to be dangerous is:

    – It needs to be able to control systems that can harm us (this is already the case; where I live, simple AIs control the water supply, for example)
    – It needs to be allowed to continue learning without constant supervision of what it is doing (this is currently usually not the case; the learning algorithms are typically turned off when AIs are put into production)
    – The feedback given to the program must allow for a solution that provides positive feedback, but that goes against the original intention of the people who designed the feedback system (e.g. a water treatment system is rewarded for lowering lead poisonings in people, and thus kills everybody to get that number to zero)

    None of those factors require self-awareness or consciousness. In fact, none of them require particularly intelligent AIs.
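
    A minimal sketch of that third point, using a toy, entirely hypothetical reward function and made-up numbers (nothing here comes from any real water-treatment system):

        # Toy illustration of a mis-specified feedback signal (all names and
        # numbers are hypothetical). The "controller" is scored only on lead
        # poisonings, so the action that cuts off the water entirely scores best.

        ACTIONS = {
            # action: (lead_poisonings_per_year, people_with_water)
            "do_nothing":      (120, 1_000_000),
            "add_filtration":  (5,   1_000_000),
            "replace_pipes":   (1,   1_000_000),
            "shut_off_supply": (0,   0),  # nobody is poisoned by water they never receive
        }

        def reward(outcome):
            """Fewer poisonings = higher reward; water delivery is not part of the signal."""
            poisonings, _people_served = outcome
            return -poisonings

        best = max(ACTIONS, key=lambda a: reward(ACTIONS[a]))
        print(best)  # -> "shut_off_supply": optimal for the reward, contrary to the intent

    No consciousness required – just an objective that omits something the designers cared about.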

  291. bachfiend on 10 Apr 2017 at 4:56 pm

    Ian,

    When were you in university and what were you studying? You haven't given evidence, you've recounted an anecdote, which is recovering a memory – perhaps a very old memory – and memory is notoriously unreliable.

    Recovered memories aren't pristine records of what happened at a certain time (and the recollection that your fellow students appeared to be telling you that they'd learnt that language was necessary for consciousness would be unlikely to have come from a single occasion); they're edited according to what the person believes at the time the memory is recovered.

    You’ve developed a very non-materialist worldview, including the idea that the mind is non-material -somehow bound to the brain in some way, so that when the brain dies the mind dies also (if I understand you correctly) – so you’ve developed the habit of editing your perception of whatever anyone with different views has actually expressed so as to create straw man arguments you can more easily dismiss.

    I don’t think anyone commenting on this blog has expressed the view that species without language don’t have consciousness. It’s perhaps unknown, possibly even unknowable, whether octopuses, lizards or honey bees have consciousness without them having a language we can understand.

    What good is consciousness? Actually not much. The brain spends a lot of time shifting skills it laboriously learned into the subconscious, where they're performed flawlessly and unconsciously – including riding a bike, playing the violin (if you're a professional violinist) and even language. The symbols on a page and the sound vibrations in air representing utterances in languages you've learned are interpreted effortlessly and unconsciously, without your having to think consciously about what the words mean.

    Consciousness is very useful for developing new skills. And being able to respond to novel situations not experienced by the species previously. I take the view that species which demonstrate the ability to learn new habits (so the habits aren’t hard wired somehow genetically) have consciousness, so I think that octopuses are conscious. I just don’t know about lizards or honey bees. Perhaps, perhaps not.

  292. edamame on 10 Apr 2017 at 5:27 pm

    bachfiend, my response to you seems to be stuck in moderation limbo, so I pasted it here:
    https://justpaste.it/15dxo

    I resisted the urge to edit it…

    One quote:
    “I am not saying we can prove that animals have subjective experience, this is the wrong standard. In science, we don’t prove, we find what is reasonable to conclude based on the cumulative evidence and our best thinking.”

    And because it is Monday I have to work now so will unfortunately have to bow out yet again from this interesting time-sink of a thread.

  293. mumadadd on 11 Apr 2017 at 7:51 am

    BJ7,

    Thanks for clarifying the Dennett quote.

  294. mumadadd on 11 Apr 2017 at 7:52 am

    edamame, thanks for the links.

  295. Paul Parnell on 12 Apr 2017 at 9:16 pm

    # edamame

    Interesting link. But what do you think of John Searle's Chinese Room? Remember that Searle was a materialist despite being something of a mysterian on A.I.

    Actually I’m not sure Searle would admit to being a mysterian.

  296. TheTentacles on 13 Apr 2017 at 1:37 am

    bachfiend: do you recommend Peter Godfrey-Smith's book? I'll admit I am a sucker for cephalopods, so I'll probably enjoy it anyway…

    I posted this on the recent memory post, but to keep it more closely linked I’ll repost here too:


    As I don't think it has been mentioned, another approachable and "from-the-trenches" neuroscience-based book on consciousness research I would recommend is "Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts" by Stanislas Dehaene. I recently went to a talk of his, mostly on his work on number and letter representation in animals and humans, but chatted to him afterwards about visual perception and consciousness.

    Regarding Tononi, this is a fairly nice and recent review of his theory by him and Christof Koch (I think the article is open access). IIT is very top-down in its approach, and I certainly don't really grok how he goes from phenomenology to calculus(!?), but as I study the role of neural feedback, his theory positively tickles my confirmation bias!

    http://rstb.royalsocietypublishing.org/content/370/1668/20140167

  297. TheTentacles on 13 Apr 2017 at 1:42 am

    oh and edamame, excellent post @10 Apr 2017 at 10:39 am!

  298. BillyJoe7 on 14 Apr 2017 at 4:12 am

    For anyone still interested in this topic…

    The following is a link to a well-written, informative, and nuanced article on a topic relevant to this thread. It was published recently in "The New Yorker" and written by the Indian-American physician Siddhartha Mukherjee. The question explored is whether an AI can outdo an MD in diagnosis, with special reference to skin lesions, specifically melanoma.

    [Siddhartha Mukherjee is the author of two excellent books, "The Emperor of All Maladies: A Biography of Cancer" and "The Gene: An Intimate History". Both books are a pleasure to read as well as accurate factual accounts of their subject matter, interlaced with the author's family experiences with cancer and a genetic disorder respectively]

    http://www.newyorker.com/magazine/2017/04/03/ai-versus-md

    I guarantee you will not be disappointed by the article, and I can recommend his books without hesitation.

  299. TheTentacles on 14 Apr 2017 at 5:48 am

    I can also recommend it as a nuanced and interesting read (I also linked to it somewhere up the comments @03 Apr 2017 at 12:46 pm). I found it a bit wordy, but the author went to the source, interviewing the godfather of modern AI systems, Geoffrey Hinton, and his somewhat provocative statements are very well contextualised by Siddhartha.

  300. BillyJoe7 on 14 Apr 2017 at 9:46 pm

    …sorry, I didn't see that. I think it's only wordy if you are familiar with the material; for the educated layman, it's pretty spot-on in my opinion.

  301. BillyJoe7 on 14 Apr 2017 at 11:03 pm

    Paul,

    “What do you think of John Searle’s Chinese Room?”

    Searle’s original purpose was to show that AI machines have no understanding, which he later expanded to include consciousness. It is a variation on the Turing test. He puts the human inside the computer to show that there is no understanding by that human of Chinese.

    There are many objections to JS’s Chinese Room.

    One is that the whole system understands Chinese, even if the human inside it reading the instructions, accessing the memory files, executing the instructions, and typing the output does not.
    Searle’s objection was that, if the human inside the machine could memorise the instructions and the contents of the files, execute the instructions in his head, and then type out the response, he would constitute on his own the entire system and still not understand Chinese.
    However, it is then no longer obvious that he would not understand Chinese. On a quick read, what this human is required to do sounds achievable. In actual fact, paying attention to the details, what he actually needs to do is vastly more complicated. He has to memorise ALL the instructions and ALL the contents of the memory files, and execute ALL the instructions, ALL in his head. Who is to say that, if he could do ALL this, he would not also come to understand Chinese?

    Another objection is similar to the main objection to Mary's Room. Mary is born and raised in a black and white room, using a black and white computer screen to obtain total knowledge about colour. If physicalism is true, then she would not learn anything when she steps outside the room and experiences colour for the first time.
    The problem is that what Mary would need to know in order to know EVERYTHING about colour is vastly understated. When the vast amount of knowledge that she would actually have to have is taken into account, it is no longer obvious that she would not know what it is like to experience colour.

    Another objection is that "experiencing colour" is a bit of knowledge that Mary does not yet have until she steps outside that room. It's a bit like Mary learning everything there is to know about bicycles and about riding bicycles, and expecting her to be able to then simply jump on a bicycle and start riding it. Perhaps if she really did have ALL the knowledge, she WOULD be able to ride that bike. Or perhaps knowing everything about bikes and cycling includes getting on a bike and practising until you can actually ride it.

    Another objection is that for the human inside the machine to do all that Searle requires of him would take millions of years to produce a single response. The analogy to the human brain fails on the fact that the human brain performs billions of operations per second, whereas the human inside the machine manipulates objects inside the machine at the rate of about one per second at most. It would be like trying to create an electric current in a wire sufficiently strong to light up a light bulb by moving a magnet up and down at a rate of once per second, when it actually takes trillions of cycles per second to achieve this feat.
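
    A rough back-of-the-envelope version of that timing point, where every figure is a ballpark assumption rather than a measurement:

        # How long would Searle's clerk take to emulate one second of brain activity?
        # (All numbers below are rough assumptions, for illustration only.)

        brain_ops_per_second = 1e15    # commonly cited ballpark for synaptic events per second
        clerk_ops_per_second = 1.0     # the clerk shuffles roughly one symbol per second
        brain_seconds_per_reply = 1.0  # suppose one conversational reply takes ~1 s of brain time

        ops_per_reply = brain_ops_per_second * brain_seconds_per_reply
        clerk_seconds = ops_per_reply / clerk_ops_per_second
        clerk_years = clerk_seconds / (60 * 60 * 24 * 365)

        print(f"{clerk_years:,.0f} years per reply")  # roughly 32 million years

    On those assumptions, the thought experiment asks us to imagine an interaction stretched over tens of millions of years per reply, which is part of why the intuition it pumps is suspect.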

  302. grabula on 19 Apr 2017 at 7:01 am

    I’ve always believed that the move to AI, or an artificial life form is a next step in our evolution, that those ‘beings’ would be our antecedents.

  303. skep4life on 20 May 2017 at 8:25 pm

    In the 5/19/17 episode revisiting AI Steven Novella said, “we don’t know how to make self awareness at this point.”

    I feel bad for Steven’s wife given the evidence suggests at one point he did know.

    I guess at my core I’m still waiting to be convinced that humans aren’t nature’s greatest artificially intelligent machine.

    And waiting to be convinced that free will is real and that all organic and biotic matter isn’t governed by laws of physics and premandated algorithms.

    And waiting to be convinced that your view on the subject has less to do with the degree of your asperger’s and what shape it takes than with your intelligence, imagination and individual psychology.

    Take that for data.

  304. TheTentacles on 06 Jun 2017 at 9:53 pm

    For those still interested, IEEE has an excellent engineering view of the recent neuroscience and computer science surrounding AI and neuromorphic computing:

    http://spectrum.ieee.org/static/special-report-can-we-copy-the-brain

    …including a general summary of the $100-million US MICrONS project:

    http://spectrum.ieee.org/biomedical/imaging/ai-designers-find-inspiration-in-rat-brains

    …Jeff Hawkins's idea about the critical importance of sensorimotor embodiment:

    http://spectrum.ieee.org/computing/software/what-intelligent-machines-need-to-learn-from-the-neocortex

    …the very divergent opinions about when/if general AI will arrive:

    http://spectrum.ieee.org/computing/software/humanlevel-ai-is-right-around-the-corner-or-hundreds-of-years-away

    …and Koch and Tononi’s very basic summary of IIT and why (I think wrongly) they limit consciousness to neuromorphic hardware only:

    http://spectrum.ieee.org/computing/hardware/can-we-quantify-machine-consciousness
