Jul 10 2017

Why Are We Conscious?

While we are still trying to sort out exactly what processes and networks in the brain create consciousness, we are also still uncertain why we are conscious in the first place. A new study tries to test one hypothesis, but before we get to that, let’s review the problem.

The question is – what is the evolutionary advantage, if any, of our subjective experience of our own existence? Why do we experience the color red, for example – a property philosophers of mind call qualia?

One answer is that consciousness is of no specific benefit. David Chalmers imagined philosophical zombies (p-zombies) who could do everything humans do but did not experience their own existence. If a brain can process information, make decisions, and engage in behavior without actual conscious awareness, why does conscious awareness exist at all?

This idea actually goes back to soon after Darwin proposed his theory of evolution. In 1874 T.H. Huxley wrote an essay called “On the Hypothesis that Animals are Automata.” In it he argued that all animals, including humans, are automata, meaning their behavior is determined by reflex action only. Humans, however, were “conscious automata” – consciousness, in his view, was an epiphenomenon, something that emerged from brain function but wasn’t critical to it. Further, he argued that the arrow of cause and effect led only from the physical to the mental, not the other way around. So consciousness did nothing. We are all just passengers experiencing an existence that carries on automatically.

I reject both Chalmers’ and Huxley’s notions. There are many good hypotheses as to what benefit consciousness can provide. Even the most primitive animals have some basic system of pleasure and pain, stimuli that attract them and stimuli that repel them. Vertebrates evolved much more elaborate systems, including the complex array of emotions we experience. In more complex animals the point of such emotions is to provide motivation to either engage in or avoid certain behaviors. Often different emotions conflict and we need to resolve or balance the conflict. Consciousness would seem to be an advantage for such complicated motivational reasoning. Terror is a good way to get an animal to marshal all of their resources to flee a predator.

Problem solving could also benefit from the ability to imagine possible solutions, to remember the outcome of prior attempts, and to make adjustments and also come up with creative solutions.

Consciousness might also help us distinguish a memory from a live experience. They are both very similar, activating the same networks in the brain, but they “feel” different. Consciousness may help us stay in the moment while accessing memories without confusing the two.

Attention is another critical neurological function in which it seems consciousness could be an advantage. We are overwhelmed with sensory input and the monitoring of internal states and memories. We actually use a great deal of brain function just deciding where to focus our attention and then filtering out everything else (while still maintaining a minimal alert system for danger). The phenomena of consciousness and attention are intimately intertwined and it may just not be possible to have the latter without the former.

Some have argued that consciousness also helps us synthesize sensory information, so that when we experience an event the sights and sounds are all stitched together and tweaked to form one seamless experience.

And finally we get to the hypothesis addressed by the current study – that consciousness allows for faster adaptation and learning (which would certainly be an adaptive advantage). In their study, Travers, Frith, and Shea compared adaptation to conscious vs subliminal information. Subjects viewed a computer screen on which they were first prompted with an “X” in the middle of the screen. This was then replaced by arrows pointing either to the left or the right. In one group the arrows would stay on the screen for 33 ms, in the other for 400 ms. The shorter duration is not long enough to register consciously, but previous experiments have shown it is enough to respond unconsciously. The longer time is long enough for conscious awareness.

After the arrows, an “X” would appear on either the left side or the right side of the screen, and the subjects had to quickly identify which side. When the arrows pointed in the opposite direction to the eventual side of the X, this introduced a short delay in response time, even when the arrows lingered for only 33 ms. The subjects had to mentally adjust for the conflicting information.
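To make the paradigm concrete, here is a toy simulation of the predicted pattern. All of the numbers (baseline response time, congruency penalty, learning rate) are hypothetical stand-ins of mine, not parameters from the study:

```python
import random

def simulate_subject(conscious: bool, n_trials: int = 100) -> list:
    """Toy model: incongruent arrow cues add a response-time penalty,
    and only consciously seen (400 ms) cues let the subject learn to
    discount them, shrinking the penalty over trials."""
    base_rt = 350.0   # hypothetical baseline response time, in ms
    penalty = 60.0    # hypothetical cost of an incongruent cue, in ms
    rts = []
    for _ in range(n_trials):
        incongruent = random.random() < 0.5
        rt = base_rt + (penalty if incongruent else 0.0) + random.gauss(0, 15)
        rts.append(rt)
        if conscious and incongruent:
            penalty = max(0.0, penalty * 0.95)  # gradual conscious adaptation
    return rts

# The 400 ms group's incongruency penalty shrinks; the 33 ms group's does not.
subliminal_rts = simulate_subject(conscious=False)
conscious_rts = simulate_subject(conscious=True)
```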

The results show that the group exposed to the 400 ms arrows, who were therefore conscious of their presence, was able to adapt to the misinformation and improve their response times, while the 33 ms group was not. The conscious exposure group was also able to experiment with a variety of strategies to deal with the misinformation until they hit upon the best solution.

In other words, when the subjects were consciously aware of the incongruent information they were better able to adapt and learn to deal with it. Subjects were not able to adapt to the unconscious information.

This is an interesting experiment with a provocative result. Of course, with these types of experiments it is difficult, if not impossible, to draw firm conclusions from a single study. The subject matter is extremely complex and it takes many experiments to account for all variables. There may also be differences other than conscious awareness in how we process 33 ms vs 400 ms information. But the researchers did come up with a way to specifically test their hypothesis.

I don’t think this answers the question, why are we conscious? Perhaps all of the factors I outlined above are important, and there may be others. It does provide preliminary support for the notion that there are potential advantages to conscious awareness over the subconscious processing of information, at least in some situations.

And to be clear, our brains engage simultaneously in conscious and subconscious processing of information which both affect our behavior. Both are important and have different advantages and disadvantages. In fact, we have been learning more in the last few decades about how extensive and important subconscious processing is. The two kinds of information processing work together to create the final result of human thought and behavior.

It is also interesting to consider the parallels in artificial intelligence technology. We have been discovering how powerful AI systems without consciousness can be. In fact it seems that AI systems will be able to do much more than we previously imagined without a conscious layer to them at all. It will be interesting to see what kinds of limits we run into, and if it will ever be necessary to add a conscious layer in order to achieve certain functionality.

But we have to be aware also that computers are different from brains. Brains evolved and had to work with the material at hand. Perhaps consciousness was an available solution to the needs of complex adaptation and learning. It may not be the only one, however. We can top-down design computers without constraint. There are also things that silicon does better than biology, and we may leverage that in ways not available to organic brains.

I suspect a lot will depend on what we want the AI systems to do. But it seems for now they will be able to do what we want, such as drive cars, without the need for conscious awareness.

89 Responses to “Why Are We Conscious?”

  1. BillyJoe7 on 10 Jul 2017 at 8:52 am

    “Some have argued that consciousness also helps us synthesize sensory information, so that when we experience an event the sights and sounds are all stitched together and tweaked to form one seamless experience”

    Shades of the dualistic “ghost in the machine” or the misconceived “homunculus”. 🙁

  2. PaulT on 10 Jul 2017 at 9:12 am

    The study is, for me, another validation of the theory that consciousness is a complex adaptation of the “awake” form of REM sleep. The theory that REM sleep is an adaptation allowing emotional brains to store up (rehearse) memories that can later be recalled and used, without actually being put in potentially life-threatening situations, seems like a reasonable one. That very definition matches up closely with what some definitions of consciousness are – being able to separate at least part of our emotional and logical processes from the impulse of the moment. It doesn’t seem that far a leap to imagine an animal that already enjoyed technological and cultural domination of its environment making the leap from “sleeping” objectivity to also enjoying this advantage while awake.

  3. TheTentacles on 10 Jul 2017 at 10:43 am

    There are several other, better methods to render otherwise identical stimuli “invisible” to perception and therefore “unconscious”. The difference between 33 ms and 400 ms is highly significant to visual perception; I’ve yet to read the paper in detail, so perhaps this study is better controlled.

    Note this paper is a preprint, so has yet to make it through peer review. Chris Frith is one of the most distinguished cognitive neuroscientists, so I’d be a little surprised if he used integration time, but then he isn’t a psychophysicist and this is out of his field of expertise…

    Binocular rivalry was the “original” tool to investigate consciousness, inspired by one of the greatest experimental polymath scientists, Hermann von Helmholtz. But modern techniques like continuous flash suppression or motion induced blindness are better techniques (so says Randolph Blake, who dedicated most of his career to studying binocular rivalry).

    BillyJoe7: many non-dualist scientists think consciousness emerges from/unifies multiple parallel representations; this does not imply dualism or the presence of homunculi.

    @PaulT: here is a paper by J Allan Hobson and Karl Friston on the explicit link to sleeping as a form of predictive virtual reality: https://doi.org/10.3389/fpsyg.2014.01133 — there is a somewhat different variant of their paper here: Hobson JA & Friston KJ (2014) “Consciousness, Dreams, and Inference: The Cartesian Theatre Revisited” Journal of Consciousness Studies 21(1-2) p. 6–32

  4. Steven Novella on 10 Jul 2017 at 10:51 am

    BJ – what Tentacles said. Also “stitched together” is just a metaphor. That is, in fact, a core way that metaphors work, from the concrete to the abstract. No dualism is implied.

    To get more technical, different circuits in the brain compare various sensory streams and then modify them so that they are more consistent with each other. This is all just brain processing.

  5. Daniel Hawkins on 10 Jul 2017 at 11:02 am

    But we have to be aware also that computers are different from brains. Brains evolved and had to work with the material at hand. Perhaps consciousness was an available solution to the needs of complex adaptation and learning. It may not be the only one, however. We can top-down design computers without constraint. There are also things that silicon does better than biology, and we may leverage that in ways not available to organic brains.

    I suspect a lot will depend on what we want the AI systems to do. But it seems for now they will be able to do what we want, such as drive cars, with[out?] the need for conscious awareness.

    (I suspect you meant “without” in that last sentence?)

    I mostly agree with you on this article, Steve; however, I think you missed the mark a bit on the AI sections. Please consider bringing on a guest expert to the SGU to discuss how modern AI research and machine learning works.

    For instance, you mention that “we can top-down design computers without constraint,” however in the context of machine learning that is not actually true, or is at least misleading and incomplete. Data scientists provide explicit constraints on how a neural network “learns”, and provide implicit constraints through the training data, but they do not top-down constrain the output of the network via hard-coded algorithms. In fact, top-down designed algorithms currently cannot match the performance of machine learning algorithms, and the gap is increasing every year.

    Secondly, we’re continuing to decrease the amount of human input needed in these machine learning systems. That is an active goal of the research into unsupervised learning and automated feature learning. Why? Because designing the structure of neural networks is costly and time-consuming, and the resulting networks often perform poorly when facing data only moderately different from what they were trained on. Why? Because basic neural networks don’t have access to the outside domain knowledge that humans do. And whether you’re talking about driving cars, recognizing faces, translating languages, or anything else, that lack of outside domain knowledge will always limit the performance of a neural network.

    In order to get better performance, you’d need to integrate many different neural networks, which share expertise. You’ve already covered this a couple of weeks ago on the podcast. But while the article y’all mentioned then described a small number of neural networks working together, you’d need far more networks than that to handle the complexity of making good decisions in our world. Perhaps you’d have other layers that recruited different groups of neural networks into specific tasks, and then you’d need something to “decid[e] where to focus [its] attention and then filte[r] out everything else.”

    In short, I don’t think artificial consciousness is as unnecessary as you’ve implied here and in the past.

  6. Steven Novella on 10 Jul 2017 at 11:13 am

    Daniel,

    By top down I mean that we can design the computer technology from scratch. We are not constrained in the same way evolution is. Evolution cannot start from scratch. Evolution had a lizard brain and added cortex to create a mammal brain, then added more layers to create a primate brain, then added neocortex to create a human brain. To go from primate to human it could not start from scratch or undo more primitive basic brain anatomy.

    Your broader point about AI is interesting, and exactly why I qualified my statement by saying that it depends on what we want the AI to do. What I think is clearly true is that we have exceeded predictions as to what unconscious AI can do, and we continue to do so.

    The alleged need for general AI has been progressively pushed off into the future as we get better and better at specific AI systems. I also left it as an open question – what limits we will run into and if it will become necessary to add consciousness to AI to achieve a desired ability. We are not there yet, and until we get there it is an open question.

  7. BillyJoe7 on 10 Jul 2017 at 11:15 am

    Each to his own, but “stitched together” sounds like a bad metaphor for the technical description “different circuits in the brain compare various sensory streams and then modify them so that they are more consistent with each other”. It sounds more like a metaphor for the Cartesian theatre, intentional or not.

  8. Daniel Hawkins on 10 Jul 2017 at 11:44 am

    @Steve,

    I agree with everything in your response, including that it is an open question and that we’ll have to wait and see. That being said, my prior toward artificial consciousness arising naturally (although probably not for decades at least) out of current research programs is quite a bit higher than yours, at least based on what you’ve expressed on the podcast. My reading of current research leads me to believe that it is likely that it will arise naturally, your reading of the same evidence leads you to believe that it would require a dedicated research program specifically geared at achieving consciousness for its own sake.

    I was responding more to those views than what you’ve written explicitly here, because I felt that what you wrote regarding how the brain works, and what purpose consciousness has in humans, provides decent evidence for why machine learning will need something like consciousness in order to solve the more complex problems we will inevitably pose to it.

    That being said, we are so far away from achieving anything like artificial consciousness that my priors are not very strong either way. I just don’t think it follows that artificial consciousness may not be necessary because self-driving cars already do a very good job. We’re at the stage of picking the low-hanging fruit for machine learning. The solutions we’ve come up with don’t have nearly as much success with the more interesting, but far more complex, problems we’d like to tackle—e.g. reliable fact-checking, generating synopses of articles, novels, or videos, generating 3D models from 2D photographs, etc. We’ve made decent stabs at each of those, but they are still far from achieving human-level performance, and not likely to be solved by throwing more training data at them.

  9. edamame on 10 Jul 2017 at 12:48 pm

    BillyJoe look up McGurk effect and see demonstration. That’s just one example of multisensory integration. It’s not controversial or dualistic.

    Three cheers for the Cartesian Theatre! 😛

  10. Steven Novella on 10 Jul 2017 at 1:09 pm

    Daniel – we are not too far off, and my feelings have been evolving on this issue as the technology advances. My primary point is that I think it is less likely than I used to think. This is mostly because dedicated AI systems are more powerful than I thought they could be, at least based on popular discussions of AI.

    It will be fascinating to follow for exactly these reasons – when will it become necessary, and for what tasks, and how will it emerge?

    But I would add that this statement: “The solutions we’ve come up with don’t have nearly as much success with the more interesting, but far more complex, problems we’d like to tackle—e.g. reliable fact-checking, generating synopses of articles, novels, or videos, generating 3D models from 2D photographs, etc. ”

    Sounds a lot like what experts have been saying my entire life, meanwhile the threshold has been steadily advancing. Every time so far experts say AI won’t be able to do some task, it eventually does. So I take such statements with a massive grain of salt now.

  11. Daniel Hawkins on 10 Jul 2017 at 2:00 pm

    @Steve

    Every time so far experts say AI won’t be able to do some task, it eventually does. So I take such statements with a massive grain of salt now.

    I agree, but the difference is how those tasks will be achieved—will it be by simply throwing more data and computing power at the problem, or will those tasks require new techniques and ideas? I believe the latter, and I’m sure you do as well. But in particular, I believe that the kind of breakthroughs we will need could very plausibly lead to artificial consciousness.

    Right now we’ve more or less achieved the limit of what you can do with “simple” neural networks, at least in some domains. The solution to the lack of regularity and generalizability has been to throw more training data at the model. And that works to an extent, but you very quickly reach diminishing returns, and very quickly (exponentially) increase the computation time. This is why Google and other researchers are looking at ways to build more complex networks, where components are trained on subsets of different data, and then their expertise is synthesized by another layer on top of those networks.
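    A minimal sketch of that kind of architecture, as I understand it: several small “expert” networks, with a gating layer on top that synthesizes their outputs. The shapes and the numpy implementation are illustrative assumptions of mine, not any specific system from Google or elsewhere:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expert(x, W):
        """One 'expert': a single-layer network with a tanh nonlinearity."""
        return np.tanh(x @ W)

    # Three experts, each (hypothetically) trained on a different data subset.
    experts = [rng.normal(size=(4, 2)) for _ in range(3)]
    W_gate = rng.normal(size=(4, 3))  # gating layer: input -> expert weights

    def mixture(x):
        """Synthesize the experts' outputs with input-dependent softmax weights."""
        logits = x @ W_gate
        gate = np.exp(logits) / np.exp(logits).sum()         # softmax over experts
        outputs = np.stack([expert(x, W) for W in experts])  # shape (3, 2)
        return gate @ outputs                                # weighted combination

    print(mixture(rng.normal(size=4)))
    ```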

    That process is no different from how we make progress in every other scientific field—a basic concept is built upon, expanded, combined with new insights, and used to perform increasingly impressive tasks. The kinds of problems modern machine learning/neural networks face at the very least resemble the kinds of challenges biological brains face—incorporating vast amounts of data into a cohesive model of the world, and using that model to make assessments/decisions about new data. So that’s why I find it plausible that machine learning researchers will hit upon solutions similar to the ones that evolved. Certainly other solutions may exist, but we know of one example already.

  12. MosBen on 10 Jul 2017 at 2:47 pm

    “In it he argued that all animals, including humans, are automata, meaning their behavior is determined by reflex action only. Humans, however, were “conscious automata” – consciousness, in his view, was an epiphenomenon, something that emerged from brain function but wasn’t critical to it.”

    I’m not an expert, but I have followed some of the evolving understanding of “free will” over the last few years. As I understand it, there’s some compelling data suggesting that our brains start our bodies taking actions before our conscious minds make the decision, so what’s really happening is that our consciousness is coming up with post-hoc reasons for what our bodies are doing, meaning that we didn’t really make a “choice”. This sounds rather a lot like what Huxley was saying, though Steve rejects it. Is it just a matter of degree, that is, that Huxley’s statements go too far in saying that the conscious mind does nothing? Can someone square this for me?

  13. Karl Withakay on 10 Jul 2017 at 3:08 pm

    “Sounds a lot like what experts have been saying my entire life, meanwhile the threshold has been steadily advancing. Every time so far experts say AI won’t be able to do some task, it eventually does.”

    Tell me about it. I seriously (non-figuratively, literally) think we are on the brink of developing a web page captcha “I’m not a robot” system that an AI can pass and I can’t.

  14. Steven Novella on 10 Jul 2017 at 3:46 pm

    MosBen – That research, showing that decisions are made prior to conscious awareness, only predicts decisions 60-70% of the time. The current conclusion is that the subconscious processing makes a preliminary decision, but then the conscious level can accept or reject that decision. Consciousness has hierarchical control.

    It is better to think of the brain as having constant subconscious processing doing most of the heavy lifting, with consciousness floating on top, attending to a small subset of the subconscious processing and with ultimate control. That control takes a huge mental effort, and therefore should be used strategically.

    Further, the brain is constantly trying to automate frequent activity by learning how to do it subconsciously, relieving the more resource-intensive conscious processing of the burden. But we have to do it consciously first.
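    As a toy illustration of that hierarchy (my sketch, not a model from the literature): a fast subconscious process proposes an action, and a slower, costlier conscious check can accept or veto it. All of the specifics here are invented for illustration:

    ```python
    import random

    def subconscious_propose(stimulus: str) -> str:
        """Fast, cheap, habit-based proposal (most of the heavy lifting)."""
        habits = {"green light": "go", "red light": "stop"}
        return habits.get(stimulus, "hesitate")

    def conscious_review(stimulus: str, proposal: str) -> str:
        """Slow, expensive check with veto power, used only when attending."""
        if stimulus == "green light" and random.random() < 0.05:
            return "stop"  # e.g. consciously noticing a pedestrian: veto the habit
        return proposal    # usually the preliminary decision is accepted

    def act(stimulus: str, attending: bool) -> str:
        proposal = subconscious_propose(stimulus)
        return conscious_review(stimulus, proposal) if attending else proposal

    print(act("green light", attending=True))
    ```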

    So, in reality there is a more complex relationship between the subconscious and conscious mind. The conscious mind is not superfluous.

  15. BaS on 10 Jul 2017 at 5:14 pm

    Typo: “to have the former latter without the former”

  16. mink on 10 Jul 2017 at 5:25 pm

    “… attending to a small subset of the subconscious processing and with ultimate control.” Let me start by saying I’m not a neurologist nor an AI expert. But I’m not at all convinced that conscious thought has “ultimate control” over what we do. As an anecdotal example, I recently had the opportunity to stand on one of those glass floors that are hundreds of meters in the air. I could observe that it was safe. I could see that thousands of other people had stood on the glass. I could even make myself walk on the glass briefly. I absolutely could not will myself to sit on the glass. I observed that many other people had this same type of irrational fear. I consciously wanted to photograph myself sitting on the glass, but I didn’t possess the power to make myself do it. This is but one anecdote, but it fits with a metaphor I recently read in the “Happiness Hypothesis” by Jonathan Haidt. He uses a metaphor of a rider on an elephant to describe the way we go through life, where conscious thought is the rider and the subconscious is the elephant. The rider THINKS they are in control. But, if you are on the back of an elephant and try to tell the elephant to go somewhere it doesn’t want to (in front of a hazard, for example), you will quickly realize that you are not in control.

    I realize this is slightly off the original topic, but I couldn’t help but point out what I see as a slight flaw in the model of conscious thought being at the “top” of a hierarchy of control.

  17. MosBen on 10 Jul 2017 at 5:39 pm

    Steven, Thanks! That clears things up considerably. Well, makes things significantly more complex, but in interesting ways.

  18. BillyJoe7 on 10 Jul 2017 at 5:55 pm

    edamame,

    All I am saying is that “stitched together” is the wrong metaphor and alludes to the Cartesian Theatre. Information is processed throughout the brain, and there’s plenty of feedback back and forth between different parts of the brain. But the processed information remains distributed throughout the brain. It doesn’t get stitched back together again. For example, there’s no centre in the brain where all visual information gets stitched back together to form a picture.

    I’m surprised I got so much pushback here. I thought it was a simple comment with which everyone would agree. It’s a common error to speak of the brain in dualistic terms, even by those who are not dualists, or who think they’re not.

  19. Willy on 10 Jul 2017 at 7:33 pm

    I’ve got nothing but snide, but I find it surprising that Doctor Egnor hasn’t weighed in to advance his quadriplegic Thomist explanation.

  20. TheTentacles on 10 Jul 2017 at 8:37 pm

    BillyJoe7: We still have quite a way to go to understand how distributed information across multiple asynchronous brain areas nevertheless ends up as subjectively unitary. Here is a reassessment of vision from Semir Zeki, whose lab probably did more than anyone else to discover the anatomy of the visual system; to his credit he has reassessed his simpler view of conscious vision:

    https://www.ncbi.nlm.nih.gov/pubmed/27153180

    Colour and motion are the two canonical streams, which are often misperceived by observers. So “binding” the colour of a moving object to the object requires the distributed representations to somehow “coalesce” at the level of perception. Using a metaphor like “stitching”, or a more correct term like “binding”, is still a way for us to talk about a major remaining mystery: linking the machinery to unitary awareness. Hypotheses like binding-by-synchrony (using oscillatory activity to link distributed representations) are still far from being proven.

    Yes, we know about feedback; for example, area V4, which is part of what is thought of as the colour processing system, and area V5, which specialises in motion, are connected by multiple pathways. So no homunculus is needed, but scientists still have not shown how colour and motion are bound together to yield unitary perception. We are so far away that using a metaphor like “stitch” is as appropriate as any other.

    There are several illusions where we can make a subject perceive colour and motion are not linked when in fact they are, so we have the tools to probe this. Here is a paper by one of my favourite perception psychologists, Shinsuke Shimojo:

    https://www.ncbi.nlm.nih.gov/pubmed/15152242

    We are getting close to being able to do the dream experiment on this: simultaneously recording hundreds of visuotopically aligned neurons across V4 and V5 while manipulating the perceived bound/misbound state.

  21. TheTentacles on 10 Jul 2017 at 9:18 pm

    @Daniel Hawkins: I attended a Brain-AI workshop at NYU last week where indeed the limits of current leading-edge single NNs were very much understood. Kyunghyun Cho (NYU) discussed exactly how multiple specialised modules of NNs may be built, and, unlike humans, who are incredibly poor at understanding multidimensional data, supervisor NNs used to inspect other NNs actually work incredibly well. Indeed the supervisor can reweight the connectivity of its “child” NNs, to teach the NN a better solution. This is a very hot area of current research, where the maths and algorithms are being hammered out.

    Yann LeCun spoke about his current feeling for where “general” AI must go, which is to incorporate a “cognitive” schema where predictive learning across multiple domains drives the flexible behaviour of the AI. This deals with generalisability in the way at least humans do, by using internal generative models (predictive coding theories etc.)

    And I agree that many people have not yet really understood that these increasingly non-linear parallel distributed NN AIs that are now close to being built do increase the probability of artificial consciousness arising, and, even before that point, the inevitability of non-predictable behaviour from these AIs. Yann LeCun suggested that in the “intuitive physics” generative models we build into their cognitive systems, we explicitly build in strong limiting bounds relating to humans in close proximity. But the interesting fact that supervisor NNs can reprogram their input NNs does raise the plausibility of AIs reweighting their own priors, trivially rewiring the “limits” we have imposed upon them.

  22. jasonnyberg on 10 Jul 2017 at 9:20 pm

    Thoughts on the evolutionary advantage of “consciousness” (i.e. “self-awareness”):

    I don’t think it’s controversial to say that being able to make predictions or think strategically can be advantageous for an individual; i.e. the ability to model “elements of agency” in hypothetical scenarios…

    The flip side of that coin is that the individual must recognize *itself* as one of the elements with agency, the agent with the ability to actually *act* on a hypothesis in order to optimize for some particular outcome (regardless of whether that outcome directly or indirectly benefits the individual’s “heritage.”)

  23. PaulT on 10 Jul 2017 at 9:59 pm

    Thank you so much for the awesome reply. I gave that article a glance and suspect I’ll be chewing on it for who knows how long!

  24. edamame on 10 Jul 2017 at 11:49 pm

    billyjoe — I’m saying materialist versions of the Cartesian theatre were never actually refuted. These are active debates not settled matters, and Dennett is not doing very well frankly. For instance there is a great deal of multisensory integration and construction of higher-level representations after the visual scene has been decomposed (e.g., in the retina). It didn’t have to be that way. But it is. This is just empirical fact.

    Multiple drafts is a cute metaphor, and might work in West World, but it is not very helpful for thinking about neuronal representational systems.

  25. edamame on 11 Jul 2017 at 12:05 am

    I like mink’s example of riding an elephant; I think it probably gets the general credit assignments just about right.

    However, in general, people unthinkingly overinterpret Libet-style results. Such results don’t show that consciousness has no control over behavior. For goodness’ sake. In those particular experiments they arguably show that the experience of intention to move isn’t the thing generating the intention to move on a given trial. That doesn’t mean that conscious experience, writ large, has no effects on behavior.

    The experience of the instruction to push the button when you feel like it certainly played a role in their button-pushing behavior. It helped them form the general plan in the experimental context. When I get a toothache I go to the dentist. Conscious perceptions generally are really useful for such longish-term planning and decision-making.

    But for reflexive behaviors that likely don’t even require the cortex, like pushing a button periodically? Nope. No consciousness needed.

    But looking at the clock and knowing what time you had the intention to push the button? That requires consciousness. So this is another hint about a function of consciousness: working memory, which is bound up in attention as Dr Novella said, a nonreflexive holding of multiple sources of information from different modalities for later report. If you remove V1 from Libet’s subjects I guarantee they won’t be telling you when they intended to move based on their visual inspection of those clocks… That is, blindsight subjects aren’t going to be doing Libet’s (original) experiments.

    These facile interpretations from hacks I won’t name, which actually get published, claiming that Libet implies consciousness is epiphenomenal, are frankly astounding in their ability to make it through reviewers.

    When I heard Libet speak someone asked him about this. His view was that, even in his experiments, the intention to move was generated by unconscious processes, and then made conscious, and then that very fact that it was made conscious gave us the ability to veto the intention and not act on it. So it actually provided a basis for (psychological) free will, and to decide whether to accept the deliverances of the unconscious. Which I find an interesting theory. My guess is cats and dogs don’t have that level of sophistication of will, but humans do.

  26. zapp7 on 11 Jul 2017 at 1:18 am

    Maybe I don’t quite understand what consciousness is, but it seems to me that we are asking the wrong question. Our brains are complex organs that have 10^x sensory mechanisms and signaling mechanisms. If we start from the simplest cell that is only interested in one type of transmitter/receptor combo and work our way up in the direction of increasing complexity, we get to the human brain. Everything in between is a spectrum of complexity in terms of what organisms can sense and respond to. Couldn’t it be that what we believe is consciousness is just a really complex superposition of these mechanisms? If so, then at what point when we start stripping away these mechanisms one by one does consciousness cease to be? I don’t think there is an answer there.

  27. chikoppi on 11 Jul 2017 at 1:54 am

    This is an interesting discussion. I found the article below intriguing in light of it.

    https://www.quantamagazine.org/how-nature-solves-problems-through-computation-20170706/

    Collective computation is about how adaptive systems solve problems. All systems are about extracting energy and doing work, and physical systems in particular are about that. When you move to adaptive systems, you’ve got the additional influence of information processing, which we think allows a system to extract energy more efficiently even though it has to expend a little extra energy to do the information processing. Components of adaptive systems look out at the world, and they try to discover the regularities. It’s a noisy process.

    […]

    We found that as the monkey initially processes the data, a few single neurons have strong opinions about what the decision should be. But this is not enough: If we want to anticipate what the monkey will decide, we have to poll many neurons to get a good prediction of the monkey’s decision. Then, as the decision point approaches, this pattern shifts. The neurons start to agree, and eventually each one on its own is maximally predictive.

    We have this principle of collective computation that seems to involve these two phases. The neurons go out and semi-independently collect information about the noisy input, and that’s like neural crowdsourcing. Then they come together and come to some consensus about what the decision should be.
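    That two-phase pattern, semi-independent noisy voting followed by consensus, can be caricatured in a few lines. Everything here is a made-up illustration of the idea, not the monkey data from the article:

    ```python
    import random

    def collective_decision(signal: float = 1.0, n_units: int = 100,
                            n_steps: int = 50) -> list:
        """Phase 1: units sample the noisy input semi-independently.
        Phase 2: each unit mixes its estimate with the group consensus."""
        opinions = [signal + random.gauss(0, 2.0) for _ in range(n_units)]
        for _ in range(n_steps):
            mean = sum(opinions) / n_units
            opinions = [0.9 * o + 0.1 * mean for o in opinions]
        return opinions  # late in the run, any single unit predicts the decision

    final = collective_decision()
    print(round(final[0], 3), round(sum(final) / len(final), 3))
    ```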

  28. Paul Parnell on 11 Jul 2017 at 4:08 am

    As always it is a little frustrating to see biologist-type people commenting on the strong AI issue without enough familiarity with computer science. Our problem starts in the very title, where our man Novella asks “why are we conscious?”. Wrong question. The proper question is not why but how. Without knowing how we are conscious we cannot know why we are conscious. Unless we know the mechanisms and abilities of consciousness, we cannot know if it is more powerful than unconscious deterministic algorithms.

    And that leads to the second problem. Let’s assume consciousness is strictly more powerful than mathematically defined algorithms and can do things that algorithms cannot. That would mean consciousness breaks the Church/Turing thesis. That has implications not only for neuroscience but for fundamental physics. It would mean that the world is in some sense noncomputable. On the other hand, if consciousness has no advantage over mindless algorithms, then there seems to be no reason for evolution to pick one over the other.

    And calling something an “epiphenomenon” is empty rhetoric that in itself has no explanatory power or predictive power. In one way or another everything is an epiphenomenon. The classical world around you is an epiphenomenon of the underlying quantum world as it is subjected to decoherence. But I know how and why the classical world materializes from the quantum underpinnings. I can understand it as a process. I cannot see the appearance of consciousness as a process nor can I see that it gives me predictive power. It is just hand waving.

    Novella rejects both Huxley’s and Chalmers’ notions, which is good, but then in the same paragraph he loses his way.

    “—-I reject both Chalmers’ and Huxley’s notions. There are many good hypotheses as to what benefit consciousness can provide. Even the most primitive animals have some basic system of pleasure and pain, stimuli that attract them and stimuli that repel them. Vertebrates evolved much more elaborate systems, including the complex array of emotions we experience. In more complex animals the point of such emotions is to provide motivation to either engage in or avoid certain behaviors. Often different emotions conflict and we need to resolve or balance the conflict. Consciousness would seem to be an advantage for such complicated motivational reasoning. Terror is a good way to get an animal to marshal all of their resources to flee a predator.—- ”

    But first he is trying to use emotional consciousness as a kind of algorithm that helps us navigate the world. But we don’t know what consciousness is as a process. If we believed in clairvoyance we could argue that it evolved to help us plan the future. But the elephant in the room would be that we still wouldn’t know what clairvoyance was or how it worked. It would answer the useless question of why we are clairvoyant but the interesting question of how is passed over.

    And second, you can have ways to avoid certain behaviors that don’t involve magical things like clairvoyance or consciousness. Even a chess playing computer can learn to avoid certain types of behaviors by algorithmic methods. It does not need to fear losing the game or feel the pain of losing a rook. You could argue feeling things is algorithmically more powerful, but first you still don’t have a mechanism and second you are still breaking the Church/Turing thesis. You have no explanatory or predictive power.

    And then we have:

    “—-Problem solving could also benefit from the ability to imagine possible solutions, to remember the outcome of prior attempts, and to make adjustments and also come up with creative solutions.

    Consciousness might also help us distinguish a memory from a live experience. They are both very similar, activating the same networks in the brain, but they “feel” different. Consciousness may help us stay in the moment while accessing memories without confusing the two.—-”

    Read up on AlphaGo. It constantly plays against both itself and real human players to extract lessons learned and remember them for future games. It is a deterministic algorithm. It does not need to feel fear or pain in order to win. And the idea that it would confuse memory of past games with the current game is… amusing. Novella is again showing his naivete when it comes to computers.

    Then we have:

    “—-But we have to be aware also that computers are different from brains.—-”

    Brains are computers. A computer isn’t defined by what it is made of or the particular engineering of its functions. It is a mathematical definition and a really simple one. In simple form a computer is something that can add two numbers together and then decide what two numbers to add next based on the results. Can you add two numbers? Can you decide what to do next based on the results? Congratulations, you are a computer.
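    That definition is roughly a counter machine, and a toy interpreter makes it concrete (my sketch, purely illustrative):

    ```python
    def run(program, registers):
        """A minimal add-and-branch machine: ("add", dst, src) adds registers;
        ("jnz", reg, target) jumps to `target` if register `reg` is nonzero."""
        pc = 0
        while pc < len(program):
            op = program[pc]
            if op[0] == "add":
                registers[op[1]] += registers[op[2]]
                pc += 1
            elif op[0] == "jnz":
                pc = op[2] if registers[op[1]] != 0 else pc + 1
        return registers

    # Multiply x by n using only "add two numbers, then decide what to do next".
    program = [
        ("add", "acc", "x"),      # acc += x
        ("add", "n", "minus1"),   # n -= 1 (by adding a register holding -1)
        ("jnz", "n", 0),          # loop back while n is nonzero
    ]
    print(run(program, {"acc": 0, "x": 5, "n": 4, "minus1": -1})["acc"])  # 20
    ```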

    You can argue that you are also something more than a computer that is more powerful than a computer. But first show the mechanism. And second there is that Church/Turing thing and the implications for fundamental physics.

  29. michaelegnor on 11 Jul 2017 at 6:55 am

    [“But first show the mechanism”]

    Therein lies the problem with explaining consciousness from a materialist perspective. What if nature generally, and consciousness specifically, aren’t “mechanisms”?

    Mechanical philosophy is a specific metaphysical perspective, and a poor one at that. Neither we nor nature are machines.

  30. michaelegnor on 11 Jul 2017 at 6:58 am

    Steven:

    It’s hard to imagine a more hilarious assemblage of ‘just-so-stories’. Maybe consciousness helped us avoid predators!… maybe consciousness helped us know present from past!… maybe consciousness helped us learn!…

    Maybe just-so-stories and fairytales aren’t science.

  31. michaelegnor on 11 Jul 2017 at 7:26 am

    The other problem with explaining “consciousness”, aside from the train wreck of materialism and Darwinism, is that we never define “consciousness” with any rigor.

    Is consciousness ‘awareness’? We are aware of subconscious thoughts (that’s what Freudianism is all about), but we’re aware of them in a different way than we are aware of ‘conscious’ thoughts. Is consciousness self awareness? Can we be conscious if we are not self-aware?

    Generally today we associate consciousness with qualia–the subjective experience of things. Classically, the mind (our predecessors didn’t think in terms of consciousness per se–Cartesian metaphysics separated soul from body and made ‘consciousness’ into a problem that didn’t exist before he created it) was understood as several powers of the soul, including sensation, perception, memory, passion, intellect, will, etc. If there was any hallmark of the mind, it was intentionality–the ability of a thought to be about something, which no purely material thing is.

    Most of our modern gibberish about ‘consciousness’ is just the intersection of bad metaphysics and bad science.

    Hylemorphism explains nature and the mind quite well.

  32. BillyJoe7 on 11 Jul 2017 at 7:27 am

    TheTentacles,

    Unfortunately, apart from the abstracts, I cannot access either of the two articles you referenced without forking out for them.

  33. chikoppi on 11 Jul 2017 at 8:18 am

    [michaelegnor] _________ explains _________ quite well.

    Yes. Inventing explanations is cheap and easy without accountability, as “magic” is always at the ready to fill in the gaps. Actually demonstrating facts with objective evidence is where work and patience are required. Sufficient is literally child’s play. Necessary is the hard part.

  34. Bill Openthalt on 11 Jul 2017 at 9:40 am

    Steven Novella —

    Consciousness has hierarchical control.

    It would suggest that consciousness is allowed to have control, circumstances permitting.

  35. Steven Novella on 11 Jul 2017 at 9:40 am

    Paul,
    I have to disagree with your central premise – the question of why is distinct from the question of how, and is not necessarily dependent on it.

    First of all, we know that we are conscious and can take that as a premise. Consciousness is not magic nor akin to clairvoyance.

    So, I was asking the question I wanted to ask. It is not the “wrong” question.

    We have some information about mechanism – we know consciousness is a phenomenon of brain function, and we also know what minimal brain function is necessary to generate wakeful consciousness. We don’t understand consciousness at the level of specific neural networks and their function.

    The question of “why” we are conscious has to do with what possible evolutionary advantage it could have. I outlined hypotheses (they are not explanations, and Egnor only demonstrated he does not understand what a hypothesis is). This was a lead up to an experiment testing in a limited way one specific hypothesis.

    You also completely missed my point about brains being distinct from computers. Of course brains are a type of computer in that they compute information. But they are very different from the technological computers we have created, and I outlined specifically how. These differences mean that we cannot simply extrapolate our knowledge of computers to our knowledge of neuroscience and evolution.

    Technology can change in a top-down manner, while evolution is forced to work bottom up and has constraints technology does not have.

    Further, computers are hardware and software while brains are neither, they are wetware. Sure, we are just now designing neural networks that more closely mimic organic brains, and that is the point – closing the functional gap between computers and brains.

    Finally, another glaring point you miss is that arguing that consciousness is a possible solution to certain function problems does not imply or require that it is the only solution. Evolution does not have to hit upon the only or even the best solution, just a workable one.

    It is possible that, given the basic architecture of vertebrate brains, which are built as massively parallel processors optimized for pattern recognition, consciousness was a viable solution. It may not be the best or a necessary solution for the computers we design. That remains to be seen.

    BTW – don’t miss the irony of accusing me of being naive about computers (an accusation you failed to support, in my opinion) while displaying profound naivete about neuroscience.

  36. Steven Novella on 11 Jul 2017 at 9:42 am

    Bill – Depends on context. There are things you can 100% control, like your breathing. You can hold your breath until you pass out. You have limited control over your emotions – you can focus your thoughts in a way to modify your emotions, but cannot completely control them.

    There are some pathological movements you can voluntarily control, and some you cannot.

    It all depends on the wiring.

  37. michaelegnor on 11 Jul 2017 at 11:00 am

    Steven,

    [I outlined hypotheses (they are not explanations, and Egnor only demonstrated he does not understand what a hypothesis is).]

    No. Evolutionary explanations aren’t hypotheses about how consciousness came about; they are hypotheses about how consciousness was preserved after it came about. Darwinian mechanisms don’t generate new function – they preserve new function that has arisen.

    And from a materialist perspective, consciousness is the ultimate new function. How subjective experience arises is utterly beyond Darwinian “explanations” – again, Darwinism only attempts to explain preservation of adaptive novelty, not the initial generation of adaptive novelty itself.

    Darwinism has nothing to add to the problem of consciousness. And consciousness is a problem only because you blindly persist in materialist metaphysics, which intrinsically excludes subjective properties of matter.

    Your conundrum on consciousness is a sign that your metaphysics is inadequate.

  38. Steven Novella on 11 Jul 2017 at 11:13 am

    Michael – you stubbornly persist in your errors.

    Evolutionary processes do not “only” preserve adaptive novelty, because adaptation can be incremental. Evolution does not have to wait for a fully formed adaptation to emerge at once and then only preserve it. Adaptations can evolve from the non-random preservation of variation, incrementally altering a crude trait into one that is progressively more sophisticated.

    The brain is a perfect substrate for this for many reasons. Central nervous systems can increase complexity by duplicating existing units (like cortical columns) and then allowing for more and more complex networks. Networks also have the potential for tremendous variability, allowing for endless experimentation and preservation of incremental benefits. Increasingly complex networks also allow for the emergence of new higher-order functionality, such as consciousness.

    My preference for materialism is not blind. It is based on a solid philosophical background and a mountain of empirical evidence. You can snipe from the sidelines all you want, meanwhile materialist neuroscience continues to be fabulously successful. It is advancing nicely, thank you.

  39. michaelegnor on 11 Jul 2017 at 11:15 am

    chi:

    [Inventing explanations is cheap and easy without accountability, as “magic” is always at the ready to fill in the gaps. Actually demonstrating facts with objective evidence is where work and patience are required.]

    Ironically, it’s materialism that employs magic.

    Thomistic hylemorphism is a rational, detailed, methodically worked-out system of metaphysics. It excels in evidence and requires a lot of patience to learn and develop.

    Materialism is a slipshod hand-waving metaphysical error that employs magic relentlessly. “Everything came from nothing”, “Survivors survive” explains life. And now, it seems, “survivors survive” explains consciousness.

    Just magic, mixed with junk science and fake philosophy.

  40. michaelegnor on 11 Jul 2017 at 11:33 am

    Steven:

    [Increasingly complex networks also allow for the emergence of new higher-order functionality, such as consciousness]

    Magic.

    [meanwhile materialist neuroscience continues to be fabulously successful.]

    Except consciousness, for which you have no explanation whatsoever, and no prospect of an explanation.

    If that’s success, what would failure be?

  41. Steven Novella on 11 Jul 2017 at 11:50 am

    Michael – It is sad that you are still stuck in the “survivors survive” strawman of evolution. You are literally a century behind. (actually debunked almost 140 years ago – http://www.talkorigins.org/faqs/evolphil/tautology.html)

    Emergent higher order functionality is not magic, unless you are in the business of denying reality.

    Scientific theories are judged not by whether or not they explain everything to an arbitrary level of detail and precision, but how useful they are in making predictions and accounting for what we know to be true. The materialist neuroscience model of the mind is one of the most successful scientific theories we have, and progress continues at an accelerating rate. You are simply pointing to what we have not yet figured out and saying, “Nah, nah!”

    No one’s impressed.

  42. chikoppi on 11 Jul 2017 at 12:06 pm

    [michaelegnor] Thomistic hylemorphism is a rational, detailed, methodically worked-out system of metaphysics. It excels in evidence and requires a lot of patience to learn and develop.

    So was Tolkien’s The Lord of the Rings. Apparently, after years of haunting this site you still haven’t figured out what evidence is.

    A premise is true because (?):

    A) It is sufficient to explain an observed phenomenon.
    B) It is the only explanation you can think of.
    C) It is internally consistent.
    D) It is conceptually complex.
    E) None of the above.

    “Materialism” (yada yada) “metaphysics” (yada yada).

    Knowledge is demonstrable. Provide a real-world, objective demonstration that your hypothesis is true. Otherwise, you’re spinning yarns about elves and dwarves and, while it may be super entertaining for you, it accomplishes absolutely nothing and is boring for the rest of us. Imaginary things are not suitable explanations for real things.

  43. Paul Parnell on 11 Jul 2017 at 1:42 pm

    Steven,

    I have to disagree with your central premise – the question of why is distinct from the question of how, and is not necessarily dependent on it.

    First of all, we know that we are conscious and can take that as a premise. Consciousness is not magic nor akin to clairvoyance.

    I know that I’m conscious. I think, therefore I am. But are you conscious? Is my dog conscious? If a computer program passed every Turing test I gave it, can I say that it is conscious?

    If the laws of physics allowed for clairvoyance then yes it would give a powerful evolutionary advantage. But that fact tells us nothing about what it is, how it works or the underlying rules of physics that make it possible. Ditto consciousness. Only clairvoyance is an easier problem because there is an objective test to detect it. The same cannot be said about consciousness. You cannot do a seance on an algorithm. You cannot ask a chemical reaction how it feels. You cannot tell if something feels qualia. Objective proof of the subjective seems to be a contradiction.

    We don’t understand consciousness at the level of specific neural networks and their function.

    Yes but it is worse than that. We cannot even imagine what such an understanding would look like. We can understand the function of neural networks in terms of the algorithm they perform. And that algorithm can then be implemented as a computer program. But that algorithm does not need consciousness to do what it does. An algorithm is just a piece of applied math and will do what it does with or without consciousness. Again, you cannot perform a seance on an algorithm to see how it feels. Not only that but “how it feels” can have no effect on its function.

    You also completely missed my point about brains being distinct from computers. Of course brains are a type of computer in that they compute information. But they are very different from the technological computers we have created, and I outlined specifically how. These differences mean that we cannot simply extrapolate our knowledge of computers to our knowledge of neuroscience and evolution.

    And you don’t understand the Church/Turing thesis. In a deep sense all computers are the same. Any algorithm that can be implemented on one can be implemented on any other. Find out how the brain recognizes faces and you can program a computer to do the same. In fact there is a recent paper describing how researchers took the brain waves of a monkey and used them to reconstruct the face that it was seeing. But does the computer experience the face? Again, we cannot do a seance on an algorithm. Worse, it cannot matter in any objective way whether the computer experiences the face or not. It must behave as the math dictates, conscious or not.

    Further, computers are hardware and software while brains are neither, they are wetware. Sure, we are just now designing neural networks that more closely mimic organic brains, and that is the point – closing the functional gap between computers and brains.

    There is no functional gap and you still don’t understand the implications of the Church/Turing thesis. Neural nets can be implemented on computers just fine. Currently computers may not have the processing power of the brain but that is the only gap. They can still run the same programs as a brain just slower.

    Finally, another glaring point you miss is that arguing that consciousness is a possible solution to certain function problems does not imply or require that it is the only solution. Evolution does not have to hit upon the only or even the best solution, just a workable one.

    But all solutions have to be algorithms and thus are defined mathematically. You cannot do a seance to determine which of them are conscious. And it cannot matter which are conscious anyway because their objective behavior is determined by the math.

    This reminds me of John Searle’s assertion that Martians may not be conscious because they are made of the wrong stuff. But if the hypothetical Martians act as if they are conscious, then on what grounds can you ever claim to know that they aren’t?

    Maybe the Martians aren’t conscious due to having found a different, even better algorithm. Maybe they have a better, more advanced civilization than ours due to the better algorithm. But on what grounds could you ever claim to know that they aren’t conscious? And if you ever did know such a thing, would it be OK to kill them and steal their stuff? After all, they don’t actually feel anything like we do…

    And that reminds me of a line from Daffy Duck: “I’m not like other people. I can’t stand pain. It hurts me.”

  44. MosBen on 11 Jul 2017 at 1:58 pm

    chikoppi, if we’re voting, I’d much rather Egnor contribute stories about dwarves and orcs than the usual strawman arguments that he presents. Maybe he would be more open to literary criticism than scientific study.

  45. Steven Novella on 11 Jul 2017 at 2:41 pm

    Paul – I understand the Church-Turing thesis, but your application of it is controversial at best. You cannot take it as a solid premise. It does not prove that human brain function can be reduced to a computable algorithm, or that such an algorithm would be equivalent in function.

    I can know (reasonably well) that other people are conscious because I am conscious and I have no reason to assume that my brain is functionally different from other human brains. But I do agree that, beyond that, consciousness is uncertain. There is no objective way to prove that an AI that behaves consciously is actually conscious. At that point we would have to understand how it functions.

    But most importantly, you didn’t actually address my points above. You dodged them with a non-sequitur about Church-Turing.

  46. Paul Parnell on 11 Jul 2017 at 4:18 pm

    Steven,

    I think Church/Turing is the core of our disagreement.

    I understand the Church-Turing thesis, but your application of it is controversial at best. You cannot take it as a solid premise. It does not prove that human brain function can be reduced to a computable algorithm, or that such an algorithm would be equivalent in function.

    The whole point of C/T is that every process in nature can be reduced to an algorithmic process. That the universe and everything in it is essentially an algorithmic thing. Want to know where the planets will be in a hundred years? A computer can simulate that. Want a tornado? A computer can simulate one to as high a degree of resolution as you want. Want a brain? According to C/T it can be simulated to as high a degree of resolution as you want. Connect the simulation to a robot body, and on what grounds can you claim to know that it is not conscious?
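
    As a toy sketch of what “a computer can simulate that” means (constants and step size illustrative; a real ephemeris program would use far better integrators), here is one planet around a star reduced to arithmetic in Python:

        # Toy illustration of "physics as algorithm": one planet orbiting a star,
        # advanced by a crude Euler integrator. Illustrative only.
        import math

        G_M = 4 * math.pi ** 2        # gravitational parameter, AU^3/yr^2 (Sun)
        x, y = 1.0, 0.0               # planet starts 1 AU from the star
        vx, vy = 0.0, 2 * math.pi     # roughly circular orbital speed, AU/yr
        dt = 1e-4                     # time step, years

        t = 0.0
        while t < 1.0:                # simulate one year
            r3 = (x * x + y * y) ** 1.5
            ax, ay = -G_M * x / r3, -G_M * y / r3
            vx, vy = vx + ax * dt, vy + ay * dt
            x, y = x + vx * dt, y + vy * dt
            t += dt

        print(f"after one year: x={x:.3f} AU, y={y:.3f} AU")  # ends near (1, 0)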

    It is true that C/T has not been proven. Worse, it cannot be proven, because it is as much a statement about physics as about computers. But the empirical evidence is massive. People have searched long and hard for a violation. Any violation now would have serious consequences for fundamental physics. There are those who believe that deep physics violates C/T – Penrose, for example. He does not have much of a case. And it is hard to see how a unification of gravity would have implications for a brain.

    C/T is believed to be true by nearly everyone. The universe and everything in it is generally believed to be algorithmic.

    I can know (reasonably well) that other people are conscious because I am conscious and I have no reason to assume that my brain is functionally different from other human brains. But I do agree that, beyond that, consciousness is uncertain. There is no objective way to prove that an AI that behaves consciously is actually conscious. At that point we would have to understand how it functions.

    I agree that we would have to understand how consciousness functions in order to tell if something is conscious. But you miss my point that we have no idea how such an understanding would even look. We can’t do a seance on a process, algorithmic or otherwise. Worse, it cannot matter to the objective function whether there is subjectivity anyway.

    But most importantly, you didn’t actually address my points above. You dodged them with a non-sequitur about Church-Turing.

    I believe that I did and I think the reason that you don’t see that is you don’t understand the absolute central importance of C/T. Unless you have the time and willingness to actually discuss why you don’t believe that C/T implies that the brain is computable I think we are at an impasse.

  47. michaelegnor on 11 Jul 2017 at 6:13 pm

    The CT hypothesis is wrong about the mind. The mind is intentional, which is the antithesis of computation.

    Also the CT hypo is wrong at the quantum level, because qm is non-deterministic (Bell’s theorem).

    Except for everything, the CT hypothesis is correct.

  48. loncemon on 11 Jul 2017 at 7:09 pm

    The skeptic in me doubts the experiment met the bar of isolating consciousness as the control variable. Although one group may have been conscious of the arrow and the other not, you don’t need to invoke consciousness to explain the different results seen in the two groups. You also have differences in visual processing time, which could be sufficient, leaving the question of consciousness as open as ever.

  49. edamame on 11 Jul 2017 at 7:17 pm

    The Church/Turing thesis is irrelevant even though consciousness is a brain process.

    My simulation of a computer doesn’t destroy cities. This is a basic error.

  50. edamame on 11 Jul 2017 at 7:18 pm

    lol

    My simulation of a tornado doesn’t destroy cities.

    So…there’s two basic errors.

  51. Pete A on 11 Jul 2017 at 8:06 pm

    “[Michael Egnor] … because qm is non-deterministic”

    If qm [sic] was non-deterministic then it wouldn’t be classified as “quantum” because that which is quantized resides within its specific discretized domain, rather than within a continuous analog domain. FFS!

    If, say, 5% of the mass of one of your coins has been eroded by wear and tear and you deposit this coin into your bank account, would your bank credit your account with 95% of the value of this coin, or would it credit you with either 100% of its value or 0% of its value? Rhetorical question.

  52. chikoppi on 11 Jul 2017 at 9:03 pm

    @Pete A

    I believe I understand the point you are making, but it took me a minute and I want to be sure.

    Are you saying that if, in a CT algorithm, it matters whether a quantum particle is “spin up” or “spin down” the act of performing the algorithm is analogous to measuring the spin (and the operation is necessarily performed on a discrete quantity)?

    My preferred definition of “deterministic” is that, given two perfectly identical systems with identical initial states, both systems must produce the same outcome. If the systems (or the initial states) are sensitive to quantum uncertainty that would not be the case. I’m unclear on the implications for CT.

  53. edamame on 11 Jul 2017 at 10:17 pm

    Pete A, there is no reasonable sense of deterministic in which (mainstream) quantum mechanics is deterministic.

    There are some non-mainstream interpretations, in which special relativity is violated, that let you maintain determinism: the so-called non-local hidden variable theories. E.g., Bohm’s pilot waves that travel faster than light and transmit information about what is happening to that entangled photon across the universe instantaneously.

    Very few serious physicists go for the latter, because it flies in the face of special relativity, which is sort of a big deal.

    Bell proved all this in his famous inequalities. You can have locality and indeterminism, or nonlocality and determinism.

    There is no world in which quantum mechanics isn’t really weird.

  54. Paul Parnell on 11 Jul 2017 at 11:19 pm

    It has been suggested here that quantum randomness. Not so. A computer may not be able to predict when a particle will decay for example but it can calculate the quantum wave function. That’s all you need and that’s all quantum mechanics allows.

    A more interesting question is whether quantum computers violate C/T. It turns out that they probably do violate the extended C/T thesis, as they appear to be able to solve some problems exponentially faster than classical computers. But they are limited to solving the same class of functions as classical computers. So the original C/T still stands.

  55. Paul Parnell on 11 Jul 2017 at 11:24 pm

    edamame,

    Depends on what you use for a computer. If I want to see what damage a tornado would do to a city I

  56. Paul Parnell on 11 Jul 2017 at 11:29 pm

    edamame,

    My simulation of a tornado doesn’t destroy cities.

    Depends on what you use for a computer. If I want to see what damage a tornado would do to a city I could just create a tornado over a city. That would be a correct algorithm, but the computer it runs on is kinda expensive. It would be better to try running the program on a less expensive computer.

  57. Paul Parnell on 11 Jul 2017 at 11:33 pm

    It has been suggested here that quantum randomness.

    I intended to say “It has been suggested here that quantum randomness violates C/T.”

    And then I posted a message before finishing it. I need sleep.

  58. Ivan Grozny on 12 Jul 2017 at 11:25 am

    Steve Novella,

    this whole discussion looks a lot like a variation on the theme “What is it like to be a bat?”. There are neurological processes we can observe and analyze, and then there is that irreducible metaphysical entity called “consciousness” that we understand directly and intuitively and have to reconcile with our physicalistic explanations of behaviour and cognition. It seems to me that there is no operational definition of consciousness that would be in tension with neuroscience. The only reason why people find this an interesting philosophical problem is that they assume, at least implicitly, that there is such a thing as “being a bat” or having an “experience of oneself”, which is given, axiomatic, and intuitive, and that in order to defend our physicalistic theory of the brain we have to develop some meta-language to unify it with the “soul”: to “stitch together” the remnants of the Cartesian view and the new scientific language of brain functioning.

    To put it differently: there are many other aspects of brain functioning that are not well understood, but that does not prompt us to automatically question their evolutionary value. The only reason why we often do so with “consciousness” is the prevalence of the “Ghost in the machine” ontology and the uncritical acceptance that “consciousness” is somehow fundamentally different from anything else in cognitive behaviour. That’s a philosophical and theological assumption, not a scientific one.

  59. Steven Novella on 12 Jul 2017 at 12:02 pm

    I agree (if I understand what you are saying correctly). Consciousness is no different than any other brain function. It is, I have argued, perhaps just harder for us to conceptualize specifically because our brains evolved to create a seamless experience of our own stream of consciousness.

    We have identified many such mechanisms – the illusion of continuity and internal consistency is powerful and a deliberate construct of our brains (in that there are circuits dedicated to these functions).

    It is possible that the success of that illusion makes it difficult for us to imagine our own consciousness as brain function. We have to look past the illusion. When we do, we find just brain networks doing what they do.

  60. Pete A on 12 Jul 2017 at 12:34 pm

    chikoppi,

    I was simply pointing out the quantum flapdoodle in Egnor’s ‘reasoning’.

    I shall address your preferred definition of “deterministic” — that, given two perfectly identical systems with identical initial states, both systems must produce the same outcome — by using practical examples…

    Suppose we purchase two computers which have adjacent serial numbers: the only differences between their initial states are: their serial numbers; their network interface MAC addresses; hardware differences due to manufacturing tolerances. However, while they are running, they will be different from each other, at the quantum level, due to the stochastic nature of both electron noise and thermally-induced electronic noise.

    When we talk about two entities which have identical initial states, we don’t mean it literally because it’s physically impossible. What we mean is that their initial states are identical at some very specific higher-level of function or form. The operating voltages and currents of the components in a computer were chosen such that quantum-level effects have a very low probability of causing an error at the data-level of functionality. In other words, the signal-to-noise ratio of all signal paths in the computer must be large enough to ensure a very low error rate. NB: There is no such thing as a physical “digital signal”: all signals are analog signals at the physical level.

    So, our pair of computers are functionally deterministic and they have the same initial states in the context of all the applications we intend to run on them. In fact, all of our computing devices are more than sufficiently deterministic to run such things as the complex decoding algorithms required to play YouTube videos.

    My previous paragraph strongly implies that identical deterministic systems will indeed produce identical outputs when given identical initial conditions followed by identical inputs. But, this easily leads us into the trap of false sufficiency in our definition of “deterministic”. Allow me to explain by using another practical example…

    From each of our pair of computers we log in to the same remote server using the Secure Shell (SSH). Now, we sincerely hope that identical data is not being sent by our pair of computers over the Internet link! If it were deterministically identical then anyone could eavesdrop on our encrypted connections to the server. For cryptography to be effective, we need algorithms which are robustly deterministic in one meaning of the word “deterministic”, and as non-deterministic as possible in another meaning of the word. The best algorithms are those which are made freely available to the public, rather than secret algorithms, because public algorithms are open to independent inspection and testing by the many experts across the globe. Conversely, secret algorithms are rendered useless once they are revealed, either inadvertently or deliberately.
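
    To make the two meanings concrete, here is a toy Python sketch (the keystream construction is invented for illustration; real systems should use a vetted cipher such as AES-GCM). The keystream derivation is fully deterministic given the key and nonce, while the nonce is deliberately non-deterministic, so the same message never looks the same on the wire:

        # Toy cipher: deterministic keystream, non-deterministic nonce.
        # Illustration only -- do not use for real cryptography.
        import os, hashlib

        def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
            # Fully deterministic given (key, nonce): same inputs, same stream.
            out = b""
            counter = 0
            while len(out) < length:
                out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
                counter += 1
            return out[:length]

        def encrypt(key: bytes, plaintext: bytes) -> bytes:
            nonce = os.urandom(16)                      # the non-deterministic part
            ks = keystream(key, nonce, len(plaintext))  # the deterministic part
            return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

        key = os.urandom(32)
        msg = b"attack at dawn"
        print(encrypt(key, msg) != encrypt(key, msg))   # True: same message, different ciphertexts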

    There are many other examples of algorithms which are deterministic at a high contextual level, but are stochastic (therefore non-deterministic) at a lower level of functionality. E.g. the Internet is just an unreliable ‘packet chucker’: many packets get lost in transit; many packets are corrupted with random transmission errors. Multicast streaming services rely on User Datagram Protocol (UDP): as with all datagram services, there is no guarantee of delivery therefore the recipient has to cope with lost packets and corrupted packets. The recipient of, say, a music stream will not notice lost or corrupted packets until the loss/corruption rate increases beyond the level at which the decoding algorithm can cope with it.

    SSH, HTTP, and many other service protocols require a more reliable service than provided by UDP. Transmission Control Protocol (TCP) establishes a bi-directional connection with the server so that each time TCP detects lost or corrupted packets it requests them to be retransmitted. At the TCP layer, we have deterministic functionality; whereas the layer beneath it, upon which TCP relies, is non-deterministic.
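
    A toy model of that layering (illustrative only; real TCP also handles reordering, corruption, and congestion): the lower function drops packets at random, yet the retransmit loop above it delivers everything, in order, every time.

        # Reliable delivery built on an unreliable layer.
        import random

        def lossy_send(packet, loss_rate=0.3):
            """The UDP-like layer: deliver the packet or silently drop it."""
            return packet if random.random() > loss_rate else None

        def reliable_send(packets):
            """The TCP-like layer: retransmit until each packet gets through."""
            delivered = []
            for pkt in packets:
                while True:
                    received = lossy_send(pkt)
                    if received is not None:       # stands in for receiving an ACK
                        delivered.append(received)
                        break
            return delivered

        data = list(range(10))
        print(reliable_send(data) == data)   # True, despite ~30% loss underneath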

    To summarize: When we talk about something being either deterministic or non-deterministic, we have to be pedantically specific about the contextual level of the system we are addressing. All macroscopic physical systems are non-deterministic at the level of subatomic particles, but using this core fact to claim that a particular system (or a part thereof) is therefore non-deterministic is committing the fallacy of composition. Michael Egnor is one of several commentators who rely heavily on the fallacy of composition and the fallacy of division.

    For readers who are interested in how to scientifically address the multiple contextual levels/layers of complex systems, I think the Open Systems Interconnection model (OSI model) is a wonderful practical example of applied science. You will find in this model many computer-related terms that we’ve all heard of, but we aren’t really sure what they mean and how they fit together — such as IEEE 802.11 wireless networks.
    https://en.wikipedia.org/wiki/OSI_model

    [edamame] Pete A, there is no reasonable sense of deterministic in which (mainstream) quantum mechanics is deterministic.

    I’ve tried my utmost in the above to make it abundantly clear that it depends upon which precise context you are using the term “deterministic”. There is no such thing as a non-integer number of electrons or photons: they are, therefore, deterministic discrete quantum entities which either exist or do not exist as a physical particle or a wave packet (depending on the chosen model). Their arrival times are stochastic, but the charge and mass of an electron are known, and the energy level of each photon is deterministically inversely-related to its wavelength and vice versa. In other words: within the domain(s) of their attributes, they are deterministic; whereas within the time domain and within the probability domain, they are stochastic — aka: noisy processes.

  61. chikoppi on 12 Jul 2017 at 1:48 pm

    [Pete A] To summarize: When we talk about something being either deterministic or non-deterministic, we have to be pedantically specific about the contextual level of the system we are addressing. All macroscopic physical systems are non-deterministic at the level of subatomic particles, but using this core fact to claim that a particular system (or a part thereof) is therefore non-deterministic is committing the fallacy of composition. Michael Egnor is one of several commentators who rely heavily on the fallacy of composition and the fallacy of division.

    Yup, that’s the meaning I took away from your initial post. Not that quantum states are deterministic, but that CT algorithms operate at a deterministic level.

    I’m not sure, however, that brain function could be successfully modeled without incorporating quantum-level (non-deterministic) interference. It may be that it is only theoretically possible to reconstruct a working brain if the physical substrate of the “processor” replicates the aspects of the brain that utilize such non-deterministic effects. The hardware might therefore be as essential as the software.

  62. Paul Parnell on 12 Jul 2017 at 4:30 pm

    Steven,

    …we find just brain networks doing what they do.

    Exactly so! That is in the end all we can do.

    We can see the neural network as a causal network. We can analyse the causal network and produce a network of NAND gates that captures the causality such that it produces the same pattern of outputs from the inputs. Then we can replace the neurons with an electronic chip that performs the same function. We can do this with more and more parts of the brain until all you have is electronic chips. It would be convenient to replace all those networks of NAND gates with a computer running a program. If you think about it, this is just a version of Searle’s Chinese room.
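
    As a toy illustration of the NAND step (any Boolean function can be composed from NAND alone; here XOR is built from four NANDs in Python):

        # XOR composed entirely from NAND gates.
        def nand(a: int, b: int) -> int:
            return 0 if (a and b) else 1

        def xor(a: int, b: int) -> int:
            n1 = nand(a, b)
            return nand(nand(a, n1), nand(b, n1))

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, xor(a, b))   # reproduces the XOR truth table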

    John Searle believed that if we do that, the person would continue to behave normally but would no longer be conscious. I can barely stand to read Searle. He starts with the proposition that consciousness is a real thing, which is OK as a starting point. But then he bends heaven and earth with cosmic levels of illogic to make it true.

    Daniel Dennett claims that the person would continue to be conscious, but that consciousness is an illusion anyway. You don’t have to select for it in evolution. Evolution just selects the best algorithms. You do not need to explain it since there literally is nothing to explain. It is an illusion. This is a much better take than Searle’s. I’m still not convinced.

    Part of the problem is that he seems to be saying that the ability to have illusions is an illusion. I cannot wrap my head around it.

    But then when faced with a total inability to link causal networks to subjective experience I see no alternative.

  63. Pete A on 12 Jul 2017 at 4:38 pm

    chikoppi,

    Brain functions cannot, I think, be successfully modelled without incorporating an appropriate level of stochastic interference.

    I don’t know if you are familiar with the initial Compact Disc digital audio music recordings: they received many complaints of their tracks sounding disturbingly unnatural during the final second or so of their slow fade-out towards silence. The complaints were justified, because the recording engineers had failed to understand one of the fundamental differences between the discretized domain of 16-bits-per-channel digital audio and the real-world continuous (analog) domain of human hearing.

    The inherent problem with many digital audio systems, including CD audio, was, and still is, blindingly obvious to the few who fully understand the difference between the two basic quantization (discretization) methods: two’s complement based arithmetic; sign and magnitude [aka: signed magnitude] based arithmetic. The former is compatible with the architecture of CPUs; the latter is compatible only with the much-more-costly-to-produce, dedicated, digital signal processing microchips.

    In order to adequately discretize an analog quantity which ranges from a negative value to a positive value, a suitable stochastic dither signal must be added to the input of the quantizer. In the absence of a suitable dither signal, our hearing will detect disturbing distortion artefacts, our vision will detect disturbing banding [step transitions] in an image. Our personal disturbance caused by inadequately dithered discretized audio and video is not some unfathomable quirk of human perception; a high-quality spectrum analyser can easily reveal the distortion artefacts that we are perceiving!

    It turns out that a truly stochastic — a truly indeterministic, natural — dither signal is not the ideal dither signal for digital audio and digital video systems. The perceptually-ideal audio dither signal has a triangular, not a Gaussian, probability density function. This is very convenient because it can be simulated by summing the outputs of two pseudorandom binary sequence generators which have different sequence lengths (ensuring that they are decorrelated). Obviously, their sequence lengths must be chosen carefully in order to escape our detection of their repetition rates because humans have the uncanny ability to detect patterns, even in sequences of events that don’t actually contain repeating patterns 🙂
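
    A minimal Python sketch of the idea (parameters illustrative; random.random() stands in for the pseudorandom sequence generators described above): summing two independent uniform sources gives the triangular probability density, and the dither lets a sub-LSB signal survive quantization instead of vanishing.

        # TPDF dither: a sub-LSB sine disappears without it, survives with it.
        import math, random

        def quantize(x: float) -> int:
            return math.floor(x + 0.5)   # round to the nearest integer LSB

        def tpdf_dither() -> float:
            # Sum of two independent uniform sources: triangular PDF, +/-1 LSB peak.
            return (random.random() - 0.5) + (random.random() - 0.5)

        amplitude = 0.4                  # deliberately below 1 LSB: the hard case
        signal = [amplitude * math.sin(2 * math.pi * i / 100) for i in range(1000)]

        undithered = {quantize(s) for s in signal}
        dithered = {quantize(s + tpdf_dither()) for s in signal}

        print(undithered)   # {0}: the fading signal vanishes entirely
        print(dithered)     # several levels: the signal survives, encoded as noise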

    One of my few passions is the accurate capture and recording of real-world analog quantities, and their subsequent digital signal processing algorithms — especially adaptive algorithms and non-linear algorithms — because I thoroughly enjoyed all of my many years of work in this field of applied science.

    For readers who are interested in digital audio and some of its early history, my favourite book is The Art of Digital Audio, by John Watkinson.

  64. TheGorilla on 12 Jul 2017 at 5:37 pm

    Ivan,

    “To put it differently: there are many other aspects of brain functioning that are not well understood, but that does not prompt us to automatically question their evolutionary value. The only reason why we often do so with “consciousness” is the prevalence of the “Ghost in the machine” ontology and the uncritical acceptance that “consciousness” is somehow fundamentally different from anything else in cognitive behaviour. That’s a philosophical and theological assumption, not a scientific one.”

    The discussion is not about “evolutionary value;” at best, that’s implicit in the problem. Secondly, there is no “uncritical acceptance” that consciousness is fundamentally different — even a cursory glance at the literature would make it clear just *how much* ink has been spilled arguing for this point.

  65. Paul Parnell on 12 Jul 2017 at 8:35 pm

    Pete A,

    The inherent problem with many digital audio systems, including CD audio, was, and still is, blindingly obvious to the few who fully understand the difference between the two basic quantization (discretization) methods: two’s complement based arithmetic; sign and magnitude [aka: signed magnitude] based arithmetic. The former is compatible with the architecture of CPUs; the latter is compatible only with the much-more-costly-to-produce, dedicated, digital signal processing microchips.

    I’m having trouble understanding this. Two’s complement and sign bit are just two different ways to represent the data. You should be able to do anything in one that you can do in the other. You should be able to do it in binary-coded decimal if that’s what you want. It is true that modern processors have native support for two’s complement, but any processor can do sign and magnitude. It just has to be implemented in software.

    Any program that uses one should be easy to change to use the other.

  66. Nidwin on 13 Jul 2017 at 3:28 am

    “One answer is that consciousness is of no specific benefit. David Chalmers imagined philosophical zombies (p-zombies) who could do everything humans do but did not experience their own existence. A brain could process information, make decisions, and engage in behavior without actual conscious awareness, therefore why does the conscious awareness exist?”

    Doesn’t consciousness help us form and create our own personality, and therefore generate diversity within a species?
    The more diverse we are, the higher the chances of finding a way to adapt, or at least start to adapt, to a brand-new problem that couldn’t be known or discovered before.

    Two big benefits of consciousness versus unconsciousness are being able to voluntarily change and adapt, making your own decisions when confronted with the unknown or barely known, and limiting the amount of processing and analysis by focusing, since we can choose what to take into account, what to ignore for the time being, and what to toss aside entirely.

    When one of us picks up something completely new with one of our senses, e.g. a new smell or sound, we can immediately make some decisions and draw some conclusions about this new scent or noise for ourselves. Would that be remotely possible without being conscious?

  67. Pete A on 13 Jul 2017 at 6:14 am

    Paul,

    For signals that are smaller than 1 LSB, signed magnitude represents them correctly as +0 when the signal is slightly positive; −0 when it’s slightly negative. The only option in two’s complement is 0 when the signal is slightly positive; −1 when it’s slightly negative, hence giving an average offset of −0.5. This causes problems for many types of algorithm, such as those that require the absolute magnitude, a squared value, or a product of signed values. Obviously, when the signal is large this offset is likely to be irrelevant, but as the signal fades to silence the relative effect of this offset increases. In other words, processing a 16-bit AC signal in two’s complement would require a 17-bit processor to obtain the same accuracy as a 16-bit signed magnitude processor (generally). There are, of course, workarounds for two’s complement but they require many extra processing steps.
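
    A quick Python sketch of that offset (flooring models what dropping bits does in two’s complement arithmetic; truncating toward zero models the signed-magnitude behaviour):

        # Sub-LSB bias: floor() biases small signals by -0.5 LSB on average,
        # while truncation toward zero stays symmetric about zero.
        import math

        small_signals = [0.3, -0.3, 0.1, -0.1]   # values well under 1 LSB

        floored = [math.floor(x) for x in small_signals]    # two's-complement style
        truncated = [math.trunc(x) for x in small_signals]  # sign-magnitude style

        print(floored)    # [0, -1, 0, -1] -> mean -0.5: a DC offset appears
        print(truncated)  # [0, 0, 0, 0]   -> no offset as the signal fades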

  68. BillyJoe7 on 13 Jul 2017 at 7:41 am

    Nidwin,

    I don’t know whether or not you intended to, but you seem to be supporting contra-causal free will.

  69. lorenzosleakes on 13 Jul 2017 at 8:15 am

    Materialist explanations of how consciousness can provide something new to improve survival value are illogical. If consciousness is not a fundamental entity within nature it can by definition add nothing new. It is either an illusion or an epiphenomenon. As long as neural networks based on known electromagnetic laws are one hundred percent responsible for motor actions consciousness can never play any role in survival value.

    It is difficult to see how evolution can ever create something totally and fundamentally new in nature, even if it adds survival value. Evolution only jumbles around new patterns of existing elements. Either consciousness is a bizarre miracle introduced into the universe for no reason when complex brains evolve, or it is an independent force outside of known physical laws, perhaps employing quantum uncertainty to adjust the probabilities of particle interactions. Some form of Cartesian or Eccles dualism would then be correct. See https://philpapers.org/rec/SLETLO-2 for more.

  70. Nidwin on 13 Jul 2017 at 8:28 am

    BillyJoe7,

    No, that wasn’t intended, and I’m not certain what contra-causal free will is actually supposed to mean.

    looked up
    https://pathofthebeagle.com/2013/10/06/contra-causal-free-will/
    http://commonsenseatheism.com/?p=6382

    but they confuse me more than anything (it would probably be easier in French or Dutch for me to understand)

    I’ll try to explain it better (in my own poor English words).

    When I say we have and can make a choice, it’s not free but linked to our personality, the sum of who we are: the cause-and-effect scenario mentioned in the first article. What our consciousness, as opposed to non-consciousness (e.g. during sleep and dreams), gives us is the ability to choose among some number of possible actions or responses, but still within the cause-and-effect spectrum of our personality, the spectrum of who we are and have become at that specific moment of our life.

    Not sure this is going to make much more sense to you 🙁

  71. edamame on 13 Jul 2017 at 8:43 am

    Nidwin is just describing choice, not contra-causal free will. Psychological free will is perfectly fine. That is, you can freely choose which type of candy bar to get at the vending machine even though your brain doesn’t violate the laws of physics when you do so.

    Clearly conscious experience helps us make decisions. E.g., I go to the dentist when I have a toothache. That’s all Nidwin meant. Without conscious experience we would make crappier decisions. Nobody wants to have blindsight.

  72. edamame on 13 Jul 2017 at 9:59 am

    Gorilla, most of that spilled ink has dualist intuitions front-loaded and guarantees its conclusion.

    Take the tendentious Hard Problem formulation of the issue, of which you seem to be fond. “True, I cannot prove there is a further problem, precisely because I cannot prove that consciousness exists” (xii of The Conscious Mind). Here we have a false equivalency, as if showing that consciousness exists (which every reasonable person agrees to) were equivalent to showing that there is this further, additional Hard Problem, in his technical sense that effectively front-loads dualism into his entire philosophy. That is something very few people who understand all the issues involved would agree to.

    It’s fine if you want to build an edifice based on your front-loaded dualist intuition. But don’t pretend you have slayed anyone with a knock-down argument in the process. You have picked a controversial premise, turned it into an axiom, and drawn out some conclusions. That’s fine. Just be up front about it.

    People are much too cocky about consciousness. This applies to both sides. It plays well on the internet, but it’s sophistry.

  73. edamame on 13 Jul 2017 at 10:08 am

    Gorilla, how about this: “The world is empirically indistinguishable from one in which consciousness is a brain process.” Do you think that is true or false? If true, why is that unimportant for the dualism/materialism debate?

    This is how I’ve been trying to think about it lately.

    Obviously, I lean strongly physicalist, but there is room for argument. Until we have the science and philosophy settled enough to convince people like Koch, Chalmers, etc. (that is, people who, unlike Egnor, don’t have a silly religious ax to grind, but have come at it objectively and from a naturalistic perspective, and studied all the science, yet still come away dualistic), we still have a long way to go. This is unlike the arguments about evolution: the “skeptics” there are just defective in some way. With consciousness, that is simply not the case.

    That’s why I say that both sides (on the internet, where you have the little sophomoric warriors who know everything because they have read Dennett or Chalmers) tend to be too cocky. People in the trenches don’t act like that on this topic.

  74. Steven Novella on 13 Jul 2017 at 2:22 pm

    Lorenzo wrote: “As long as neural networks based on known electromagnetic laws are one hundred percent responsible for motor actions consciousness can never play any role in survival value.”

    This makes no sense, because those same neural networks are the substrate for consciousness. The neuroscientific model is that all of the various brain networks are working together in an endless loop of processing and communication that results in wakeful consciousness and everything the brain does. There is no need for, or evidence for, something outside, or another phenomenon of nature.

    Your denial of evolutionary forces is also nonsensical. That is like saying that as long as individual bees are just laying down wax and paper, nests cannot possibly have evolved naturally or provide a survival advantage.

    Relatively simple processes can lead to higher order complexity through interaction.

  75. edamame on 13 Jul 2017 at 3:24 pm

    missed that gem from lorenzo:
    As long as neural networks … are one hundred percent responsible for motor actions consciousness can never play any role in survival value.

    As long as electric circuits are one hundred percent responsible for your computer’s behavior, your CPU can never play a role in how your computer works.

  76. chikoppi on 13 Jul 2017 at 3:59 pm

    [lorenzosleakes] Materialist explanations of how consciousness can provide something new to improve survival value are illogical. If consciousness is not a fundamental entity within nature it can by definition add nothing new. It is either an illusion or an epiphenomenon. As long as neural networks based on known electromagnetic laws are one hundred percent responsible for motor actions consciousness can never play any role in survival value.

    The “materialist” retort again. Code for, “why won’t you let me assert answers without evidence?!”

    How about plants? Plants aren’t conscious, yet these entirely natural organisms propagate over time and across generations. Plants employ a staggering variety of novel traits and structures, both adaptive and reproductive, to benefit species survival.

    I would assume the fact that sunflowers turn to face the Sun in the morning, thus acquiring a survival advantage, does not require hand-wringing about metaphysics.

    The “behavior” of sunflowers is based on a complex natural system interacting with its environment. Consciousness may just as well be a higher-order, vastly more complex and recursive system. No magic is necessary and there is no evidence of magic stuff to which an appeal might be made.

  77. lorenzosleakes on 13 Jul 2017 at 4:35 pm

    Steve said “The neuroscientific model is that all of the various brain networks are working together in an endless loop of processing and communication that results in wakeful consciousness and everything the brain does. There is no need for, or evidence for, something outside, or another phenomenon of nature.”

    That is, according to you, consciousness is epiphenomenal. It adds nothing that isn’t already there in the neural processing. The neural processing explains everything without real feelings of pain and pleasure, which, on your account, are meaningless additions that just happen to occur whenever certain neural patterns occur but cause nothing in and of themselves.

  78. Paul Parnell on 13 Jul 2017 at 5:59 pm

    Pete A,

    I don’t know much about DACs but I think I understand what you are saying here. The two different ways of representing zero in sign-magnitude allow better resolution at the low end. What is usually seen as a kludge is actually a feature in this application.

    My only point was from an information theory point of view how you represent data is unimportant. But I take your point that from an engineering point of view it can be kludgy.

  79. Pete A on 14 Jul 2017 at 9:16 pm

    Paul,

    “The two different ways of representing zero in sign-magnitude allow better resolution at the low end. What is usually seen as a kludge is actually a feature in this application.” Yes indeed!

    You wrote: “My only point was from an information theory point of view how you represent data is unimportant.” That’s what I was taught, and I believed it because I was shown various proofs along with their easy-to-understand minor caveats. The easy-to-understand minor caveat for 16-bit two’s complement is its minor (1 LSB) asymmetry in its range from −32,768 to +32,767. So effing what! It’s a trivial ‘problem’ to deal with. Need to find the absolute value of −32,768? Easy: test for occurrences of −32,768 in the data and clip them to −32,767; ‘problem’ solved. It isn’t cheating because that’s exactly what signed-magnitude representation does automatically. If we find a 16-bit digital audio signal which hits a peak level of −32,768 then the recording engineer was very likely to be someone who was not an engineer.
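
    A small Python demonstration of the trap (ctypes.c_int16 emulates 16-bit two’s-complement wraparound):

        # Negating -32768 in 16 bits wraps straight back to -32768.
        import ctypes

        x = -32768
        print(ctypes.c_int16(-x).value)   # -32768: the "absolute value" is still negative

        clipped = max(x, -32767)          # pre-clip, as suggested above
        print(abs(clipped))               # 32767: safely representable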

    The reason I brought up this topic is because, in the domain of quanta — discrete [quantized] systems, including monetary units and quantum mechanics — negative integer values are generally used only for the purpose of representing an abstract deficit/deficiency. E.g., if you owe your lender $3.07 then your account will show a balance of −3.07 $; which means, in reality, that you owe your lender +307 cents. Likewise, when our loudspeakers are fed with a negative voltage on their +terminals relative to their −terminals, they do not cause the air pressure in the room to become negative: obviously, they are incapable of generating an air pressure which is lower than zero (lower than a complete vacuum). The voltage they are presented with instructs them to vary the air pressure relative to the mean atmospheric air pressure that is currently present in their environment.

    Many of us were taught to believe that two’s complement binary numbers can be used to represent real-world quantities. Yes, they can. But, the onus is on us to completely understand the exact domain in which each of these binary numbers reside. Usually, it resides in a very specific sampled-then-quantized domain, which is very different from the macroscopic-level asymptotically continuous domain in which the majority of real-world quantities reside.

    The statistics branch of mathematics makes this difference abundantly clear to its users. A probability density function is a continuous [aka: linear; analog; non-discretized and non-sampled] domain function; whereas a probability mass function applies to a discontinuous sampled domain function. Mathematicians have created mappings from the continuous domain to the discrete domain, and vice versa; but these mappings are far from being isomorphic (because it is impossible to make them isomorphic using mathematics that would be understandable to those who need to implement it in order to solve practical problems).

    I guess the bottom line is: Whenever we see a numerical value that supposedly — especially convincingly — represents a real-world quantity then we need to ask ourselves “What exactly does this number really mean?”

    This reminds me of a news story I read, which strongly suggested that eating bacon increases my chance of dying by 70%.

  80. bachfiend on 15 Jul 2017 at 12:23 am

    Pete A,

    ‘This reminds me of a news story I read, which strongly suggested that eating bacon increases my chance of dying by 70%’

    It mightn’t be as silly as it sounds. I would have to see it in context. I saw an article in the Express reporting that eating large amounts of processed meats increases the risk of dying of heart or artery disease by 72% – over the period of the follow-up, around 13 years.

    It’s still not particularly well reported though. There’s no indication of the absolute risk. If there was a 20% risk of dying of heart or artery disease over a 13 year period, then a 72% increase would be an enormous increase. Not so much if the baseline was 1%.
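
    A quick back-of-the-envelope calculation (numbers illustrative) shows how much the baseline matters:

        # Same 72% relative increase, very different absolute meaning.
        relative_increase = 0.72

        for baseline in (0.20, 0.01):
            new_risk = baseline * (1 + relative_increase)
            print(f"baseline {baseline:.0%} -> {new_risk:.1%} "
                  f"(absolute increase {new_risk - baseline:.1%})")
        # baseline 20% -> 34.4% (absolute increase 14.4%)
        # baseline 1% -> 1.7% (absolute increase 0.7%)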

  81. BillyJoe7 on 15 Jul 2017 at 3:19 am

    I think that’s what he meant.

    What that number really means depends on the base rate, the time period, how much bacon, and the effect of other variables in a particular person – including how much he likes bacon! – who is contemplating whether or not it is worthwhile to cut bacon from his diet.

    (Not that I followed more than half of what he did say 🙂 )

  82. Paul Parnell on 18 Jul 2017 at 1:30 am

    Ah, no, I think he was just saying that we are all gonna die anyway, so saying our chance of dying is increased by 70% is meaningless nonsense. It is so imprecise as to be idiotic.

    And possibly the last word on consciousness in this thread:

    If I touch something hot I feel pain. Feeling pain helps me avoid damage. Thus the ability to feel pain has evolutionary value.

    The problem with the above is that it treats the ability to feel pain as a function in an algorithm. It isn’t. There is no “feel this” opcode for any chip ever manufactured. Nobody knows how to do such a thing. And it isn’t necessary. An instrument can detect heat and a program can use that input to make a robot avoid damage. That can be algorithmic. No mention of feeling, and even if there were feeling, it would play no part in the function of the robot.
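
    Here is that robot as a toy Python sketch (the sensor model and threshold are invented for illustration). Note there is no “feel this” anywhere, just input, comparison, and output:

        # Damage-avoiding controller with no feeling anywhere in the loop.
        THRESHOLD_C = 50.0

        def read_temperature(position):
            # Stand-in for a real sensor; hotter toward the origin in this toy world.
            return 100.0 / (1.0 + abs(position))

        def step(position):
            """Move away from the heat if it exceeds the threshold, else stay put."""
            if read_temperature(position) > THRESHOLD_C:
                return position + 1   # retreat down the temperature gradient
            return position

        pos = 0
        for _ in range(5):
            pos = step(pos)
        print(pos, read_temperature(pos))   # the robot has retreated; no pain required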

    It has been suggested that consciousness is an illusion. I cannot wrap my head around what that could even mean. It seems like a self-referential paradox like “this sentence is false.”

    It has been suggested that consciousness is emergent. But without further details that is so empty as to be meaningless. A water wave is emergent. From one second to the next it is composed of totally different water molecules. I understand it and I can even see it as an algorithm. Not so consciousness, and calling it emergent does no good. And it is impossible to imagine what further details could make a difference.

    It has been suggested that consciousness is explained by non-materialism. In what #%@$ way? How can you even define non-material, let alone use it to explain anything anywhere ever?

    So I’m stuck in my attempts to explain consciousness. But I think everyone else is as well. Most are in denial while making arguments that carry their philosophical baggage. A neuroscience guy will see it as some deep function of neural nets. A computer guy will see it as a property of proper algorithms. A rationalist will rationalize it away as an illusion or emergent property. A mystic will see god.

    I don’t think anyone has a clue. I don’t think anyone even knows what a clue might look like. I think people should put down their philosophical baggage for a moment and just admire the magnificent conundrum. It is beautiful.

    That is my take if anyone is still listening.

  83. Nidwin on 18 Jul 2017 at 3:56 am

    I’m still reading you Paul.

    I consider the word consciousness an abstraction, a term that helps us put a name to that specific entity or concept. I’m not convinced there’s something physical to consciousness; rather, consciousness is the product or result of a more complex evolved brain system.

  84. chikoppi on 21 Jul 2017 at 10:37 am

    Not to be missed: the first peer reviewed rap album about the science of consciousness.

    https://www.indiegogo.com/projects/baba-brinkman-s-rap-guide-to-consciousness-cd-music-science

  85. Pete A on 21 Jul 2017 at 4:52 pm

    bachfiend, BillyJoe7, and Paul Parnell,

    My apologies for failing to read your replies until today.

    @bachfiend,
    I agree with your reply on 15 Jul 2017 at 12:23 am, but the utter stupidity of the news story I read, which strongly suggested that eating bacon increases my chance of dying by 70%, was my core point. The author of the article did not mention base rates; it was a scaremongering-for-profit news article. My chance of dying is precisely 100%; therefore eating bacon does not, and cannot, increase my chance of dying by 70% 🙂

  86. TheGorilla on 21 Jul 2017 at 11:21 pm

    edamame,

    Gorilla, how about this: “The world is empirically indistinguishable from one in which consciousness is a brain process.” Do you think that is true or false? If true, why is that unimportant for the dualism/materialism debate?

    To be clear, I am not a dualist and I think the real issues with consciousness are linguistic and cultural — materialists and dualists are wrong for similar reasons. I just say this to make it clear that I have no personal horse in this race outside of wanting arguments to be properly understood and taken seriously. This is the same reason I give atheists such a hard time about cosmological arguments despite rejecting those arguments.

    If someone were to say that the world is empirically indistinguishable from one in which consciousness is a brain process, that’s not actually hitting the root of the question – it’s not too difficult for a dualist to say that consciousness is clearly something the brain does while denying that the qualitative, “what it is like,” aspects are material.

    And I think it’s hardly loading dualist assumptions into the conversation to say something like “my conscious experience is private” or that “conscious experience cannot be fully described in the manner of the sciences.” Both of those are common sense and lead into the problems of consciousness — meaning that while those statements and attitudes generate the Hard Problem, it is not in an underhanded way… it’s just the prima facie nature of conscious experience.

  87. BillyJoe7 on 22 Jul 2017 at 4:54 am

    Paul,

    (If you’re still listening 😉 )

    The phrase “consciousness is an illusion” means that consciousness is not what it intuitively seems to be. It does not mean that consciousness does not exist. Otherwise, why would the proponents of “consciousness is an illusion” not just say “consciousness does not exist”? I don’t know anyone who denies the existence of consciousness. It is more commonly an accusation by those who misunderstand what is meant by those who say that “consciousness is an illusion”. As for “this sentence is false”, that has never made any sense to me. It doesn’t actually say anything to which the label “false” could be applied. The sentence “I never tell the truth” is better.

    Saying “consciousness is emergent” is not meaningless. The deterministic world of our everyday experience is emergent upon the probabilistic quantum level (although there are deterministic interpretations of QM). Thermodynamics emerges from statistical mechanics – there is no pressure and temperature in statistical mechanics, but they emerge from it and are described by the laws of thermodynamics. Colour emerges when atoms combine to form molecules. These are simply different levels of description of reality (quantum mechanical, chemical, physiological, biological, neurological) and consciousness could simply be emergent upon the neurological (with psychological and sociological emerging upon consciousness).

    Why can’t pain be a function in an algorithm? Do we know enough to be certain about this? Does our present inability to code it into a chip in a robot preclude it from being a possibility within the brain? It seems a lot of our technology follows discoveries in nature. We have generally found evolution to be a valuable teacher (e.g. Velcro). And can you really conclude, in our present state of knowledge, that it is not necessary for pain to be a function in an algorithm? Do you think p-zombies are possible? Also, robots are manufactured from scratch, so robot manufacturers can optimise by writing “avoidance of heat” into the algorithm, with temperature as an input and a temperature T written in as a threshold beyond which the robot moves down the temperature gradient. Evolution works with what it has and what is possible. Feeling pain could have been the consequence of those constraints. Bats evolved echolocation. Humans evolved vision.

    But, as you say, there is no non-materialist (i.e. no nonphysical/immaterial/supernatural) mechanism by which consciousness could be explained. So, for now, let’s continue to pursue physical/material/natural mechanisms. They’ve been working for us for about 400 years already.

    Maybe consciousness is like life. We now have a pretty good idea about what constitutes life. Although we have not been able to create it in a laboratory, we have pretty well dispensed with “élan vital” as an explanation (or, more correctly, as a non-explanation). Life is a whole lot less mysterious than it used to be hundreds of years ago. Who’s to say something similar won’t happen with respect to consciousness?

  88. JJ Borgman on 22 Jul 2017 at 1:42 pm

    I just love all the semantical jousting. And I love the idea that there are people, funded, to study the tertiary questions. But. When I wake up in the morning, reality demands my attention. In the near-term, that is the only thing that matters. By that, I mean the span of a generation. I can do nothing about the activities of tyrants, plate tectonics, gods, atmospheric events, asteroids and so on.

    There is no question in my mind that the flat tire is real. The jerk just cut me off in traffic. The tree fell on the power line. My friend found out they had a debilitating disease. Our water supply is contaminated. The manufacturing plant is relocating out of town. I am getting fat and old.

    I decide, based on my current personal philosophy (affected by many things), what is true to me and what I view as ridiculous. And all that can change. For me, it has.

  89. BillyJoe7 on 22 Jul 2017 at 5:09 pm

    Firstly, I think you can safely eliminate gods from your list. Secondly, you are equivocating on the word “reality”. We’re not talking about the reality of trees bringing down power lines. And, thirdly, truth is an empirical question, not a personal one.

    But maybe I misunderstood what you were trying to say.
