Dec 29 2025

Biological vs Artificial Consciousness

Perhaps the most fascinating and controversial topic in neuroscience, and one of the most intense debates in all of science, is the ultimate nature of consciousness. What is consciousness, specifically, and what brain functions are responsible for it? Does consciousness require biology, and if not, what is the path to artificial consciousness? This is a debate that may never be fully resolved through empirical science alone (for reasons I have stated before and will repeat here shortly). We also need philosophy, and an intense collaboration between philosophy and neuroscience, each informing and building on the other.

A new paper hopes to push this discussion further – On biological and artificial consciousness: A case for biological computationalism. Before we delve into the paper, let’s set the stage a little bit. By consciousness we mean not only the state of being wakeful and alert, but the subjective experience of our own existence and of at least a portion of our cognitive state and function. We think, we feel things, we make decisions, and we experience our sensory inputs. This itself provokes many deep questions, the first of which is – why? Why do we experience our own existence? Philosopher David Chalmers posed an extremely provocative question – could a creature have evolved that is capable of all the cognitive functions humans have but does not experience its own existence (a creature he termed a philosophical zombie, or p-zombie)?

Part of the problem with this question is – how could we know whether an entity was experiencing its own existence? If a p-zombie could exist, then any artificial intelligence (AI), even one capable of duplicating human-level intelligence, could be a p-zombie. If so, what is the difference between the AI and biological consciousness? At this point we can only ask these questions; some of them may need to wait until we actually develop human-level AI.

What are the various current theories of consciousness? Any summary I give in a single blog post is going to be a massive oversimplification, but let me give the TLDR. First we have dualism vs purely naturalistic neuroscience. There are many flavors of dualism, but basically it is any philosophy that posits that consciousness is something more than the biological function of the brain. We are not discussing dualism in this article. I have made my position on this clear in the past – there is no scientific basis for dualism, and the neuroscientific model is doing just fine without having to introduce anything non-naturalistic, or anything other than biological function, to explain consciousness. The new paper is essentially a discussion entirely within the naturalistic neuroscience model of consciousness (which is where I think the discussion should be).

Within neuroscience the authors summarize the current debate this way:

“Right now, the debate about consciousness often feels frozen between two entrenched positions. On one side sits computational functionalism, which treats cognition as something you can fully explain in terms of abstract information processing: get the right functional organization (regardless of the material it runs on) and you get consciousness. On the other side is biological naturalism, which insists that consciousness is inseparable from the distinctive properties of living brains and bodies: biology isn’t just a vehicle for cognition, it is part of what cognition is.”

They propose what they consider a new theory: “biological computationalism”. They write:

“For decades, it has been tempting to assume that brains “compute” in roughly the same way conventional computers do: as if cognition were essentially software, running atop neural hardware. But brains do not resemble von Neumann machines, and treating them as though they do forces us into awkward metaphors and brittle explanations. If we want a serious theory of how brains compute and what it would take to build minds in other substrates, we need to widen what we mean by “computation” in the first place.”

I mostly agree with this, but I think they are exaggerating the situation a bit. My reaction on reading this was – but this has been my understanding for years. For example, in 2017 I wrote:

“For starters, the brain is neither hardware or software, it is both simultaneously – sometimes called “wetware.” Information is not stored in neurons, the neurons and their connections are the information. Further, processing and receiving information transforms those neurons, resulting in memory and learning.”

For the record, the idea that brains are simultaneously hardware and software, and that these two functions cannot be disentangled, goes back at least to the 1970s. Gerald Edelman, for example, stressed that the brain was neither software nor hardware but both simultaneously.  Any meaningful discussion of this debate is a book-length task, and experts can argue about the exact details of the many formulations of these various theories over the years. Just know these ideas have all been hashed out over decades, without any clear resolution, but it has certainly been my understanding that the “wetware” model is dominant in neuroscience. Also – I think the debate is better understood as a spectrum from computationalism at one end to biological naturalism at the other. Even the original proponents of computationalism, for example, recognized the biological nature and constraints of that information processing. The debate is mainly about degree.

In any case, the authors do, I think, make a good contribution to the wetware side of this discussion, essentially reformulating it as their “biological computationalism” theory. This theory has three components. The first is that biological consciousness, and brain function more generally, is a hybrid of discrete events and continuous dynamics. Neuron spikes may be discrete events, but they occur against a background of chemical gradients, synaptic anatomy, voltage fields, and other aspects of brain biology. The discrete events affect the continuous dynamic state of the brain, which in turn affects the discrete events.
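To make this hybrid picture concrete, here is a toy sketch (my own illustration, not from the paper) of a leaky integrate-and-fire neuron: the membrane voltage evolves continuously between events, and each discrete spike resets that continuous state, which in turn shapes when the next spike can occur. All numbers are arbitrary illustrative values.

```python
import math

# Toy hybrid system (illustrative values only): continuous membrane dynamics
# punctuated by discrete spike events, each of which resets the continuous state.
dt = 0.1          # integration step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -55.0  # spike threshold (mV)
v_reset = -75.0   # post-spike reset (mV)

v = v_rest
spike_times = []
for step in range(2000):                      # simulate 200 ms
    t = step * dt
    drive = 20.0 + 5.0 * math.sin(t / 5.0)    # continuous input drive (mV)
    # Continuous dynamics: leaky integration toward rest plus input drive.
    v += (dt / tau) * (v_rest - v + drive)
    # Discrete event: crossing threshold emits a spike and resets the state.
    if v >= v_thresh:
        spike_times.append(t)
        v = v_reset                           # the event feeds back into the dynamics

print(f"{len(spike_times)} spikes in 200 ms")
```

The point of the sketch is the two-way coupling: the continuous state determines when the discrete events happen, and each event perturbs the continuous state.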

Second, the brain is “scale-inseparable”, which is just another way of saying that hardware and software cannot be separated. There is no algorithm running on brain hardware – the hardware is the algorithm, and it is altered by the running of the algorithm – they are inseparable.
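As a rough software analogy (mine, not the authors’), consider a toy Hebbian network: the weight matrix is simultaneously the thing doing the computing and the thing being rewritten by that computing, so there is no fixed program sitting apart from a passive substrate. Again, the numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hebbian network (arbitrary numbers): the weights both implement the
# computation and are rewritten by the activity they produce.
n = 5
w = rng.normal(0.0, 0.1, size=(n, n))   # the "hardware" that is also the algorithm
np.fill_diagonal(w, 0.0)
eta = 0.01                               # learning rate
stim = rng.random(n)                     # fixed external input

x = np.zeros(n)
for _ in range(100):
    x = np.tanh(w @ x + stim)            # activity is computed by the weights...
    w += eta * np.outer(x, x)            # ...and that same activity rewrites them
    np.fill_diagonal(w, 0.0)

print("final activity:", np.round(x, 3))
```

Even in this crude sketch, asking “what is the program?” has no clean answer: the weights are the state, the memory, and the computation all at once.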

Third, brain function is constrained by the availability of energy and resources – it is, in their terms, “metabolically grounded”. This is fundamental to many aspects of brain function, which evolved to be metabolically efficient. You cannot fully understand why the brain works the way it does without understanding this metabolic grounding.
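As a crude illustration of the metabolic point (again my own sketch, with made-up numbers): give a spiking unit an energy budget, where each spike has a cost and energy replenishes slowly. The unit’s activity ends up rate-limited by metabolism rather than by how strongly it is driven.

```python
# Crude sketch of metabolic grounding (made-up numbers): each spike costs
# energy, energy recovers slowly, and a depleted budget suppresses firing.
budget_max = 10.0
energy = budget_max
spike_cost = 1.0       # energy consumed per spike
recovery = 0.02        # energy restored per time step
spikes = []

for t in range(1000):
    energy = min(budget_max, energy + recovery)
    driven = True                      # constant, maximal drive to fire
    if driven and energy >= spike_cost:
        spikes.append(t)
        energy -= spike_cost           # metabolism, not drive, limits activity

# Once the initial budget is spent, firing settles to roughly one spike
# per spike_cost / recovery = 50 steps, no matter how strong the drive.
print(f"{len(spikes)} spikes in 1000 steps under constant maximal drive")
```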

I fully agree with the first two points, and that this is a good way of framing the “wetware” side of this debate. I also think the brain is metabolically grounded, but that may be incidental to the question of consciousness. An AI, for example, may be grounded by other physical constraints, or may be functionally unlimited, and I don’t see how that would matter to whether or not it could generate consciousness.

What does all this say about our ability to create artificial consciousness? That remains to be seen. I think what it means is that we may not be able to create true self-aware AI consciousness with software alone. We may need to create a physical computational system that functions more like biology, with hardware and software being inseparable, and with discrete events and continuous dynamics likewise entangled. I don’t think the authors answer this question so much as provide a framework for discussing it.

It may be true that these aspects of brain function are not necessary for, but merely incidental to, the phenomenon of consciousness. It may also be true that there is more than one way to achieve consciousness, and the fact that human brains do it one way does not mean it is the only possible way. Further, even if their theory is correct, I don’t think it answers the question of whether or not a virtual brain would be conscious.

In other words – if we had a powerful enough computer to create a virtual human brain, so that all the aspects of brain function were simulated virtually rather than built into the hardware, could that virtual brain generate consciousness? I personally think it could, but it’s a fascinating question. And again, we still have the problem of how we would really know for sure.

The good news is that I think we are on a steady road of incremental advances on the question of consciousness. We have a collaboration among philosophers, neuroscientists, and computational scientists, each contributing from their own perspective, and the discussion has been slowly grinding forward. It has been incredible, and challenging, to follow, and I can’t wait to see where it goes.
