
Sentience Not Explained

The Cartesian Theater

Descartes viewed the mind as a nonphysical entity that interacts with the physical world through the pineal gland or epiphysis [Consciousness Explained, p. 34]. Dennett observes that theorists’ models often “still presuppose that somewhere, conveniently hidden in the obscure ‘center’ of the mind/brain, there is a Cartesian Theater, a place where ‘it all comes together’ and consciousness happens.” [Ibid., p. 39]

The Multiple Drafts Model of Consciousness

Dennett replaces the Cartesian Theater model of consciousness with a scientifically verifiable Multiple Drafts model. Here is Dennett’s own thumbnail sketch of his theory.

“There is no single, definitive ‘stream of consciousness,’ because there is no central Headquarters, no Cartesian Theater where ‘it all comes together’ for the perusal of a Central Meaner. Instead of such a single stream (however wide), there are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. Most of these fragmentary drafts of ‘narrative’ play short-lived roles in the modulation of current activity but some get promoted to further functional roles, in swift succession, by the activity of a virtual machine in the brain. The seriality of this machine (its ‘von Neumannesque’ character) is not a ‘hard-wired’ design feature, but rather the upshot of a succession of coalitions of these specialists.” [Ibid., pp. 253-254]

Dennett likes to use the term “Joycean machine” [ibid., pp. 214, 275-281] to describe any system that fits his Multiple Drafts model of consciousness. Dennett is so bold as to declare that anything controlled by a virtual machine fitting this model is “conscious in the fullest sense.” [Ibid., p. 281] He further writes, “in principle, a suitably ‘programmed’ robot, with a silicon-based computer brain, would be conscious, would have a self.” [Ibid., p. 431]


Imagine Ultracog, a robot with an inorganic brain. Ultracog’s brain consists of a vast array of logic cells. This array of logic cells is wired for a very high degree of plasticity. When first manufactured, the brain is totally mindless and devoid of any information or configuration. A programming port provides external random access to the entire brain. Through this programming port, data is first sent to configure the brain as a virtual Joycean machine. Even interconnection paths between logic cells are programmed through the programming port. When configuration programming is done, Ultracog’s brain has been transformed from mindless hardware to an infant mind.
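The configurable brain described above can be pictured in a small sketch, assuming (purely for illustration) that each logic cell holds a programmable state plus a programmable set of interconnections; every name here (LogicCell, BlankBrain, program_port_write) is hypothetical and not drawn from Dennett:

```python
class LogicCell:
    """One programmable cell: a state value plus programmable links."""
    def __init__(self):
        self.state = None   # devoid of any information at first
        self.links = []     # interconnection paths, also programmable


class BlankBrain:
    """A mindless array of logic cells with maximal plasticity."""
    def __init__(self, size):
        self.cells = [LogicCell() for _ in range(size)]

    def program_port_write(self, address, state, links):
        """External random access: configure one cell through the port."""
        cell = self.cells[address]
        cell.state = state
        cell.links = list(links)


# Configuring the blank hardware into an "infant mind":
brain = BlankBrain(size=4)
infant_configuration = [
    (0, "pattern-A", [1, 2]),
    (1, "pattern-B", [3]),
    (2, "pattern-C", [3]),
    (3, "pattern-D", []),
]
for address, state, links in infant_configuration:
    brain.program_port_write(address, state, links)
```

The point of the sketch is only that the hardware contributes nothing but plasticity: everything distinctive about the "mind" enters through the port as data.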

Ultracog’s brain is integrated with an extensive network of transducers and effectors. Once configured as an infant mind, a switch is thrown to get Ultracog started with life. Programming is no longer done through the programming port. For the next few decades, all of Ultracog’s programming is done through nurturing, training, and social interaction. With the help of excellent instructors, Ultracog proves to be a true genius. Ultracog converses fluently with others and has a great appreciation for music and art. Ultracog has a most excellent grasp of scientific knowledge and contributes much to the arts and sciences.

The whole maturing process in Ultracog occurs in its virtual machine. The hardware remains absolutely identical from infancy onward. Several copies of Ultracog were sent to Saturn, many years ago, before the robot’s mind was programmed or configured with any information at all. These copies are hardware-identical to Ultracog, except that their brains are mindless: unconfigured and devoid of any information.

One day, Ultracog agrees to go on a mission to Saturn. Ultracog does not need to leave the comforts of Earth to do this. A switch is thrown that suspends the operation of Ultracog’s brain. Then, all the configuration and content data from Ultracog’s mind is downloaded through the very same programming port that, decades ago, configured Ultracog as an infant mind. The data is then broadcast to Saturn. A large number of receiving stations, each with huge, high-gain antennas, orbit Saturn. These receiving stations gather many trillions of bytes of data, which they relay to a central station. The received data contains all the information needed to reconstruct Ultracog’s mind in the brains of the mindless robots. These trillions of bytes of data are downloaded into the programming ports of the copy robots. When their switches are turned on, each of these robots has the mind of Ultracog exactly as it was when Ultracog’s mind was uploaded through its programming port. These robots have no need to repeat the decades of nurturing, training, and social interaction, because all of this was accomplished with the original Ultracog on Earth.
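The upload-and-broadcast trick can be sketched as a toy snapshot-and-restore, on the thought experiment’s own assumption that Ultracog’s mind is nothing over and above transferable configuration data; all names below are hypothetical illustrations, not a real protocol:

```python
import json


class RobotBrain:
    """Toy model: the brain's entire 'mind' is its configuration dict."""
    def __init__(self):
        self.config = {}       # mindless until programmed
        self.running = False


def upload_mind(brain):
    """Suspend operation and read the full state out through the port."""
    brain.running = False
    return json.dumps(brain.config).encode()   # serialize for broadcast


def download_mind(blank_brain, data):
    """Write the received state into a hardware-identical blank copy."""
    blank_brain.config = json.loads(data.decode())
    blank_brain.running = True


# Ultracog, after decades of nurture, has accumulated state:
ultracog = RobotBrain()
ultracog.config = {"language": "fluent", "music": "appreciated"}

signal = upload_mind(ultracog)   # this is what the photons carry to Saturn

copies = [RobotBrain() for _ in range(3)]   # mindless, hardware-identical
for copy in copies:
    download_mind(copy, signal)

# Every copy now holds Ultracog's mind exactly as it was at upload time.
assert all(copy.config == ultracog.config for copy in copies)
```

Note that `signal` itself is inert bytes: nothing processes data in transit, which is the essay’s point about the photons below.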

The mind-uploading trick described above worked very well with Ultracog’s inorganic brain because it was designed to make complete uploading and downloading easy. A biological brain has no provision for such a total transfer of configuration and data. Besides, no two biological brains are ever hardware-identical. To make things worse, no two biological brains are connected to an identical network of effectors and transducers. Therefore, the mind-transfer process that was so easy from Ultracog’s brain to the copy brains would present hopelessly enormous technical challenges if applied to biological brains.

Ultracog never left the Earth, yet its mind is now in several places on Saturn. Did Ultracog’s mind actually travel to Saturn at the speed of light? I would say no. Photons traveled at the speed of light from Earth to Saturn. These photons cannot compose a mind because photons alone cannot process data. Photons can play a vital role in the processing of data, but molecular structures must get involved for data to be processed. As amazing as the photons are in their ability to carry all the information about Ultracog’s mind from Earth to Saturn, they can never have the power to be a mind by themselves.


The philosophical zombie is defined as an entity that is not conscious and yet has all the outward behavior of a conscious person. Dennett presents a case against the possibility of such an entity. [Ibid., pp. 309-314] Why, then, do some philosophers propose such a thought experiment at all? I think such philosophers really have in mind entities that are intelligent but not sentient. Such entities could fully meet Dennett’s criteria for being conscious and yet not be at all sentient. In this respect, Dennett’s arguments lack sufficient relevance. To avoid confusion, I will define a new kind of entity called a joycebie. A joycebie has the Joycean machine of a mind that Dennett regards as fully conscious, but it has absolutely no sentience. I do not think that Dennett could dismiss the possibility of a joycebie as he did the philosophical zombie.

Dennett discusses and dismisses blindsight as a candidate for partial zombiehood. [Ibid., pp. 322-333] A blindsighted person’s brain does receive visual data and does make use of that information to produce some observable effects in the person’s responses to an experimenter’s cues. Yet the blindsighted person reports no visual experience whatsoever. Although a blindsighted person does demonstrate some of the characteristics of a philosophical zombie, the person’s ability to make use of visual data is extremely impaired. This contrasts sharply with the philosophical zombie, which has no functional impairment at all.

By Dennett’s criteria, Ultracog would be a conscious entity. Therefore, Dennett would deny that Ultracog could be a philosophical zombie. However, Ultracog could be a joycebie. Dennett has failed to define or explain sentience. Therefore, Ultracog could be fully conscious, in the Dennett sense, and yet possess absolutely no sentience at all.

Sentience Is Not a Cartesian Theater

It may be tempting to view sentience as the Cartesian Theater that experiences various brain events. However, I find no cause to think that sentience must exist at some central place much smaller than the brain. Intimacy of sentience with brain events in the Multiple Drafts model is no less plausible than its intimacy with brain events in a Cartesian Theater. Successful arguments against the reality of the Cartesian Theater should not be taken as arguments against the reality of sentience.

Pain and suffering

In Consciousness Explained, Dennett writes that “Suffering is not a matter of being visited by some ineffable but intrinsically awful state, but of having one’s life hopes, life plans, life projects blighted by circumstances imposed on one’s desires, thwarting one’s intentions–whatever they are.” [Ibid., p. 449] As is typical in this book, there is no mention of the role of sentience in suffering. Now suppose Ultracog were subject to such conditions as described by Dennett. I would say that Ultracog would be suffering only if it is sentient. If Ultracog is not sentient, then it may display the appearance of suffering, but I could not classify its experience as true suffering. Additionally, there are many instances of suffering that do not involve one’s life hopes, plans, or projects at all. Those who are experiencing excruciating pain are likely not thinking much about life hopes, plans, and projects at the time. I think most of us believe a one-year-old child is capable of great suffering when experiencing excruciating pain, but how many one-year-old children have their minds set on life hopes, plans, and projects?

Dennett provides a much different perspective on suffering in Kinds of Minds. Here, he presents pain as the “morally most significant instance” of sentience. [Kinds of Minds, p. 97] He admits, “We might well think that the capacity for suffering counts for more, in any moral calculations, than the capacity for abstruse and sophisticated reasoning about the future (and everything else under the sun).” [Ibid., p. 162]

Of course, sentience is not limited to the experience of suffering, but is also intimate with a great deal of experience that we may consider positive or neutral. The vision process is quite rich in sentient intimacy.

Qualia disqualified

Dennett argues against the concept of qualia, which he describes as some special intrinsic properties of our discriminative states, “the subjective, private, ineffable, properties that constitute the way things look to us (sound to us, smell to us, etc.).” [Consciousness Explained, p. 373] Dennett describes the philosophical topic of qualia as “a tormented snarl of increasingly convoluted and bizarre thought experiments … and a bounty of other sidetrackers and time-wasters.” [Ibid., p. 369] Dennett has convinced me of this. We have within us numerous detectors that are tuned to various things in and around us. Things that we sense seem like something to us because we have certain reactive dispositions to our detection of them. There is no great mystery to this at all. I find Dennett’s disqualification of qualia as a system of ineffable properties to be a great relief, as it frees us from a great deal of unnecessary baggage in our efforts to distinguish sentience from the forest of mental phenomena.


Dennett is critical of philosophers who suggest qualia are an epiphenomenon. He observes, “‘epiphenomenalism’ often seems to be the last remaining safe haven for qualia.” [Ibid., p. 401] According to Dennett, the standard philosophical meaning of an epiphenomenon is an effect that “itself has no effects in the physical world whatever.” [Ibid., p. 402] He shows how postulating an event that has absolutely no effects is an exercise in absurdity, because such an event would be impossible for anyone to observe. [Ibid., p. 402] However, he also discusses epiphenomena in the Huxley sense, i.e., in the sense that a phenomenon may have effects in the physical world but is not functional. Dennett finds no problem with the idea of qualia if they are identified with reactive dispositions that are physical effects that also have physical effects. [Ibid., p. 404] This alternative view would, of course, empty qualia of their subjective, private, ineffable properties.

Since sentience has been neither explained nor clearly defined, I find no scientific basis for either affirming or denying its classification as any kind of epiphenomenon. Ultracog may be a very high-functioning entity without any benefit of sentience. This would present the possibility of sentience being an epiphenomenon in the Huxley sense. Perhaps sentience is not at all necessary for a highly advanced mind. On the other hand, sentience may be essential to the functioning of our minds in ways that scientists have not yet begun to explain.

Explanation, Definition, Recognition

Since sentience has not been explained or properly defined, does this mean it should be dismissed as a useless fiction like qualia? I would say certainly not. Inability to define or explain a phenomenon or entity does not imply inability to properly recognize it. An excellent example of this is a two-year-old child’s recognition of his/her parents. The child may be at a total loss to define who his/her parents are or to explain anything about them, but such inability does not diminish their profound importance to him/her. The child has a rich internal representation of his/her parents despite being unable to communicate a verbal description of them to others. Likewise, I think that most of us have a good deal of representation of sentience in our minds, but even the greatest of philosophers have not yet succeeded in turning these representations into a useful verbal description.

Just as his/her parents are of profound importance to a two-year-old child, I find sentience to be of profound importance to me. In my opinion, a universe devoid of sentience, even if bustling with superintelligent activity, would be a very empty universe.

I think that much confusion results from our inability to define sentience. The convoluted controversy over qualia may just be a problem of mixing or confusing sentience with other phenomena that really need to be distinguished from it. Such confusion may be as absurd as confusing an electric field with an electric charge. An electric field is always present with an electric charge, but it is simply not true that an electric charge is an electric field or that an electric field is an electric charge.

What does sentience require?

This brings us back to the question of whether all Joycean machines are sentient. Perhaps it is impossible to create a Joycean machine that is not sentient, just as it is impossible to have an electric charge with no electric field. On the other hand, photons have no electric charge but do have electric fields. Analogously, it may be quite possible to create a socially and linguistically fluent Joycean machine that is absolutely devoid of sentience. I would not expect such a joycebie to be indistinguishable from a sentient person, because sentient persons do have significant representation of sentience in their minds, despite their inability to properly define or explain sentience. The joycebie would either find the idea of sentience rather meaningless or would confuse it with some other phenomenon when conversing with a sentient person about sentience. Scientists cannot presently answer the question of whether a joycebie is possible, because science remains in such a great state of ignorance concerning sentience.

Dennett wrote, “Everybody agrees that sentience requires sensitivity plus some further as yet unidentified factor x.” [Kinds of Minds, p. 65] On one hand, this may prove correct. On the other hand, this may prove to be as ridiculous as claiming that an electric field requires an electric charge plus some additional factor x. Later, Dennett offers “a conservative hypothesis about the problem of sentience: There is no such extra phenomenon. ‘Sentience’ comes in every imaginable grade or intensity, from the simplest and most ‘robotic,’ to the most exquisitely sensitive, hyper-reactive ‘human.’” [Ibid., p. 97] I agree that sentience comes in a huge range of grades or intensities. However, if sentience requires nothing more than sensitivity, then what determines the grade of sentience? Does a bar code reader have ten grades more of sentience than a thermostat? Are there different kinds of sentience? Is a computer that solves difficult math problems ten grades more math-sentient than a student who gets failing grades in math? Then why is doing math quite painful and effortful (higher-intensity sentience) for some failing students and rather painless and effortless (lower-intensity sentience) for some gifted students? Dennett has done excellent work in heterophenomenology, but what can he do for heterosentienology? Unless I have missed something, Dennett never offered any suggestions on how to test his “conservative” hypothesis.

Here is a testable hypothesis that may possibly be analogous to Dennett’s hypothesis that sentience requires nothing more than sensitivity. A student’s final examination submission requires nothing more than ink marks in a blue book. Final examinations receive every imaginable grade. Some blue books are ink-marked with such exquisite genius that they score beyond 100%. Some other blue books are ink-marked in such a disgusting way that they earn a negative score. This hypothesis may prove true, but what use is it?

Sentience and the nervous system

Dennett entertains the possibility of slow-motion sentience in plants. [Kinds of Minds, p. 66] Although he sees compelling reason to reserve sentience for something more special, he later allows for sentience to exist in every imaginable grade or intensity. [Ibid., p. 97] The idea of anything without a brain possessing sentience may seem absurd. However, there is a real plausibility to this when considering what amazing things can be accomplished without a brain. The single cell from which a child develops has no brain or nervous system at all. Yet, that single cell has all the wisdom [ibid., p. 78] within it to direct the entire process of building the child’s entire biological system without external guidance or supervision. Nutrition and a well-controlled environment are essential, but these do not give direction to the development process. Have the minds of the world’s greatest scientists ever come near to orchestrating a project comparable to this?

Nothing left out?

Dennett is critical of those who would add quale or some other intrinsically wonderful property to fill in what is missing in an explanation of consciousness. [Consciousness Explained, p. 455] He has done well in his employment of the metaphors of “Software, Virtual Machines, Multiple Drafts, a Pandemonium of Homunculi” [ibid., p. 455] in his explanation of consciousness. However, the matter of sentience remains untouched in all of this. I agree with Dennett’s disqualification of qualia, but he has made no mention of sentience whatsoever in his entire book, which purports to explain consciousness.

Dennett admits, “‘Sentience’ has never been given a proper definition, but it is the more or less standard term for what is imagined to be the lowest grade of consciousness.” [Kinds of Minds, p. 64] If sentience is the lowest grade of consciousness and if Dennett has explained consciousness in Consciousness Explained, then–WOW!–sentience should be the easy problem of consciousness, not the hard problem. Dennett has done an excellent job of showing us how the higher grades of consciousness function in the verbalization process. Explaining the lowest grade of consciousness, then, should be much simpler than this. However, Dennett offers no explanation for this “lowest grade of consciousness.” The situation resembles that of a student who handles PhD-level work quite well but proves to be the class idiot when competing with students at a kindergarten level.

In stating that sentience is imagined to be the lowest grade of consciousness, Dennett is not at all clear as to who is doing the imagining. Maybe this is not what he imagines at all, because he did say that sentience “comes in every imaginable grade or intensity.” [Ibid., p. 97] Perhaps he could imagine that sentience is a profound and yet unexplained phenomenon that is highly intimate with but distinct from consciousness. If he imagines this, then I would say that his claim to have explained consciousness is exonerated. However, if Dennett counts himself among those who imagine that sentience is the lowest grade of consciousness then he has an awful lot more explaining to do.


Dennett, Daniel C., Consciousness Explained, New York: Little, Brown and Company, 1991.

Dennett, Daniel C., Kinds of Minds, New York: BasicBooks, 1996.