Consciousness: what it is, where it comes from — and whether machines can have it


A young girl stares through the glass at baby gorilla Yola at the Woodland Park Zoo.

What does it mean to have a sense of self — and which creatures have it?
Credit: Genna Martin/San Francisco Chronicle via Getty

I’ve Been Thinking Daniel C. Dennett W. W. Norton / Allen Lane (2023)

Free Agents: How Evolution Gave Us Free Will Kevin J. Mitchell Princeton Univ. Press (2023)

The Four Realms of Existence: A New Theory of Being Human Joseph E. LeDoux Harvard Univ. Press (2023)

These are good times to be a thinking, conscious creature, despite events in the world that might make us doubt that. These are even better times to be a creature who thinks about consciousness: the scientific debate is livelier than ever, and technological advances and political controversies are making the practical and philosophical questions surrounding consciousness ever more pressing. Will artificial intelligence (AI) become conscious? (Or maybe it already is…? Well, no, I would say, but we’ll get to that later.) Can state-of-the-art algorithms manipulate our consciousness to change our view of the world? Which animals, besides humans, are conscious? What about fetuses? Or artificial neural organoids?

It is becoming clearer that real-life implications will be drawn from the answers that this field generates to such questions1,2. That means we must vastly improve our fundamental understanding of consciousness and related phenomena, such as agency, free will and sense of self. With so much at stake, we had better get things right. This sense of gravity hovers above three books that, in one way or another, tackle these thorny questions.

Meaning in meaning

The first does so mostly in passing. Daniel Dennett’s I’ve Been Thinking is first and foremost an autobiography of the highly influential US philosopher, based at Tufts University in Medford, Massachusetts. It spans from his early childhood in Beirut as the son of a spy to his seminal body of work on consciousness, free will and theory of mind. Readers will enjoy the backstage intelligence about some renowned contemporary philosophers, from Gilbert Ryle’s drinking preferences to Jacques Derrida’s arrest on a (false) drug-smuggling charge. There is also some direct and pretty harsh score settling in a chapter devoted to ‘academic bullies’ (you’ll have to read the book to find out who), as well as Dennett’s tips for staying focused during long philosophical talks (listen for words that start with each letter of the alphabet, in order), for perfecting your own arguments (walk and talk to yourself until you’re convinced) and for spotting weak lines of reasoning in others (look for the word ‘surely’).

Towards the end of the book, Dennett summarizes his view of how consciousness, free will and meaning emerged from billions of years of natural selection and cultural shaping, as single-celled organisms became eukaryotic, multicellular ones. In the last tiny fraction of the process came Homo sapiens, and the development of language. This took millions of years of R&D by, as Dennett puts it, “agents who did not yet understand what they were doing and why”. But the possibilities that language provided — to notice meaning, to analyse, to think about what we are thinking and to communicate and act on our thoughts — presented a problem. How could humans control these unprecedented degrees of freedom?

The answer, according to Dennett, was consciousness. Consciousness, for him, is a control architecture that takes competing streams of ideas and determines from them our expectations and actions. This control system is, fundamentally, who ‘we’ are. Consciousness is not about the way it feels to touch a hot surface, for instance, but about generating a control signal that tells us to move our hand away from that surface, an action that has survival value. Free will, in turn, is the ability to differentiate between competing streams of thoughts and actions. Being human is essentially about being a reasoner: someone who reasons about reasons and exerts control over their own behaviour. Our sense of self — being a being that ‘experiences’ things, observing them somehow from the outside — is a mere user illusion.

Evolutionary forcing

Kevin Mitchell and Joseph LeDoux apply similar evolutionary rationales to explain the emergence of consciousness and agency in their books. Mitchell is a geneticist and neuroscientist at Trinity College Dublin. His Free Agents devotes its first six chapters to an evolutionary account of the development of life and its various faculties. He argues that cognitive traits such as action, perception and choice started from very simple mechanisms that were selected for and honed to maximize fitness, or survival. From reading his book, one gets the strong impression that humans were forced by natural selection to be able to make choices and to become conscious agents.

At some point, he throws indeterminism into the mix. The Universe is not deterministic, he argues: it involves some degree of randomness, with events sometimes seemingly governed by the flip of a coin. The same is true of the brain, in his view. This indeterminism is adaptive, making humans less predictable and hence more able to survive and fight opponents.

Does such indeterminism by itself endow people with free will? No, says Mitchell: there’s nothing free in being governed by a coin flip. But indeterminism in an organism’s responses does allow it to have some influence on its future. The ability to create and express meaning is crucial here: it endows our reasons for doing things, and our reasoning about reasons (which Dennett also emphasizes), with causal power. Dennett wants to dispel the ‘illusion’ of self, but for Mitchell, the self, with all its goals, desires and beliefs, is real, and key to our free will. Together with the meaningfulness of the patterns of our neural activity, it allows us to exert top-down control, to plan ahead and to continuously shape ourselves as we interact with the world. For Mitchell, such conscious, rational control of our actions is nothing other than our free will. It is a biological, evolved function — as Dennett argues too.

Into the conscious realm

LeDoux agrees. In The Four Realms of Existence LeDoux, a neuroscientist at New York University, suggests that there are four basic varieties of life on Earth: biological, neurobiological, cognitive and conscious. The book provides an in-depth description of these realms (I found the cognitive one especially thought-provoking) and describes how they evolved, in a way that is reminiscent of Mitchell’s approach. In this scheme, most living things occupy only the biological realm. Organisms with nervous systems are also neurobiological. Of these, some animals show model-based behaviour — using past experience to predict the future effects of their actions, and in doing so optimizing outcomes. These count as cognitive creatures3.

The fourth and least common realm is the conscious one. LeDoux takes the ability to verbally report the content of experiences as the prime indicator of consciousness, a position that is not shared by all4. He emphasizes the importance of activity in the prefrontal cortex in allowing the creation of higher-order states that re-present the content of experience (although the special role of this brain area is, again, debated5).

LeDoux further differentiates between types of consciousness, ranging from simpler forms to the explicit, content-rich type that humans have. He argues that we should aim to connect each type of consciousness with a different prefrontal brain architecture, and judge claims of animal consciousness on that basis. For example, because all mammals share the same mesocortical prefrontal areas, they might have “whatever kind of consciousness these areas enable in humans”. However, some prefrontal brain structures are unique to humans (and possibly some other great apes), arguably endowing us with unusual aspects of consciousness, such as the capacity for mental time travel, that are not shared with other animals.

And what of AI? These books are published at a time when the discussion about the potential for machines to gain consciousness and agency is attracting substantial attention (see go.nature.com/46hjzvk)6,7. All three authors have something to say about it. LeDoux takes a hardline biological approach, arguing that consciousness can exist only in biological beings. Even if one were to mimic all the biological mechanisms that support consciousness — whatever these might be, from the micro to the macro level — the resulting system would not be conscious.

Dennett is similarly unenthusiastic, referring to this wave of excitement as a “bubble we should burst before many more people get deeply misled by it”. He presents DigiDan — a GPT-3 model trained on almost all of Dennett’s publications (more than one million words!) — which he uses to generate Dennett-like sentences. But as (the real) Dennett explains, despite DigiDan’s impressive abilities, it doesn’t understand anything it says: it is not an agent with beliefs and desires, or, in Dennett’s words, an intentional system (yet).

But despair not (or rejoice not, depending on where you stand). The epilogue of Free Agents provides a ‘recipe’ for creating artificial systems that resemble humans in having general intelligence and agency. It is to follow the evolutionary trajectory that got us here: embodiment, sensing and acting, with some motivation and learning abilities, and a drop of indeterminacy.

That’s why it is indeed a good time to be a creature thinking about consciousness: as all three books emphasize, today’s discussions of these issues are much more informed than they were, say, 70 years ago. Back then, one had to resort to I, Robot, Isaac Asimov’s brilliant 1950 collection of science-fiction stories, to think about consciousness in artificial settings. Today, we can rely on a strong backbone of biological research, on developed conceptual and philosophical insights and on extensive empirical work in the field of consciousness studies. Although the field is far from agreeing on a single theoretical or empirical account8, progress has been made in understanding the issues, and in suggesting solutions9.

I firmly believe that this knowledge-based, interdisciplinary approach is the way to move the sort of questions I posed at the beginning of this article from the domain of science fiction to that of science. This year, I was part of a group of philosophers, computer scientists and neuroscientists that published an extensive report on consciousness in AI6, identifying potential indicators of consciousness in artificial systems using theories developed mostly with humans in mind. We show that current AI systems fail to meet these criteria, but also that there are no technical barriers to building a system that would satisfy them.

Would such a system be conscious? To be honest, I am not sure; I consider the indicators as signifying the potential for consciousness, rather than its existence. Would we really want to build a machine with consciousness, or agency? Here, I am even less sure. We have yet to understand which creatures in the world are conscious, and we have not developed ethical frameworks that account for this possibility. As our past and present sadly demonstrate, we repeatedly mistreat even those creatures who are undoubtedly conscious: our fellow human beings.

It doesn’t seem very prudent to me to add more conscious creatures to this already complicated, combustible picture. It is perhaps wiser, then, to be a creature who thinks about consciousness than one who aspires to create artificial versions of it.
