What it Means to be Human: Blade Runner 2049

Jan 16, 2024

In the 2017 movie Blade Runner 2049, Ryan Gosling plays the replicant K (short for his serial number KD6-3.7) in a dystopian future Los Angeles. Replicants are bioengineered humanoids who serve humans, despite having superior strength and intelligence. K is a Blade Runner, whose job is to ‘retire’ (kill) renegade replicants for the Los Angeles Police Department. One day, he finds evidence that a replicant might have been born from another replicant, which no one thought was possible. This revelation could spark political tensions between humans and replicants, who might now begin to fight for recognition and rights. Hence, K's task is to find and retire that replicant to prevent a conflict.

When K's superior orders him to kill the replicant, he hesitates. He explains his hesitancy with a remark that can easily be overlooked, but which guides the whole movie: "To be born is to have a soul, I guess."

This sentence is essential for the film. K is implying that by being born, the new replicant has inherited some special human-like feature. So far, humans have enslaved replicants based on the premise that replicants are just AI and hence not as worthy as their human masters. If a replicant could have been born naturally, the ‘artificial slavery’ basis of society might be challenged.

To understand if the societal order in Blade Runner 2049 must be altered, we must first discover what defines a human being and gives humans worth. This question of what makes us human runs through the whole plot. K begins to think that he might be the replicant-born child who is therefore similar to humans. So let's start to explore the question: What makes us human?

K says that being born means having a soul. ‘Having a soul’ encapsulates the idea of having feelings and thoughts that belong exclusively to the individual – arguably the most fundamental aspect of being human. All our memories, all our experiences, and all our actions appear to belong to us as human beings.

Of course, it sounds very spiritual or religious to put this in terms of ‘soul’. ‘Soul’ is an elusive concept, not based on scientific observation. However, replacing the word ‘soul’ with ‘consciousness’ (one Greek word for ‘soul’ is, after all, psyche) opens up a new world, with a more scientific basis for unravelling the question of what makes us human.

In the words of philosopher Thomas Nagel, consciousness is ‘what it's like’ to be a particular organism. Indeed, many consciousness researchers consider his 1974 article ‘What Is It Like to Be a Bat?’ one of the founding works of the field. In it, Nagel argues that there is something it is like to be a bat: there is a subjective world from the viewpoint of a bat. Thus, consciousness can be defined as subjective experience. To use philosophical jargon, subjective experience consists of qualia. Everything we experience in the world we experience through these qualitative, sensory properties. However, each of us may see a different blueness in the blue sky or feel a different sharpness in our pain. Our subjective experiences differ from one another.

Famously, the original Blade Runner movie (1982) was based on the book Do Androids Dream of Electric Sheep? (1968) by Philip K. Dick. The book's title is in some ways similar to ‘What Is It Like to Be a Bat?’, supporting my view that the film is about consciousness. Both titles concern inner universes, of androids and bats respectively: the question ‘Do androids dream of electric sheep?’ is presumably asking whether there is something it is like to be an android. But if androids do have an inner universe, what does it look like? Are androids so similar to humans that they dream about the electric equivalent of the sheep that humans think of? Would the inner world of an android be comparable to that of humans, or would it be utterly different? In sum, Dick's title asks if there's a ghost in the machine.

In this context, we naturally want to ask, what do we know, or can we know, about consciousness?

As is probably obvious, there are several answers to this question. For example, some philosophers subscribe to mysterianism: the idea that although there may be an explanation of consciousness, humans are simply too limited in intelligence to grasp it. Another notion some philosophers find compelling is panpsychism, which treats consciousness as an essential property of everything: everything is conscious, at least to some extent, including even rocks or bacteria. Another, more compelling theory of consciousness is materialism: the idea that consciousness can be equated with the physical processes and activities of the brain. With, through, and as a result of our living body, consciousness emerges as a material phenomenon.

In Blade Runner 2049, K appears to lean towards materialism. He thinks having a soul is related to being born, and is therefore tied to the body's physical properties. There is one clear difference between this and standard materialism, though: K believes a person needs to be born to have consciousness, whereas according to most versions of materialism, a perfect replica of a human being would almost certainly be conscious. Imagine you made an excellent copy of a human; a perfect copy. How could it lack some fundamental property? Compare this to rebuilding a car perfectly, but without it being able to drive: it is conceivable, but practically impossible.

If you share my view that consciousness depends on the physical body, the odds of K being conscious are very high. Replicants are perfected copies of humans, so why shouldn't they be conscious? Nonetheless, this still does not provide us with a reasonable explanation for why K is conscious. The missing piece of this puzzle is to explain how consciousness can arise in brains at all.

The distinction between the ‘easy’ and ‘hard’ problems of consciousness, posed by David Chalmers in his 1996 book The Conscious Mind, is helpful here. The ‘easy’ problems deal with the functional and behavioural aspects of mind and brain. They are about correlating different aspects of consciousness with brain activity: working out which bits of brain behaviour are linked with which bits of mind behaviour, and how. For example, how people pay attention, how they choose to act, or how the brain processes sensory signals. The ‘hard’ problem of consciousness adds to the challenge by concerning the first-person perspective: why and how do these brain processes generate conscious experience itself?

You might be disappointed that science is still working on answering both the easy and hard problems of consciousness. Even the ‘easy’ problems still require much intricate work in neuroscience. But the hard problem is particularly difficult to answer (which is why it's called the hard problem), because even if you could explain exactly how the brain functions, there would still be an apparent gap in explaining why this functioning is connected to, or gives rise to, subjective experience. Pain, for example, arises when certain parts of the brain are activated. However, this observation only answers the easy problem of pain, leaving the hard problem untouched. Knowing that an area of the brain activates when we feel pain does not explain why that activation gives us the conscious, qualitative experience of pain.

Let's start with the part of the brain in which consciousness apparently first emerges. The reticular activating system, in the core of the brainstem, is arguably the part of the brain primarily responsible for consciousness (see for instance The Hidden Spring: A Journey to the Source of Consciousness, Mark Solms, 2022). Small lesions in this area put people into a coma, so this part of the brain can be compared to an on-off switch: when it is intact and operating, the person is fully awake and aware; when it is damaged, awareness ends. The reticular activating system is also connected to the generation of emotions. When this part of the brain is stimulated, patients experience strong depressive feelings, which disappear once the stimulation stops. Additionally, the core brainstem is highly active in people feeling emotions such as grief, curiosity, rage, and fear. Due to these two factors – controlling wakefulness and generating emotions – the reticular activating system has a strong claim to be the source of consciousness. Being awake to interact with the world is fundamental to creating an inner universe. The utility of emotions for consciousness may be less clear. So let's consider why consciousness arises; there it will become evident what part emotions play.

Consciousness presumably arises because it helps achieve the main goal of any organism: staying alive. But how does the mind realize that threats are occurring, in order to counter them? Here's where feelings come into play. Whether pain, happiness, or anger, we are constantly feeling emotions, which are in effect demands for the body to work in specific ways. They guide a person to act in the right direction for survival by providing positive or negative feedback through emotional motivation. Our emotions can be compared to an alarm system for the human body. Fear, for instance, is experienced in dangerous situations, and elicits the fight-or-flight reaction. Suffocation is another threat that provokes fear: the body needs to re-establish its necessary level of blood oxygen, and the brain signals this by evoking the emotion. Besides external threats to the body, the mind also reacts to internal ones. The heart must beat at a certain pace. As long as there is no problem, heart rate is not consciously noticed. However, once the brain registers a concerning alteration, the process of maintaining the proper heart rate becomes conscious: symptoms become noticeable, and negative emotions such as fear demand a response from the body.

A person cannot feel all emotions simultaneously. Therefore, the mind creates a hierarchy of priorities, reducing the most critical need first. Sometimes, drinking water is more important than going to sleep, for example.

This theory tells us that consciousness arises because subjective experience helps an organism survive. This concept is very close to what Sigmund Freud called ‘drive’. Freud thought of drive as the extent to which the mind works to maintain a balanced bodily state.

We now have all the necessary puzzle pieces to explain why K is conscious. K is a replica of a human, and thus has a human-like body. Following materialism, this gives us the first clue that K must be conscious, just as humans are because of our bodies. Further, since his body is prone to threats, K seeks to survive. He needs consciousness to deal with uncertainty and threats from the environment. Positive and negative emotions guide K to help him determine whether his steps to reduce the danger are sufficient.

Throughout the film, it's evident that K has feelings similar to a human's. He feels various emotions, such as sadness, rage, hope, and happiness. And at the end, K also does a most human thing. After K has helped Rick Deckard (Harrison Ford) reunite with his daughter, K lies on the stairs in front of her workplace, slowly passing away. If minds exist in the service of survival, then nothing proves more profoundly that K is conscious than watching his mind fade away.

Some readers here might ask an excellent question: Why does this even matter? What is the point of knowing whether K is conscious or not?

Blade Runner 2049 shows us the importance of comprehending consciousness in AI research. It is already becoming difficult to tell whether an AI is sentient or not. An AI has already been created that claims to be conscious: the recent case of the chatbot LaMDA, which claimed to feel happiness and sadness from time to time.

You might argue that LaMDA is conscious. However, Chalmers devised a thought experiment that illustrates how hard it is to assess whether another organism is conscious or not. The thought experiment asks you to imagine a philosophical zombie. This zombie is very different from the ones you’ve seen in movies. It's not a mindless, brutal creature with an appetite for human flesh. This zombie is instead a lot like you (assuming that you are not mindless and brutal with an appetite for human flesh). Imagine another person who acts like you and speaks like you. This version of yourself even claims to be sentient. However, despite its similarity to you, having a complete human biology, this zombie has one significant difference: it lacks consciousness. There is nothing it is like to be that organism. It has no inner universe. In the case of AI, there might similarly be versions that claim, and appear, to be conscious, but are not.

Discovering whether an AI is truly conscious will become one of the most vital challenges in this area. Being clear on whether or not an AI is conscious determines how humans should treat it. In the future, we may share our world with highly intelligent, maybe even sentient, AI. If AI becomes conscious, it will be capable of real emotions, including pleasure, pain, worry, and excitement. We may never be able to fully grasp what it's like to be such an AI, just as we do not fully understand what it is like to be a bat. Regardless, there might be something it is like to be that AI. Hence, unnecessarily exposing conscious AI to pain or other unpleasant experiences would be unethical. Picture a world like that of Blade Runner 2049, in which sentient robots look like humans, speak like humans, and have emotions like humans. Willingly mistreating them would simply be a case of psychopathic cruelty. Even turning them off could be considered a case of murder. Therefore, not treating conscious AI properly would be an example of the utmost moral confusion, perhaps on a societal level.

Blade Runner 2049 shows viewers a future in which humans have implemented an unethical approach to Artificial Consciousness. Since the replicants are sentient beings, it is morally wrong to enslave K and the others. Yet the humans do precisely that. Moreover, the company that produces the replicants, and the government, are afraid that replicants might one day recognize their similarity to humans and fight for their rights, so the people in power act to prevent this recognition in order to keep them as servants. This is why they want to prevent information about a born replicant from becoming public: "The world is built on a wall that separates kind. Tell either side there's no wall, you've bought a war. Or a slaughter," says Lieutenant Joshi, K's human superior.

In Blade Runner 2049, K is on a journey to discover if he has a soul like his human masters. He wants to be more than just a biological machine created to serve. K sees being born as a way to be more remarkable than other replicants. However, what he doesn't realize is that he has had a soul all along. K did not need to be born to be a unique experiencer, and it is having authentic subjective experiences that defines us as persons. Consciousness is also what gives life meaning.

Imagine you were a philosophical zombie as Chalmers depicts them. Visualize not having your own experiences. For instance, you’re sitting at home eating what was once your favorite food, but now there's a difference: you eat only to satisfy your need for food, and no subjective experience is attached to it. Flavor and texture no longer matter, or even exist for you. You do not feel genuine emotions like joy. And this lack of experience is not limited to eating, but extends to any activity in which you engage. You are a machine without a ghost. This scenario is horrifying, because the world would lose its meaning for you.

Since replicants show no essential difference from other people in being conscious, we have to conclude that they are very much like humans in this most central respect. Hence, the societal order in Blade Runner 2049 has to change. Humans hate replicants because, even though they appear similar to humans, they are still perceived as fundamentally different. Humans have once again fallen for the bias of seeing themselves as the center of the world (anthropocentrism). The humans in Blade Runner 2049 fail to look beyond themselves to recognize that they are living with a species not fundamentally different from themselves. Replicants like K are capable of feeling sensations and emotions, and of having thoughts similar to humans. So there are no moral grounds for treating them as badly as they are treated.

Sci-fi films often mirror the current zeitgeist, and Blade Runner 2049 demonstrates the scientific and ethical problems of our imminent future. We are uncertain how AI will develop, and what its impact on our world will be. Meanwhile, philosophers and neuroscientists consider various scenarios about what position AI might occupy in the future. One possibility is that a company will make an AI not only more intelligent, but also more conscious, and as a result more ethically important, than humans. Humans could end up being as ethically significant as an anthill, as an AI tramples on us without a second thought.

This is just one extreme example of how the future might pan out; other scenarios are possible. However, in one sense it does not matter what kind of future you envisage: science has to find a way to understand consciousness in great detail, for only then will we have a chance to comprehend consciousness in other species, including AIs. Blade Runner 2049 is, therefore, a perfect illustration of our present debate about consciousness and its implications for a future in which the human race lives together with sentient AI.

© Kilian Pötter 2023

Kilian Pötter is a psychology student at the University of Twente, the Netherlands.