The Computational Model of Consciousness is Ill-Defined

Let’s say I take an extremely detailed scan of your brain. I load the scan into a sophisticated program that knows everything about how brains work. The program can then run your mind as if it were a computer program, simulating how you would respond to various inputs.

Is the program conscious, in the hard-problem-of-consciousness sense? For example, if you tell the program to simulate something painful happening, and it prints "ow", has anything actually experienced pain?

Some say “yes”, some say “no”, and some refuse to answer the question. One particular school of thought on the “yes” side holds that anything that emulates a mind is conscious by definition. This is sometimes phrased as the claim that consciousness is a computation, and so any two things that are computationally equivalent are equally conscious.

I used to agree with that school, but have since changed my mind. I will now try to present what I consider to be my strongest argument, which is unfortunately also my most unintuitive. In one sentence:

The computational model of consciousness is ill-defined, because the very question of whether a particular computation is happening has no objective answer.

To explain this, let’s step away from the simulated person for a moment and talk about computation more generally. A computation is a process that manipulates information: it has inputs and outputs, and it defines a certain relationship between the information that goes into the inputs and the information that comes out of the outputs. For example, a computation might have two numbers as input, x and y, and an output which is always equal to x + y.
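(If it helps to see this in code: here’s a toy Python sketch, mine rather than anything implied by the definition above, of two mechanically different processes that realize the same input/output relation and therefore count as the same computation.)

```python
# Toy illustration: the "computation" is just the relation between inputs
# and outputs, so two mechanically different processes can realize it.
def add(x: int, y: int) -> int:
    return x + y

def add_by_counting(x: int, y: int) -> int:
    # A clunkier mechanism: count upward one step at a time.
    total = 0
    for _ in range(x + y):
        total += 1
    return total

# Same relation between inputs and outputs, despite the different internals.
assert all(add(x, y) == add_by_counting(x, y)
           for x in range(20) for y in range(20))
```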

But note: numbers are an abstraction (as is all information). You can’t hold a number in your hand and feed it into a slot on a machine. Rather, you can click on buttons in a pattern that represents numbers, and then the output appears as a pattern of pixels that represents their sum.

Naturally, computations can also happen without buttons or pixels. You can have a machine made of blocks of wood connected by springs, such that you move the blocks to a certain position representing the inputs, and then release a catch, and after some jostling the position they settle into represents the output.

But they only “represent” the output in the mind of the observer. The blocks don’t have to know what they represent in order for this process to work properly. In fact, it’s entirely possible to build a working computer without any of the people who work on its individual parts knowing what they’re building. In theory, such a device could even be built by accident.

Now consider an object that definitely wasn’t designed to be a computer—say, a bag of sand. Every time you shake the bag, the grains move around in a way that depends on exactly how you shook it.

You can see where this is going. If we interpret each possible shake of the bag as a pair of numbers, and we interpret each exact configuration of grains in the bag as a number, then the bag can be taken to be a “computer” that performs some arithmetic operation. Or, with equal validity, a completely different arithmetic operation.

(Now, actually using this computer would require you to track the grains of sand in minute detail. It would also require you to have extremely fine motor control. And, of course, you’d have to memorize the numerical value of a huge number of grain-of-sand configurations and shaking motions. Humans can’t do any of these things. But it would be very strange if the physical and psychological limitations of humans implied anything fundamental about the nature of consciousness.)
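(For the programmers: here’s a toy sketch of the trick. The shuffled lookup table is my stand-in for the physics of the sand, and the two encodings are arbitrary choices of mine, which is exactly the point.)

```python
import random

# Stand-in for the bag of sand: an arbitrary but repeatable process that
# takes a numbered "shake" to a settled grain configuration. (Hypothetical.)
rng = random.Random(0)
CONFIGURATIONS = list(range(100))
rng.shuffle(CONFIGURATIONS)

def shake(shake_id: int) -> int:
    return CONFIGURATIONS[shake_id]

pairs = [(x, y) for x in range(10) for y in range(10)]
encode_shake = {pair: i for i, pair in enumerate(pairs)}  # which shake "means" (x, y)

# Observer A decides, after the fact, that each settled configuration
# represents x + y; observer B reads the very same configurations as x * y.
read_as_sum = {shake(encode_shake[p]): p[0] + p[1] for p in pairs}
read_as_product = {shake(encode_shake[p]): p[0] * p[1] for p in pairs}

config = shake(encode_shake[(7, 6)])
assert read_as_sum[config] == 13      # to observer A, the bag just added 7 and 6
assert read_as_product[config] == 42  # to observer B, the same shake multiplied them
```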

I hope this makes it clear that whether a particular physical process is a computation—and what computation it is—depends on the representation scheme of the observer. Two people could observe the same physical process and take it to represent different computations.

We now return to the point.

Let’s say a computer is emulating a person. What does that mean? It means that the computer is running a program that can correctly predict how that person would behave in any given scenario. In other words, the computer is mapping inputs (situations) to outputs (behaviours) in the same way the person would.

So far, so good. But that’s not quite correct. Because the computer doesn’t have biological sense organs or muscles—it’s a computer. If you make the area around the computer cold, the computer will not shiver. Rather, if you type set ambient_temperature = 0 C, then the computer will respond with *shiver*.

(Or something like that. Obviously, the I/O to your emulation could be done in any number of ways: text, network messages, images, sound, whatever.)

The point is that an emulation doesn’t map the same inputs you do to the same outputs you do. It has its own range of inputs and outputs, and there exists some equivalency between those and the inputs and outputs of humans, such that the computer maps equivalent inputs to equivalent outputs. For example, the input set ambient_temperature = 0 C is equivalent to the sensation of standing around in freezing weather; and the output *shiver* is equivalent to the action of shivering.
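(In code, that equivalency is nothing more than a pair of lookup tables in the observer’s head. The entries below are illustrative guesses of mine, not anything the emulation itself contains.)

```python
# Hypothetical equivalence scheme held by the observer: which human situation
# each emulation input stands for, and which human action each output means.
SITUATION_OF_INPUT = {
    "set ambient_temperature = 0 C": "standing around in freezing weather",
}
ACTION_OF_OUTPUT = {
    "*shiver*": "shivering",
}

emulation_input, emulation_output = "set ambient_temperature = 0 C", "*shiver*"
print(SITUATION_OF_INPUT[emulation_input], "->", ACTION_OF_OUTPUT[emulation_output])
# standing around in freezing weather -> shivering
```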

Why does this matter? Well, that equivalency is entirely in the mind of the beholder. You speak English, so of course you know that set ambient_temperature = 0 C implies the sensation of standing around in freezing weather, and that *shiver* means shivering. But obviously English words don’t objectively mean anything.

Imagine a written language, “Shilnge”, that looks similar to English—in fact, all the sentences in Shilnge happen to also be valid English sentences—but with completely different meanings. For example, all the English phrases that describe happy events mean something sad in Shilnge, and vice versa. Likewise, every English description of something a person might do or say when happy happens to describe, in Shilnge, the sort of thing a sad person would do.

So, is the emulation in English or in Shilnge?

I hope it’s clear why the question is important: if the emulation is conscious, and it’s in English, then when it says I’m happy it’s actually experiencing happiness. Conversely, if it’s in Shilnge, then when it says I’m happy it’s actually experiencing sadness.

All it takes for a program to be an emulation is for it to map inputs to outputs in a way that an observer would interpret as equivalent to the way an actual person would respond to situations. Here, both an English-speaking observer and a Shilnge-speaking observer would agree that the computer was emulating a person. But they would disagree on what situations and responses were being emulated!
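(Spelled out as code, the disagreement is just two decoding tables applied to the same output. The tables are mine and purely illustrative.)

```python
# Two observers, two interpretation schemes, one and the same emulation output.
ENGLISH_READING = {"I'm happy": "experiencing happiness",
                   "I'm sad": "experiencing sadness"}
SHILNGE_READING = {"I'm happy": "experiencing sadness",
                   "I'm sad": "experiencing happiness"}

emulation_output = "I'm happy"
print(ENGLISH_READING[emulation_output])  # experiencing happiness
print(SHILNGE_READING[emulation_output])  # experiencing sadness
```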

The claim we’re attacking is that an emulation actually experiences whatever sensation you ask it to emulate. We now see why I said the claim was ill-defined. There is no objective determination of “whatever sensation you ask it to emulate”.

And of course, this holds for other examples too. In fact, under a weird enough interpretation scheme, any physical process with enough possible inputs and outputs—and remember that everything is made of lots of tiny molecules, so there are a lot of possible inputs and outputs—can be taken to represent any computation we wish. This includes computations predicting people’s behaviour, i.e. emulations.

Thus, the “computational consciousness” school attempts to define something objective (consciousness) in terms of something that turns out to be subjective (computation). This empties their definition of any concrete ontological or ethical content.

What then does the emulations-are-conscious hypothesis imply? We can read it in a number of ways. Each possible answer is given in bold.

**Almost every large-scale physical process is conscious, in multiple different ways, at the same time. The emulation in the thought experiment has two different consciousnesses: one happy, one sad.**

This is pretty counterintuitive, and has implications that immediately destroy all of ethics. If everything is conscious in multiple opposing ways at the same time, how are we to behave? If you do something that makes the machine output I am happy, have you done something good or something bad?

Come to that, if you make a person act “happy”, have you done something good or something bad? After all, their brain can also be taken to embody an emulation of a different person, who is sad.

**The correct interpretation scheme is the one that would be adopted by someone who wasn’t being deliberately contrary. The emulation in the thought experiment should be assumed to be in English.**

Look, I apologize for the thought experiment. I admit that English is a language that is actually spoken (by the creator of the emulation, and the flesh-and-blood person being emulated, and a billion other people), and Shilnge isn’t. Giving the two equal weight in our interpretation of the emulation is silly. The same is true of our ad-hoc interpretation schemes that could map any physical process to any computation.

But if everyone in the world dropped dead except for one person, that one person would continue to be conscious and experience the world. Their consciousness exists independently of how anyone else sees it.

However, under this model the “true” consciousness of the machine is determined by the perception of humans. So if all humans died, or changed how they interpreted the machine, would the machine’s consciousness change? Even if none of those humans were actually interacting with the machine at the time?

This is absurd. If you’re willing to bite the bullet and say it would, then at the very least we must concede that the consciousness of the machine is “real” in a very different sense to the consciousness of humans.

**“Emulations are conscious” isn’t a claim that anything that emulates consciousness is conscious by definition. It’s just the claim that anything that reacts the way a conscious being would react should be presumed to be conscious. (It’s epistemic, not ontological.) Since you can’t directly observe consciousness, this is the best we can do.**

Even claims about “what should be presumed” can’t be totally subjective. Obviously they have to be defined in terms of what some viewpoint agent knows, but it makes no sense to define them in terms of how the viewpoint agent thinks. Claims of this sort exist to prescribe what the viewpoint agent should think.

For example, given all available information about the physical universe, should we assume God exists? We can debate this question all we like, but it makes no sense to say “if you’re a spiritual sort of person, it’s rational to assume God exists; otherwise, it isn’t”. Two rational agents with the same information should reach the same conclusions.

**(Follow-up) We can’t directly observe other humans’ consciousnesses, just as we can’t observe the machine’s consciousness. We should presume the emulation is conscious for the same reason we should presume other humans are conscious.**

The reason I presume other flesh-and-blood people to be conscious is as follows. I can directly observe my own consciousness. I observe that my experiences are of my own thoughts, memories, sensations, etc.; i.e. my brain-states (or a portion thereof). So something about my brain is producing those conscious experiences. For now, I have no idea how—but perhaps scientists will eventually figure it out.

Then I meet somebody else. They have a body a lot like mine, including a brain a lot like mine (medical science having established by this point that human brains are pretty similar to each other, I don’t have to actually crack their head open to check). So until proven otherwise, my presumption is that their brain has the same basic features that my brain does, including consciousness.

This is a judgment call, of course. There are lots of objects in the world that are more or less like my brain. Since I have little idea how my brain produces consciousness, I have no rigorous way of knowing how something would have to be different from my brain in order to not do that. But that is, more or less, the thought process.

In this respect, consciousness is a lot like other features which we don’t directly observe. For example, imagine if you’d never personally felt any other person’s pulse. From the fact that other people’s bodies were otherwise like your own, you would probably presume they had a pulse too. The fact that (other people’s) consciousness is unobservable even in principle, whereas other features might be unobserved only in practice, doesn’t have to make a big difference.

But now, we bring in a computer emulation. It’s superficially similar to you, but the internal structure of the emulation is completely different to yours. You know for a fact that it’s made of different chemicals to you, arranged in a way that’s not even analogous to the way your body is arranged. Would you presume it had a pulse? Of course not. So why would you presume it’s conscious?

(If your answer is “because a pulse is a physical process, but consciousness is a computation”—well, the original argument was supposed to establish that consciousness isn’t a computation.)
