A Model of Consciousness

Joss Earl

joss@software-that-works.co.uk

This post looks at the issue of consciousness from a different (but compatible) angle to R4.

Consciousness is the process of updating your internal representation of yourself.

I mean "process" in the classical computer science sense, i.e. "a program in execution". It is not the hardware of your brain that is conscious, and neither is it the 'software' - a static printout describing the precise state of the neurons would not be conscious. Consciousness here refers to the more restricted meaning: "self awareness", or the thoughts in your head. I'll use the term sentience to refer to the complete mind which produces this phenomenon.

In other words, everybody carries around in their heads an internal representation of the universe (a topic explored in "The Ghost Not"). We need this in order to interpret our senses, and it is essential for survival. The photons that hit your eyes indicating an approaching truck do not carry the information that the truck will flatten you. This information is obtained by looking up "truck" in our internal model of the universe. Naturally, the most important thing in anyone's internal universe is themselves, so the detail people keep on themselves is greater than anything else, and is frequently inaccurate. Generally speaking, the less accurate people's internal representation of themselves, the harder they are to get along with. So far, this is fairly standard psychology.

The thing is, I don't think this goes far enough. I believe that consciousness is a deeply inadequate and superficial internal model of what we are thinking. We realise that we have had a thought after we have it, but people cling to the (to me) laughable notion that their conscious mind is in control. The term sub-conscious is hugely misleading. The sub-conscious is 99.99% of what is going on, while consciousness is a faint (but important) echo of what's happening in your brain.

This way of looking at things makes many things easier to understand. When you first get into a car, you are conscious of many details: I'm checking the mirror, I'm putting the key in the ignition, I'm turning the wheel, etc. After a little while we stop bothering with this; the only message is "I'm driving, driving, driving, driving" and your focus wanders. Sometimes our consciousness of driving returns to more detail: "I'm driving, driving, driving, driving, about to die, hey - what was that last one?" By the time you are again conscious of the details of driving, you have already braked, swerved, and avoided a collision with the on-coming car. All the necessary actions were taken by your "sub-conscious" and you feel very lucky. There is a sense of "thank god I did the right thing, even though my conscious mind wasn't working on the problem".

A key point here is that there is a question and answer loop going on. Parts of our brain can communicate with other parts. In very loose terms, the part responsible for maintaining the meta-level symbolic representation can poll the part responsible for driving the car and obtain almost as much detail as it pleases on what that bit's doing. If it's not asked, it won't tell, but that doesn't stop it from being able to drive. In fact, the opposite seems more likely - if the symbolic reasoning centre is permanently pestering another process, that process will be hampered. Of course, a certain amount of inter-process communication is necessary - the driving process does not know where you're actually going.
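
Here is a rough sketch of that polling analogy in code (purely illustrative - the class, the numbers and the summary string are invented, and nothing here is a claim about real neuroscience): one thread runs a fast, unreflective control loop and keeps its detail to itself, while the symbolic layer polls it occasionally and gets only as much as it asks for.

    import threading
    import time

    class DrivingProcess(threading.Thread):
        """Fast, unreflective control loop; keeps its detail to itself."""
        def __init__(self, destination):
            super().__init__(daemon=True)
            self.destination = destination      # the one thing it must be told
            self._detail = {"speed": 0, "steering": 0.0}
            self._lock = threading.Lock()
            self._running = True

        def run(self):
            while self._running:
                with self._lock:
                    # keep adjusting speed and steering without being asked
                    self._detail["speed"] = min(self._detail["speed"] + 1, 60)
                    self._detail["steering"] = 0.02 * (self._detail["speed"] % 3)
                time.sleep(0.01)

        def poll(self, verbose=False):
            # the symbolic layer gets a one-word summary unless it asks for more
            with self._lock:
                return dict(self._detail) if verbose else "driving"

        def stop(self):
            self._running = False

    driver = DrivingProcess(destination="Oxford")
    driver.start()
    time.sleep(0.1)
    print(driver.poll())                 # -> 'driving'
    print(driver.poll(verbose=True))     # -> e.g. {'speed': 10, 'steering': ...}
    driver.stop()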

Consciousness can be seen as an artifact of sentience, not necessarily its raison d'etre. Our internal representation (IR) of the inside of a television is a bunch of wires and bulbs. Virtually nobody can visualize a fully working television with all the necessary components performing the required tasks. In order to function, a television needs a far richer internal structure than most people are capable of holding in their IR worlds. Brains are 10^? times more complicated than televisions; it's amazing that people can honestly think they have any understanding of what they are thinking.

Conscious thought does seem to be necessary for deductive reasoning. Deductive reasoning is a useful tool, but one which often blinds people to their true abilities; it is seldom the source of insight. A recent article in New Scientist talked about experiments which showed that people can perform much better on some problems when they don't try to think about them. The researchers seemed surprised by this. However, I think it's a crucial difference between hyper-productive programmers and average programmers. The mappers let their minds get on with it, and don't cripple themselves by constantly trying to fit their thoughts into consciousness. Good programmers can do things where they certainly wouldn't be able to tell you how they did it. If you're constantly trying to follow a procedure, you're only going to be able to use data that's filtered through to the linguistic "thoughts in your head" part of the brain.

Cartesian Theatre

This viewpoint is of course a huge simplification. It's better to think of it as an analogy than a precise description. Several important points have been left out, the most salient being that the notion of a single "stream of consciousness" is also an illusion. The brain is definitely the seat of consciousness, but trying to pinpoint it more accurately than that is misguided. Various processes acting in parallel in different parts of the brain all give rise to parts of what we experience as consciousness.

This contradicts the "Cartesian theatre" model, where the brain acts as a filter, gathering and interpreting data for projection to our souls in a little theatre inside our heads.

The "cartesian theatre" model is wrong, but very hard to abandon - it crops up in a huge range of disguises. Read "Consciousness Explained" for more detail on abolishing that one. Although frequently ridiculed for it's dualist implications, mainstream consciousness theories often implicitly require some sort of cartesian theatre.

I completely agree with the mainstream scientific view that we should reject dualism. Our conscious thoughts are brought about purely by physical events within this universe. However, I believe this logically leads to a much stranger and richer view of the universe than commonly exists.

Example

From the phrasing you can tell that this is a programmer's viewpoint. Let's make this a little more concrete to illustrate a point.

I could write a program - say some sort of Eliza program. An Eliza program answers questions in as human-like a way as possible. They can be quite convincing, but it's really just a cheap trick. There's an example at http://www-ai.ijs.si/eliza-cgi-bin/eliza_script

Q "Why is my boss so stupid"
Eliza "Why do you say your boss so stupid?"

Q "Because he asks me to perform mutually exclusive tasks"
Eliza "Do any other reasons not come to mind?"

Now suppose I made this program more sophisticated and gave it some meta-representation, so that the program maintained an internal rep of what it was doing, was able to obtain feedback about its own internal state, and had the ability to answer questions along the lines of:

Q "Why do you often answer questions with another question"

Eliza "Often I don't really understand the question, I'm programmed to reflect the question in order to disguise this fact".

I'm saying that updating this self-representation is analogous to our own self-awareness. Does this mean that when we run this program, the process is conscious in a similar way to our own consciousness?

This is the point where 'serious' people usually back down. You'll see some handwaving argument about how passing the Turing test indicates "equivalence", so it's "meaningless" to speculate on whether this indicates "true consciousness". There is a pathological reluctance to reason any further. You can just hear the braying from the Ghost Not victims: "I can't imagine how a machine could be conscious, so it's ridiculous to say that a machine could be conscious". Who cares whether you can imagine it? What relevance does that have to the situation?

I don't have a problem with those who would insist that it's not the same because "it has no soul" - at least their viewpoint is logically consistent. Those who insist it's different because the brain is made of biological components need a whack upside the head with a "clue by four". At the deepest level, there is no such thing as biology - it's a made-up word. There are atoms and molecules. They really don't care if we call the structures they assemble biological or not.

The answer to whether running this program would give rise to a conscious mind is of course: possibly.

Reductio Ad Absurdum

Let's go for the reductio ad absurdum here. Suppose that instead of running this program on a computer, we ran it on a virtual Turing machine operated by trained monkeys. This would of course run very slowly. In fact, it could take several monkeys' lifetimes to run the program. We can speed things up by having several teams of monkeys working at locations in Paris, LA and Oxford and communicating through telegrams. However, speed and intelligence are orthogonal concepts. Imagine a radio conversation with an alien from another galaxy. Each exchange could take thousands of years, but that doesn't make any difference to the intelligence of the correspondents. The intelligence of any conversation can be judged in a time-independent manner.
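
To make the time-independence point concrete, here is a toy Turing machine in code (the rule table and the input are invented for illustration): the per-step delay - silicon, monkeys or telegrams - changes nothing about what gets computed.

    import time

    # rules: (state, symbol) -> (write, move, next_state); this machine just
    # flips every bit of its input and then halts
    RULES = {("run", "0"): ("1", +1, "run"),
             ("run", "1"): ("0", +1, "run"),
             ("run", "_"): ("_", -1, "halt")}

    def run_tm(tape, step_delay=0.0):
        tape, head, state = list(tape) + ["_"], 0, "run"
        while state != "halt":
            write, move, state = RULES[(state, tape[head])]
            tape[head] = write
            head += move
            time.sleep(step_delay)      # silicon, monkeys or telegrams
        return "".join(tape).rstrip("_")

    print(run_tm("10110"))                  # -> '01001'
    print(run_tm("10110", step_delay=0.5))  # -> '01001', just much later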

So, if one accepts that the human brain depends on purely deterministic processes, then one must accept that the manipulation of information can give rise to an intelligent consciousness. One could construct a machine that was capable of going through the same mental processes as we do. This consciousness exists in a manner that is somewhat independent of time and space. If you insist, you can think of a fleeting fraction of that consciousness as existing for one moment at the point where a portion of a tape is read in LA, and then the *same* consciousness as existing in Paris a moment later, or even simultaneously. This really doesn't help much. The physical manipulation of information gives rise to a conscious intelligence that has a somewhat separate existence from the apparatus that produces it.

Try this argument on a 'rational scientific' GN type. Watch his denial of the reality of thought as he struggles to escape the implications of this. "Cogito ergo Dio ergo non cogito". (I think, therefore there could be a God, therefore I don't think.)

In this instance, just because something is absurd, doesn't mean it's wrong.

Further Questions

Two important questions arise from this:

  1. Is the human brain really just equivalent to a Turing machine?

  2. What is the necessary quality of a physical process for that process to give rise to a conscious intelligence?

I don't know the answers to these, but here are my best guesses:

Are We Turing Machines?

I sincerely doubt it.

A Turing machine is (a) discrete and (b) deterministic.

Although we can approximate a continuum to arbitrary accuracy with discrete numbers, there really is a qualitative difference between the integers and the reals, as proved by Cantor's diagonal argument. I don't know enough to say whether this is significant.
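
For reference, a compressed sketch of the diagonal argument (standard material, stated in the usual notation):

    % Suppose the infinite binary sequences (one way of writing the reals in
    % [0,1]) could be listed as s_1, s_2, s_3, ...  Build a new sequence d by
    % flipping the diagonal:
    \[
      d_n = 1 - s_{n,n} \qquad \text{for every } n \in \mathbb{N}.
    \]
    % Then d differs from s_n in the n-th place for every n, so d is missing
    % from the list. No enumeration can be complete, so the reals cannot be
    % put into one-to-one correspondence with the integers.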

The deterministic issue is also unclear. It is impossible for free will to exist in a purely deterministic world, and I tend to side with the notion that we unambiguously have free will. Somehow our quantum observations can result in effects being back-propagated in time. Our decisions determine the state of the universe enough for our brains to be in the state necessary to make those decisions. This is certainly not ruled out by the standard "understanding" of QM. It's less weird in the R3 model.

On the other hand, maybe we're just unpredictable. I've done quite a lot of work with genetic algorithms, neural nets, simulated annealing and the like. Here you have stochastic algorithms whose behaviour depends on a large number of random numbers. Although the algorithms are fully deterministic, they seem unpredictable, and if they accept input from a genuine random number generator, the behaviour is genuinely unpredictable. I think a process based on these stochastic algorithms hooked up to a genuine random number generator might 'feel' like it had free will.
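
A rough sketch of that distinction (the toy objective function and the numbers are invented): the same search loop is exactly repeatable when seeded, and unrepeatable when fed from the operating system's entropy source.

    import random

    def hill_climb(rng, steps=1000):
        # minimise (x - 3)^2 by accepting random perturbations that improve it
        x = rng.uniform(-10, 10)
        for _ in range(steps):
            candidate = x + rng.gauss(0, 0.5)
            if (candidate - 3) ** 2 < (x - 3) ** 2:
                x = candidate
        return x

    print(hill_climb(random.Random(42)))      # same answer on every run
    print(hill_climb(random.Random(42)))      # ...bit-for-bit identical
    print(hill_climb(random.SystemRandom()))  # OS entropy: never repeatable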

What Processes Produce Consciousness?

What's so special about the kind of physical processes in the brain that give rise to thought? Is a neuron conscious? The planet? A waterfall? The universe?

I have virtually no idea about this one. It's possible, of course, that only processes similar to those which arise in a human brain can give rise to conscious thoughts. Possible, but it seems pathologically egotistical to believe this is likely. I believe that the further up the scale of fractal composition we go, the deeper the level of consciousness that may arise. A neuron, an amoeba, or an ant probably has a pretty tiny level of consciousness. A brain or an ants' nest is a little more intelligent. It seems pretty natural in this model to assume that the universe as a whole is conscious. It's not obvious to me

It also seems likely that the level of interaction between the components versus the complexity of the components is relevant. There aren't many problems that can be solved better by ten PCs connected via a 2400 baud modem than by a single, slightly more powerful workstation. Again, this is just analogy: a 1000-processor supercomputer and a single 486 are equivalent to a single Turing machine. I don't think speed is relevant - it's the complexity of the process, not the speed of the processor, that counts. Consciousness lives in maximum complexity, which one finds in a fractal universe on the edge of chaos.
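
Some back-of-envelope arithmetic behind the 2400 baud remark (the working-set size and the workstation's throughput are illustrative guesses): shipping even a modest amount of shared state between the ten PCs dwarfs the time a single machine would spend simply processing it.

    # ~240 bytes/s once start and stop bits are accounted for
    MODEM_BYTES_PER_SEC = 2400 / 10
    WORKING_SET_BYTES = 1_000_000        # say, 1 MB of shared state
    LOCAL_BYTES_PER_SEC = 10_000_000     # a workstation chewing 10 MB/s

    transfer_time = WORKING_SET_BYTES / MODEM_BYTES_PER_SEC
    local_time = WORKING_SET_BYTES / LOCAL_BYTES_PER_SEC
    print("ship it over the modem: %.1f hours" % (transfer_time / 3600))
    print("process it locally:     %.4f seconds" % local_time)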

This relates to my comments about the HumaNet. If humanity as a whole has a consciousness already, then it's in a pretty embryonic state right now. Low bandwidth may be an asset for getting it started (see Alan's note), but as it develops, the bandwidth will be increased, and the nodes (people) will gradually learn to use the bandwidth effectively. As the HumaNet becomes a reality, humanity's consciousness will wake up.