Bog-Hubert’s Theory of Consciousness

Bog-Hubert shook the rain from his coat as he entered the Fog Island Tavern. Seeing his friends Abbé Boulah and Renfroe at the bar, he approached them with a greeting and an inquiry about the current state of their well-being. To his consternation, Abbé Boulah did not respond at all, while Renfroe turned around with a helpless gesture, pointing at Abbé Boulah with his thumb and whispering: “Sumthin’s got to him — ben sittin’ there unconscious-like for better’n half an hour.”

But the shuffling and scraping of the barstool Bog-Hubert pulled up aroused Abbé Boulah from his strange mental state. He turned around and said: “Funny you’d mention that — I mean the consciousness bit. That was just what I was trying to figure out.”

Bog-Hubert looked at him, while signaling to Vodçek the bartender for his usual, and asked: “Consciousness? When you seemed to Renfroe here to be — let’s say — out of it? What got you into that? Can you enlighten us?”

“Well,” said Abbé Boulah, slowly. “It’s not that easy when I’m just trying to make sense of it myself. But all right. Here I was reading this exchange of comments by Artificial Intelligence experts on one of their highfalutin internet forum networks, about this theory of consciousness one of those experts had come up with.”

“What are those guys getting into theories of consciousness for — isn’t that a subject of, well, several different disciplines?” asked Bog-Hubert.

“True enough — but you know, one of the ambitions of those AI guys is to develop a system — say, a computer program, or even a robot — that has consciousness just like a human being.”

“Ah, now I understand, yes. Then they need to know what this thing is, of course. Have a theory. Of consciousness. Makes sense. So this fellow had sprung such a theory on them? What about it? Weren’t they happy with it?”

“Well, there was a lively discussion — but I’m not sure it was a happy one. Frankly, I was surprised any of them really cared to get into a discussion about this particular theory — it seemed, let’s say, not very coherent to me. What do I know. But it got me thinking about consciousness.”

“What was the problem with that theory — I mean, you have some pretty clear ideas about what makes a viable theory, don’t you? This one didn’t live up to those standards?”

“I’ll let you make up your own mind about that — here’s the link to that file — he wrote a whole article, with references and all. And yes, you might ask yourself whether it does what we expect of a good theory.”

“A good theory: what’s that?” asked Renfroe. “Aren’t all theories just that: theory — not practice, not reality? So what do you expect?”

“Well, it’s really not that complicated. A theory is just a collection of statements about something we want to know about, some aspect of reality. And while you can make up all kinds of stories about anything for yourself, if you want to communicate with somebody else about it, the statements must be understandable to that person. Usually, in general, they must be expressed in a language that person understands, satisfy the rules of grammar of that language, and use words that refer to concepts the other person understands, that is, they have to have the same meaning for both folks. To make sure of that, it usually takes a whole bunch of explanations and definitions at the beginning of any story or book or paper about a theory — because all that can’t be taken for granted. Then of course, the statements must be coherent, and support each other. That means that there should be some connection, some common elements, from one statement to the next. The whole thing must make sense. But then there are several levels of expectations, depending on what you want the theory to do.”

“Levels of expectations, huh? Sounds more complicated than what you said a while ago.”

“Relax, Renfroe. It’s not that difficult. See, a first expectation about anything we want to discuss is that the concepts and statements allow us to describe that part of reality we are talking about: which means that the statements of the theory actually, understandably — to you and me — describe what we are talking about. We need to name that, of course, distinguish it from other things so we don’t confuse matters, which usually means that we have to agree on some kind of definition: That thing over there, that’s what we are discussing. Give it a name, and agree that’s what we refer to when we use the name. Without such an agreement, it’s pretty hard to carry on a meaningful discussion — but it doesn’t mean that we can’t change our minds about that as we learn more. Then we can adjust the definition. But we got to start with one.”

“Makes sense, so far.”

“Okay. In other words, the theory at this level must allow us to describe everything we know about that thing. So at this level, we are dealing with a ‘descriptive theory’, and a good theory at this level is one that covers all those concerns of description. Now, that’s usually not all we want from a theory, is it? What do you say, Bog-Hubert?”

“Well, uh. To me, a theory ought to give me some ideas about why and how things are the way they are and why things happen.”

“Right you are: what you are talking about is an ‘explanatory’ theory: one that explains why things happen the way they do. That mostly takes statements like those we call ‘laws of nature’ or laws of behavior. We make hypotheses about such laws and try to find out whether they are true: a good explanatory theory allows us to set up such statements that explain the relationships between things that happen.”

“How do we know if those explanations are any good? Not just wild speculations and nonsense?”

“Good question, Renfroe. That’s the task of science: to find out, or test, whether such theories are valid. To do that, scientists use the explanations to make predictions: If we do so-and-so, and the theory — or rather the specific hypothesis of a given theory — is true, then we should see such and such happening. If the hypothesis that A causes B is true, and we do A, then we should expect to see B. So we perform such a test: we do A, and see if B happens. If it does, we have a better reason to think that the hypothesis is true — we can’t really be totally sure, because as the philosopher of science Karl Popper emphasized, the reasoning pattern
“If H is true, then E must follow.
E is true (we observe it).
Therefore H is true.”
is not a deductively valid argument — it just provides a bit more support (corroboration) for H than we had before, but no definitive proof. The pattern
“If H is true, then E must follow.
E is not true.
Therefore H is not true.”
on the other hand, is a valid and conclusive reasoning scheme: H is ‘refuted’. Popper emphasizes that this is really how science progresses: conjecturing hypotheses, then testing them by making predictions and trying as best we can to refute them. If they stand up to all the tests we can think of, then we can be more comfortable accepting the hypothesis as ‘corroborated’ (not proved!) — until somebody comes up with a better one. So a theory is a good one to the extent it lets us make predictions we can test, and those tests turn out as predicted: it has ‘predictive power’. Popper turns this around to say that a theory that can’t be tested, at least in principle (even if it would be very difficult to actually carry out a test), simply is not a scientific theory.”
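
(A small aside for readers who like to see the two patterns at work: a toy sketch, not part of the paper or the discussion, using the classic ‘all swans are white’ stand-in for H; the hypothesis and the observations are invented purely for illustration.)

```python
# Toy illustration of Popper's asymmetry: passing a test only corroborates H,
# while a single failed prediction refutes it. "All swans are white" is an
# invented stand-in hypothesis for this sketch.

def test_hypothesis(hypothesis, observations):
    """Return 'refuted' on any counterexample, else 'corroborated' (never 'proved')."""
    for obs in observations:
        if not hypothesis(obs):
            return "refuted"       # modus tollens: E is not true, so H is not true
    return "corroborated"          # more support than before, but not a proof

def all_swans_are_white(swan):
    return swan == "white"

print(test_hypothesis(all_swans_are_white, ["white", "white", "white"]))  # corroborated
print(test_hypothesis(all_swans_are_white, ["white", "black"]))           # refuted
```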

“Interesting. But why would those AI folks need all that — and for consciousness?”

“Good question, Bog-Hubert. But wait just a while — the question may become more clear after we clarify one more level of theory: the normative level.”

“What’s that?”

“Well, one major reason we want to know things, how things work, and so on, is because we want to do things in the world, make plans, solve problems. And now there are many statements that aim to tell us what to do and how to do it: those stories are what we call the normative level of theory. Not all theories go to that level, but that’s what we are always quarreling about in human life, isn’t it: what we ought to do. And of course, this is where science gets into trouble, because it has no way of testing, and therefore no business setting up, statements about what we ought to do. That’s a whole different topic. But you might use this brief rundown of levels of theories to look at this proposed theory of consciousness: I think you’ll see that it falls far short of the basic expectations even at the descriptive and explanatory level, offers no predictions anybody might test, but then jumps to a claim that it has implications for ethics, without any further justification or explanation. Have you read that paper?”

“Give me some time, I’ll go over it.”

“So, what do you make of it?”

“Sorry, I don’t really understand it at all. The paper makes a bunch of audacious claims for which I can’t see the evidence or coherent arguments, so I can’t really comment on that. The audacity must be contagious, though: it made me think I could come up with a theory of consciousness myself. One that’s much simpler. But of course I don’t know if it would be of any use for the kinds of things those AI researchers are trying to do with it.”

“Right. It brings us back to the question we sidestepped a while ago: what do the AI researchers need a theory of consciousness for?”

“Beats me. What I know about that is that they want to build things like robots to do things for people, machines that are smart enough to do things faster, better, more efficiently. Don’t know why consciousness has to be a part of that.”

“Maybe they think that to get them to be smart enough, intelligent enough, they need to not just do things, but have a sense of what they are doing and when to stop, or when it doesn’t make sense to begin or continue doing it?”

“Yeah, that’s just what I was thinking: they must be able to look at a situation, get the messages from their surroundings, and check them against the rules, the program that tells them what to do. But that still doesn’t need any consciousness as far as I can see. I’ve seen robots sweeping the floor; when they hit the wall, they stop or turn around. Are they conscious?”

“You are getting closer here. They ‘stop or turn around’, you say. That means that the thing, the machine or entity, whatever, makes choices. And that’s what requires that there’s something in it — in the entity — that looks not only at the message from the outside, but also at itself, asking: is there something on my memory chip that could let me do something other than going on butting up against the wall, or stopping? Like turning around in a different direction?”

“In other words: the thing, the entity — we’ll have to allow for both living things and machines here, right? — must have some internal representation not only of the outer world it operates in, but also a representation of itself. Then it receives messages from the outside, matches them against what it has on its representation board, and uses another program to decide what to do about them.”
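
(A minimal sketch of what such an entity might look like, purely illustrative: the floor-sweeping robot, its ‘representation board’ entries, and its decision rules are made up here, not taken from any actual robot’s program.)

```python
# Hypothetical sketch of an entity that holds a representation of the outside
# world and a representation of itself, and matches incoming messages against
# both before choosing what to do.

class Entity:
    def __init__(self):
        self.world_model = {"wall_ahead": False}              # representation of the outer world
        self.self_model = {"can_turn": True,                  # representation of itself:
                           "battery_low": False}              # its own abilities and state

    def sense(self, message):
        # update the internal picture of the world from an outside message
        self.world_model.update(message)

    def decide(self):
        # match the world picture against the self picture to choose an action
        if self.world_model["wall_ahead"]:
            return "turn around" if self.self_model["can_turn"] else "stop"
        if self.self_model["battery_low"]:
            return "return to charger"
        return "keep sweeping"

robot = Entity()
robot.sense({"wall_ahead": True})
print(robot.decide())   # -> "turn around"
```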

“Wait a minute. That bit about the representation of the world outside and of itself makes sense to me. Are you saying that these representations are what we call consciousness? I could go along with that. But why do we need the responses, the action? Can’t a thing be conscious just sitting there taking it all in, but not responding — like you did when I came in a while ago?”

“Good observation, Bog-Hubert. But you are asking two different questions here; let’s take them one at a time. One is whether having such a representation already is equivalent to consciousness. And the other is whether the response, the action, is a necessary part of that. The second one actually has to do with why the AI and the theory folks are having so much trouble with this: we could imagine something having consciousness just sitting there taking it all in, as you say. But how would we know about it if it doesn’t do anything that we can observe? I think that might explain all the complications in that paper as well: the questions about what such an entity might do that we could observe as a sign of consciousness are getting all mixed up with the questions of what we can observe, how we can interpret that, and how we can know whether some reaction indicates consciousness or just a regular physical or chemical reaction — a reaction that isn’t registered on some internal representation board. But these issues don’t really have anything to do with what consciousness itself is, do they?”

“That would make things a little easier on the one hand, if we can cut all those reaction recognition and interpretation problems out. So, for the AI and robot folks, it would mean that giving a machine such representations, both of what the outside world is like and of what its own organization is like, would meet their needs at least to some extent, wouldn’t it? They could say that they actually are simulating the effects of consciousness that way. But it still leaves the question of whether having those kinds of representations really makes for consciousness, doesn’t it?”

“Right. And you could say it makes that question even harder.”

“Why?”

“Because you have to admit that we don’t really know much about that representation. People try to make such artificial representations on computer chips and memory boards, to make computers behave like people as much as possible — as if they had consciousness, which tricks some folks into saying that our brains are like computers. But that’s as much nonsense as saying the brain is just like an abacus, or some other mechanical device. We don’t know how those representations are organized. And we can imagine many different ways they might be done, can’t we?”

“What you are saying is that there might be ways all kinds of things or entities can have such representations and therefore consciousness — even pet rocks? But we can’t know about them unless they start talking back somehow?”

“That’s what it boils down to, right. We have to keep an open mind about that, and refrain from judgments about things we can’t know. But I’m not sure yet that having just one representation already is consciousness. What if it takes another level of representation that looks at that representation — both of the outside reality and its own representation of it — and starts evaluating it: how it’s organized, whether there are gaps or contradictions in it, how the elements are related to one another, and so on? Doesn’t that sound more like ‘being conscious’: having a sense both of the outside world — as it is represented in one’s own system — and of the structure and quality of that entire representation?”

“Whoa! You are making me dizzy here — you realize that there’s no end to that line of thinking, don’t you? Who’s to say that in addition to the first level A — the representation of the outside world, A(w), including the representation of the entity itself, A(s) — and the next level of representation B, which represents both A and its structure, call it B(A(w)&A(s)), you wouldn’t need, in order to ‘inspect’ that system, a third level C in which the structure and quality of B is represented and evaluated? And another one — D — to inspect C, and so on. There’s no end to that: you are falling into a bottomless pit. Or looking into that endless sequence of images in the opposing mirrors in the barbershop.”
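
(The regress is easy enough to spell out in a few lines; a toy sketch only, with the level names and the ‘world’ contents invented for illustration.)

```python
# Toy sketch of the regress: level A represents the world and the entity itself,
# level B represents A, level C represents B, and so on, with no natural
# stopping point. All names and contents here are invented.

def build_levels(world, depth):
    """Stack 'depth' levels, each one a representation of the level below it."""
    level = {"level": "A", "represents": {"world": world, "self": "the entity itself"}}
    for i in range(1, depth):
        level = {"level": chr(ord("A") + i), "represents": level}
    return level

print(build_levels("rain, barstools, barbershop mirrors", 4))
# D represents C, which represents B, which represents A (world + self) --
# and nothing stops us from adding E, F, G, ...
```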

“Now I see what they mean by ‘levels of consciousness’ — each one higher — or deeper in the pit — than the previous one.”

“Right, Renfroe. And the barbershop mirror image is an apt one: it shows us that we can’t really see the whole sequence: If the mirrors are truly parallel, our own first image gets in the way.”

“But it also explains the way you were sitting there, lost in contemplation: it seems that in order to inspect things at the higher (or lower) levels, we can’t pay attention to the messages coming in at the first level. Made it look like you’re unconscious. Scary, huh?”

“Yeah. Could plumb drive a fellow to drink.”

“Ah, Renfroe. Drink himself into unconsciousness?”

1 Response to “Bog-Hubert’s Theory of Consciousness”


  1. abbeboulah, July 17, 2009 at 2:42 pm

    Consciousness is an intrinsically interesting, fascinating concept.
    But: why are AI folks so obsessed with it?
    If the point is to make a device ‘work’ better, one very practical problem is the monitoring of its operation: it can have many different parts, each of which is linked to others in a purely ‘mechanistic’ manner. But any car driver with some experience is aware of instances when the engine shakes or rattles, or labors, not quite maintaining the effort to carry the car up a hill. The driver then may do some controlling things — shifting gears, pulling back or pushing the gas pedal, slowing down. If the car is equipped with sensors that can detect ‘rattling’ (in the entire engine, not just one part), laboring, or overheating — thus becoming ‘cognizant’ of these effects, so as then to be able to make the necessary adjustments in the machine controls ‘on its own’ — does it have ‘consciousness’? Such abilities will be very useful indeed — but why is it necessary to complicate matters with the issue of what to call it?
    So some answers to the question of why there is such an obsession with it would be interesting.
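
    A crude sketch of that kind of self-monitoring, with sensor names and thresholds made up for illustration, and with no claim that any of it amounts to consciousness:

    ```python
    # Hypothetical self-monitoring control loop: map overall symptoms of the
    # machine's running ("rattling", "laboring", "overheating") to adjustments,
    # as an experienced driver might. All sensor names and limits are invented.

    def adjust_controls(sensors):
        """Return control actions chosen from holistic symptoms, not part-by-part detail."""
        actions = []
        if sensors.get("vibration_hz", 0) > 40:        # 'rattling'
            actions.append("reduce throttle")
        if sensors.get("engine_temp_c", 0) > 110:      # 'overheating'
            actions.append("slow down")
        if sensors.get("rpm_drop", 0) > 500:           # 'laboring' up a hill
            actions.append("shift to a lower gear")
        return actions or ["carry on"]

    print(adjust_controls({"vibration_hz": 55, "engine_temp_c": 95, "rpm_drop": 600}))
    # -> ['reduce throttle', 'shift to a lower gear']
    ```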

    Any theories that rely on models such as neuron networks to define and understand consciousness seem to be unnecessarily self-limiting. If there exist in the brain — and in any ‘brain-simulating’ device — components carrying an electrical charge, any basic understanding of electricity should tell us that aside from the direct charge and discharge of currents in such a network, these charges also have a field aspect — just as copper wire wound up in a spool and carrying current generates a magnetic field. This means that information about the state of the network could be conveyed and ‘represented’ in ways other than that constituted by the neuron network itself, namely by the interaction of fields in the network. Think of ‘rattling’ (transmitted by sound waves), ‘overheating’ (sensed by temperature gauges or on the driver’s body), ‘slowing down’, ‘speeding up’, etc. So the ‘level’ or control instance that is cognizant (conscious?) of those changes may have a representation somewhere, somehow, not of the state of each individual component, machine part, or neuron and its activities, but of the resulting overall ‘humming’ qualities of the entire system. It can be transferred or ‘sensed’ by sound or vibration, or both. Its messages and representations can be both more holistic and simpler: the ‘humming’ quality messages of a machine do not have to include all the details of the activities of individual parts but simply record frequency and intensity. If more detail is required, ‘attention’ can revert to a lower level of network components. That is, the specter of complexity may be overrated.
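
    One way such a ‘humming’ summary might be computed, in the simplest possible terms; the signals and the measures of frequency and intensity here are invented for illustration:

    ```python
    # Hypothetical sketch: reduce many component readings to two holistic numbers
    # (overall intensity and a crude frequency estimate), instead of representing
    # the state of every individual part.

    import math

    def humming_summary(component_signals):
        """Summarize a list of component readings as overall intensity and frequency."""
        intensity = sum(abs(s) for s in component_signals) / len(component_signals)
        # crude stand-in for dominant frequency: rate of sign changes in the readings
        changes = sum(1 for a, b in zip(component_signals, component_signals[1:])
                      if (a < 0) != (b < 0))
        frequency = changes / len(component_signals)
        return {"intensity": round(intensity, 2), "frequency": round(frequency, 2)}

    signals = [math.sin(t / 3.0) for t in range(60)]   # invented component activity
    print(humming_summary(signals))
    ```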

    The question remains as to how many such levels are present, or needed, what they are monitoring, and how. There seems to be a possible self-limitation in ‘simulating’ such ‘levels’ and their work with the same technological mechanisms: a device made up entirely of digital ‘neurons’ might not be the best way of simulating the ‘field’ information at the next level.

    It is curious how some people exploring the workings of their own mental consciousness often seem to emphasize the ‘non-working’ or ‘stillness’, the absence of the busywork at the lower sensory and contemplating levels, as the more essential aspect of real ‘higher states’ of consciousness. The expectation of ‘taking action’ in response to stimuli is not only not necessary for that understanding of consciousness but seems more of a distraction.

