Re: The Hard Problem of Consciousness
Posted By: Wolfspirit, on host 206.47.244.93
Date: Sunday, November 26, 2000, at 22:55:18
In Reply To: Re: Abstracted question; Consciousness posted by gabby on Friday, November 24, 2000, at 14:47:50:

> > Wolf "does this answer any part of the question you wanted to ask?" spirit
>
> Not really. You're explaining synergy, which I understand. But each part must still contribute to the synergy: while it is more than the sum of its parts, it still has parts. The effect doesn't come from nothing at all.
>

Nobody said it did. What does the idea of an effect arising ab initio (from scratch) have to do with synergy?


> The responses to the question were focused on trying to break down consciousness into equal little parts for each type of matter everywhere. The question did not ask for this, just that the effect be broken down *somehow*.
>
> Here's an analogy. If we were discussing purple paint made from red and blue particles, I wouldn't have been asking whether each particle was purple, but whether each particle had color of any kind.
>

I guess the reason everyone answered you the way we did is that you're *still* reframing the question in terms of the Reductionist approach. Take your analogy about colourless particles (and they are, in fact, colourless on the subatomic scale) and how they combine on a higher, macroscopic scale to absorb visible wavelengths, resulting in the colour we call 'purple.' That is a purely reductionist problem, and the answer to it is well known: light of a certain wavelength hits the retina, is absorbed by cone cells, and triggers a signal-transduction event that is relayed to the visual cortex, and so on.
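
Just to make that reductionist chain concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the Gaussian tuning curves, the peak wavelengths, and the crude labelling rule are stand-ins, and real photoreceptor and cortical processing is vastly messier.

import math

# Illustrative peak sensitivities (in nm) for the three cone types; rough
# textbook-style values used only for this sketch.
CONE_PEAKS = {"S": 420.0, "M": 534.0, "L": 564.0}

def cone_responses(wavelength_nm, width=60.0):
    """Toy model: each cone type responds with a Gaussian tuning curve."""
    return {
        cone: math.exp(-((wavelength_nm - peak) / width) ** 2)
        for cone, peak in CONE_PEAKS.items()
    }

def crude_color_label(responses):
    """Stand-in for the downstream cortical stage: map the pattern of cone
    responses to a word. The rule is arbitrary and only for illustration."""
    s, m = responses["S"], responses["M"]
    if s > 0.5 and m < 0.3:
        return "violet/purple-ish"
    return "something else"

signal = cone_responses(420)           # short-wavelength light
print(signal, "->", crude_color_label(signal))

Every step in that chain is mechanism, which is exactly what makes it one of the 'easy' problems in the sense described next.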

In the field of Consciousness studies, resolving the direct *mechanisms* of physical and neural operation is an example of what are called the "easy problems" (relatively speaking). The far more difficult problem lies in fully describing states of *subjective experience*: this is known as the "hard problem". For example, why is it that when our brains process light of a certain wavelength, we might have an *experience* of deep purple and not, say, of faded pink? Why is it so difficult to describe to someone else what an orgasm feels like, or why getting tickled is unpleasant, or the exact aroma of a rose?

Nobody knows, but there are a large number of researchers working in the field of Consciousness, covering disciplines as diverse as neuroscience, cognitive psychology, philosophy, and physics. I'm going to quote at length a paper by Francis Crick and Christof Koch, followed by commentary on the subject by David J. Chalmers. If you manage to make it through the entire paper, you'll have an idea of why we won't have to worry about self-aware Artificial Intelligences taking over the world for quite a while to come.


WHY NEUROSCIENCE MAY BE ABLE TO EXPLAIN CONSCIOUSNESS
by Francis Crick and Christof Koch

We believe that at the moment the best approach to the problem of explaining consciousness is to concentrate on finding what is known as the "neural correlates of consciousness," the processes in the brain that are most directly responsible for consciousness. By locating the neurons in the cerebral cortex that correlate best with consciousness, and figuring out how they link to neurons elsewhere in the brain, we may come across key insights into what David J. Chalmers calls the hard problem: a full accounting of the manner in which subjective experience arises from these cerebral processes.

We commend Chalmers for boldly recognizing and focusing on the hard problem at this early stage, although we are not as enthusiastic about some of his thought experiments. As we see it, the hard problem can be broken down into several questions: Why do we experience anything at all? What leads to a particular conscious experience (such as the blueness of blue)? Why are some aspects of subjective experience impossible to convey to other people (in other words, why are they private [such as pain or pleasure])? We believe we have an answer to the last problem and a suggestion about the first two, revolving around a phenomenon known as explicit neuronal representation.

What does 'explicit' mean in this context? Perhaps the best way to define it is with an example. In response to the image of a face, say, ganglion cells fire all over the retina, much like the pixels on a television screen, to generate an implicit representation of the face. At the same time, they can also respond to a great many other features in the image, such as shadows, lines, uneven lighting and so on. In contrast, some neurons high in the hierarchy of the visual cortex respond mainly to the face, or even to the face viewed at a particular angle. Such neurons help the brain represent the face in an explicit manner. The loss of these neurons, resulting from a stroke or some other injury, leads to prosopagnosia: an individual's inability to consciously recognize familiar faces, even his or her own, although the person can still identify a face as a face. Similarly, damage to other parts of the visual cortex can cause someone to lose the ability to experience color, while still seeing in shades of black and white, even though there is no defect in the color receptors in the eye.

At each stage, visual information is re-encoded, typically in a semi-hierarchical manner. Retinal ganglion cells respond to areas of light. Neurons in the primary visual cortex are most adept at responding to lines or edges; neurons higher up might prefer a moving contour. Still higher are those that respond to faces and other familiar objects. On top are those that project to pre-motor and motor structures in the brain, where they fire the neurons that initiate such actions as speaking or avoiding an oncoming automobile.
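
[Aside: the hierarchy described above is easier to see as a pipeline, so here is a deliberately crude Python caricature. Only the ordering of the stages follows the paper; every rule and threshold in it is invented.]

# A caricature of the semi-hierarchical re-encoding described above. Each
# stage re-encodes its input into a more explicit representation; the stage
# logic itself is entirely made up.

def retinal_ganglion(image):
    """Implicit, pixel-like representation: responds to areas of light."""
    return [[1 if px > 0.5 else 0 for px in row] for row in image]

def primary_visual_cortex(light_map):
    """Responds to lines and edges (here: any change between neighbours)."""
    return [[abs(a - b) for a, b in zip(row, row[1:])] for row in light_map]

def higher_visual_area(edge_map):
    """Explicit representation: a 'face neuron' fires if there is enough
    structure in the edge map (an arbitrary threshold)."""
    return sum(sum(row) for row in edge_map) >= 2

def motor_stage(face_detected):
    """Projection to pre-motor/motor structures: initiates an action."""
    return "say: 'I see a face'" if face_detected else "keep scanning"

image = [[0.9, 0.1, 0.8],
         [0.2, 0.9, 0.1]]
print(motor_stage(higher_visual_area(primary_visual_cortex(retinal_ganglion(image)))))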

Chalmers believes, as we do, that the subjective aspect of an experience must relate closely to the firing of the neurons corresponding to those aspects (the neural correlates). He describes a well-known thought experiment, constructed around a hypothetical neuroscientist, Mary, who specializes in color perception but has never seen a color. We believe the reason Mary does not know what it is like to see a color, however, is that she has never had an explicit neural representation of a color in her brain, only of the words and ideas associated with colors.

In order to describe a subjective visual experience, the information has to be transmitted to the motor output stage of the brain, where it becomes available for verbalization or other actions. This transmission always involves re-encoding the information, so that the explicit information expressed by the motor neurons is related, but not identical, to the explicit information expressed by the neurons associated with color experience, at some level in the visual hierarchy.

It is not possible, then, to convey with words and ideas the exact nature of a subjective experience. It is possible, however, to convey a difference between subjective experiences: to distinguish between red and orange, for example. This is possible because a difference in a high-level visual cortical area will still be associated with a difference in the motor stages. The implication is that we can never explain to other people the nature of any conscious experience, only its relation to other ones.

The other two questions, concerning why we have conscious experiences and what leads to specific ones, appear more difficult. Chalmers proposes that they require the introduction of 'experience' as a fundamental new feature of the world, relating to the ability of an organism to process information. But which types of neuronal information produce consciousness? And what makes a certain type of information correspond to the blueness of blue, rather than the greenness of green? Such problems seem as difficult as any in the study of consciousness.

We prefer an alternative approach, involving the concept of 'meaning.' In what sense can neurons that explicitly code for a face be said to convey the meaning of a face to the rest of the brain? Such a property must relate to the cell's projective field: its pattern of synaptic connections to neurons that code explicitly for related concepts. Ultimately, these connections extend to the motor output. For example, neurons responding to a certain face might be connected to ones expressing the name of the person whose face it is and to others for her voice, memories involving her and so on. Such associations among neurons must be behaviorally useful, in other words, consistent with feedback from the body and the external world.

Meaning derives from the linkages of these representations with others spread throughout the cortical system in a vast associational network, similar to a dictionary or a relational database. The more diverse these connections, the richer the meaning. If, as in our previous example of prosopagnosia, the synaptic output of such face neurons were blocked, the cells would still respond to the person's face, but there would be no associated meaning and, therefore, much less experience. A face would be seen but not recognized as such.
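
[Aside: the 'dictionary or relational database' comparison can be made literal with a few lines of Python. The nodes and links below are invented; the only point is that meaning lives in the outgoing connections, so blocking a cell's output leaves it responsive but strips the associated meaning, as in the prosopagnosia example.]

# Sketch of 'meaning' as an associational network. Keys are representations;
# values are the representations they project to.
associations = {
    "face:Anna":  {"name:Anna", "voice:Anna", "memory:beach trip"},
    "name:Anna":  {"face:Anna", "voice:Anna"},
    "voice:Anna": {"face:Anna", "name:Anna"},
}

def meaning_of(node, blocked_outputs=frozenset()):
    """What a firing node makes available to the rest of the system."""
    if node in blocked_outputs:
        return set()   # the cell still fires, but nothing propagates
    return associations.get(node, set())

print(meaning_of("face:Anna"))                                 # rich meaning
print(meaning_of("face:Anna", blocked_outputs={"face:Anna"}))  # seen, not recognized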

Of course, groups of neurons can take on new functions, allowing brains to learn new categories (including faces) and associate new categories with existing ones. Certain primitive associations, such as pain, are to some extent inborn but subsequently refined in life.

Information(***) may indeed be the key concept, as Chalmers suspects. Greater certainty will require consideration of highly parallel streams of information, linked, as neurons are, in complex networks. It would be useful to try to determine what features a neural network (or some other such computational embodiment) must have to generate meaning. It is possible that such exercises will suggest the neural basis of meaning. The hard problem of consciousness may then appear in an entirely new light. It might even disappear. END


NOTES and COMMENTARY (ON INFORMATION, AS A FUNDAMENTAL ENTITY) by Chalmers

(***) This echoes physicist John A. Wheeler's suggestion that information is fundamental to the physics of the universe. The laws of physics might ultimately be cast in informational terms, in which case we would have a satisfying congruence between the [neural correlate] constructs in both physical and psychophysical laws. It may even be that a theory of physics and a theory of consciousness could eventually be consolidated into a single grander theory of information.

A potential problem is posed by the ubiquity of information. Even a thermostat embodies some information, for example, but is it conscious? There are at least two possible responses. First, we could constrain the fundamental laws so that only some information has an experiential aspect, perhaps depending on how it is physically processed. Second, we might bite the bullet and allow that all information has an experiential aspect - where there is complex information processing, there is complex experience, and where there is simple information processing, there is simple experience. If this is so, then even a thermostat might have experiences, although they would be much simpler than even a basic color experience, and there would certainly be no accompanying emotions or thoughts. This seems odd at first, but if experience is truly fundamental, we might expect it to be widespread. [...]
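
[Aside: for what it's worth, the *entire* information processing of a toy thermostat fits in a few lines of Python, which gives some feel for how minimal 'simple experience' would have to be on this reading. The setpoint and the one-bit rule below are just an illustration.]

SETPOINT_C = 20.0  # arbitrary setpoint for the sketch

def thermostat(temperature_c):
    """Roughly one bit of information: too cold, or not."""
    return "heater on" if temperature_c < SETPOINT_C else "heater off"

print(thermostat(18.5))  # -> heater on
print(thermostat(22.0))  # -> heater off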

Crick and Koch outline their "neurobiological theory of consciousness" (1990; see also Crick 1994). This theory centers on certain 35-75 hertz neural oscillations in the cerebral cortex; Crick and Koch hypothesize that these oscillations are the basis of consciousness. This is partly because the oscillations seem to be correlated with awareness in a number of different modalities - within the visual and olfactory systems, for example - and also because they suggest a mechanism by which the binding of information contents might be achieved. Binding is the process whereby separately represented pieces of information about a single entity are brought together to be used by later processing, as when information about the color and shape of a perceived object is integrated from separate visual pathways. Following others (e.g., Eckhorn et al 1988), Crick and Koch hypothesize that binding may be achieved by the synchronized oscillations of neuronal groups representing the relevant contents. When two pieces of information are to be bound together, the relevant neural groups will oscillate with the same frequency and phase. [...] Crick and Koch also suggest that these oscillations activate the mechanisms of working memory, so that there may be an account of this and perhaps other forms of memory in the distance. The theory might eventually lead to a general account of how perceived information is bound and stored in memory, for use by later processing [by the *neural correlates* of conscious experience].
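
[Aside: here is a toy numerical sketch of the synchrony idea, not Crick and Koch's actual model. Two neuronal groups are idealized as 40 Hz sinusoids, and they count as 'bound' only when their activity traces correlate almost perfectly, i.e. same frequency and phase. All numbers are illustrative.]

import math

def group_activity(freq_hz, phase, duration_s=0.1, dt=0.001):
    """Idealized firing-rate trace of one neuronal group."""
    steps = int(duration_s / dt)
    return [math.sin(2 * math.pi * freq_hz * (i * dt) + phase)
            for i in range(steps)]

def synchrony(a, b):
    """Normalized correlation between two traces (1.0 = perfectly in phase)."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den

color_group = group_activity(40.0, phase=0.0)          # e.g. 'red' content
shape_group = group_activity(40.0, phase=0.0)          # e.g. 'round' content
other_group = group_activity(40.0, phase=math.pi / 2)  # out of phase

print(synchrony(color_group, shape_group) > 0.99)  # True: bound together
print(synchrony(color_group, other_group) > 0.99)  # False: not bound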

[A] second example is an approach at the level of cognitive psychology. This is Baars' global workspace theory of consciousness, presented in his book "A Cognitive Theory of Consciousness" (1988). According to this theory, the contents of consciousness are contained in a 'global workspace,' a central processor used to mediate communication between a host of specialized nonconscious processors. When these specialized processors need to broadcast information to the rest of the system, they do so by sending this information to the workspace, which acts as a kind of communal blackboard for the rest of the system, accessible to all the other processors. One might suppose that according to this theory, the contents of [conscious] experience are precisely the contents of the workspace. [...]
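
[Aside: architecturally, Baars' global workspace is a blackboard, and a minimal Python sketch of the broadcast pattern takes only a few lines. The processors and messages below are invented; only the idea that posted contents become readable by every other processor follows the theory.]

class GlobalWorkspace:
    """Communal blackboard: whatever is posted here is readable by all."""
    def __init__(self):
        self.contents = []

    def broadcast(self, source, message):
        """A specialized nonconscious processor posts to the workspace."""
        self.contents.append((source, message))

    def read(self):
        """Any processor can read everything currently in the workspace."""
        return list(self.contents)

workspace = GlobalWorkspace()
workspace.broadcast("visual system", "red round object ahead")
workspace.broadcast("memory system", "looks like Anna's ball")

# On this reading, the 'contents of experience' are just the workspace contents:
for source, message in workspace.read():
    print(source + ": " + message)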

Ultimately, however, it is a theory of 'cognitive accessibility,' explaining how it is that certain information contents are widely accessible within a system, as well as a theory of informational integration and reportability. The theory shows promise as a theory of awareness, the *functional correlate* of conscious experience, but an explanation of experience itself is not on offer.
