
News & Comment | TRENDS in Cognitive Sciences Vol. 6 No. 10, October 2002 | p. 412

Journal Club

Consciousness: just more of the same in the visual brain?

Many theories of consciousness assume that there is a qualitative difference between the pre-conscious processing of a stimulus and its conscious perception. In the visual system, pre-conscious processing involves extracting basic stimulus features (lines, edges, colours, etc.) at a low level and using them at higher levels to produce representations of shapes and objects in the visual scene. Somewhere along the line, these pre-conscious representations undergo some qualitative shift that allows them to make it into consciousness. If we could distinguish between the pre-conscious processing of a stimulus and its conscious perception, we might find out what this qualitative difference is – we might catch the brain in the act of being conscious.

In dichoptic visual presentations, the brain fuses different images from each eye, so that when the left eye is shown a red square and the right eye is shown a green square, the conscious percept is a yellow square. Moutoussis and Zeki used this principle of binocular fusion to develop stimuli for each eye that would be invisible at a perceptual level. For example, a red house on a green background presented to one eye, and a carefully matched green house on a red background presented to the other, will produce a percept of a uniformly yellow square. Similar stimuli were created using faces. Houses and faces were used because they activate different areas of the brain, which can be distinguished using fMRI.

Using these 'invisible' stimuli, Moutoussis and Zeki were able to dissociate the pre-conscious processing of a stimulus from its conscious perception. Because invisible houses produce no conscious percept, they would not be expected to activate house-specific areas involved in conscious perception. Similarly, invisible faces would not be expected to activate face-specific areas of the brain. By comparing fMRI of subjects viewing invisible and visible stimuli, the experimenters hoped to find conscious perception written in the activity of different brain regions. However, the brain activity of subjects was very similar in both cases. The only thing distinguishing invisible from visible stimuli was the level of activation in the stimulus-specific brain areas. Conscious perception activated no additional areas.

One possible conclusion from this experiment is that conscious perception of a particular stimulus occurs when a threshold level of activation is reached in the areas responsible for processing that stimulus. In this view, conscious perception is not delegated to specific perceptual areas but is distributed throughout the brain to areas responsible for stimulus-specific processing. But if consciousness is such a highly distributed process, how do we account for its unity, the feeling of a central self that experiences everything? Surely there is a place where it all comes together? If there is such a place in the brain (and many would argue there is not), Moutoussis and Zeki did not find it. As they warn, ‘absence of proof is no proof of absence’, but could it be, as some think, that the apparent unity of perceptual experience is all an illusion…?

1 Moutoussis, K. and Zeki, S. (2002) The relationship between cortical activation and perception investigated with invisible stimuli. Proc. Natl. Acad. Sci. U. S. A. 99, 9527–9532

James N. Ingram [email protected]

How does it fit?

Cognitive science has seen a steady increase in the number of mathematical and computational models of cognitive processes. This raises the question of which model should be preferred when more than one appears to account for the data. For formally well-specified models, the traditional answer has been: the model that provides the best fit to the data. However, standard goodness-of-fit (GOF) measures, such as the root mean squared error (RMSE), are prone to problems of 'overfitting'. That is, they do not discriminate between a model that is accurately capturing the underlying cognitive process and one that is fitting the noise in the data. The latter may produce a lower RMSE and so appear to be fitting the data better. However, when it comes to generalizing to new data sets, such a best-fitting model will perform poorly compared with a worse-fitting model that captures the underlying cognitive process.

The tendency of a model to capture noise in the data is a function of the model's 'complexity'. However, there are two aspects to complexity: the number of 'parameters' and the 'functional form' of the equation that maps these parameters onto a predicted response probability. In a recent article, Pitt et al. argue that previous model-selection methods tend to penalize models only for the number of parameters and not for functional form [1]. Some, like cross-validation (in which a model's parameters are fixed using one half of the data and are then assessed by seeing how well they generalize to the other half), seem to address generalizability but do not specify, in any principled way, how they penalize models for functional form.

The problem is approached in three stages. First, they show how the complexity of a function can be characterized using 'response surface analysis'. This allows the relationship between a response probability and the parameters to be expressed as a relationship between two response probabilities. This defines a response surface, which represents all the possible data patterns the model can describe. The length of a model's response curve provides a measure of functional complexity. However, such a measure does not take account of random error. So, second, the analysis is extended into a space of probability distributions. The resulting measure of complexity is called the 'Riemannian volume', and it measures the number of distinguishable probability distributions a model can generate. This can be seen as breaking the response surface into a number of statistically distinguishable segments (or regions) that can be counted.
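The overfitting argument, and the split-half cross-validation procedure described above, can be sketched numerically. The following is a minimal illustration only: the linear 'true process', the polynomial models, the noise level, and the random seed are all invented for the example and are not taken from Pitt et al.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the underlying "cognitive process" is linear,
# observed with noise.
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 0.5 + rng.normal(scale=0.2, size=x.size)

# Split-half cross-validation: fix the model's parameters on one half
# of the data, then assess how well it generalizes to the other half.
x_fit, y_fit = x[::2], y[::2]
x_val, y_val = x[1::2], y[1::2]

def split_half_rmse(degree):
    """Fit a polynomial of the given degree to the fitting half and
    return (training RMSE, validation RMSE)."""
    coeffs = np.polyfit(x_fit, y_fit, degree)
    train = np.sqrt(np.mean((np.polyval(coeffs, x_fit) - y_fit) ** 2))
    val = np.sqrt(np.mean((np.polyval(coeffs, x_val) - y_val) ** 2))
    return train, val

simple_train, simple_val = split_half_rmse(1)    # matches the true process
complex_train, complex_val = split_half_rmse(9)  # flexible enough to chase noise

# The flexible model's function space contains the simple model, so its
# training RMSE can never be worse: it always appears to "fit better"...
print(f"simple:  train={simple_train:.3f}  validation={simple_val:.3f}")
print(f"complex: train={complex_train:.3f}  validation={complex_val:.3f}")
# ...yet on the held-out half the simple model typically generalizes better,
# which is the sense in which a lower GOF can mislead model selection.
```

Note that the guaranteed relationship is only on the training half (nested least-squares fits); which model wins on the held-out half depends on the noise sample, which is exactly why a principled complexity penalty is wanted.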

1364-6613/02/$ – see front matter © 2002 Elsevier Science Ltd. All rights reserved.