Consciousness and Cognition 15 (2006) 761–764 www.elsevier.com/locate/concog

Commentary

The problem of introspection

C.D. Frith a,*, H.C. Lau a,b

a Wellcome Department of Imaging Neuroscience, Institute of Neurology, University College London, London, UK
b Department of Experimental Psychology, University of Oxford, Oxford, UK

Available online 18 October 2006

One of the most important factors behind the flowering of consciousness studies in the latter part of the 20th century was the demonstration that stimulus detection and goal-directed action can occur in the absence of any conscious awareness of the stimulus. This phenomenon is most striking in neurological patients such as GY, who can detect a moving stimulus that he cannot ‘see’, and DF, who can accurately grasp an object while being unaware of its shape. Similar phenomena can also be observed in people with intact brains. With appropriate presentation parameters, subjects can often ‘guess’ the presence or identity of a stimulus better than chance, while claiming that they cannot ‘see’ the stimulus. Alternatively, the presence of an ‘unseen’ stimulus may interfere with the performance of some task by increasing errors and/or reaction time.

These observations, now extended by brain imaging studies, define three levels of awareness. (1) The subject is fully aware of the stimulus. (2) The subject claims not to be aware of the stimulus, but can make guesses about the stimulus better than chance. (3) The subject claims not to be aware of the stimulus, guesses at chance levels, but nevertheless responds to the stimulus in terms of brain activity and/or behaviour. Level 2 is problematic because there is a discrepancy between the subject’s introspection (I am not aware of the stimulus) and his behaviour (guessing better than chance). A perfectly reasonable argument can be made that the ‘guessing’ performance indicates that the subject is ‘partially’ aware of the stimulus. The subject might claim not to be aware of the stimulus through a lack of confidence in these partial impressions.
* Corresponding author. E-mail address: cfrith@fil.ion.ucl.ac.uk (C.D. Frith).

1053-8100/$ - see front matter © 2006 Published by Elsevier Inc. doi:10.1016/j.concog.2006.09.011

In terms of signal detection theory, the apparent threshold of awareness is determined, in part, by the criterion adopted by the subject for answering the question, ‘Did you see the stimulus?’ What these criteria might be and how they can be manipulated are, of course, key questions for the use of introspection in experimental studies.

As Costall reminds us in his very entertaining account, the history of psychology is intimately bound up with the problem of introspection. The idea that such studies disappeared during the dark days of behaviourism is largely a myth. Nevertheless, after more than a century introspection remains the area of experimental psychology where the least progress has been made. Costall makes the key point that, for self-observation to be a viable component of experimental studies, we need to develop it as a shared social practice rather than something that is private.

But what kind of a practice is introspection? Johansson et al. address this question by testing a classical idea in cognitive psychology: that we do not have first-person access to psychological processes (problem solving, decision making, etc.) but only to the products of these processes. The implication of this belief is that, when people introspect about such processes, they are effectively making up a story. How can we tell when a respondent is making up a story in this way and when they are accurately reporting inner states? The ‘choice blindness’ phenomenon demonstrated by Johansson et al. provides an ideal source of data for examining this problem of introspection. Participants in this experiment chose between two photos on the basis of attractiveness and subsequently explained the reasons for their choice. However, participants frequently failed to notice that, on a proportion of trials, the photo they were given to talk about was not the one they had just chosen, but the one they had just rejected. On such trials participants presumably were indeed making up a story about the reasons for their choice. As yet, detailed examination reveals few features that distinguish these ‘false’ accounts from the ‘true’ ones.

The remaining five papers are all more directly concerned with the problem highlighted at the beginning of this commentary: the problem of distinguishing between the different levels of awareness that lie between full awareness and ‘just guessing’. Pinku and Tzelgov, using the framework developed by Dienes and Perner, distinguish three levels of awareness (consciousness of self), which differ in explicitness. At the lowest level there are ‘sensual representations’, which are neither attended nor monitored. At the next level there is an explicit representation of stimuli or actions, such as occurs when we deliberately monitor them. At the highest level our awareness is reflexive: we represent ourselves as perceiving the stimulus or the action. Do these theoretical distinctions map on to the sort of differences observed in experimental studies? It seems plausible that ‘guessing’ is based on sensual representations at the lowest level of awareness, while report of full awareness requires a reflexive representation of the self perceiving the stimulus.

Overgaard and colleagues have approached this problem directly by instructing participants to base their responses on different levels of awareness.
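The role of the criterion discussed earlier can be made concrete with a small sketch. The hit and false-alarm rates below are hypothetical, not data from any of the papers in this issue; the point is only that two observers with similar sensitivity can adopt very different criteria for saying ‘I saw it’.

```python
from statistics import NormalDist  # standard normal, for z-transforms (Python 3.8+)

def dprime_and_criterion(hit_rate, fa_rate):
    """Type-1 SDT: sensitivity d' and criterion c from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# A cautious observer: few hits, very few false alarms (hypothetical rates).
d1, c1 = dprime_and_criterion(hit_rate=0.60, fa_rate=0.05)
# A liberal observer with roughly the same sensitivity reports more of both.
d2, c2 = dprime_and_criterion(hit_rate=0.90, fa_rate=0.31)
# d1 and d2 are close, but c1 is positive (conservative) and c2 negative (liberal):
# the two observers would give very different introspective reports of 'seeing'.
```

On this framing, a difference in reported awareness between two subjects, or two instruction conditions, need not reflect a difference in the underlying signal at all.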
In one condition participants had to report whether or not a stimulus was present, even if this report was based on a guess: ‘You are to report whether you believe a dot appeared on the screen. It does not matter whether you actually saw the dot or not.’ In the other condition participants had to report the presence of a stimulus only if they were clearly aware of seeing it: ‘You are to report if you had the visual experience of seeing a dot.’ These instructions might engage levels 1 and 3, respectively, in Pinku and Tzelgov’s framework. Brain responses to stimuli after these two different instructions about how to introspect differed. During high-level introspection the timing and amplitude of ERP components resembled those seen during selective attention tasks. This result is consistent with the idea that we can deliberately attend to internal states in the same way that we attend to external stimuli. Indeed, this idea has been used to identify brain regions concerned with internal states such as motor intentions. The expectation is that these areas will show increased activity when the relevant internal state is the focus of attention (e.g., Lau, Rogers, & Passingham, 2006, for the case of motor intentions). The same methodology could be applied to identify the different levels of introspection.

In a second paper, Overgaard et al. provided participants with a Perceptual Awareness Scale that had four different levels, rather than the two levels specified in the study we have just discussed. The participants used all four levels of the scale, and there were strong linear relationships with both objective measures of stimulus visibility (display time) and response accuracy. This result is strikingly different from that obtained by Sergent and Dehaene (2004), who found, using the attentional blink task, that conscious experience was all-or-none. Several aspects differed between the two experiments, so it is hard to give a definite explanation of the discrepancy.
It is plausible that the nature of conscious experience varies systematically with the task and masking method used, but perhaps the discrepancy also gives us a hint as to the essentially social nature of introspection. The participant in an experiment will describe experiences on the basis of an implicit script established through prior interactions with the experimenter (Roepstorff & Frith, 2004).

Another possible cause of the discrepancy between these experiments might be individual differences. Some subjects are very cautious and will not claim awareness unless they see the targets vividly and perfectly well. Other subjects are, in contrast, rather ‘gung-ho’, and claim awareness even if they only see the hint of a shadow. Norman et al., in this issue, explore this phenomenon using standardized personality tests. They find that people’s degree of ‘openness to feelings’ affects both their learning and their awareness in an implicit learning task. They theorize that ‘openness to feelings’ reflects the ability to introspect on fringe feelings. This is supported by their findings, which show that the actual efficacy of learning is modulated by degree of ‘openness to feelings’. However, in the context of our opening discussion of signal detection theory, individual differences could also play a role in determining the criterion for reporting awareness. This brings us to the final study, which deals with this issue.

In the final paper of this special issue, Cleeremans et al. explore the possibility of applying signal detection theory (SDT) to introspection. Traditionally, SDT is used to measure how well a participant can discriminate a stimulus from background noise. The key measures are hits, where a stimulus is correctly identified as such, and false alarms, where noise is incorrectly identified as a stimulus. The same procedure has recently been applied to the discrimination of internal states in order to characterize consciousness (Kunimoto, Miller, & Pashler, 2001; Tunney & Shanks, 2003).

In the case of the sequence learning task studied by Cleeremans et al., SDT was used to measure to what extent the knowledge participants had gained about the sequence was explicit. If participants are ‘just guessing’ then they will not be able to discriminate between their right and wrong answers; the more explicit their knowledge, the better they will be able to discriminate. In this paradigm hits are confident classifications of correct answers as correct, and false alarms are confident classifications of incorrect answers as correct. Using this technique Cleeremans et al. show that participants trained with slow sequences had more explicit knowledge of the sequence than those trained with fast sequences, even though the groups did not differ on objective measures of learning. In relation to the study by Overgaard et al. discussed above, it is interesting to note that Tunney and Shanks (2003) found that binary confidence judgements provided more evidence of awareness than did judgements of confidence on a continuous scale.

The application of SDT to internal states seems like a promising advance for studies of introspection because it provides an objective and explicit mathematical framework. However, SDT relies on several mathematical assumptions that do not necessarily hold in the context of the discrimination of correct and incorrect trials.
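The type-2 analysis just described can be sketched in a few lines. The trial data below are invented for illustration; each trial records whether the participant was confident in their answer and whether the answer was in fact correct.

```python
# Hypothetical trial data: (confident, correct) pairs from a sequence-learning test.
trials = [
    (True, True), (True, True), (True, False),
    (False, True), (False, False), (True, True),
    (False, True), (False, False), (True, True), (False, False),
]

# Split confidence judgements by whether the underlying answer was right or wrong.
conf_on_correct = [conf for conf, ok in trials if ok]
conf_on_incorrect = [conf for conf, ok in trials if not ok]

# Type-2 hit: confident on a correct answer; type-2 false alarm: confident on a wrong one.
type2_hit_rate = sum(conf_on_correct) / len(conf_on_correct)
type2_fa_rate = sum(conf_on_incorrect) / len(conf_on_incorrect)
```

A participant who is ‘just guessing’ would show equal type-2 hit and false-alarm rates; the gap between the two rates is what indexes explicit knowledge in this paradigm.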
For instance, the error distribution for the correct trials would not have a normal bell (Gaussian) shape, because a significant portion of trials with low internal signal intensity would turn out to yield ‘correct’ responses out of sheer luck in forced-choice guessing; for a detailed discussion of these issues see Lau (in press), or Galvin, Podd, Drga, and Whitmore (2003) for a mathematically rigorous formal exposition.

Within the present context, it is perhaps useful to consider that SDT is a special case of Bayesian decision theory (Kersten, Mamassian, & Yuille, 2004). The Bayesian model is free from many specific assumptions of SDT, and has the potential to provide an even more sophisticated approach to the problem of introspection addressed in this special issue. Here we will provide a very sketchy outline of what this approach might be.

For a Bayesian, perception depends not only on sensation, but also on our prior beliefs about the world. These prior beliefs allow us to make predictions about likely sensations, and the errors in our predictions indicate how we should update our beliefs. This model of perception has a number of components. For example, there is an error distribution associated with the sensation (the new evidence). This error distribution might be relevant to the lowest level of awareness: sensual representations neither attended nor monitored. However, there will be other, different error distributions associated with our prior beliefs and our posterior beliefs. These might be relevant to higher levels of awareness. Presumably we would report that we had not seen the stimulus if the evidence was not sufficient to update our belief. This decision would depend as much on the strength of our prior belief as on the strength of the stimulus. Schemes for Bayesian perception are often presented as hierarchical (e.g., Friston, 2005). Such mathematical schemes capture the notion that awareness is also hierarchical.
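The belief-update idea sketched above can be illustrated with Bayes’ rule in odds form. The numbers are purely illustrative, not a model from any of the papers: the same sceptical prior is combined with weak and strong sensory evidence (expressed as likelihood ratios), and only the strong evidence shifts belief enough to support a report of ‘seen’.

```python
def posterior_stimulus_present(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = likelihood ratio x prior odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1 + posterior_odds)

# Weak evidence (likelihood ratio near 1) barely moves a sceptical prior of 0.2,
# so the observer would report 'not seen' even though some signal was registered.
weak = posterior_stimulus_present(prior=0.2, likelihood_ratio=1.5)
# Strong evidence pushes the posterior well past 0.5.
strong = posterior_stimulus_present(prior=0.2, likelihood_ratio=10.0)
```

On this view the introspective report depends jointly on the evidence and the prior, which is exactly why the same stimulus can be ‘seen’ by one observer and ‘unseen’ by another.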
Implicit representations are low in this hierarchy, while explicit representations involve metaknowledge and are therefore higher in the hierarchy. In principle it should be possible to devise techniques for estimating parameters at different levels of this hierarchy of perception and linking them to the results of introspection. The prior knowledge associated with the top of this hierarchy derives from interactions with other people, such as the experimenter, capturing the shared social aspect of introspection.

Acknowledgment

Our work is supported by the Wellcome Trust.

References

Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 360(1456), 815–836.
Galvin, S. J., Podd, J. V., Drga, V., & Whitmore, J. (2003). Type 2 tasks in the theory of signal detectability: Discrimination between correct and incorrect decisions. Psychonomic Bulletin & Review, 10(4), 843–867.

Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as Bayesian inference. Annual Review of Psychology, 55, 271–304.
Kunimoto, C., Miller, J., & Pashler, H. (2001). Confidence and accuracy of near-threshold discrimination responses. Consciousness and Cognition, 10, 294–340.
Lau, H. C. (in press). Awareness as confidence. Anthropology and Philosophy (special issue on interdisciplinary work in psychology).
Lau, H. C., Rogers, R. D., & Passingham, R. E. (2006). On measuring the perceived onsets of spontaneous actions. Journal of Neuroscience, 26(27), 7265–7271.
Roepstorff, A., & Frith, C. (2004). What’s at the top in the top-down control of action? Script-sharing and ‘top-top’ control of action in cognitive experiments. Psychological Research, 68(2–3), 189–198.
Sergent, C., & Dehaene, S. (2004). Is consciousness a gradual phenomenon? Evidence for an all-or-none bifurcation during the attentional blink. Psychological Science, 15(11), 720–728.
Tunney, R. J., & Shanks, D. R. (2003). Subjective measures of awareness and implicit cognition. Memory and Cognition, 31(7), 1060–1071.