Chapter 6

Automatic and Controlled Information Processing: The Role of Attention in the Processing of Novelty

G. Underwood¹ and J. Everatt²
¹University of Nottingham, England and ²University of Surrey, England

The words and sentences that I am writing at the moment are being written using a word-processor. I have the distinct impression that I must think about the meanings of these sentences, and about their grammatical construction, but that the act of producing the words can take care of itself. Typing the words, spelling them conventionally and even selecting the most appropriate word are all activities that do not need my attention. These activities might be described as having become automatized. They required attention at one time, when I was learning to spell and then again when I was learning to type, but no longer is my mind occupied with these low-level writing skills. My attention now focuses upon more general problems of composition: what to say next, and how to express it.

Although this declaration of claimed skill concerns the output of information, the idea that low-level processing is automatized is a suggestion that can also be applied to recognition. You may not need to attend to the form of each letter in each word, or even to each word in this sentence, but if you want to extract the underlying meaning then it is to the meaning that you must attend. If you do not, then the words may be recognized, but the meanings of sentences will be lost. If, when you get to the bottom of the page, you realize that you have been thinking about something other than the relationships that I am trying to describe, then a re-reading will be necessary. The words will then look familiar, but the text will not. One possible reason for this is that words have a largely invariant relationship with the meanings stored in the reader's internal lexicon, whereas the texts are novel. Invariance allows the reader to learn the relationship between the input and the required cognitive action, and when learning is complete attention may 'drop out' of the processing sequence. When this happens we may say that the activity has become automatized.

In this discussion of automatic behavior we shall consider input and output activities, with the overview of attention as being necessary for the processing of novel inputs and novel outputs. To do this it is first necessary to consider the relationship between attention and automatization, and to examine the effects of practice upon both.
1 ATTENTION, AUTOMATIZATION AND THE ROLE OF PRACTICE
Automatic activities have been considered as those that possess one or more of a set of 'defining' characteristics. The sort of characteristics that have been proposed in the past are that automatic activities:
(1) develop with extensive practice;
(2) are performed smoothly and efficiently;
(3) are resistant to modification;
(4) are unaffected by other activities;
(5) do not interfere with other activities;
(6) are initiated without intention;
(7) are not under conscious control;
(8) do not require mental effort.
The first five of these characteristics can be derived mainly from laboratory observations, whereas the characteristics involving intention, conscious control and mental effort are either inferred or arise from subjective reports. The list is descriptive rather than definitive and is taken from a variety of sources including LaBerge (1981), Logan (1988), Posner and Snyder (1975), Schneider and Shiffrin (1977) and, of course, James (1890). It is intended to provide a general description of the characteristics that have been applied to something that is considered automatic. There is no agreed criterion for categorizing an activity as being automatic rather than volitional; however, proponents of the two-process view (Posner and Snyder, 1975; Shiffrin and Schneider, 1977) consider that processes can be split up into those that are automatic and those that are attentional, i.e. automaticity can be considered as all-or-none. More recent views have emphasized the notion of a continuum of automaticity, with new, unskilled activities at the conscious control end and familiar, highly practiced activities at the automatic end (Cohen, Dunbar and McClelland, 1990; Kahneman and Chajzyck, 1983; Logan, 1985). As more experience of an activity within a constant environment is encountered, so the activity moves from the controlled end toward the automatic end. The argument put forward by proponents of the continuum view is that processes gain the characteristics of automaticity with practice (MacLeod and Dunbar, 1988) and still show effects of attentional factors even when they are considered to be automatic (Francolini and Egeth, 1980; Kahneman and Henik, 1981). It does seem unlikely that all activities are either automatically or consciously controlled, and, although a whole activity may not be automatic, component processes within that skill may be (see the arguments of Jonides, Naveh-Benjamin and Palmer, 1985; Shiffrin, 1988; Shulman, 1990).

In analyzing the automatized components of an activity we have three options available. The first two options are 'positive' in that they continue to use the assumption that behavior can be under automatic or conscious control, but the third, negative, option simply says that the distinction has no value in that activities do not fall into one or other of these categories. The first positive option is to propose (as above) that there exists a continuum of automaticity, with new, unskilled activities at the 'conscious control' end and with familiar, highly practiced activities at the 'automatic control' end. One
problem with this description is that it gives a one-dimensional impression of the nature of skill. Another is that it assumes that skills are crystallized, whereas they change with practice. This change can be described in terms of the acquisition of automatization. The second positive option, then, is to propose that skills are organized hierarchically, and that the automatization of the low-level subskills will progress with practice. Skill acquisition is then seen as the increasing automatization of the component subskills. Attention may be directed initially at the control of the low-level components, but as practice is increased these components are automatized and attention can be released for higher-level components. In the case of typing, a low-level component would be hitting a required key, and a higher-level component would be transforming an idea into the surface form of a sentence. In the case of playing a game such as tennis, these levels would be the equivalent of gripping the racquet to meet the ball versus deciding where to place the ball to put your opponents at their greatest disadvantage. Even the low-level components are attention-demanding for the novice, but the Wimbledon finalist will be allocating little thought to details such as the angle of the racquet head. Instead, this highly skilled player will have the impression that these details can take care of themselves, and that thought should be given to decisions about game tactics such as whether to approach the net more often. The novice may not even be aware of having these thoughts, even if there was time for them. This model requires us to describe the mechanism whereby practice allows attention to move up the skill hierarchy, and this is part of the purpose of the present discussion.

Practice on perceptual-motor tasks has several observable effects, including an increase in accuracy, an increase in speed, and an increase in the smoothness of performance. For the set of activities at which we are personally adept (tying our shoelaces, making a gear change when driving, hitting an approaching tennis ball with a racquet, or typing at a keyboard, perhaps) we might also have the impression that we are not always aware of having initiated each motor response. The list of characteristics of automatized activities mentioned at the beginning of this section may seem to be a good description of what happens when the well-practiced motorist changes gear in response to a change in the car's engine speed. Upon reflection, this motorist may be unable to recall having initiated or executed the gear change even though it was completed perfectly adequately. The motorist may also have been engaged in conversation with a passenger, and neither the gear change nor the conversation would have interfered with each other. At the same time as the gear change, other perceptual-motor activities will have been performed in order to keep the vehicle traveling along its intended path. The gear change may therefore be described as being automatized. The novice driver would have very different experiences, starting with having to decide when to initiate the action.

The overt indices of performance change with practice, but can we give any weight to the verbal reports of reflections upon our own skilled activities? If these reflections do not correlate with any externally observable change in performance, then they have little credibility.
On the other hand, if the subjective reports of attention-free behavior coincide with changes in performance such as speed and accuracy improvements, then we are entitled to look at the reports in more detail. They may be emerging after the event, in which case they are not very interesting. For instance, it may simply be that the performer noticed an improvement in performance, and that change is now attributed to a particular state of awareness.
If this has happened, then we cannot know whether a change in performance has been accompanied by a change in attentional control. Did the change in attention accompany the change in performance or did it follow this change? To make matters worse, these are not mutually exclusive possibilities. The subjective reports of automatized behavior are obtained after the behavior has been produced, and so perhaps we simply have unreliable memories of skilled performances. Perhaps we think that the performance was attention-free for the reason that we did not record a memory during performance. In this case, attention may have been allocated, but when no memory is recorded we subsequently have the impression that we were behaving without attention. We must be very cautious of subjective reports from skilled performers, as it is not clear whether their impressions of changes in attention come as an accompaniment to changes in performance or as a result of changes in performance.

Practice gives an action faster, more accurate and smoother performance, and the change can also be described in terms of a change in the cognitive structures that mediate performance. A relatively conventional account of the change starts with a categorization of motor performance according to the feedback necessary for successful execution. The model suggested by Adams (1976), Keele and Summers (1976), Reason (1979), Underwood (1982) and others describes novel activities as requiring closed-loop control (CLC) in that performance of the individual components of the activity requires individual checking. The closed loop here refers to feedback from execution of an individual action being used to check the match between intention and action. If there is a match, then the next individual action can be executed. Behavior under CLC is halting, slow and variable. Practice has the effect of eliminating the need to use feedback. The skilled performer issues a command for action and does not check that the individual action matches the individual intention. The elimination of feedback from the sequences of actions is described as the change to open-loop control (OLC). In the OLC mode, feedback is not used to check the intention-action match and performance becomes smoother because there are no longer any interruptions to the flow of action. Performance becomes faster because the time taken to check the feedback is eliminated, and accuracy is improved because the performer is now able to issue instructions for action based upon over-learned associations. The evidence in favor of this view of a change from CLC to OLC is reviewed by Underwood (1982), where the use of feedback is identified with attention. The subjective impressions that accompany a highly skilled action under OLC result from the removal of attention from the production of the action sequence.

The CLC/OLC model applies equally well to motor skills and to cognitive skills such as recalling familiar facts (simple multiplication calculations, the names of certain capital cities and world politicians, for example). Provided that the performer does not need to check the intention-action match, then attention will be unnecessary for the production of the answer to questions concerning these facts. An alternative view of the effects of practice is provided by Logan's (1988) instance-based theory of automatization, which will be described in some detail shortly.
Briefly, this view suggests that practice has the effect of taking the performer from a reliance upon algorithm-based actions to a reliance upon memories. The algorithms must be calculated each time an unskilled action is performed, while the practiced action can rely upon a memory of the stimulus and its accompanying action. Calculation and operation of the algorithm requires mental
resources, while memory-based performance is free of attention. The two models are compatible, of course, if Logan's algorithm requires the use of feedback and can be described as running under CLC, while the reliance upon memories is free of feedback and runs under OLC. However, the use of feedback is not explicitly specified by Logan.

The model of skills that suggests a hierarchical structure is a model best applied to complex activities that can be only loosely categorized as skills. Activities such as typing, riding a bicycle and playing a game such as tennis all clearly involve skill, but whether the whole activity is a single skill is a more debatable matter. They are assemblies of coordinated but individual perceptual-motor skills, defined as learned motor responses to distinctive perceptual inputs in order to achieve a specified goal. The integration of sensory information with muscular responses is usually considered to be an essential part of skilled behavior. Activities such as reading, remembering and performing mental arithmetic, in which the observable motor response is irrelevant, each share the critical features of a skill. Motor activities such as playing tennis are said to be skilled but it is just as easy to describe them as collections of smaller-scale skills. Being able to hit a ball with a tennis racquet, giving it varying amounts of momentum and sending it in an intended direction might in itself be considered a skill. Being able to serve the ball requires different coordinated actions to those required when returning the ball in a rally, and so we might also want to describe serving as an independent skill. Similarly, returning the ball from behind the baseline, while stationary, requires different perceptual-motor coordinations to the action involved in hitting a volley on the run. Being able to impart backspin to the ball might in itself be considered an independent skill, and so on. Any of these high-level activities may be described in terms of their components, which themselves have the characteristics of a skill. They differ from the high-level skill in the generality of the goal. We play tennis for one reason, but we hit the ball with a racquet specifically as a part of playing tennis. The lower down this hierarchy, the more constrained will be the skill. There are many possible tennis games that can be played, but a smaller number of ways of serving the ball, and an even smaller number of ways of gripping the racquet when hitting the ball. Descending the hierarchy can therefore be said to reduce the degrees of freedom associated with the action, and this in turn can be described as reducing the novelty of the action. One's grip on the racquet handle will be much the same as it was the last time a forehand shot was played, and so this part of the action is invariant. The trajectory of the ball, as it arrives to invite a return volley, will be slightly different from the last time the shot was played, and only at the lower levels of the skill hierarchy will the required action be invariant from instance to instance. The argument will suggest that as we consider higher levels of the hierarchy, this invariance is reduced, and so the task of automatization becomes more difficult. Not that automatization of a global activity is impossible. For example, well-practiced drivers can report that for whole sections of a route they have no recollection of having done anything.
These are instances of what Reed (1972) called 'time-gaps', and they occur when a skilled operator can daydream while responding to changing perceptual inputs. The predictability of the input can be appreciated only by those who have learned the statistical constraints of the input, and once learned it can produce well-learned responses. The individual may return to their external reality only when an unpredictable event occurs or when attention is
called to some perceptual configuration sufficiently novel to be as yet unlearned. Although high-level activities may be considered as automated to the extent of occurring without awareness, it is the low-level components of a skill that are more readily performed with this form of control, and it is to these that we will direct most of the following discussion, though higher-level processes will be considered later.
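The hierarchical view of skill described above can be made concrete with a short illustration. The following Python fragment is a minimal sketch rather than a model proposed in this chapter: the skill is represented as a tree of subskills, each subskill carries an invented practice count, a component is treated as automatized once its practice exceeds an arbitrary threshold, and attention is assumed to be demanded only by components that have not yet been automatized.

```python
# Minimal sketch of a hierarchically organized skill (hypothetical
# names and thresholds; for illustration only).

AUTOMATIZATION_THRESHOLD = 100  # arbitrary number of practice trials

class Subskill:
    def __init__(self, name, practice=0, components=None):
        self.name = name
        self.practice = practice             # amount of practice accumulated
        self.components = components or []   # lower-level subskills

    def is_automatized(self):
        # A component counts as automatized when it, and everything
        # below it, has been practised beyond the threshold.
        return (self.practice >= AUTOMATIZATION_THRESHOLD
                and all(c.is_automatized() for c in self.components))

    def attention_demands(self):
        # Attention is assumed to be needed only for components that
        # have not yet dropped out of the control sequence.
        demands = [] if self.is_automatized() else [self.name]
        for c in self.components:
            demands += c.attention_demands()
        return demands

# A toy tennis hierarchy: the low-level grip is practised on every shot,
# the high-level tactical decision is rarely invariant.
grip    = Subskill("grip the racquet", practice=5000)
stroke  = Subskill("forehand stroke", practice=800, components=[grip])
tactics = Subskill("choose where to place the ball", practice=30,
                   components=[stroke])

print(tactics.attention_demands())
# -> ['choose where to place the ball']  (only the novel, high-level
#    decision still demands attention in this toy example)
```

On this description, practice pushes automatization up the hierarchy: as the lower, invariant components cross the threshold, only the more variable, higher-level decisions continue to demand attention.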
1.1 Dual Tasks and Resource Limitations
As discussed above, a skill can be considered to be a learned motor response to a distinctive perceptual input in order to achieve a specific goal. For such a skill to be automatic, the processes of perceiving the perceptual input and retrieving and executing the learned motor response must be automatic. It is also possible that the whole skill is not automatic, but that one or more processes within that skill may be; for example, the processing of the stimulus may be automatic. (Jonides et al. (1985) argue that stimulus identification may be automatic in tasks that show features of nonautomaticity.)

In investigating automatic versus attentional processing, various techniques have been used in the hope of finding tasks that show one or more of the characteristics mentioned above. The techniques have ranged from perceptual/attentional orienting responses to the problem of trying to perform two tasks at the same time. As discussed above, the act of changing gear could be considered as an automatic act in the skilled driver, because it can occur without attending to it (unconsciously), and while other acts, or processes, occur at the same time, e.g. talking to a passenger, listening to a play on the radio, reading a road sign, etc. The act of changing gear does not affect the other acts and so may be considered to be occurring without limiting the resources available to other tasks. Researchers have used this feature of well-practiced skills to investigate automatic skills. Experiments using this technique are usually referred to as dual-task experiments. In these a subject has to perform a well-practiced task while at the same time performing a secondary task. Examples of primary tasks whose performance has been assessed with a concurrent secondary task include: typing a piece of text while at the same time shadowing (repeating verbally) an aurally presented message (Shaffer, 1975); playing the piano while performing a similar secondary shadowing task (Allport, Antonis and Reynolds, 1972); writing a message to dictation, while at the same time reading (Hirst et al., 1980; Spelke, Hirst and Neisser, 1976); tracking a stimulus with the hand, while verbally responding to a second stimulus (McLeod, 1977); and memorizing a set of figures for recall while performing a multiple-choice stimulus-response task (Logan, 1979). These experiments suggest that, with large amounts of practice on one of the tasks, performance of it and the secondary task can be relatively unaffected by having to perform them together. One explanation of these findings is that extensive practice on one task allows it to be performed automatically and so, as items (4) and (5) of the above list of characteristics of automatized behavior state, it will not interfere with, or be affected by, another task (LaBerge, 1981; Logan, 1988). The automatization of the one task will also release attentional resources for performing the other task. This view has become known as the limited resources viewpoint. It considers that a certain amount of attentional resources can be applied to a task (or tasks)
and if these are exceeded then interference between tasks will occur, owing to the competition for those resources (Kahneman, 1973; LaBerge and Samuels, 1974; Posner, 1978; Schneider and Shiffrin, 1977). This is the basis of the view that automatization is useful. If there is a limited amount of attentional resources available, then performing simple, invariant tasks without the use of these resources (i.e. performing them automatically) will allow those resources to be used for other processes. For example, LaBerge and Samuels (1974) considered that as individuals learn to read, simple processes such as letter recognition and word recognition can become automatic, leaving attentional resources to be used to understand a piece of discourse. However, there are at least two other explanations for these effects. First, it is possible that both tasks require attentional processing, but that attention can be switched between the tasks (Hirst et al., 1980; Neisser, Hirst and Spelke, 1981). Second, it is possible that the different tasks have their own separate source of attention-like processes with which to perform the tasks. This latter point has become associated with the idea of modules within the human processing system (Fodor, 1983; Minsky, 1980). Modules are separable information processing systems, usually associated with a well-practiced task or skill, and are themselves considered to be at least partially under automatic control. Proponents of multiple-resource views (Navon and Gopher, 1979; Wickens, 1984) consider that each module can make use of its own pool of limited-capacity resources and so interference between tasks will occur only when the tasks use the same module (see also Allport, 1980). Such a view makes it difficult to distinguish between a totally automated process and one that is only partially automated and is using a separate source of attention. One recent theory of automaticity that has tried to circumvent this problem is that of Logan (1988).
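Before turning to Logan's account, the contrast between a single limited pool and several module-specific pools can be illustrated with a schematic calculation. The demand values and module labels below are invented for the purpose of the example; the sketch is not an implementation of any published resource model.

```python
# Schematic comparison of single-resource and multiple-resource accounts
# of dual-task interference (hypothetical demand values).

CAPACITY = 1.0  # capacity of each resource pool

def interference_single_pool(demands):
    # All tasks draw on one undifferentiated pool (in the spirit of
    # Kahneman, 1973): interference is the demand that exceeds capacity.
    total = sum(demand for demand, _module in demands)
    return max(0.0, total - CAPACITY)

def interference_multiple_pools(demands):
    # Each module has its own pool (Navon and Gopher, 1979; Wickens, 1984):
    # only tasks competing for the same module can interfere.
    per_module = {}
    for demand, module in demands:
        per_module[module] = per_module.get(module, 0.0) + demand
    return sum(max(0.0, load - CAPACITY) for load in per_module.values())

# Typing (manual/visual module) while shadowing speech (auditory/vocal module).
tasks = [(0.7, "manual-visual"), (0.6, "auditory-vocal")]

print(interference_single_pool(tasks))     # 0.3 -> interference predicted
print(interference_multiple_pools(tasks))  # 0.0 -> no interference predicted
```

The difficulty noted above is visible in the sketch: a pair of tasks that produces no interference is equally consistent with one task being fully automated and with the two tasks simply drawing on separate pools.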
1.2 The Algorithm/Instance Theory of Automaticity

Logan (1988) attempts to side-step the above issue of resource limitations by considering automatization as simply memory retrieval. Automaticity in this view is a memory phenomenon, governed by factors that govern memory. There is no need to consider single versus multiple sources of resources. Automaticity will occur as a result of the processes of memory retrieval. Logan considers automatization as the acquisition of a domain-specific knowledge base formed from separate representations (instances) of each exposure to a task. A task becomes automated when it is based on 'single-step direct-access retrieval of past solutions from memory' (p. 493). (It is closely related to instance theories of memory and categorization; cf. Hintzman, 1986.) Novices, who do not possess such a knowledge base, perform a task via a general algorithm. As the novice gains experience of the task, specific solutions are learnt. When a large enough data/memory base is stored, algorithm-based performance can be abandoned for the direct retrieval of a solution to a given input. Thus a behavior is automatic when it is accomplished via the process of memory retrieval and nonautomatic when it is accomplished via the algorithm. Performance within a task can also be partly automatic in that some responses can be accomplished via memory retrieval whereas others are accomplished via the algorithm.
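A minimal simulation may help to convey the instance account, including the race between the algorithm and memory retrieval that is described below. All timing parameters here are invented for illustration and are not taken from Logan (1988).

```python
import random

# Minimal sketch of Logan's (1988) race between algorithm-based and
# retrieval-based performance. Timing parameters are invented.

ALGORITHM_TIME = 1.0          # fixed cost of computing a solution

def retrieval_times(n_instances):
    # One retrieval time is sampled per stored instance; with more
    # instances the fastest retrieval tends to get faster.
    return [random.uniform(0.5, 2.0) for _ in range(n_instances)]

def respond(n_instances):
    """Return (latency, mode) for one trial with n stored instances."""
    times = retrieval_times(n_instances)
    best_memory = min(times) if times else float("inf")
    if best_memory < ALGORITHM_TIME:
        return best_memory, "memory"      # retrieval wins the race
    return ALGORITHM_TIME, "algorithm"    # algorithm controls the response

random.seed(1)
for practice in (0, 1, 5, 50):
    trials = [respond(practice) for _ in range(1000)]
    mean_rt = sum(rt for rt, _ in trials) / len(trials)
    p_memory = sum(1 for _, mode in trials if mode == "memory") / len(trials)
    print(f"{practice:3d} instances: mean RT {mean_rt:.2f}, "
          f"memory-based on {p_memory:.0%} of trials")
```

Because practice adds runners to the race, both the mean and the variability of latencies fall, and an increasing proportion of responses is controlled by retrieval rather than by the algorithm.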
Practice improves performance in Logan's view by increasing the number of individual traces connecting a response to a specific stimulus in pursuit of a specific goal. When the stimulus is encountered again in the context of the same goal the trace will be retrieved and the subject can respond on the basis of the retrieved information. Logan considers that there is a race between algorithm-based performance and retrieval-based performance. If retrieval is quicker then it controls the response; if not, the algorithm controls the response. The more instances there are, the more likely it is that at least one of them will win the race, and so more practiced tasks become more likely to be performed automatically. This race mechanism is the main difference between Logan's view of automaticity and many other theories. For example, MacKay (1982) considers that automatization is produced by a connection between perception and action, but that increased practice increases the strength of that connection. The more practice, the stronger the connection and so the faster the process from stimulus to response. Similar views have been proposed by Anderson (1982), Schneider and Shiffrin (1977) and Schneider (1985). The recent PDP model of Cohen et al. (1990) considers that improved performance is due to a strength factor which is related to the weightings placed on connections between units within a processing pathway. This model allows for the possibility that memory for an event is encoded in the strengths of a set of connected units within a distributed system which allows overlapping, but distinct, representations of individual stimulus-response pathways. This model may allow the amalgamation of a strength theory with the features of instance-based learning.

As we have seen, the CLC/OLC model of automatization (Underwood, 1982), which identifies feedback with attention, may be compatible with Logan's instance model. The use of CLC, in which the performer attends to the feedback from each individual action, would be applied to an algorithm used for an unfamiliar action. The use of OLC, with feedback no longer being inspected, would correspond to a memory presenting a solution before the algorithm can generate an action. A memory can be evoked rapidly when there is an invariant calling-pattern. Automatic performance depends upon the development of a one-to-one relationship between a stimulus and its cognitive response, as when we hear a familiar word, for example, and know what it means without having to consider alternatives. The stimulus in this case can be regarded as a calling-pattern which evokes a specific 'condition → action rule' and which does not have to be decoded by the use of resource-limited algorithms. The two models diverge on the matter of memory-based performance. For Logan, automatic performance results from the availability of a large number of instances of previous encoding-retrieval episodes, whereas the CLC/OLC model suggests that these episodes compile into condition → action rules which can be called by familiar stimuli. The transition from CLC to OLC, which corresponds to the compilation process, results from the direction of attention away from algorithmic, CLC-based performance. Attention becomes unnecessary when the performer no longer needs to match the intention with the action. Since Logan's model assumes that instances of particular solutions to specific stimuli are encoded, generalization from performance on one stimulus to another should not occur.
Logan (1988) tested this prediction by presenting subjects with word/nonword lexical decision tasks. Continued practice on the same set of words and nonwords was shown to reduce the mean and variability in decision times to those words and nonwords, but did not transfer to new words and nonwords. That
nonwords show similar effects suggests that the effect is not to be found in semantic memory, but in representations of specific episodes (Jacoby and Brooks, 1984). In another experiment, subjects performed a continuous task on a set of word/nonword stimuli, or varied between two tasks: lexical decision and pronounceability. A later frequency judgement task showed that performance in the varied-tasks condition was at the same level as performance after the continuous-task condition. These results are more easily explained from the point of view of Logan's instance-based theory. Process-based strength theories would have difficulty in explaining these findings unless they assume additional individual traces between input and output.

Logan's model explains the noninterference in dual-task experiments such as those mentioned above by considering that performance on the well-practiced task is memory retrieval based, while performance on the other task will be algorithm based. However, since we do not know what criteria define memory retrieval and what define algorithm-based processing (Logan suggests that there could be a great many such algorithms each with its own set of properties), deciding when a process is accomplished by memory retrieval (automatic) and when it is not will be very difficult. Also, since memory may be unlimited, there are no bounds to the degree of automaticity that can be achieved with this model. To explain why two automatic tasks can be performed without interference, while using the same process (memory retrieval), we are again left to conclude that it depends on whether they use the same memory retrieval resources or not. Although Logan's model may not provide a solution to the problems of studies of automaticity, it may, however, provide a useful null hypothesis from which to study automatic or nonautomatic processes. By considering that automaticity is a function of an increase in memory and an already available retrieval process, rather than the occurrence of a new set of processes, or changes in underlying processes, studies of automaticity could almost be done away with and exchanged for already existing studies of the process of memory retrieval. Studies of skill acquisition would look at the functioning and properties of algorithm-based processing up to the point of automaticity, when studies of memory retrieval would take over. What is left to decide then is whether or not a particular automatic skill, set of skills or process can be reduced to the level of memory retrieval.

In summary, whether the two-process viewpoint is a useful way of considering human functioning within various processes, or skills, remains to be seen. The role of practice suggests that a process, or skill, may not be considered as either automatic or attentional, but rather as varying along a continuum from strongly attention based to strongly automatic; or, in terms of Logan's model, from algorithm controlled to memory retrieval controlled. Another way of looking at this is to consider a skill as a set of subskills and that automatization of a skill is reflected in the attention-free performance of those subskills. This attention-free performance may spread up the hierarchy, producing an increasingly autonomous skill. The role of attention is determined by the constraints of the task and its invariant condition → action relationships. The open-loop or closed-loop performance of these subskills may then be an important criterion for assessing automaticity.
The more open-looped the performance, the more automatic the functioning. As suggested above, these viewpoints may not be mutually exclusive. It is possible that subskills have their own continuum of automaticity, as when a tennis player has a good forehand, but a poor backhand. The backhand may show
signs of variability and less smoothness than the forehand and may be more prone to external distraction. A final possibility is that the automatic/attentional distinction is not useful and investigations of human behavior should perhaps consider problem-solving processes and memory retrieval processes, since attention (and so automaticity) is a variable within these processes, not a defining characteristic. In the following section we will look at the evidence for automatic processing, by reviewing the relationship between attention and input processing.
2 AUTOMATIC INFORMATION PROCESSING
Logan's (1988) model used lexical decision tasks to establish memory retrieval as a plausible basis for automatic functioning. Here the input is a string of letters which accesses a stored trace, which in turn initiates the response given the desired goal. In our initial discussion we looked at how the individual experience of reading a piece of text can take the form of recognition of written information without understanding of the underlying meaning. Within the bounds of automaticity theories, many have considered that for well-practiced readers written information is automatically recognized (most notably, LaBerge and Samuels, 1974). However, even for this subprocess of a well-learned skill there is evidence that word identification (Kellas, Ferraro and Simpson, 1988) and even individual letter recognition (Paap and Ogden, 1981) seem to use limited resources. For example, Kellas et al. (1988) provided subjects with words with multiple meanings and single meanings. In line with previous findings (Jastrzembski, 1981; Rubenstein, Garfield and Millikan, 1970), they found that multiple-meaning words were recognized more rapidly than single-meaning words, and that these faster multiple-meaning words showed less interference with a secondary task than the slower single-meaning words. Whatever the reason for this effect (Kellas et al. (1988) explain the findings along the lines of a cascading, interactive system as proposed by McClelland and Rumelhart (1981)) it suggests that when word identification is accomplished quickly, resources can be transferred to secondary tasks more quickly. (Becker (1976) found a similar effect for lexical decisions on high- versus low-frequency words in combination with a concurrent probe task.) This suggests that word identification, at least within the bounds of a lexical decision task, uses the same resources as the secondary probe task. This finding is difficult to accommodate within the view that word identification is automatic and does not use up limited resources. Past evidence, used to conclude that word identification is to some degree automatic, involved studies looking at automatic priming effects and Stroop-like interference effects. However, recent evidence suggests that these are also affected by attention and intention factors.
2.1 Stroop-Like Interference Effects
The classic Stroop effect is to present a subject with a word written in colored ink and to ask for the name of the ink color. If the word itself names a color that is different from the color of the ink, responses tend to be considerably slower than if the word name and color name are the same, or if the word is not the name of a
color (Dyer, 1973). Variations on this procedure have included identifying letters flanked by other letters (Eriksen and Eriksen, 1974; Eriksen and Schultz, 1979), classifying words flanked by words from other categories (Shaffer and LaBerge, 1979), naming pictures with related words positioned within the frame of reference (Lupker and Katz, 1981; Rosinski, Golinkoff and Kukish, 1975; Underwood and Briggs, 1984), naming visually presented digits while hearing different spoken digits (Greenwald, 1972), and repeating spoken words presented to one ear while related words are presented to the other ear (Lewis, 1970; Treisman, Squire and Green, 1974). The common characteristic of these tasks is that attention is required for the analysis and response to one stimulus and that an associated stimulus presented at the same time results in a slower response to the attended stimulus. An account of this effect in terms of automaticity would consider that the distracter word or letter is automatically recognized and interferes with the performance of the required response. Although most interpretations suggest that the interference occurs at the response stage (Dyer, 1973; Eriksen and Eriksen, 1974), Shaffer and LaBerge (1979) found that the usual interference effects were produced when subjects were given distracter stimuli assigned to the same response as the target but from different categories. This suggests that interference may occur at a semantic categorization stage, presumably before response allocation. It also suggests that noninterference in these tasks occurs only up to the point of semantic categorization and that the only processes that possess this automatic feature are recognition processes (LaBerge, 1981).

Evidence supporting the view that such interference is due to automatic processes comes from findings that the interference effects increase as reading skills increase (Rosinski et al., 1975; Schiller, 1966). Thus as reading becomes more automatic (and so more resistant to conscious intervention), so its interference with other tasks increases (Hasher and Zacks, 1979). However, evidence from MacLeod and Dunbar (1988) suggests that this is not an all-or-none phenomenon. It does not follow that once a process is considered automatic then it is free from interference effects. MacLeod and Dunbar gave subjects training on naming a group of novel shapes. The names given to the shapes were the names of four familiar colors. During training the shapes were presented in a neutral color. The naming times for the shapes were compared with the naming times for the four colors themselves in a neutral shape, the naming times for the shapes when they appeared in color and the naming times for the colors when they were presented in the form of the shapes. With up to 2 h training, colors were named faster than shapes and interference occurred only when naming the shapes in color. This was a typical Stroop-like interference effect and would be expected if the color-naming task were automatic to some extent and so interfered with the shape-naming task. However, with 5 h training, interference was found when naming colors in the form of the shapes as well as naming shapes when they appeared in color, even though shape naming was still slower than color naming (this argues against simple speed-of-processing explanations of these interference effects; see Dyer, 1973). With 20 h training, shape and color naming were equivalent and interference occurred only when naming a color in the form of the shapes.
This suggests either that the initial interference effect was not due to automaticity, or that automatic processes can show effects of interference from other well-practiced tasks. MacLeod and Dunbar (1988) interpret their findings as suggesting a continuum upon which tasks are positioned in terms
of how automatic they are. Interference between tasks in these terms depends on the relative position along that continuum. Logan (1988) also considers that two well-practiced tasks can show interference, as training on one task leads it to make more and more use of the same memory retrieval processes as the other. However, if memory retrieval is obligatory and control of a process is due to some sort of race between ways of performing a task, it is difficult to see why after 5 h of training interference should be produced by a slower task.

A second problem for the view that these interference effects are produced by an automatic process is that many studies have found that interference can depend on what the subject attends to within the presented stimuli. For example, Kahneman and Chajzyck (1983) found that if a second word was added to a display showing interference from a color word positioned below a color patch, interference was actually reduced, suggesting that attention-grabbing stimuli can reduce the effects of the interfering stimuli. Similarly, Kahneman and Henik (1981) gave subjects two words, a color word and a noncolor word. One was presented in colored ink, the other in black ink. The subjects' task was to name the colored ink. Kahneman and Henik found that if the color word was presented in colored ink then more interference took place than if the noncolor word was in colored ink. Even though in both cases the color word was presented to the subjects, different amounts of interference were produced depending on whether the subject was attending to the color word or not. Francolini and Egeth (1980) found the same effect using colored compared with neutral digits. Irrelevant digits showed more interference when they were colored the same as the target stimuli than when they were not. Other research suggests that interference reduces as the distance between target and distracter increases (Gatti and Egeth, 1978; Goolkasian, 1981). Thus the amount of interference produced seems to depend greatly on the amount of attention paid to the potentially interfering stimuli. This is difficult to explain if it is considered that identification of written stimuli is automatic. It seems more likely that some initial attentional process concentrates processing on particular features within the presented stimuli.

This view is similar to early attentional selection views (Broadbent, 1971, 1982; Treisman, 1960; Treisman and Gelade, 1980) in which attentional/selection factors play an early role in the processing of stimuli, and thus determine what is perceived and the way it is perceived. For example, Treisman and Gelade's (1980) feature integration theory considers that simple features of the stimuli (e.g. brightness, orientation, color, movement) are initially processed automatically and in parallel, and these are identified as particular objects/percepts by focusing attention on the particular features that make up that percept. Thus what we see depends on what this focused attention, or filtering (Broadbent, 1982; Kahneman and Treisman, 1984), directs us to see. The opposing viewpoint, that of the late selection theories, suggests that what is perceived is determined much later in the processing of the stimuli, possibly after identification (Deutsch and Deutsch, 1963; Duncan, 1980; Keele, 1973; Norman, 1968, 1969). The above findings are more in line with the early selection views.
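The dependence of interference on where attention is directed can be summarized in a small sketch in the spirit of the Kahneman and Henik (1981) design, in which the color word is either the colored (attended) item or the black (unattended) item. The reaction times below are invented solely to show the form of the comparison.

```python
# Toy summary of a Kahneman and Henik (1981) style design: the color word
# is either the attended (colored) item or the unattended (black) item.
# All reaction times are invented for illustration.

mean_rt_ms = {
    ("color word attended",   "conflicting"): 780,
    ("color word attended",   "neutral"):     650,
    ("color word unattended", "conflicting"): 680,
    ("color word unattended", "neutral"):     650,
}

def interference(attention_condition):
    # Interference = conflicting-word RT minus neutral-word RT.
    return (mean_rt_ms[(attention_condition, "conflicting")]
            - mean_rt_ms[(attention_condition, "neutral")])

for condition in ("color word attended", "color word unattended"):
    print(f"{condition}: interference = {interference(condition)} ms")

# If word identification were fully automatic and attention-free, the two
# interference scores should be equal; the (invented) asymmetry mimics the
# reported finding that attended color words interfere more.
```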
The interference shown in the Stroop task poses problems for the early selection models of attention, and at the very least it demands a refinement in order to account for the perceptual interference of an unattended word upon the processing of a response to the color of ink. The refinement suggested by Treisman (1969) was that the involuntary processing of the word resulted from processing within a modular subsystem. Attention selects the perceptual analysers which form the
subsystem, but has more difficulty in selecting the analysers within the subsystem. Kahneman and Treisman (1984) consider Stroop interference to arise through the failure of 'filtering' - the process by which we select between two perceptual events. This failure is said to occur if selection requires the use of analysers within the same subsystem, and the appearance of interference is not considered to challenge the notion of early selection by these authors. Indeed, the evidence presented above suggests greater Stroop interference from attended words than from unattended words, and this can be interpreted as suggesting that early selection against a word does moderate its processing. A similar conclusion comes from the modified Stroop task using picture/word interference. Here the viewer names a picture in the presence of a conflicting word: the advantage of using pictures is that a greater variety of pictures and words can be used, and the word is now a perceptual event which is physically distinct from the picture. Experiments that have used the picture/word version of the Stroop task have either used sheets containing several pictures, with the total sheet time as the dependent measure (Rosinski et al., 1975), or have used single-trial designs (see below). There are several restrictions imposed by using the total sheet time, namely: lists do not separate recognition time from pronunciation time; search patterns and eye movements can confound the reading time; there may be distraction from other pictures on the page; and the blocking of stimuli within experimental conditions may induce readers to adopt strategies that rely more or less upon the word according to whether it will be helpful or harmful in processing the picture. A single-trials design which used tachistoscopic presentations, and which avoided the restrictions associated with sheets of pictures and words, has been the basis for a series of our experiments looking at influences of unattended words (Briggs and Underwood, 1982; Underwood, 1976, 1977, 1981; Underwood and Briggs, 1984; Underwood and Thwaites, 1982; Underwood and Whitfield, 1985). The interest in this series is not so much the processing of the pictures or other target stimuli as the influence of the unattended words that accompany them. By varying the relationship between picture and word, and observing the effects of different relationships, we have inferred the extent of processing of the word. This is the same process of inference by which Lewis (1970), Bradshaw (1974) and others have concluded that unattended words are analyzed for meaning. In the first of the picture-processing studies it was found that unattended words that were related in meaning to the picture adversely affected the naming response (Underwood, 1976). The pictures were presented in a predictable location, and were therefore fixated by the viewers, and the words were printed to one side. Pictures and words were printed on the same tachistoscope cards, which were displayed for 60 ms. The task was to name the picture, which was a line-drawing of a common object, as quickly as possible. In comparison with all other experimental conditions, including nonassociated words, nonwords and pictures without distracting words, the associated words slowed down the picture-naming response. The subjects in this experiment could focus their attention upon the appearance of the picture, and attempt to select against the word. 
Even so, the meaning of the word was influential, and the experiment indicates that focusing attention upon one stimulus is not always sufficient to exclude the analysis of a second stimulus. The next experiment in the series qualifies the conclusion that all unattended words are analysed (Underwood, 1976, experiment 2). If subjects cannot know in advance the location of the picture, then associated words inhibit the naming
response, as before, but nonassociated words provide even greater inhibition. The uncertainty over the location of the picture would require that the viewers divide their attention between the two possible locations until the moment of presentation. At this point they could select the picture and ignore the stimulus in the other location. Printed in the to-be-ignored location was the unattended word, however, and the delay in selecting against this location would allow more of this unwanted stimulus to be processed than in the experiment where selection was made prior to stimulus presentation. These two experiments are similar to Dallas and Merikle's (1976) precue and postcue conditions, and the conclusions are also similar: there is greater processing of the unattended word when attention is divided. The explanation of these effects of unattended words is crucial for the well-being of early selection theories: can they survive the appearance of so many demonstrations of the analysis of unattended words? The effects of selectivity give support, in that unattended words are more disruptive if attention is more divided. The early selection theories could present these effects as a demonstration of the power of the attenuation process, but the unattended words are effective even with focused attention and this poses a substantial problem. Attenuation is seen at its strongest in the focused attention experiments of Bradshaw (1974), Bryden (1972), Dallas and Merikle (1976), Lewis (1970) and Underwood (1976); these studies will be discussed in later sections. Even under these focused attention conditions, associates of the target were seen to influence processing. Other experiments lead to the same conclusion, and will be described presently. Attenuation alone does not prevent the analysis of meaning.

There are basically two accounts of these effects of unattended words, one describing them as arising from the recognition of all unattended words, and the other considering that only associated words are recognized. We have previously described these two accounts as the 'nonselective access hypothesis' and the 'contextual facilitation hypothesis' (Underwood, 1981). The effect to be explained, and which is such a potential problem for early selection theories, is the influence of an unattended associate under conditions of focused attention. Nonassociates are rarely reported as having an effect under these conditions. The nonselective access hypothesis considers that all unattended words gain access to the lexicon, regardless of their relationship with the target stimulus. The selective effect of an associate may then arise during the processes after recognition, and candidate processes include selection of the recognized lexical token, and selection of the response. As two stimuli activate their lexical representations, say, a target word and an unattended word, one of them may be selected as the basis for the organization of the response. If the two sources of activation in the lexicon point to semantically distant words then their separation may pose no processing difficulty, and selection of the lexical token would continue unimpaired. However, if the two words are associates, then the selection of the target may be impeded by the activity caused by its near neighbor. In this way different effects will be caused by the presentation of unattended associates and nonassociates, even though both types of words have been recognized to the level of their semantic properties.
In this hypothesis it is the semantic similarity between target and distracter that results in a difficulty in separating them. An alternative account of the effects of unattended words is provided by the contextual facilitation hypothesis. In this case only associated words are analyzed, and it is by virtue of their association with the target that the analysis can occur.
When this analysis has occurred, the further processing of the target can be impeded. The sequence of events would be as follows. First the target is recognized lexically, and at this time the distracter would gain only primitive analysis. Its existence would be noted, together with analysis of physical characteristics such as loudness, location, size or pitch of voice. At this stage there would be little or no analysis of meaning of the distracter. As the target is processed, contextual facilitation would become available, perhaps through the process of spreading activation within the lexicon (Anderson, 1983; Collins and Loftus, 1975; Meyer and Schvaneveldt, 1971; Warren, 1977). The process envisaged here is one whereby recognition thresholds are reduced whenever an associated stimulus is processed. Treisman (1960) appealed to a similar process in her experiment reporting that contextually plausible unattended words are sometimes shadowed. In this case the context prior to an unattended word had reduced the recognition thresholds for these unattended words. Even though they had been attenuated, sufficient information had accessed their lexical representations for lexical recognition with the temporarily reduced threshold. And so it might be with the case of simultaneous distracters. The processing of the attended stimulus may act to reduce the recognition thresholds of all associated words, and leave unassociated words unaffected. If one of these associates is available in the environment, then even its attenuated form may be sufficient to exceed the reduced recognition threshold, as it did in Treisman's experiment. At this stage in the sequence, the target is fully recognized, the thresholds of target-associates have been reduced, and an unattended associate can activate its lexical representation. Nonassociates are unable to progress this far in the sequence, and have no effect upon the processing of the target. An associate can be recognized, and once recognized it is available to influence future processing of the target by the same routes as suggested by the nonselective access hypothesis. The associate may impair selection of the lexical token, or selection of the response, but a nonassociate can have no effect whatsoever.

The two hypotheses provide different accounts of the influence of unattended words that are associates of currently attended words. In several experiments, an inhibition effect has come from nonassociates (Dallas and Merikle, 1976, postcueing conditions; Underwood, 1976, experiment 2; Underwood, 1981), and this effect is informative. The contextual facilitation hypothesis cannot account for the appearance of inhibition from nonassociates, given that it suggests that unattended words are recognized only by associative facilitation. The nonselective access hypothesis assumes that all unattended words gain lexical access, associates and nonassociates alike, and can accommodate inhibition effects from nonassociates by suggesting that they have their effect at a processing stage different from that of associates. The effects of nonassociates are greatest under divided attention and when exposure durations are increased: this might indicate that the effects are most apparent when the nonassociate is available for verbal report and therefore able to produce response competition. In these circumstances the viewer might be expected to be aware of the identity of the unattended word.
Associates and nonassociates would produce response competition, of course, and so the hypothesis is left to explain why associates should produce less inhibition than nonassociates. One account is to say that these two kinds of words produce the same amount of inhibition at the response selection stage, and that the difference between them arises earlier in the sequence of processing when associates inhibit targets less than nonassociates.
The hypothesis does not identify the stage at which we should observe reduced inhibition from associates. It identifies encoding and lexical access as a stage at which associative facilitation can occur, and it identifies response selection as the location of response competition, but to account for inhibition from associates a third stage must be implicated. One possibility is the stage at which the recognized word is selected from the lexicon in preparation for the response. If the target and its associated unattended word generate cross-facilitation, then both lexical representations will become more activated than if a nonassociate had been the unattended word. This enhanced activation might then facilitate the selection of the target from the lexicon, in the preparation of the response.
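The logic of the two hypotheses can be contrasted in a short sketch. The thresholds, attenuation value and toy lexicon below are invented; the point is only that under contextual facilitation an attenuated distracter reaches the lexicon when processing of the attended target has already lowered the thresholds of its associates, whereas under nonselective access every distracter reaches the lexicon.

```python
# Toy contrast between the nonselective access and contextual facilitation
# hypotheses. Thresholds, attenuation and associations are invented.

BASE_THRESHOLD = 1.0       # evidence needed to recognize a word
ATTENUATED_INPUT = 0.7     # evidence supplied by an unattended word
ASSOCIATIVE_BOOST = 0.4    # threshold reduction from an attended associate

ASSOCIATES = {"doctor": {"nurse", "hospital"}}  # tiny toy lexicon

def recognized_nonselective(distracter, target):
    # All unattended words gain lexical access, associate or not.
    return True

def recognized_contextual(distracter, target):
    # Spreading activation from the attended target lowers the thresholds
    # of its associates; only then can the attenuated distracter get through.
    threshold = BASE_THRESHOLD
    if distracter in ASSOCIATES.get(target, set()):
        threshold -= ASSOCIATIVE_BOOST
    return ATTENUATED_INPUT >= threshold

for distracter in ("nurse", "table"):
    print(distracter,
          "nonselective:", recognized_nonselective(distracter, "doctor"),
          "contextual:", recognized_contextual(distracter, "doctor"))
# 'nurse' is recognized under both hypotheses; 'table' is recognized only
# under nonselective access, which is why inhibition from nonassociates is
# informative in deciding between the two accounts.
```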
2.2 Associative Facilitation or Associative Inhibition
These hypothetical accounts of the progress of targets and distracters both acknowledge the effects of divided attention while at the same time assuming that unattended words are recognized at some level of processing. The effects of dividing attention are, first, to allow greater inhibition from nonassociated distracters, and second, to delay the response to the target. With focused attention, the naming or categorization of the target is faster, and unattended words are less distracting. The pattern of distraction is also seen to change, with nonassociates becoming less effective or noneffective. The effect of an associate depends upon the specific task being performed, and upon the conditions of presentation. Associative facilitation or inhibition may be observed, and the direction of the effect appears to depend upon the difficulty of encoding of the target. When the target and distracter can be seen clearly, then associative inhibition has been observed in a variety of tasks (e.g. Underwood, 1976; Underwood and Briggs, 1984; Underwood and Thwaites, 1982; Underwood and Whitfield, 1985). In the picture categorization experiments reported with Alison Whitfield, associative inhibition changed to associative facilitation when we made the subject's task more difficult by masking the stimuli. In the word-naming experiments of Allport (1977) and Dallas and Merikle (1976), associative facilitation was found with masked presentations in an identification and a speeded response task, respectively. With difficult target viewing, the encoding of the target may be aided by presentation of an unattended stimulus which shares some semantic feature with the target. There may be cross-facilitation through spreading activation, and this would enhance the selection of the target from the lexicon. The stage of encoding would be less likely to be influenced under easy viewing conditions, because the target would be recognized before the attenuated distracter, and would have progressed to a later stage in the sequence. The inhibition would then be seen to occur during one of the stages involving the selection and organization of the response. The notion of two interference effects is supported by data recently reported by La Heij, Dirkx and Kramer (1990). While an influence upon the encoding stage may produce facilitation from an associated item, an influence of the same item upon a decision stage may produce inhibition. In these experiments subjects named briefly presented pictures, and printed words acted as primes. The related primes were members of the same semantic category, e.g. the picture of a chair would be
accompanied at different times by the word 'bed' or the word 'table' or by control words unrelated to the picture. La Heij et al. reported two effects, one involving differences between priming words and one involving the variable time interval between presentation of the word and presentation of the picture (the stimulus-onset asynchrony, or SOA). Words that were members of the same category as the picture, but which were weak associates (chair/'bed'), produced only inhibition effects in the picture-naming task. The size of this inhibition effect varied according to the SOA, with the largest effect when the word was presented shortly after presentation of the picture. There was no reliable inhibition effect for a weak associate appearing before the picture. This leads to the suggestion that the inhibition effect occurs late in the sequence of picture recognition and name selection. The inhibition effect is considered to result from interactions between word representations in the output lexicon, and prevents the simultaneous activation of two phonological forms. This inhibition effect is the only influence that can be observed with weakly associated words. Although they will be recognized and will be active during picture name selection, any early activity will not be sufficiently strong to aid name selection. Facilitation can arise only when a strong associate is present. In contrast to the 'inhibition-only' effect found with a weak associate, La Heij et al. reported that a strong associate of the picture (chair/'table') produced a facilitation effect when presented before the picture and inhibition when presented after the picture. Not only must a strong associate be present for facilitation to be observed, but the associate must appear before the target picture. This facilitation effect may result from lexical priming, with identification of the word aiding selection of the picture name in the output lexicon. An alternative interpretation would be to suppose that the facilitation effect could arise from identification of the word aiding the identification of the picture itself. The nature of the facilitation effect is considered in the following section, which focuses upon automatic word encoding.
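This two-effect account can be summarized in a short sketch. The sketch is our own paraphrase of the qualitative pattern just described, not an analysis performed by La Heij et al., and the function name and condition labels are illustrative assumptions:

```python
# A toy summary (our paraphrase, not La Heij et al.'s analysis) of the two-effect
# account: an early lexical effect requiring a strong associate presented before
# the picture, and a later output-lexicon effect when the word follows the picture.
def expected_effect(associate: str, word_position: str) -> str:
    """associate: 'strong', 'weak' or 'unrelated'; word_position: 'before' or 'after' the picture."""
    if associate == "strong" and word_position == "before":
        return "facilitation"       # early cross-activation aids selection of the picture name
    if associate in ("strong", "weak") and word_position == "after":
        return "inhibition"         # competition between phonological forms in the output lexicon
    return "no reliable effect"     # a weak associate before the picture, or an unrelated word

for associate in ("strong", "weak", "unrelated"):
    for position in ("before", "after"):
        print(f"{associate:9s} associate, word {position} picture: {expected_effect(associate, position)}")
```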
2.3 Associative Facilitation by Priming
The classic demonstration of priming effects consists of a presentation of two stimuli that are semantically related. One item can be observed to aid the processing of the second (target) item, with the two presentations usually, but not necessarily, being asynchronous. When processing of a target item is facilitated by the prior presentation of a related item, then the target is said to have been primed. By looking at the effect of a related item upon a target, relative to the effect of a neutral item, the direction of the priming effect can be observed as the nature of the relationship is varied. Also by looking at the effect of an unrelated item relative to the neutral item, we get a clear picture of the direction of facilitation and inhibition effects. Suppose that a related prime results in a faster response than does an unrelated prime. This alone does not tell us whether the related prime generates a facilitation effect or whether the unrelated prime generates an inhibition effect. The neutral prime serves an invaluable function here. It may have an effect that is similar to the unrelated word (in which case we can conclude that the related item results in facilitation), or it may be most like the related word (in which case the
related item has no effect but the unrelated item causes an inhibition effect). The importance of the neutral item is in providing a baseline against which two other conditions can be measured. Using this reasoning, Neely (1977) presented subjects with a letter string that was to be classified as a word or nonword, preceded by a word (prime) that was semantically related or unrelated to the word targets. The effects were compared with the effects of a neutral prime (a group of Xs). The results suggested an automatic priming effect, in that word targets were responded to faster following a related prime than following a neutral or unrelated prime. Explanations for these results, along the lines of an automatic effect, consider that activation from the lexical entry of the prime spreads to the lexical entry of the target and increases its activation (Collins and Loftus, 1975; Posner and Snyder, 1975). Thus the recognition of an item means less evidence is needed for a related item to exceed a recognition threshold; the first item can be said to be aiding the recognition of the second item. This priming effect has also been found in experiments where the prime and target were presented together (Meyer and Schvaneveldt, 1971), in experiments using a naming task (Seidenberg et al., 1984; Warren, 1977), in experiments using semantic categorization decisions (Guenther, Klatzky and Putnam, 1980), in experiments using sentence primes instead of single word primes (Stanovich and West, 1981, 1983a), in experiments using single letter primes and targets (Posner and Snyder, 1975), and in experiments using pictures instead of words as primes/targets (Carr et al., 1982). The evidence presented in the previous section suggests that making a task harder (by masking the stimuli) produces facilitation by related items. A similar suggestion has been proposed by Carr et al. (1982) to explain the differing effects of picture/word priming in naming and categorization. Carr et al. found in a naming task that pictures were more primable than words, whereas Guenther et al. (1980) found in a semantic categorization task that words were more primable than pictures. These findings led Carr et al. to conclude that words were more automatically named, while pictures were more automatically categorized, and that priming aided the more difficult task. Stanovich and West (1983a) considered a similar viewpoint to explain their findings that word frequency and priming interacted; low-frequency words showed greater priming effects. They considered that the harder it is to process a word, the more associative priming effects will aid the processing of that word; they put forward their views as an interactive-compensatory model of word recognition. A further source of facilitation effects is suggested by Neely (1977). Subjects were given category name primes that were followed by an exemplar of that category on most trials ('bird'-'robin'); this showed the normal facilitation from related primes soon after the prime. Other subjects, though, were given category names that were followed by an exemplar from another category on most trials ('body'-'door'). Here priming effects were found when a large enough gap between prime and target allowed subjects to switch their attention to the different category. This suggests another source of facilitation effects caused by consciously controlled strategies: strategies that can be described as being intentionally initiated and given attention during their execution.
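The value of the neutral baseline described above can be made concrete with a toy calculation. The response times below are invented for illustration and are not Neely's data:

```python
# Toy illustration of the neutral-baseline logic (invented numbers, not Neely's data).
# Facilitation and inhibition are both measured against the neutral prime.
mean_rt = {"related": 540, "neutral": 570, "unrelated": 585}   # lexical decision times in ms

facilitation = mean_rt["neutral"] - mean_rt["related"]      # +30 ms: related prime speeds the response
inhibition = mean_rt["unrelated"] - mean_rt["neutral"]       # +15 ms: unrelated prime slows the response

print(f"facilitation = {facilitation} ms, inhibition = {inhibition} ms")
# Without the neutral condition, only the 45 ms related-versus-unrelated difference
# would be visible, and its source (facilitation, inhibition or both) would be unclear.
```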
Similar distinctions between automatic and strategic facilitation effects were proposed by Posner and Snyder (1975) using letter stimuli. The occurrence of such strategic effects within tasks such as lexical decision (word/nonword decision), as used by Neely (1977), can produce problems in deciding the
source of effects within word recognition, so much so that some have questioned the assumption that such effects are due to the normal functioning of the word recognition system. For example, Balota and Chumbley (1984) proposed that lexical decisions could be made by a familiarity assessment, outside lexical processes. Other accounts of priming effects consider that the effects are due to some sort of a post-recognition coherence check (deGroot, Thomassen and Hudson, 1982), due to searches through context-based lists and to inhibition effects from integration processes after recognition (Forster, 1981), and due to intentions to respond in a particular way (Neumann, 1984). In conclusion, several stages of facilitation are suggested by the results from the studies discussed so far. First, facilitation can occur at the stage of accessing a lexical entry. This may be via a process of automatic spreading activation from a related lexical entry. The evidence for this is that facilitation is apparent early in the processing of the target (La Heij et al., 1990), suggesting initial encoding processes, and occurs soon after the appearance of the prime (Neely, 1977), in turn suggesting a fast-acting process. Items that are harder to process show larger priming effects (Stanovich and West, 1983a). A second source of facilitation may be at a stage of selecting a recognized item from the lexicon in preparation for a response. This is suggested by the differing effects of associated and nonassociated words on responses to other items (Underwood, 1976). Responses that are harder for a given stimulus show larger priming effects (Carr et al., 1982). A final stage at which facilitation may occur is one during which processing is under relatively more attentional control. Here subject strategies can show effects on responses to stimuli (Neely, 1977; Posner and Snyder, 1975). A distinction between automatic priming and attentional priming is thus suggested, with the time between prime and target onsets and the type of task performed being important variables within these effects. However, there are alternative viewpoints for the source of these effects, as mentioned. The distinction may be made clearer by the effects of unattended stimuli, as discussed in the previous two sections. Even so, the findings here are not entirely clear-cut, and in the following sections we will discuss these effects further.
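The automatic component of this account, spreading activation between lexical entries, can be sketched minimally as follows. The network, the association weights and the recognition threshold are arbitrary choices made in the spirit of Collins and Loftus (1975), not parameters taken from any of the models cited here:

```python
# A minimal spreading-activation sketch in the spirit of Collins and Loftus (1975).
# The lexicon, association weights and recognition threshold are arbitrary choices.
network = {
    "bread": {"butter": 0.6, "bakery": 0.4},
    "butter": {"bread": 0.6, "milk": 0.5},
    "nurse": {"doctor": 0.7, "hospital": 0.5},
}

def prime(word, activation=1.0, decay=0.5):
    """Activate a prime's lexical entry and let a fraction of that activation spread to its associates."""
    levels = {word: activation}
    for neighbour, weight in network.get(word, {}).items():
        levels[neighbour] = activation * weight * decay
    return levels

THRESHOLD = 1.0
levels = prime("bread")
# A related target ('butter') starts part-way towards threshold, so less stimulus
# evidence is needed to recognize it; an unrelated target ('nurse') starts at zero.
for target in ("butter", "nurse"):
    print(target, "needs", THRESHOLD - levels.get(target, 0.0), "more evidence to reach threshold")
```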
2.4 Attention in Simultaneous Presentations
The automatic effects of unattended messages have been investigated extensively with presentations of simultaneous speech and presentations of visual displays containing more than one element. When an unattended element, whether spoken or visual, influences the processing of an attended element, then we can conclude that recognition of the element does not require attention and may be automatic. Interactions between items can also be observed in dichotic listening tasks. Here subjects are presented with messages to both ears at the same time and the task is usually to attend to one message and repeat it out loud ('shadowing'). The time taken to shadow each word can be measured, and the effects on this of presenting related or unrelated words in the unattended message observed. Interference between related words was demonstrated in Lewis's (1970) experiment, in which listeners shadowed one message from a dichotic presentation of lists of words, and their shadowing responses were timed. The unattended words, to which no response was required, were not always unrelated to the words presented at the
same time in the attended message. In comparison with shadowing latencies when the two members of a dichotic pair were unrelated, which can be considered as the baseline control, unattended synonyms slowed down the shadowing response. Lewis reported other effects of unattended meanings and, for example, antonyms tended to speed up the shadowing. These results indicate that the meaning of an unattended word is recognized at some level of analysis. This is not to say that the listener had necessarily been aware of the meaning of the unattended word, but that activation had occurred in the part of the processing system that responds to lexical meaning. Recognition in this sense corresponds to activation which is selective to a specific feature of a word. In this case, the feature is lexical meaning. It is not entirely clear why antonyms of attended words should have exactly the opposite effect of synonyms in Lewis's experiment, and similar experiments have not confirmed the direction of the effect. The influence of unattended words upon shadowing has been confirmed, however, in experiments reported by Bryden (1972) and by Treisman et al. (1974). Bryden extended Lewis' result by demonstrating that an unattended antonym which is presented prior to a related shadowed word, rather than simultaneously with it, also speeded up the shadowing response. Synonyms also produced a facilitation effect, in contrast to Lewis' inhibition effect, and in contrast to Treisman et al., who also found inhibition with synonyms. A significant feature of the Treisman et al. result is that the inhibition effect appeared only for words near the beginning of a dichotic list. When they appeared a few seconds after the beginning, the difference between synonyms and the control words was eliminated. Treisman et al. interpreted their positive result as being an indication of serial processing of the two related messages at a time when capacity was not fully occupied by one message. At the beginning of the list both messages could be analyzed, and the synonym relationship recognized. Another way of looking at their result is to consider it as a function of the increased focusing of attention during the presentation of the dichotic messages. At the beginning of the lists the task for the listener is to select the message that is to be attended, and at this point there would be a small amount of sampling of the to-be-unattended message. This sampling might be necessary for the listener to confirm that the message does not possess the features of the to-be-attended message. It was during this period of selection, when attention was not fully focused, that unattended words were most effective in the Treisman study. By the time attention was focused, just a few seconds into the list, the synonyms were ineffective. This explanation would indicate an important relationship between the focusing of attention and the semantic analysis of unattended messages. Evidence consistent with the notion of semantic processing of unattended words comes from experiments using a slightly different approach, but one which again looks for an influence of a completely ignored unattended message. Smith and Groen (1974) and Traub and Geffen (1979) reported experiments in which subjects heard short dichotic lists, with instructions to attend to the words presented in one ear only. A test word was then presented, and the subjects required to indicate whether or not this word had been in the attended list. 
The test words of interest are those that were not in the attended list (i.e. negative probes), but were in the unattended list. Smith and Groen found that these particular test words gained slower response times and higher error rates than negative probes that had not been in the unattended list. Their result held only if the words in the unattended
list were members of the same semantic category as the attended words. Traub and Geffen confirmed this result, and also looked at the effects of increasing the focusing of attention. By precueing the attended words with a sequence of five digits, they increased the listener's ability to select the appropriate words, but this did not change the interference effect. Even with a good selection cue their listeners were influenced by the meaning of the unattended words, although Traub and Geffen attributed their results to acoustic rather than semantic analysis. Unattended lists were certainly not processed to the same categorical level of analysis as the attended lists, this conclusion coming from their second experiment. Listeners in this experiment heard lists of words taken from one category or from different categories. When the words came from one category, not only would they act to prime each other, but the shared category feature could be used by listeners as an organization aid. The homogeneity of the unattended list did not influence the response to negative probes taken from that list, but inhomogeneous attended lists produced slower and less accurate responses. The deficit associated with inattention was the failure to recognize the common category of the words in the list. If the meaning of an unattended message is recognized only when attention is poorly focused, then the early selection model of Treisman (1960) and Broadbent (1971, 1982) gives a satisfactory account of the findings that unattended information did not influence performance. Johnston and Wilson's (1980) comparison of divided and focused attention also supports this conclusion with a dichotic listening experiment. Their experiment showed that the detection of a target word was affected by the meaning of a simultaneous word only when attention was divided. When listeners could focus upon one message, the unattended words did not influence performance. Similarly, a study by Kidd and Greenwald (1988) found that recall of attended stimuli was not affected by presentation within an unattended list, suggesting that repetition effects require attention to become effective. The results from studies of visual presentations of words and pictures also suggest a role for attenuation during input processing, but that attenuation does not preclude processing of the unattended stimulus (Underwood, 1976). The discussion so far presents evidence that processes thought of as automatic can themselves be products of, and influenced by, attention, the very process that automaticity is supposed to spare. It also suggests that plausible automatic processes can show interference effects from other processes, and interfere with other processes. There is also evidence that well-practiced automatic skills are not free from subject control, e.g. findings that typing and speaking can be inhibited quickly when errors occur (Ladefoged, Silverstein and Papcun, 1973; Levelt, 1983; Logan, 1982; Rabbitt, 1966). Though not damning evidence, these findings suggest major problems with what can be considered an automatic process or not. The view that automaticity is simply memory retrieval (Logan, 1988) seems attractive under these circumstances. The findings that automaticity can depend on attention can be accounted for if it is considered that attention is needed for retrieval of information from memory. 
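The race idea behind this retrieval account can be sketched in a few lines. The sketch below is a simplification made for illustration, with arbitrary timing parameters, rather than an implementation of Logan's (1988) model:

```python
import random

# A minimal sketch of the race idea behind Logan's (1988) instance theory: the
# response is produced by whichever finishes first, the general algorithm or the
# retrieval of a stored instance. All timing parameters are arbitrary choices.
random.seed(1)

ALGORITHM_TIME = 600.0    # ms; assumed fixed cost of computing the answer
RESIDUAL_TIME = 300.0     # ms; assumed constant cost of encoding and responding

def trial(n_instances):
    """Return the response time when n_instances prior encounters are stored in memory."""
    retrievals = [random.expovariate(1 / 600.0) for _ in range(n_instances)]
    return RESIDUAL_TIME + min([ALGORITHM_TIME] + retrievals)   # the fastest process wins the race

for practice in (0, 1, 5, 20, 100):
    mean_rt = sum(trial(practice) for _ in range(2000)) / 2000
    print(f"{practice:3d} stored instances -> mean RT about {mean_rt:5.0f} ms")
# With more stored instances the fastest retrieval time falls, so responding comes
# to rely on memory retrieval rather than the algorithm: the speed-up with practice
# that the instance theory attributes to automatization.
```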
The findings of interference effects from and within automatic processes can be accounted for if it is considered that two tasks use the same memory retrieval processes, or the same algorithmic procedure. Intentions and control factors can be accounted for by considering that different input information affects the type of output produced, and by considering that subjects can choose to respond to that output or not. Interference can then be accounted for by having to make choices between different outputs. The fact that Logan's model considers that
storage and retrieval of memory traces is an obligatory and unavoidable consequence of attention accounts for these findings quite well. Cohen et al. (1990), however, argue that the characteristic of obligatory retrieval in Logan's model suggests that interference would be expected if time is allowed for a slower process to be accomplished. Experiments suggest that interference on color word reading from colors does not occur if time is allowed for color processing to occur by presenting the color before the color word (Glaser and Glaser, 1982), or if color word processing is slowed down by presenting the color word upside-down and backwards (Dunbar and MacLeod, 1984). It is also difficult to see why slower processes should interfere with faster processes, as in MacLeod and Dunbar's (1988) study. These findings suggest at least that simple processing time accounts of Stroop effects need considerable revision. It should also be noted that Cohen et al.'s simulations do not conform to all of these data. We have seen that so-called automatic effects can be affected by attention. This, however, does not necessarily suggest that automaticity depends on attention. The alternative account of where the cognitive system selects percepts (the late selection theories) suggests that selection occurs relatively late in the processing of stimuli. The model of Deutsch and Deutsch (1963, 1967) is a notable example of this viewpoint. They considered that all inputs are analyzed to a relatively high level of processing and the results are used to select stimuli for further processing. This model was extended by Norman (1968, 1969) who considered that inputs were relatively completely encoded and that the pertinence of a stimulus determines the order of further processing, pertinence being itself determined by the sensory encoding and analysis of previous inputs. These models gain support from demonstrations of the processing of unattended information, as with, for example, the dichotic listening studies of Lewis (1970), Bryden (1972) and Smith and Groen (1974). Dichotic listening tasks do not provide the only evidence for the possibility of processed unattended information; similar evidence has been found in investigations of simultaneous visual presentations. For example, Bradshaw (1974) demonstrated the potency of unattended words in brief visual displays with a task that required subjects to make judgements about the meanings of attended words. An attended word was presented to a predictable location in the visual field on each trial, and thereby gained the benefit of an eye-fixation during presentation, being presented to the fovea. This word was polysemous (e.g. 'bank') and was accompanied by a second word that offered disambiguation (e.g. 'water' or 'money'). The unattended second word was presented either to the right or left of the target. As the 125 ms display went off, subjects were presented with a forced choice task in which they had to select one of two meanings of the target word. Subjects tended to bias their interpretation of the target in favor of the meaning provided by the accompanying word, and this result held for those accompanying words that were reportable, and for those that were not. Bradshaw concluded that the parafoveal second word had received semantic processing in the absence of conscious identification.
The finding that unattended information can affect the processing of attended information is a controversial one, but further support is provided by the studies of Dallas and Merikle (1976), using related and unrelated target and unattended words that were either precued or not precued, and of Underwood (1976), using target pictures in predictable or unpredictable locations with or without related and unrelated words
and nonwords (see section 2.1 for more discussion of this problem). In both cases evidence was found that suggests the unattended word affected target naming, and in the Underwood (1976) study this effect varied depending on whether attention could be focused on a particular position because of the predictable location. However, other studies do not support these conclusions (Paap and Newsome, 1981; Rayner, Balota and Pollatsek, 1986; Stanovich and West, 1983b), finding that items presented to the parafovea did not affect the processing of a foveal stimulus. The problem of how much information is picked up in peripheral vision and to what level this is coded is the subject of the following section.
2.5 Parafoveal Processing - Automatic Orientation?
Experiments investigating the processing of information within the parafovea during reading suggest that the further the information is from the center of fixation the less information can be picked up, or the less processing is performed upon that information. For example, McConkie and Rayner (1975) found, by blanking out letters to the right of fixation, that reading speed is optimal if 16 letters to the right of fixation are available for processing; with around 14 to 16 letters, if word length is kept constant, reading speed is not affected. Thus changing the word 'can' to a word of similar length like 'dog', during an eye movement, did not alter reading speed. This suggests word length, and little more, is acquired up to this point. Below 12 letters, changing the word during a saccade did affect the subsequent fixation. Along similar lines, Underwood and McConkie (1985) found that letter information up to 10 letters to the right of fixation affects eye movements, whereas McClelland and O'Regan (1981) found that previews, five-character spaces from fixation, of the target item (e.g. 'model') speeded up naming compared with previews of a highly similar item (e.g. 'molel'). It seems detailed information about a word is picked up in this area. Similar evidence for a reduction in detail picked up in the parafovea as distance from center of fixation increases has been found in picture processing (Nelson and Loftus, 1980), and is suggested by the reduction in Stroop interference effects the further a distracter is from the center of fixation (Gatti and Egeth, 1978). Further findings within the reading literature suggest that information in the parafovea of vision can be processed to a large degree, if not completely. This comes from findings that, in normal reading, words, particularly function words, can be skipped within sentences without detrimental effects to reading (Carpenter and Just, 1983; O'Regan, 1979; Rayner, 1977) and that skipped words are not inferred from the rest of the text (Fisher and Shebilske, 1985). This suggests that a great deal of processing can occur in the parafovea, but that foveal processing provides more detailed processing. This is supported by the fact that reading by parafoveal processing alone is very difficult (Rayner and Bertera, 1979). These findings suggest that some process can accept information before the eyes land on that information, and there is evidence to suggest that the intake of information from the parafovea shows characteristics associated with automatic processes. For example, Jonides (1981) presents evidence that information in the parafovea does not draw heavily on cognitive resources compared with central information. This is suggested by the finding that there is no effect of increasing
memory load on detecting peripheral information, whereas there is on detecting central information. Jonides also found that it was harder to suppress the cost-benefit effects of valid or invalid cues when they were in peripheral vision compared with when they were centrally located, and that expectancies show significant effects on central cues but not on peripheral ones. These findings led Jonides to conclude that automatic processes occur in peripheral vision, and that there is a relationship between attentional shift and eye movements. Similar views have been expressed by Eriksen, Webb and Fournier (1990) and Shepherd and Müller (1989). Using a letter discrimination task in which two letters had to be searched for and responded to in an array of other letters, Eriksen et al. found that the effect of changing the letter in the parafovea depended on whether or not it was a target letter. The two potential positions in which a target letter could appear were cued, one after the other. The interesting effects came when subjects were fixating the first cued position, and the letter in the second cued position was changed to a target letter. The extent to which this second letter was processed before being changed could then be studied by analyzing the effect of the change on responses to the target letters. Eriksen et al. found that up to 50 ms after the second position was cued there was no effect of changing the letter. At about 80 ms, though, an effect of changing the letter from one target letter to the other was found, but not of changing the letter from a non-target letter to a target letter. This effect was considered to be due to an automatic system processing the letter at the second position while an attentional system is still processing the letter at the first position. If the letter in the second position is a target item, the automatic system produces a response bias such that, when the attentional system arrives and processes the letter itself, a response conflict will occur between the automatic system and the attentional system. This will not occur in the situation where the letter is changed from a non-target to a target letter because here the automatic system will not have produced a response bias. If this is the case it suggests a fast-acting system that possibly processes information to output, and, if Eriksen et al.'s interpretation is correct, this precedes attentional processing. Shepherd and Müller found that presenting subjects with cues to fixation produced initial wide-ranging facilitation of stimulus detection which over time decreased for all locations except the location specifically cued. This they proposed was due to a focusing of an initially broad beam of attention onto a particular location. A more rapid focusing process is suggested when peripheral cues are used. Here facilitation for the specific location was found as early as 50 ms after cue onset. Shepherd and Müller suggest that these effects may be due to a process involved in programming saccadic movements to a particular location or stimulus. This process rapidly focuses in on a position so that a saccade to that position can be programmed, and more attentional, finer-detailed processes follow. This suggests that eye movements are associated with attentional shifts. Similar views have been expressed by Kennedy (1983), Nelson and Loftus (1980), and Henderson, Pollatsek and Rayner (1989).
Although there is evidence that attentional shifts can occur without eye movements (Posner, 1980; Posner, Cohen and Rafal, 1982), the argument expressed by Kennedy (1983) and Henderson et al. (1989) is that it is more usual for attention and eye movements to be closely linked. There is, for example, evidence that when
shifts of attention are induced, eye movements are also induced. Cooper (1974) and Kahneman (1973) present findings that if subjects are provided with a concurrent auditory stimulus, or questions, about an object, eye movements are found toward those objects, even when fixation of those objects is not necessary for the performance of the experimental task. This suggests the possibility of an automatic triggering of inspection processes, or an orientation response induced by attending to a particular item, and may be related to Shiffrin and Schneider's (1977) view of an automatic attentional response. Whether these are the same processes or not remains to be seen. (An orientation response within the auditory system may be suggested by the attentional switch to unattended information that occurs when highly important information, perhaps highly learnt information such as the listener's own name, is presented in an unattended message. This may be related to the findings from dichotic listening experiments such as Moray (1959).) The value of parafoveal information within reading is suggested by the findings that reading is considerably slowed down when parafoveal information is removed (McConkie and Rayner, 1975; see discussion by Rayner and Pollatsek, 1987). Even the removal of word boundaries in the parafovea severely impairs eye movements (McConkie and Rayner, 1975; Pollatsek and Rayner, 1982). Parafoveal processing also seems to be valuable within picture processing. In the Henderson et al. (1989) study, pictures were presented at different locations on a screen and subjects inspected these in order to answer questions about them. In one condition subjects were given previews of the pictures, in another they were not; only the picture at fixation was presented to the subject. The results showed that picture preview produced about a 100 ms advantage in fixation duration on the picture, compared with the fixation durations when no preview was available. In a second experiment there was evidence that this effect was mainly due to the availability of previews of the picture that was to be fixated next. This advantage of preview in picture recognition is very similar to that referred to above in sentence reading. Previews of information allow for faster processing of that information. Henderson et al. propose that this is because covert visual attention shifts to a parafoveal location and processes that information to some extent. This processing can be complete (if, say, the information is well known, as in the skipping of function words mentioned above) or not complete, in which case foveal processing will take over. Since a certain amount of processing will have been accomplished parafoveally then less processing need be accomplished foveally. This would generate a preview advantage. Whether the covert attention mechanism proposed by Henderson et al. is the same as the automatic mechanism discussed above remains to be seen, but both are considered to have the same function: programming saccadic movements. Henderson et al. (1989) refer to the views of Morrison (1984), who considers that attention (or covert attention) moves to the next piece of information, and is used to trigger off an instruction to move the eyes to a new position. The information used in this decision may help us to decide what information is processed parafoveally. 
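The preview logic described here can be caricatured in a short sketch. The durations are invented, and the scheme merely simplifies the style of account offered by Morrison (1984) and Henderson et al. (1989); it is not their model:

```python
# A caricature of the preview logic described above. The durations are invented,
# and the scheme is a simplification rather than an implementation of any cited
# model: parafoveal preprocessing either completes (the word or picture can be
# skipped) or reduces the foveal processing still needed when the eyes arrive.
BASE_FOVEAL_TIME = 250      # ms assumed to identify an item with no preview

def fixation_duration(preview_done: float) -> float:
    """preview_done: proportion of identification finished parafoveally (0.0 to 1.0)."""
    if preview_done >= 1.0:
        return 0.0          # fully identified in the parafovea: the item can be skipped
    return BASE_FOVEAL_TIME * (1.0 - preview_done)

for preview in (0.0, 0.4, 1.0):
    print(f"preview {preview:.0%} complete -> fixation of {fixation_duration(preview):.0f} ms")
# With 40% of the work done parafoveally the subsequent fixation is about 100 ms
# shorter, of the same order as the preview advantage reported by Henderson et al.
```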
Evidence from studies of eye movements during sentence-reading tasks suggests that the initial fixation on a word is affected by the location of information within that word (Hyönä, Niemi and Underwood, 1989; Underwood, Clews and Everatt, 1990). Information in the sense used here relates to the novelty of the letter sequence within a word. The more novel a letter sequence within a word, the more
distinctive that word is, because of that particular letter sequence. The less novel a letter sequence is, the greater the number of other words that possess that sequence, and the less likely it is that the word can be recognized from that letter sequence. Letter sequences that occur in many other words are considered to be redundant, because processing of them alone will not be sufficient for the particular word to be identified. The finding that initial fixations are affected by where these more informative parts of a word occur leads to the suggestion that processing can move ahead of fixation. This processing results in the identification of some features of the parafoveal display, and triggers a saccade to the informative parts of the word. This suggests a process that is sensitive to the parafovea, which may be used to find distinctive features (informative parts) of to-be-fixated pieces of information to allow them to be processed more easily. A second possibility is that well-known pieces of information can be processed by the parafoveal processes and so foveal processes are directed away from such well-known words or parts of words. There is also evidence that within picture recognition eye movements may be attracted to informative parts of the picture, and that this process is vital for information processing. For example, Mackworth and Morandi (1967) and Loftus and Mackworth (1978) found that subjects quickly fixate on an informative area of a picture when that informative area is defined in terms of subject ratings, or the probability that a particular detail belongs in a particular scene: the less probable, the sooner the eyes land on that piece of information, suggesting parafoveal processes are picking out unusual information within a scene. Loftus (1981) found that with various 50 ms tachistoscopic presentations of a picture, performance on a following recognition test did not improve with the number of presentations, unlike in the case where 100 ms presentations were used. Loftus suggested that this was because the shorter presentation duration was not long enough to allow peripheral scanning, which is necessary to determine where a subsequent fixation should occur. Subsequent fixations were thus at random locations instead of at informative locations. With longer presentations there was enough time to perform this peripheral scanning and determine where an informative part of the picture was located for future foveal analysis. The evidence reviewed here suggests that some information can be extracted from words in the parafovea. It suggests that parafoveal information can be used to program a saccade to the next point of regard. It also suggests that the important features of the information around fixation are chosen by this process as the points to be fixated next. Whether this process is controlled by redundancy of information (whereby well-known pieces of information would be processed parafoveally, and so foveal processes can move on to other areas of information), or by informative pieces of information (whereby unusual/informative areas would be recognized by parafoveal processes, and the analysis from this is used to attract foveal vision toward them) is yet to be decided. It is also possible that both processes occur to some extent. In either case parafoveal processing seems to result in deep analysis of the information dealt with by those processes.
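The notion of redundant versus informative letter sequences can be illustrated with a toy corpus count. The miniature lexicon and the use of trigram frequencies are our own illustrative assumptions; the studies cited above used other measures of where the informative part of a word lies:

```python
from collections import Counter

# A toy illustration of letter-sequence redundancy. The mini-lexicon and the use
# of trigram counts are illustrative choices only.
lexicon = ["verbal", "verdict", "vermin", "verse", "version", "vertigo"]

trigram_counts = Counter(word[i:i + 3] for word in lexicon for i in range(len(word) - 2))

def redundancy(word):
    """Higher counts mean the sequence is shared with more words, and so is less informative."""
    return [(word[i:i + 3], trigram_counts[word[i:i + 3]]) for i in range(len(word) - 2)]

for trigram, count in redundancy("verdict"):
    print(trigram, "occurs in the tally", count, "time(s)")
# The shared opening 'ver' occurs in all six words and is therefore redundant; the
# later trigrams occur only in 'verdict', so in this toy lexicon the end of the
# word is its informative, distinctive part.
```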
Let us return to the point of interest: deciding whether unattended parafoveal information can affect attended foveal information. For parafoveal processing not to interfere with foveal information it would have to be concluded that all processing of the attended item is accomplished before parafoveal processes start, or that parafoveal processes and foveal processes do not interfere. The data presented here suggest that parafoveal processing is fast, and the data of Eriksen et al. (1990) suggest that parafoveal
displays can be processed to a response biasing stage where this bias can interfere with later attentional processes. The arguments of previous sections suggest this stage as the most likely for Stroop interference to occur (see section 2.1, but, as mentioned, Shaffer and LaBerge (1979) suggest this interference can occur sooner). The fast and biasing nature of this parafoveal process then seems a plausible candidate to produce Stroop-like interference in word-processing. This process of orienting attention towards a particular location suggests a process in use in much of what we do in everyday life (in visual recognition and perhaps auditory recognition) which may possess the features of an automatic process, and if this process seeks out unusual, informative parts of a stimulus it suggests an automatic process that is more than the memory retrieval processes suggested by Logan's (1988) model. The actual features of this process may also give us an idea of how to distinguish automaticity from more controlled processes. Thus, if we return to the idea of a continuum of automaticity, we may consider this orientation process as being at, or near, the automatic end of the continuum. In the introduction to this chapter we talked about reading a page of text. If attention is not paid to the meaning of the text then understanding will almost certainly be lost. This leads us to the question of processing near the other end of the continuum, when we attempt to understand complex stimuli which are themselves composed of familiar components. This is the problem of determining the role of attention in language comprehension.
3 FAILING TO INTEGRATE UNATTENDED MESSAGES
To understand the meaning of a sentence it is necessary to integrate the meanings of each of the words. This is to perform a comprehension calculation which selects the most appropriate shade of meaning of each of the words and combines them into an underlying meaning which reflects the relationships between the words. In Chomskian terms, this process involves the recognition of the deep structure of the sentence. The relationship made explicit by this comprehension calculation would include, for example, the assignment of a subject and an object to a verb, the recognition of anaphora, and the recognition of the propositional structure of the sentence. The product of the calculation is an interpretation of what the speaker or writer meant. What we will consider next is whether this process requires attention, or if comprehension can be independent of attention. Early selection theory is quite clear on this question: if attention is required for perception then there should be a comprehension deficit suffered by unattended sentences. Late selection theory makes a less straightforward prediction: if the attended and unattended messages compete for the same postperceptual processes (including storage; see Deutsch and Deutsch, 1967; Norman, 1968), then only one message will be understood. The comprehension calculation requires storage, and Baddeley (1979) has specifically discussed the cognitive resources of the 'working memory' requirements in reading comprehension. To integrate the words at the beginning of any sentence with the words appearing towards the end of it, all of the words must be stored, and if storage is a process that is available only to selected messages then we should see no evidence of the comprehension of
unattended messages. This was the case with Cherry's (1953) listeners, but they were questioned some time after presentation, and so the deficit may have occurred after recognition. To know whether unattended messages can be understood we must find a task in which storage is not required, otherwise late selection theorists are entitled to object that the comprehension deficit is a result of postrecognition competition. This approach satisfies the demands of the late selection theorists, but another aspect raises objections from early selection theorists. In the previous section several experiments were described as demonstrating the occurrence of cognitive effects of unattended messages. In many of these experiments subjects were presented with stimuli to which a single response was required, and the cognitive processing of an unattended word may be inferred from the nature of its effect upon that single response. A word might appear on a screen, for example, and a response key pressed according to whether or not the word was a member of a prespecified category. Kahneman and Treisman (1984) describe such single-trial tasks as being 'selective-set' experiments, in that subjects respond to one of several stimuli that might be presented. In contrast, many of the initial investigations of attention can be described as studies of 'filtering' in that the subject selected between two or more stimuli that were actually presented. Kahneman and Treisman are not satisfied that the conclusions drawn from selective-set experiments may be compared with those from filtering experiments. In addition to differences between selecting between possible versus actual stimuli, they emphasize the complexity of the filtering task in that the organization of a continuous response requires additional processing. The difficulty in organizing a continuous response such as shadowing is unquestionable, but their argument is not entirely satisfactory. If the selective-set experiments remove the load induced by response competition then the only difference between the processing of attended and unattended inputs must be a perceptual difference. Furthermore, several of the 'filtering' experiments have demonstrated cognitive effects of unattended messages: the semantic interference experiments of Lewis (1970), Bradshaw (1974) and Underwood (1976, 1977), for example. In such experiments the subjects were required to organize a response to an attended message, and there was evidence of the processing of unattended meanings. Kahneman and Treisman are dissatisfied with such results because the effects are typically small in magnitude, but they are prepared to admit the effects as demonstrations of the semantic processing of unattended messages. What they do insist is that we cannot conclude that perception does not require attention. We can agree that inattention produces a recognition deficit - there are numerous demonstrations of this deficit from studies of target detection. In order to demonstrate that attention does not affect perception with the single-trial experiments, it would be necessary to demonstrate similar interference effects with focused and divided attention. This is clearly not the case: the focusing of attention induced a different pattern of interference in word-naming and picture-naming experiments (Dallas and Merikle, 1976; Underwood, 1976).
In the following discussion of the attentional demands of comprehension, evidence will be taken from 'filtering' experiments, not because the Kahneman and Treisman argument is irrefutable, but because it is not easy to imagine a suitable experiment. It would require the presentation of an unattended sentence requiring one of a fixed number of responses, and the absence of competition between the attended and unattended messages. The relevant evidence comes from tasks in which subjects make responses to sentences while their attention is diverted.
3.1 Unattended Ambiguous Sentences
To what extent can a listener recognize the underlying meaning of an unattended sentence? Three reports have attempted to answer this question by having the interpretation of an ambiguous sentence affected by an unattended message. The strongest claim comes from Lackner and Garrett (1972), who found that a number of varieties of ambiguity can be resolved. By saying that the ambiguities were resolved in what follows, what is meant is that an ambiguous sentence was interpreted in the suggested direction more often when the unattended message was present than when it was not present. The size of this bias shift is often quite small, less than 5% in some cases. Lackner and Garrett found that an unattended sentence could bias the interpretation of lexical, surface structural and deep structural ambiguity. For lexical ambiguity, the listener might have attended to: 'The plot occupied much of his time that month' in preparation for a paraphrasing response, and at the same time the unattended ear might have been presented with: 'The scheme was very good but they did not like it.' In this case the ambiguity resides in the word 'plot' and is resolved by the word 'scheme' - an alternative would have been to replace 'scheme' with 'soil'. For surface structural ambiguity, an attended sentence could be: 'They are eating apples' and the unattended sentence: 'They are making gloves.' The ambiguity in this famous sentence resides in the question of whether the word 'eating' is a verb attached to the noun 'they' (as suggested by the structure of the unattended sentence), or an adjective attached to the noun 'apples'. To obtain this adjectival interpretation the unattended sentence would have to have the structure of: 'They are evening gloves.' The experiment actually distinguished between particle-preposition ambiguities such as: 'The boy looked over the stone wall' and surface structure or bracketing ambiguities such as: 'Jack left with a dog he found last Saturday' and found successful disambiguation in both cases. Finally, Lackner and Garrett reported that sentences with deep structural ambiguity such as: 'They knew that the shooting of the hunters was dreadful' were interpreted according to the reading suggested by unattended sentences such as: 'Tom said the sportsmen had been slain prematurely.' This last example is the one most relevant to the question of whether comprehension requires attention, for if the 'sportsmen' sentence is to be influential its deep structure must be recognized. In addition, this interpretation must be available to the process that is used in generating a paraphrase of the 'hunters' sentence. The meaning of the unattended sentence was effective in this experiment, clearly suggesting that comprehension proceeds independently of attention. Although this is a temptingly straightforward conclusion, it has not been supported by subsequent research and we cannot accept the suggestion of anything other than lexical analysis of unattended messages. Two further attempts to find disambiguating effects will be mentioned before considering the reasons for the lack of empirical support that they give for Lackner and Garrett's conclusions.
If the Lackner and Garrett (1972) result could be confirmed, then we would have evidence that the underlying meaning of a sentence can be recognized when the listener's attention is elsewhere. This follows from the influence of an unattended meaning, gainable only through the integration of the words in a sentence, upon the interpretation of an ambiguous attended sentence. MacKay (1973) and Newstead and Dennis (1979) attempted to replicate Lackner and Garrett's result, but with mixed fortune. MacKay was able to find effects of occasional unattended words upon the interpretation of lexically ambiguous sentences. Further, he found that ambiguity that depended upon surface structure could be resolved by an unattended phrase with a structure corresponding to one of the interpretations. Surface structure ambiguities were unaffected by the meanings of unattended words, and deep structural ambiguities were unaffected by the underlying meaning or lexical meaning of the words in the unattended message. Newstead and Dennis were even less successful in their experiments. They found no effects upon surface structural ambiguities, and effects upon lexical ambiguities only under specific conditions. These conditions included a long intertrial interval and the use of students rather than housewives as subjects in the experiments. Newstead and Dennis did not examine effects upon deep structural ambiguity. The collective conclusion from these three reports is that lexical ambiguity may be resolved by the presence of an unattended message, but that surface structures and deep structures are unaffected. When more than a word or two appears as the unattended message, then no effects are found, suggesting that the integration of words is a process for which attention is required. Why is the Lackner and Garrett (1972) result so difficult to replicate? One suggestion is that they used an inadequate method of controlling the direction of the listener's attention. They instructed their subjects to listen carefully to the attended sentence, and to paraphrase it immediately afterwards. The usual method of attention control - shadowing - was not used. The experiments reported by MacKay (1973) and Newstead and Dennis (1979) used shadowing, and found restricted effects of unattended messages. Lackner and Garrett were aware of this problem, and included informal tests of their listeners' knowledge of the unattended message. None of them could report the content of these messages, and most were unable to say that they had heard sentences. None of the subjects had noticed that the paraphrased sentences had been ambiguous and so, presumably, they had no reason to collect cues intentionally from the unattended message. The task does seem to have been difficult. Some subjects were rejected from the experiment because they were unable to listen to one sentence while ignoring the other, and an acceptable subject would 'sit with eyes closed and head cocked to one side, one hand pushing against the headphone carrying the message to be paraphrased, and immediately blurt out his paraphrase' (p. 366). This does suggest that they were attending quite carefully to the ambiguous sentence, rather than dividing their attention between messages. In view of the apparent selectivity demanded of listeners in Lackner and Garrett's (1972) experiment, it is difficult to understand the failure of subsequent attempts to find similar effects upon the full range of ambiguous sentences.
One possibility lies with the structure of the sentences used in the three experiments, but none of the reports gives a list of materials and so this must be tentative. However, it is possible that the 'deep structural ambiguities' were formed in different ways in the different experiments, and that the disambiguating sentences
in Lackner and Garrett's experiment were able to influence a pivotal lexical ambiguity. The example quoted in their paper is the famous 'The corrupt police can't stop drinking' and, although this undoubtedly has deep structural ambiguity in that two underlying meanings are represented by one phrase structure, the word 'drinking' can be seen as being critically ambiguous. It is pivotal in the sense that the ambiguity rests upon its interpretation as a verb attached to 'police' or to a missing referent. Perhaps in the other two experiments the deep structural ambiguities were less dependent upon the attachment of a single word. However, the only consistent result from these experiments is that a lexical ambiguity may be resolved by the presence of an unattended message, and that the effect is best seen with an unattended message consisting of a single word. From these studies of linguistic ambiguity comes the single conclusion that a word in the unattended message may influence the interpretation of an ambiguous word in the attended message. All three of the studies found this result, and we must discount the early suggestion that attention is not required for the processing of the underlying meaning of a sentence. This result can be found only under circumstances where the direction of the listener's attention is undetermined. Again using dichotic presentations, Henley (1976) has provided support for the result that lexical ambiguity may be influenced by the content of the unattended message. Responses to ambiguous words were affected by words in the second message that were not only unattended but also presented at an intensity that was below the listener's individual threshold for awareness. The effect upon the interpretation of the homophone was not clear, but there was evidence of lexical processing of the unattended word. This influence was upon the delay in responding to the homophone with an associated word: faster responses were observed when the unattended/subliminal word matched the meaning of the word offered as an associate. Whereas we may be cautious about MacKay's (1973) demonstration of effects upon lexical ambiguity (because of the attention-attracting nature of a single unattended word), no such objection can be raised against Henley's result. Taken together these results are consistent with a wealth of reports of the lexical processing of single unattended words both in dichotic listening (Lewis, 1970; Smith and Groen, 1974) and in selective viewing (Bradshaw, 1974; Dallas and Merikle, 1976; Underwood, 1976). Individual words may gain lexical processing without being attended, but there is no evidence of sequences of words gaining the integration necessary for the recognition of their underlying meaning. The investigations of unattended phrases and sentences that accompany ambiguous attended sentences are one of the few sources of data pertaining to the relationship between attention and comprehension. Single unattended words are analyzed to the level of lexical meaning, but there are few experiments that have looked at the analysis of unattended sentence meanings. Traub and Geffen (1979) used lists of words to demonstrate that category effects in memory search are restricted to attended lists, and this result suggests that attention is necessary for the extraction of features common to the words in a sequence. Although Traub and Geffen used lists rather than sentences, their result is relevant because it shows that interword processing is impaired by inattention. 
The relationship between the words was appreciated and effective only when the list was attended.
3.2 Attention and Comprehension
The conclusion that attention is necessary for comprehension also comes from an investigation of the effects of cumulative context in dichotic listening. One of the better-established results in psychology is that objects and words gain easier responses when they are placed in a familiar context. This context allows the perceiver to anticipate their arrival, and to prepare the response in advance of the stimulus. Tulving and Gold (1963) showed this for the case of a word appearing at the end of a sentence, and Palmer (1975) showed it for a simple line-drawing appearing immediately after a context-setting scene. Is context useful at all times, or is it necessary to attend in order to make use of the features contained in the context? This was the question we asked in an experiment that measured shadowing latencies as a function of attended and unattended context (Underwood, 1977). The only word that was critical was the final word of a sentence, in that this was the only word whose shadowing response was timed. The subjects were not told this of course, and they were also told that the unattended message was a distraction. On some trials the critical final word was preceded by useful context, as with: 'The angler returned the fish to the trout stream.' Whenever the congruent context formed part of the attended message, the unattended message contained a list of unrelated words. On some trials the contextually congruent words were replaced by unrelated words, as with: 'Antelope cover income hat collect stream.' The amount of context replaced by unrelated words varied, and on other trials the listeners heard a few unrelated words, followed by a part of the congruent context and then the critical word. The shadowing response to the final word was faster when it was accompanied by context, and the more context there was, the greater was this facilitation effect. None of this was particularly surprising - it confirmed the result obtained by Tulving and Gold (1963) and many others. A more interesting result came from trials in which the context was presented in the unattended message. Listeners would then shadow unrelated words, with a variable amount of unattended context leading up to the attended critical word. In these cases there was still a shadowing advantage, but it was constant in size, regardless of the number of contextually congruent words. So, the benefits of attended context accumulated, but unattended context had a constant effect. Why should an unattended context have a nonaccumulating effect upon shadowing latency? There are several possibilities consistent with the general picture of unattended lexical processing that is starting to emerge. Sentence context effects can accumulate only if the listener can calculate the relationships between successive words, and generate a sentence theme which is, so to speak, greater than the sum of the component words. This sentence theme is, of course, the underlying meaning of the whole sentence, and lexical processing alone is insufficient if the comprehension calculation is to be successful. If the meaning of each unattended word can be recognized, but not the relationships between those words, then we should expect that the effects of unattended context should be restricted to the last few words heard before the critical word. These words could be effective as associative primes, through the process of spreading activation suggested by Meyer and Schvaneveldt (1971), Collins and Loftus (1975) and others. Alternatively, the
most recent unattended words could be effective as predictors, by the same constructive calculation process that is able to use an accumulating attended context. By either of these processes the unattended context could have only a constant effect. Associative priming would be minimal from the words early in an unattended sentence, because their lexical activation would have dissipated by the time the critical word was heard; and the constructive calculation process requires the resources of a working memory system that are not available without attention. All unattended words would gain lexical processing, but only those immediately prior to the critical word would have any effect.

To make use of context, we need to attend; to make use of the categorical relationships between the words in a list, we need to attend; and, a more disputable result, if we are to make use of the relationships between words when resolving the ambiguity of a sentence, we again need to attend. These are the conclusions from investigations of the effects of unattended context upon shadowing latencies (Underwood, 1977), of unattended categories upon memory probes (Traub and Geffen, 1979), and of unattended words upon the resolution of ambiguity (Lackner and Garrett, 1972; MacKay, 1973; Newstead and Dennis, 1979). Taken together they suggest that we cannot calculate the relationships between words unless we attend to those words and to their alternative relationships. Ambiguous sentences provide one of the best demonstrations of the need to attend when attempting to understand. Words may be recognized at the level of lexical processing, but the system that integrates individual words can be accessed only when attention is directed towards their relationships. Attention to the words may be insufficient, of course, but it is necessary. In addition to attending to the meaning of each word, successful comprehension will depend upon the selection of the appropriate referent of each word. If the listener attends without attempting to integrate, then a relatively shallow level of processing is achieved, a level perhaps equivalent to Craik and Lockhart's (1972) type I or maintenance processing. Successful retention is more easily achieved by type II or elaboration processing, and this is the deeper processing associated with attention being directed to the relationships between the words. The success of elaborative processing may depend in part upon the new associations that are created between the incoming stimulus and existing knowledge structures, and in part upon the creation of memories of the cognitive operations themselves.

A final demonstration in support of the general conclusion comes from a slightly different background. Kleiman's (1975) experiments were designed to determine whether speech recoding was necessary before, during or after lexical access, but his approach and his conclusions are related to the question of the role of attention in comprehension. Subjects made various judgements about words appearing on a screen, either while they were shadowing lists of digits or while waiting quietly, and Kleiman observed the potentially disruptive effects of shadowing upon the speed of the judgements. For example, a graphemic judgement about a pair of words ('heard'/'beard' is judged true) suffered a 125 ms decrement due to shadowing, and a phonemic judgement ('heard'/'beard' is now judged false) suffered a 372 ms decrement.
This difference indicates that the processes required for shadowing are more disruptive to phonemic comparisons than to graphemic comparisons. In one of Kleiman's experiments subjects judged whether a target word was the category
label for any of the words in a sentence. For example, the pairing of the word 'Games' with the sentence 'Everyone at home played monopoly' gets a true response. Both word and sentence appeared on the screen at the same time. The category decision suffered a 78 ms decrement if the subjects were shadowing while judging. The critical judgement, from the point of view of our general conclusion, was about the semantic acceptability of a single sentence. Subjects judged whether a list of five words, written in a particular order, formed a semantically acceptable sentence. For example, the sentence 'Noisy parties disturb sleeping neighbors' gets a true response, while 'Pizzas have been eating Jerry' is false. The sentence acceptability judgement suffered the greatest decrement of all, with an additional 394 ms being necessary if the subject was shadowing while reading. So, while judgements about the meanings of individual words could be performed without any great cost, subjects were heavily penalized by the requirement to shadow while integrating the meanings of those words. Given that shadowing is known to require attention, Kleiman could have described the experiment as a demonstration of category judgements during divided attention. Whereas individual words could be processed under conditions of divided attention, the integration of those words could not. Kleiman's experiment suggests that divided attention impairs word integration but not word recognition.

Although the result is consistent with the conclusions drawn from studies of focused attention during dichotic listening, we cannot be sure that the experiments are completely comparable. Kleiman's subjects divided their attention between spoken digits (to be shadowed) and printed words seen on a screen (to be judged), and were required to respond to both messages. The other experiments which have allowed us to draw the conclusion of 'recognition without integration' differ in two respects: first, they required the subjects to focus their attention upon one of the messages, the second message being unattended; and second, they used two messages within the same modality. The division of attention between two modalities might be expected to involve different processes from those required for a similar task which requires just one of those modalities. Indeed, several experiments that have used bisensory presentations appear to have found evidence of effective sharing of attention between messages (Allport et al., 1972; Hirst et al., 1980; Shaffer, 1975; but see Broadbent, 1982, for some doubts about this evidence). It might be argued that these experiments have bypassed the processing bottleneck during input by using two input modalities. If some processes compete for common resources while other processes do not, then we are moving our model in the direction of a modular description of cognitive processing.
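The contrast between cumulative attended context and the constant benefit of unattended context can be made concrete with a toy simulation. The sketch below is only our illustration, not a model proposed in the experiments just reviewed: it assumes that each unattended word leaves a lexical activation trace that decays exponentially with every intervening word, whereas attended words are integrated into a working-memory theme whose predictive value grows with each congruent word. The parameter names and values (DECAY, PRIME_WEIGHT) are arbitrary assumptions.

import math

DECAY = 0.5          # assumed decay of lexical activation per intervening word
PRIME_WEIGHT = 1.0   # assumed size of the associative priming contribution

def priming_from_unattended(context_words):
    """Associative priming from unattended congruent context: each word
    leaves an activation trace that decays, so only the last word or two
    still contribute when the critical word arrives."""
    n = len(context_words)
    boost = 0.0
    for position in range(n):
        lag = n - position            # words intervening before the target
        boost += PRIME_WEIGHT * math.exp(-DECAY * lag)
    return boost

def facilitation_from_attended(context_words):
    """Attended congruent context is integrated into a sentence theme,
    so every additional word sharpens the prediction of the target."""
    return float(len(context_words))

for n in (1, 2, 4, 6):
    words = ["context"] * n
    print(n,
          round(facilitation_from_attended(words), 2),   # grows with context
          round(priming_from_unattended(words), 2))      # levels off quickly

With these assumptions the unattended benefit approaches an asymptote after a word or two, mimicking the constant effect reported by Underwood (1977), while the attended benefit grows with the amount of congruent context.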
INATTENTION AND AUTOMATICITY: SOME CONCLUSIONS

The evidence reviewed here can be summarized as suggesting three main conclusions. First, individual words are recognized at the level of analysis of their
lexical meaning. This evidence comes from observations of the effects of unattended spoken words upon the shadowing latencies to associated attended words in dichotic listening tasks, and from observations of the effects of printed unattended words upon the responses to competing visual stimuli in Stroop and Stroop-like interference tasks. These unattended words are processed to a cognitive stage where they can influence other ongoing processes.

Second, the words in the unattended message, although recognized themselves, cannot be interrelated. This evidence comes from the failure of listeners to recognize the deep structure of an unattended sentence, to recognize the common category of words in an unattended list, and to use an unattended sentence as a contextual predictor of the attended word to come. Unattended words are not integrated with one another or with the attended message. From these two conclusions it appears that unattended messages are not processed beyond the level of lexical recognition, and their effects probably manifest themselves through an automatic process of associative priming.

The third conclusion is that the effect of inattention is not only to restrict the integration of individual words: inattention also moderates the perceptibility and effectiveness of those words. This evidence comes from comparisons of divided attention and focused attention instructions, in studies that ask listeners to detect the presence of target words in dichotic messages, and in studies of the effectiveness of distracting printed words. This process of moderation, which Treisman termed attenuation, does not prevent the semantic processing of individual unattended words, but it can, in some cases, change the direction of interference upon the processing of the current attended message.

The model suggested by these conclusions is one that emphasizes the role of attention in the processing of novel sequences. Whenever the input is familiar and well learned, in the sense that it has an invariant response associated with it, or in the sense that it has an invariant meaning, then that input may be processed without attention. The invariant response might be part of the shoe-lace tying operation in response to a particular state of the shoe-laces: having pulled on a shoe and taken hold of each lace-end, the next part of the operation is invariant and does not require attention. Using another terminology, it is a constant environmental calling-pattern to which a specific condition → action rule can be applied. In exactly the same way a drop in temperature is a calling-pattern to a thermostat that is set to operate a specific action when a specific condition is detected. Attention is not required for the detection of the condition of the shoe-laces and for the operation of the next action, in the same way that selective attention is not a part of the equipment in the thermostat. It would be wasteful of our cognitive resources if invariant, regularly performed actions required the same selectivity that we reserve for solving crossword puzzles or deciding what to say to the bank manager. If an invariant environmental pattern requires an invariant response, then a little practice will ensure performance without attention. What holds for shoe-laces also holds for word recognition. Individual words are invariant calling-patterns in that they call for specific condition → action rules.
In this case the responses are the cognitive actions that result in a sensory pattern activating a lexical representation. The lexical meaning of a word is recognized whether or not attention is available for processing. As we have seen, inattention does have a detrimental effect, and so we must conclude that attention can vary the strength of the sensory signal admitted to the lexicon.
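The calling-pattern idea can be caricatured as a production rule, with attention entering only as a gain on the input signal. The sketch below is our illustration rather than the authors' model; the attenuation and threshold values, and the lexicon entries, are arbitrary assumptions.

ATTENUATION = 0.6    # assumed gain applied to unattended input
THRESHOLD = 0.5      # assumed activation needed for the rule to fire

LEXICON = {          # invariant calling-pattern -> stored cognitive action
    "stream": "activate meaning: small river",
    "lace-ends crossed": "pull free end through the loop",
}

def process(pattern, signal_strength, attended):
    """Fire the stored condition -> action rule whenever the (possibly
    attenuated) signal is strong enough; no selective attention is needed
    to look the rule up, because the pattern-action pairing is invariant."""
    gain = 1.0 if attended else ATTENUATION
    if pattern in LEXICON and signal_strength * gain >= THRESHOLD:
        return LEXICON[pattern]
    return None

print(process("stream", 1.0, attended=False))   # fires: recognition despite inattention
print(process("stream", 0.8, attended=False))   # attenuated below threshold: None

The first call shows lexical access surviving inattention; the second shows how attenuation can push a weaker unattended signal below threshold, which is the sense in which inattention moderates the effectiveness of a word without preventing its recognition.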
When a word is presented it accesses the internal lexicon, but if this follows the rule that invariant signals are processed without attention, how should we deal with the problem of ambiguous words? A word such as 'ball' or 'mint' has more than one meaning, and so there is no single relationship between the environmental calling-pattern and the required cognitive response. This is a problem because we are concluding that attention will not be available to select the appropriate meaning of the homograph. There is, of course, a considerable literature concerning the recognition of words with multiple meanings, with a consensus view being that all meanings are accessed (Simpson, 1984). The context of presentation does influence the interpretation given to a word, as do the frequencies of the alternative meanings, but the evidence points to an ambiguous word activating all of its possible interpretations.

A particularly strong demonstration of this nonselective access of lexically ambiguous words comes from a divided attention experiment reported by Swinney (1979). While subjects listened to prose, in preparation for a comprehension test, they also watched a screen in preparation for making a lexical decision response to a letter-string. The letter-string was sometimes an associate of a word heard at the same time, and when this happened the decision was facilitated. Critically, this facilitation effect was observed with ambiguous spoken words, and both meanings were helpful regardless of which one had been suggested by the prior context. For example, at the same time as hearing the word 'bugs' in: '... The man was not surprised when he found several spiders, roaches and other bugs in the corner of the room...' the letter-string 'ant' might be presented, and this would gain a faster lexical decision than a word such as 'sew'. The nonsuggested meaning of 'bugs' was also accessed, however, because the letter-string 'spy' gained a faster response than the neutral 'sew'.

The model requires that an invariant stimulus be processed without attention, but the variable meanings of words pose no problem here, because it is possible to demonstrate that their alternative meanings are processed nonselectively. This implies that, although words cannot be considered as invariant stimuli, their meanings can be. A word is a calling-pattern to each of its invariant meanings, with the strength of the resulting activation depending upon stimulus frequency. The meanings of individual words are invariant calling-patterns, and are admitted to the lexicon without the perceiver's attention. They call for very specific cognitive actions and do not require the generation of a new algorithm to perform these actions. This is performance without consideration of the match between intention and action, unlike the recognition of novel stimuli such as the novel combination of words in this sentence. Recognizing the underlying meaning of a sentence is a process that requires attention, for the simple reason that the recombination of word meanings requires the selection of referents. In Swinney's sentence printed above, to take an example, the reader must decide who was not surprised, who did the finding, where the finding was done, what is the relationship between a room and a corner, what is the common feature of spiders, roaches and bugs, and so on. This amounts to a propositional analysis of the sentence.
It requires a reconstruction of the underlying meaning which involves the selective attachment of each word to the other words within the proposition. This is a process of selection because sentences do not often share the same propositional structure. The process will be driven by a sentence-processing routine which will
produce a unique computation. If sentences did share a regular structure then they would possess an invariant feature, and propositional attachment would not require attention. The comprehension calculation would then only require selective processing for the determination of relationships between sentences. So long as some feature of the text does not always produce the same output from this comprehension calculation, the model suggests that it is to this feature that the reader must attend.

I am occasionally aware of my eyes arriving sleepily at the bottom of a page of text, with the realization that I have not been attending to the underlying meanings intended by the writer. As I re-read I have a feeling of familiarity for the words, and even for the positions of the words on the page, and the model attributes this to their original, preattentive recognition. To understand the sentences, and to compute the relationships of the sentences into a schema of the text, we need to attend selectively to their specific and unique underlying meanings.
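The division of labour that the model proposes can be summed up in a final sketch, again ours rather than anything implemented by the authors: lexical access returns every stored meaning of a word, weighted by an assumed frequency, while binding those meanings into a proposition is a separate step that is attempted only when attention is available. The lexicon entries and weights are invented for illustration.

LEXICON = {                       # assumed entries and meaning frequencies
    "bugs": {"insects": 0.7, "listening devices": 0.3},
    "ant":  {"insect": 1.0},
}

def lexical_access(word):
    """Automatic step: every stored meaning is activated, weighted by its
    frequency, regardless of context or attention (cf. Swinney, 1979)."""
    return dict(LEXICON.get(word, {}))

def integrate(words, attention_available):
    """Controlled step: selecting one referent per word and binding the
    selections into a proposition is attempted only with attention."""
    if not attention_available:
        return None                          # words recognized but not interrelated
    proposition = []
    for word in words:
        meanings = lexical_access(word)
        proposition.append(max(meanings, key=meanings.get))
    return proposition

print(lexical_access("bugs"))                                  # both meanings active
print(integrate(["bugs", "ant"], attention_available=False))   # None: no integration
print(integrate(["bugs", "ant"], attention_available=True))    # one referent per word

On this caricature, Swinney's listeners show priming from both meanings of 'bugs' because the automatic step is nonselective; selecting one referent per word, and hence understanding the sentence, belongs to the controlled, attention-demanding step.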
REFERENCES

Adams, J. A. (1976). Issues for a closed-loop theory of motor learning. In G. E. Stelmach (Ed.), Motor Control: Issues and Trends (pp. 87-107). London: Academic Press.
Allport, D. A. (1977). On knowing the meanings of words we are unable to report: The effects of visual masking. In S. Dornic (Ed.), Attention and Performance VI (pp. 505-533). Hillsdale, NJ: Erlbaum.
Allport, D. A., Antonis, B. and Reynolds, P. (1972). On the division of attention: A disproof of the single channel hypothesis. Quarterly Journal of Experimental Psychology, 24, 225-235.
Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89, 369-406.
Anderson, J. R. (1983). A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior, 22, 261-295.
Baddeley, A. D. (1979). Working memory and reading. In P. A. Kolers, M. E. Wrolsted and H. Bouma (Eds), Processing of Visible Language, vol. 1 (pp. 355-370). New York: Plenum.
Balota, D. A. and Chumbley, J. I. (1984). Are lexical decisions a good measure of lexical access? The role of word frequency in the neglected decision stage. Journal of Experimental Psychology: Human Perception and Performance, 10, 340-357.
Becker, C. A. (1976). Allocation of attention during visual word recognition. Journal of Experimental Psychology: Human Perception and Performance, 2, 556-566.
Bradshaw, J. M. (1974). Peripherally presented and unreported words may bias the perceived meaning of centrally fixated homograph. Journal of Experimental Psychology, 103, 1200-1202.
Briggs, P. and Underwood, G. (1982). Phonological coding in good and poor readers. Journal of Experimental Child Psychology, 34, 93-112.
Broadbent, D. E. (1971). Decision and Stress. London: Academic Press.
Broadbent, D. E. (1982). Task combination and selective intake of information. Acta Psychologica, 50, 253-290.
Bryden, M. P. (1972). Perceptual strategies, attention, and memory in dichotic listening. Unpublished report, University of Waterloo.
Carpenter, P. A. and Just, M. (1983). What your eyes do while your mind is reading. In K. Rayner (Ed.), Eye Movements in Reading: Perceptual and Language Processes (pp. 275-307). New York: Academic Press.
Carr, T. H., McCauley, C., Sperber, R. D. and Parmalee, C. M. (1982). Words, pictures, and priming: On semantic activation, conscious identification, and the automaticity of information processing. Journal of Experimental Psychology: Human Perception and Performance, 8, 757-777.
Cherry, C. (1953). Some experiments on the recognition of speech with one and two ears. Journal of the Acoustical Society of America, 23, 915-919.
Cohen, J. D., Dunbar, K. and McClelland, J. L. (1990). On the control of automatic processes: A parallel distributed processing account of the Stroop effect. Psychological Review, 97, 332-361.
Collins, A. M. and Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82, 407-428.
Cooper, R. M. (1974). The control of eye fixations by the meaning of spoken language. Cognitive Psychology, 6, 84-107.
Craik, F. I. M. and Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671-684.
Dallas, M. and Merikle, P. M. (1976). Semantic processing of non-attended visual information. Canadian Journal of Psychology, 30, 15-21.
DeGroot, A. M. B., Thomassen, A. J. W. M. and Hudson, P. T. W. (1982). Association facilitation of word recognition as measured from a neutral prime. Memory and Cognition, 10, 358-370.
Deutsch, J. A. and Deutsch, D. (1963). Attention: Some theoretical considerations. Psychological Review, 70, 80-90.
Deutsch, J. A. and Deutsch, D. (1967). Comments on 'Selective attention: perception or response?' Quarterly Journal of Experimental Psychology, 19, 362-363.
Dunbar, K. and MacLeod, C. M. (1984). A horse race of a different colour: Stroop interference patterns with transformed words. Journal of Experimental Psychology: Human Perception and Performance, 10, 622-639.
Duncan, J. (1980). The locus of interference in the perception of simultaneous stimuli. Psychological Review, 87, 272-300.
Dyer, F. N. (1973). The Stroop phenomenon and its use in the study of perceptual, cognitive and response processes. Memory and Cognition, 1, 106-120.
Eriksen, B. A. and Eriksen, C. W. (1974). Effects of noise letters upon the identification of a target letter in a non-search task. Perception and Psychophysics, 16, 143-149.
Eriksen, C. W. and Schultz, D. W. (1979). Information processing in visual search: A continuous flow model and experimental results. Perception and Psychophysics, 25, 249-263.
Eriksen, C. W., Webb, J. M. and Fournier, L. R. (1990). How much processing do nonattended stimuli receive? Apparently very little, but... Perception and Psychophysics, 47, 477-488.
Fisher, D. F. and Shebilske, W. L. (1985). There is more than meets the eye than the eye-mind assumption. In R. Groner, G. McConkie and C. Menz (Eds), Eye Movements and Human Information Processing (pp. 149-157). Amsterdam: North-Holland.
Fodor, J. A. (1983). The Modularity of Mind. Cambridge, MA: MIT Press.
Forster, K. I. (1981). Priming and the effects of sentence and lexical context on naming time: Evidence for autonomous lexical processing. Quarterly Journal of Experimental Psychology, 33A, 465-495.
Francolini, C. M. and Egeth, H. (1980). On the non-automaticity of 'automatic' activation: Evidence of selective seeing. Perception and Psychophysics, 27, 331-342.
Gatti, S. V. and Egeth, H. A. (1978). Failure of spatial selectivity in vision. Bulletin of the Psychonomic Society, 11, 181-184.
Glaser, M. O. and Glaser, W. R. (1982). Time course analysis of the Stroop phenomenon. Journal of Experimental Psychology: Human Perception and Performance, 8, 875-894.
Goolkasian, P. (1981). Retinal location and its effect on the processing of target and distracter information. Journal of Experimental Psychology: Human Perception and Performance, 7, 1247-1257.
Greenwald, A. G. (1972). Evidence of both perceptual filtering and response suppression for rejected messages in selective attention. Journal of Experimental Psychology, 94, 58-67.
Guenther, R. K., Klatzky, R. L. and Putnam, W. (1980). Commonalities and differences in semantic decisions about pictures and words. Journal of Verbal Learning and Verbal Behaviour, 19, 54-74.
Hasher, L. and Zacks, R. T. (1979). Automatic and effortful processes in memory. Journal of Experimental Psychology: General, 108, 356-388.
Henderson, J. M., Pollatsek, A. and Rayner, K. (1989). Covert visual attention and parafoveal information use during object identification. Perception and Psychophysics, 45, 196-208.
Henley, S. H. A. (1976). Responses to homophones as a function of cue words on the unattended channel. British Journal of Psychology, 67, 529-536.
Hintzman, D. L. (1986). 'Schema abstraction' in a multiple-trace model. Psychological Review, 93, 411-428.
Hirst, W., Spelke, E. S., Reaves, C. C., Caharack, G. and Neisser, U. (1980). Dividing attention without alternation or automaticity. Journal of Experimental Psychology: General, 109, 98-117.
Hyönä, J., Niemi, P. and Underwood, G. (1989). Reading long words embedded in sentences: Informativeness of word parts affects eye movements. Journal of Experimental Psychology: Human Perception and Performance, 15, 142-152.
Jacoby, L. L. and Brooks, L. R. (1984). Nonanalytic cognition: Memory, perception, and concept learning. In G. H. Bower (Ed.), The Psychology of Learning and Motivation (pp. 1-47). New York: Academic Press.
James, W. (1890). The Principles of Psychology. New York: Holt.
Jastrzembski, J. E. (1981). Multiple meanings, number of related meanings, frequency of occurrence and the lexicon. Cognitive Psychology, 13, 278-305.
Johnston, W. A. and Wilson, J. (1980). Perceptual processing of non-targets in an attention task. Memory and Cognition, 8, 372-377.
Jonides, J. (1981). Voluntary versus automatic control over the mind's eye's movement. In J. Long and A. Baddeley (Eds), Attention and Performance IX (pp. 187-203). Hillsdale, NJ: Erlbaum.
Jonides, J., Naveh-Benjamin, M. and Palmer, J. (1985). Assessing automaticity. Acta Psychologica, 60, 157-171.
Kahneman, D. (1973). Attention and Effort. Englewood Cliffs, NJ: Prentice-Hall.
Kahneman, D. and Chajzyck, D. (1983). Tests of the automaticity of reading: Dilution of Stroop effects by colour-irrelevant stimuli. Journal of Experimental Psychology: Human Perception and Performance, 9, 497-509.
Kahneman, D. and Henik, A. (1981). Perceptual organisation and attention. In M. Kubovy and J. Pomerantz (Eds), Perceptual Organisation (pp. 181-211). Hillsdale, NJ: Erlbaum.
Kahneman, D. and Treisman, A. M. (1984). Changing views of attention and automaticity. In R. Parasuraman and R. Davies (Eds), Varieties of Attention (pp. 29-61). New York: Academic Press.
Keele, S. W. (1973). Attention and Human Performance. Pacific Palisades, CA: Goodyear.
Keele, S. W. and Summers, J. (1976). The structure of motor programs. In G. E. Stelmach (Ed.), Motor Control: Issues and Trends (pp. 109-142). London: Academic Press.
Kellas, G., Ferraro, F. R. and Simpson, G. B. (1988). Lexical ambiguity and the time-course of attentional allocation in word recognition. Journal of Experimental Psychology: Human Perception and Performance, 14, 601-609.
Kennedy, A. (1983). On looking into space. In K. Rayner (Ed.), Eye Movements in Reading: Perceptual and Language Processes (pp. 237-251). New York: Academic Press.
Kidd, G. R. and Greenwald, A. G. (1988). Attention, rehearsal and memory for serial order. American Journal of Psychology, 101, 259-279.
Kleiman, G. M. (1975). Speech recoding in reading. Journal of Verbal Learning and Verbal Behaviour, 14, 323-339.
LaBerge, D. (1981). Automatic information processing: A review. In J. Long and A. Baddeley (Eds), Attention and Performance IX (pp. 173-186). Hillsdale, NJ: Erlbaum.
LaBerge, D. and Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293-323.
Lackner, J. R. and Garrett, M. F. (1972). Resolving ambiguity: Effects of biasing context in the unattended ear. Cognition, 1, 359-372.
Ladefoged, P., Silverstein, R. and Papcun, G. (1973). Interruptability of speech. Journal of the Acoustical Society of America, 54, 1105-1108.
La Heij, W., Dirkx, J. and Kramer, P. (1990). Categorical interference and associative priming in picture naming. British Journal of Psychology, 81, 511-525.
Levelt, W. J. M. (1983). Monitoring and self-repair in speech. Cognition, 14, 41-104.
Lewis, J. L. (1970). Semantic processing of unattended messages using dichotic listening. Journal of Experimental Psychology, 85, 225-228.
Loftus, G. R. (1981). Tachistoscopic simulations of eye fixations on pictures. Journal of Experimental Psychology: Human Learning and Memory, 5, 369-376.
Loftus, G. R. and Mackworth, N. H. (1978). Cognitive determinants of fixation location during picture viewing. Journal of Experimental Psychology: Human Perception and Performance, 4, 565-572.
Logan, G. D. (1979). On the use of a concurrent memory load to measure attention and automaticity. Journal of Experimental Psychology: Human Perception and Performance, 5, 189-207.
Logan, G. D. (1982). On the ability to inhibit complex movements: A stop-signal study of typewriting. Journal of Experimental Psychology: Human Perception and Performance, 8, 778-792.
Logan, G. D. (1985). Skill and automaticity: Relations, implications and future directions. Canadian Journal of Psychology, 39, 367-386.
Logan, G. D. (1988). Toward an instance theory of automatisation. Psychological Review, 95, 492-527.
Lupker, S. J. and Katz, A. N. (1981). Input, decision and response factors in picture-word interference. Journal of Experimental Psychology: Human Learning and Memory, 7, 269-282.
MacKay, D. G. (1973). Aspects of the theory of comprehension, memory and attention. Quarterly Journal of Experimental Psychology, 25, 22-40.
MacKay, D. G. (1982). The problem of flexibility, fluency and speed-accuracy trade-off in skilled behaviour. Psychological Review, 89, 483-506.
Mackworth, N. H. and Morandi, A. J. (1967). The gaze selects information details within pictures. Perception and Psychophysics, 2, 547-552.
MacLeod, C. M. and Dunbar, K. (1988). Training and Stroop-like interference: Evidence for a continuum of automaticity. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 126-135.
McClelland, J. L. and O'Regan, J. K. (1981). Expectations increase the benefit derived from parafoveal visual information in reading words aloud. Journal of Experimental Psychology: Human Perception and Performance, 7, 634-644.
McClelland, J. L. and Rumelhart, D. E. (1981). An interactive-activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review, 88, 375-407.
McConkie, G. W. and Rayner, K. (1975). The span of the effective stimulus during a fixation in reading. Perception and Psychophysics, 17, 578-586.
McLeod, P. (1977). A dual task response modality effect: Support for multiprocessor models of attention. Quarterly Journal of Experimental Psychology, 29, 651-667.
Meyer, D. E. and Schvaneveldt, R. W. (1971). Facilitation in recognising pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90, 227-234.
Minsky, M. (1980). K-lines: A theory of memory. Cognitive Science, 4, 117-133.
Moray, N. (1959). Attention in dichotic listening: Affective cues and the influence of instructions. Quarterly Journal of Experimental Psychology, 9, 56-60.
Morrison, R. E. (1984). Manipulation of stimulus onset delay in reading: Evidence for parallel programming of saccades. Journal of Experimental Psychology: Human Perception and Performance, 10, 667-682.
Navon, D. and Gopher, D. (1979). On the economy of the human processing system. Psychological Review, 86, 214-255.
Neely, J. H. (1977). Semantic priming and retrieval from lexical memory: Roles of inhibitionless spreading activation and limited capacity attention. Journal of Experimental Psychology: General, 106, 226-254.
Neisser, U., Hirst, W. and Spelke, E. S. (1981). Limited capacity theories and the notion of automaticity: Reply to Lucas and Bub. Journal of Experimental Psychology: General, 110, 499-500.
Nelson, W. W. and Loftus, G. R. (1980). The functional visual field during picture viewing. Journal of Experimental Psychology: Human Learning and Memory, 6, 391-399.
Neumann, O. (1984). Automatic processing: A review of recent findings and a plea for an old theory. In W. Prinz and A. F. Sanders (Eds), Cognition and Motor Processes (pp. 255-293). Berlin: Springer.
Newstead, S. E. and Dennis, I. (1979). Lexical and grammatical processing of unshadowed messages: A re-examination of the MacKay effect. Quarterly Journal of Experimental Psychology, 31, 477-488.
Norman, D. A. (1968). Toward a theory of memory and attention. Psychological Review, 75, 522-536.
Norman, D. A. (1969). Memory while shadowing. Quarterly Journal of Experimental Psychology, 21, 85-93.
O'Regan, J. K. (1979). Saccade size in reading: Evidence for the linguistic control hypothesis. Perception and Psychophysics, 25, 501-509.
Paap, K. R. and Newsome, S. L. (1981). A perceptual-confusion account of the WSE in the target search paradigm. Perception and Psychophysics, 27, 444-456.
Paap, K. R. and Ogden, W. C. (1981). Letter encoding is an obligatory but capacity-demanding operation. Journal of Experimental Psychology: Human Perception and Performance, 7, 518-527.
Palmer, S. E. (1975). The effects of contextual scenes on the identification of objects. Memory and Cognition, 3, 519-526.
Pollatsek, A. and Rayner, K. (1982). Eye movement control in reading: The role of word boundaries. Journal of Experimental Psychology: Human Perception and Performance, 8, 817-833.
Posner, M. I. (1978). Chronometric Explorations of Mind. Hillsdale, NJ: Erlbaum.
Posner, M. I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32, 3-26.
Posner, M. I., Cohen, Y. and Rafal, R. D. (1982). Neural system control of spatial orienting. Philosophical Transactions of the Royal Society of London, B298, 187-198.
Posner, M. I. and Snyder, C. R. R. (1975). Attention and cognitive control. In R. L. Solso (Ed.), Information Processing and Cognition: The Loyola Symposium (pp. 55-85). Hillsdale, NJ: Erlbaum.
Rabbitt, P. M. A. (1966). Errors and error correction in choice response tasks. Journal of Experimental Psychology, 71, 264-272.
Rayner, K. (1977). Visual attention in reading: Eye movements reflect cognitive processes. Memory and Cognition, 4, 443-448.
Rayner, K., Balota, D. A. and Pollatsek, A. (1986). Against parafoveal semantic pre-processing during eye fixations in reading. Canadian Journal of Psychology, 41, 211-236.
Rayner, K. and Bertera, J. H. (1979). Reading without a fovea. Science, 206, 468-469.
Rayner, K. and Pollatsek, A. (1987). Eye movements in reading: A tutorial review. In M. Coltheart (Ed.), Attention and Performance XII: The Psychology of Reading (pp. 327-362). London: LEA.
Reason, J. (1979). Actions not as planned: The price of automatisation. In G. Underwood and R. Stevens (Eds), Aspects of Consciousness, vol. 1 (pp. 67-89). London: Academic Press.
Reed, G. (1972). The Psychology of Anomalous Experience. London: Hutchinson.
Rosinski, R. R., Golinkoff, R. M. and Kukish, K. S. (1975). Automatic semantic processing in a picture-word interference task. Child Development, 46, 247-253.
Rubenstein, H., Garfield, L. and Millikan, J. A. (1970). Homographic entries in the mental lexicon. Journal of Verbal Learning and Verbal Behaviour, 9, 487-494.
Schiller, P. H. (1966). Developmental study of colour-word interference. Journal of Experimental Psychology, 72, 105-108.
Schneider, W. (1985). Toward a model of attention and the development of automatic processing. In M. I. Posner and O. S. Marin (Eds), Attention and Performance XI (pp. 475-492). Hillsdale, NJ: Erlbaum.
Schneider, W. and Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search and attention. Psychological Review, 84, 1-66.
Seidenberg, M. S., Waters, G. S., Sanders, M. and Langer, P. (1984). Pre- and post-lexical loci of contextual effects on word recognition. Memory and Cognition, 12, 315-328.
Shaffer, L. H. (1975). Multiple attention in continuous verbal tasks. In P. M. A. Rabbitt and S. Dornic (Eds), Attention and Performance V (pp. 157-167). London: Academic Press.
Shaffer, W. O. and LaBerge, D. (1979). Automatic semantic processing of unattended words. Journal of Verbal Learning and Verbal Behaviour, 18, 413-426.
Shepherd, M. and Müller, H. J. (1989). Movement versus focusing of visual attention. Perception and Psychophysics, 46, 146-154.
Shiffrin, R. M. (1988). Attention. In R. C. Atkinson, R. J. Herrnstein, G. Lindzey and R. D. Luce (Eds), Steven's Handbook of Experimental Psychology, vol. 2: Learning and Cognition (pp. 739-811). New York: Wiley.
Shiffrin, R. M. and Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84, 127-190.
Shulman, G. L. (1990). Relating attention to visual mechanisms. Perception and Psychophysics, 47, 199-203.
Simpson, G. B. (1984). Lexical ambiguity and its role in models of word recognition. Psychological Bulletin, 96, 316-340.
Smith, M. C. and Groen, M. (1974). Evidence for semantic analysis of unattended verbal items. Journal of Experimental Psychology, 102, 595-603.
Spelke, E. S., Hirst, W. C. and Neisser, U. (1976). Skills of divided attention. Cognition, 4, 215-230.
Stanovich, K. E. and West, R. F. (1981). The effects of sentence context on ongoing word recognition: Tests of a two-process theory. Journal of Experimental Psychology: Human Perception and Performance, 7, 658-672.
Stanovich, K. E. and West, R. F. (1983a). On priming by a sentence context. Journal of Experimental Psychology: General, 112, 1-36.
Stanovich, K. E. and West, R. F. (1983b). The generalizability of context effects on word recognition: A reconsideration of the roles of parafoveal priming and sentence context. Memory and Cognition, 5, 84-89.
Swinney, D. A. (1979). Lexical access during sentence comprehension: (Re)consideration of context effects. Journal of Verbal Learning and Verbal Behavior, 18, 645-659.
Traub, E. and Geffen, G. (1979). Phonemic and category encoding of unattended words in dichotic listening. Memory and Cognition, 7, 56-65.
Treisman, A. M. (1960). Contextual cues in selective listening. Quarterly Journal of Experimental Psychology, 12, 242-248.
Treisman, A. M. (1969). Strategies and models of selective attention. Psychological Review, 76, 282-299.
Treisman, A. M. and Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97-136.
Treisman, A. M., Squire, R. and Green, J. (1974). Semantic processing in dichotic listening? A replication. Memory and Cognition, 2, 641-646.
Tulving, E. and Gold, C. (1963). Stimulus information and contextual information as determinants of tachistoscopic recognition of words. Journal of Experimental Psychology, 66, 319-327.
Underwood, G. (1976). Semantic interference from unattended printed words. British Journal of Psychology, 67, 327-338.
Underwood, G. (1977). Contextual facilitation from attended and unattended messages. Journal of Verbal Learning and Verbal Behavior, 16, 99-106.
Underwood, G. (1981). Lexical recognition of embedded unattended words: Some implications for reading processes. Acta Psychologica, 47, 267-283.
Underwood, G. (1982). Attention and awareness in cognitive and motor skills. In G. Underwood (Ed.), Aspects of Consciousness, vol. 3 (pp. 111-145). London: Academic Press.
Underwood, G. and Briggs, P. (1984). The development of word recognition processes. British Journal of Psychology, 75, 243-255.
Underwood, G., Clews, S. and Everatt, J. (1990). How do readers know where to look next? Local information distributions influence eye fixations. Quarterly Journal of Experimental Psychology, 42A, 39-65.
Underwood, G. and Thwaites, S. (1982). Automatic phonological coding of unattended printed words. Memory and Cognition, 10, 434-442.
Underwood, G. and Whitfield, A. (1985). Right hemisphere interactions in picture-word processing. Brain and Cognition, 4, 273-286.
Underwood, N. R. and McConkie, G. W. (1985). Perceptual span for letter distinctions during reading. Reading Research Quarterly, 20, 153-162.
Warren, R. E. (1977). Time and the spread of activation in memory. Journal of Experimental Psychology: Learning and Memory, 3, 458-466.
Wickens, C. D. (1984). Processing resources in attention. In R. Parasuraman and R. Davies (Eds), Varieties of Attention (pp. 63-102). New York: Academic Press.