Now or never: How consciousness represents time

Consciousness and Cognition 18 (2009) 78–90

Paula Droege
Pennsylvania State University, Philosophy Department, 244 Sparks Bldg., University Park, PA 16802, USA

Article history: Received 16 August 2007; available online 28 November 2008.
Keywords: Consciousness; Representation; Time; Teleofunctionalism; Millikan; Specious present

Abstract: Consciousness has a peculiar affinity for presence; conscious states represent their contents as now. To understand how conscious states come to represent time in this way, we need a distinction between a mental state that represents now and one that simply occurs now. A teleofunctional theory accounts for the distinction in terms of the development and function of explicit temporal representation. The capacity to represent a situation explicitly as ‘now’ and compare it with past situations in order to prepare for the future involves the separation of goals from the particular action required to attain them. That is, when a creature is able to consider alternative paths of action, it becomes necessary to conceive of alternate future times as distinct from the present moment. The developmental, functional approach of a teleofunctional theory is promising in its ability to integrate research from diverse empirical fields for support of its claims. © 2008 Elsevier Inc. All rights reserved.

1. Introduction

Consciousness has a peculiar affinity for presence. What we are conscious of is present before us, occurrent, now. The importance of this temporal aspect of consciousness is often obscured by a focus on the contents of consciousness—the feelings, thoughts and sensations that fill our conscious minds. Yet it seems most or all of these contents can be unconscious. In dreamless sleep my anxieties and lower back spasms cause me to toss and shift through the night. Meanwhile thoughts churn and eat at the problems responsible for my anxiety, resulting—if I am lucky—in a solution that presents itself when I awake. An intuitive way to think about the difference between conscious and unconscious states is to think of these spasms and thoughts as unconscious, whereas the spasms and thoughts I now have while hunched over the computer are conscious. My unconscious states have various behavioral, even cognitive, effects, but they are not phenomenally conscious in Chalmers’ sense (1996). In contrast, my current spasms and thoughts are phenomenally conscious: the hurtfulness of my back pain is apparent, and the familiar internal sound of subvocal speech rehearses each sentence prior to commitment in writing.1 Conscious states, on my view, are essentially phenomenal; that is, all conscious states have some qualitative character. However, not all states with qualitative character are conscious, as in the case of a blindsight subject’s ability to utilize color information to make better-than-chance guesses.2

A version of this essay was presented at the Society for Philosophy and Psychology 2006 and has benefited from comments by its audience there as well as helpful suggestions from Ruth Garrett Millikan, Jonathan Opie and several anonymous reviewers. Corresponding author: Fax +1 814 865 0119; E-mail address: [email protected] 1 Note that this definition of ‘phenomenal consciousness’ cuts across Block’s (1995) distinction between phenomenal and access consciousness. Phenomenal consciousness is the qualitative character of an experience, where access consciousness is the inferential availability of experiential content. As I understand Block’s distinction, regardless of whether I am asleep or awake, the qualitative character of my back spasms makes them phenomenally conscious, and the cognitive effects of my thoughts classify them as access conscious. According to my definition, however, spasms and thoughts are both phenomenally unconscious when in dreamless sleep or coma, and both are phenomenally conscious in paradigmatically wakeful, attentive states. 2 I realize these are contentious views, a defense of which is beyond the scope of the present topic. For a defense of the view that qualitative character is not essentially conscious, see Droege (2003, pp. 4–7). Tye (2003, pp. 78–82) has argued for the claim that conscious thoughts, as well as conscious sensations, are phenomenal.


The question of consciousness at issue is: What accounts for the difference between my appreciation of color as I gaze at the greens and yellows of this bright spring day and the appreciation of color as the blindsight subject makes her remarkable guess? Or in another sort of case, what accounts for the difference in feeling when poked while I am awake as opposed to while I am in a state of dreamless sleep? One might claim that there is no important difference between these states, or that any difference can be accounted for by behavioral differences related to the phenomena of sleeping and blindsight. To take this eliminativist position is to deny what is manifest from the first-person point of view: I feel the hurtfulness when I am poked while awake, but do not feel it when I am asleep. My behavior may be identical in both cases,3 yet in the wakeful case there is an additional phenomenon to be explained. Advocates of first-person methodology often conclude that the phenomenon of consciousness constitutes a special ontological category that resists accommodation by physical explanation (McGinn, 1999; Nagel, 1974). While this conclusion seems hasty in light of daily advances in brain science, the alternative is not to pretend there are no phenomena to consider. Rather, we should take the data of phenomenology seriously as among the data to be explained by a fully developed science of the mind.

Which brings us back to the question of the difference between conscious and unconscious states. A representationalist theory of consciousness accounts for this difference in terms of representations. A Higher-Order Representational theory, for example, claims that states are conscious when represented by a higher-order state. One significant drawback of this account is that animals and infants seem to lack the cognitive capacity for higher-order representation and so would lack conscious states.4 The representationalist who rejects higher-order theory is left with the challenge of explaining the phenomenal difference between conscious and unconscious states, despite important similarities in their representational content.

Let me be more specific about the similarity in representational content according to the teleofunctional theory of representation I favor. The theory will be explained in more detail later. For both conscious and unconscious states, my back pain represents tissue damage because the relation between pain and damage has successfully guided past behavior. This representational relation is what determines the content of the representation. The question, then, is what representational difference can account for the difference between unconscious pain and conscious pain,5 given that both bear the relevant representational relation to tissue damage.

In the following I argue that the temporal aspect of consciousness provides an answer: conscious states represent their contents as occurring now. To understand how conscious states come to represent time, we need a distinction between a mental state that represents now and one that simply occurs now. In Section 2 I show how temporal representation begins with the ability to track change. At this rudimentary level creatures appreciate causal relations and learn to exploit them. Temporal relations are implicitly coded in these representations by virtue of the necessary coincidence between the representation of a cause and capacity to utilize it. Perception and action must be appropriately coordinated in sequential processes in order for behavior to successfully exploit its environment.
The ability to track change and respond appropriately to causal relations is the first step in the development of temporal representation. Section 3 takes the next step, showing how the representation of change develops into the particular sort of temporal representation integral to consciousness. An explicit representation of presence occurs when it is part of the function of the representation to vary according to time. The necessity to compare the present moment against memories from the past and plans for the future arises in connection with the ability to consider alternative possible actions in response to an environmental situation. With the acquisition of explicit goal representation comes the demand for explicit temporal representation: the representation of presence as distinct from past and future. Conscious states are those states that explicitly represent the present moment. This claim raises the question of what constitutes a representation of ‘now’. Following Rick Grush, I argue in Section 4 that the time represented as now spans a range of ±100 ms from the time of representing. This range is well-motivated both neurologically and representationally as it accommodates latency differentials in sensory processing and incorporates anticipated movement by means of efference copies of motor commands. Further consideration of the distinction between the time of representing (the time at which the vehicle of representation occurs) and the time of events represented (the time at which events occur as specified by the content of the representation) suggests a solution to a puzzle about experience recently posed by Kelly (2005). He asks, how is it that we sometimes experience an event as enduring through time when part of the event has already passed? The answer: we represent the event as now enduring over a period of time. Just as an event that occurs now need not be represented as now, an event represented as now need not occur now.

2. Tracking change

A demonstration of the first point—events occurring now need not be represented as now—is the task of this section. Sequential relations such as before and after are, in a certain way, timeless.

3 Some argue that consciousness consists in its reportability, and this feature would be absent when asleep (Dennett, 1991). But this view restricts consciousness to creatures capable of language, and does not allow for the possibility that a state may be conscious and then quickly forgotten—what Dennett disparagingly calls the ‘Stalinesque’ view. 4 Rosenthal (1993, 1997) and other higher-order theorists have addressed this objection in a variety of ways, with more or less success. None of the responses is fully persuasive, in my view. 5 ‘Pain’ here refers to the representation of tissue damage, not the characteristic hurtfulness of conscious pain representations. Likewise, I use ‘experience’ to refer to various sorts of mental states, some of which are unconscious. So on my usage ‘conscious pain’ is not redundant and ‘unconscious experience’ is not an oxymoron.


If it is the case that X comes before Y, it is irrelevant to this relation whether the sequence is past, present or future. Representing sequence has a similar timeless quality, with this difference: successful use of a sequential relation requires that the representation of change runs temporally isomorphic to the change in events represented. In other words, a creature must be able to track the change in order to exploit sequential relations such as cause and effect when they occur. For example, when the frog tracks the fly from right to left, it represents the fly at one location and then at the next. In order for the frog to capture the fly, it must be able to match its order of representations to the progress of the fly. Success for the frog lies in its ability to respond to various cause–effect relations; its actions must be in synch with its perceptions.

This all seems very simple, and so it is. If evolutionary theory applies to the mind and consciousness is a development of more rudimentary forms of mentality, it is best to start the investigation with the representational capacities of simple beasts.6 The teleofunctional theory of representation proposed by Ruth Garrett Millikan (Millikan, 1984, 1993, 2000, 2004) provides a persuasive account of the basic forms of representation, out of which more sophisticated abilities like conscious representation can arise. On this view, a representation producer has the function of making representations that vary according to their represented because the co-variant relation has been sufficiently successful in guiding behavior. Four elements are critical to this description: function, covariation, success, and history.

First, a representational system serves a function; it does something, plays some role in the creature’s behavioral repertoire.7 In Millikan’s lexicon, a descriptive representation is object-oriented; its producer has the function of getting it to vary according to its represented in order to apprise the creature of useful object features, such as proximity. A directive representation is goal-oriented; its consumer has the function of varying its activity so as to create the state of affairs represented. Where the first sort of function is designed to change the head in order to accord with the world, the second sort is designed to change the world in order to accord with the head.8 According to Millikan, the ability to acquire purely descriptive or directive representations is a sophisticated achievement that requires prying apart two elements of a more primitive representational form, whimsically called pushmi-pullyu representations. A pushmi-pullyu representation is one that ties a descriptive function to a specific directive function. Simple creatures develop pushmi-pullyu representations in response to particular environmental challenges, such as the ability to spot flies in order to eat them (Millikan, 2004, pp. 77–81). The two aspects work in tandem to engineer appropriate behavior.

The second essential element of a representation is that its content is defined in terms of a covariation, or an isomorphism, with the represented object or feature. The principle for individuation of representations—what makes a representation about flies—is the covariance relation. The producer and consumer of a representation perform their functions properly only when the representation varies isomorphically with the item represented. Millikan describes the covariation relation as mathematical or logical rather than causal.
A representation maps its represented such that significant transformations in the representation correspond according to a mathematical rule or function with significant transformations in the represented object or feature (Millikan, 1993, p. 90; Millikan, 2004, pp. 48–50). The frog’s fly representation must vary systematically with the position and speed as well as size, color and time of appearance of the fly in order for the frog to capture it. If the fly changes in significant ways, such as moving from right to left, the representation must change in corresponding ways. But what if the frog is indiscriminate and eats all small, black, moving objects? Does this mean it misrepresents them as flies, or has it been representing small, black, moving objects all along? This is where the third element of success enters the teleofunctional story. Importantly, the covariation relation that defines representational content is not accidental. Producer and consumer cooperate to design a representation that helps the creature adapt in some way to environmental conditions. The success of the consumer in using the representation relation to guide its activities is what accounts for the continued production of the representation. When the frog indiscriminately consumes indigestible, small, black, moving objects, its activity fails to conform to the conditions for consumer success. Those conditions include the requirement that the objects ingested are flies, or at least something digestible. Because fly representations are produced in order to nourish the frog, the failure of faux flies to serve this function means that the representation of these small, black, moving objects as flies counts as a misrepresentation. The fourth and final element of representation is perhaps the most crucial: the success of the covariance relation is rooted in its history. The creature has acquired a pushmi-pullyu representation of flies-to-eat, because that representation has proved successful to the creature’s survival (or the survival of its ancestors) often enough in the past for the representation to be reproduced. The wiggle room provided by ‘often enough’ indicates the variable range of successful application needed for a representation to warrant reproduction. In some cases a representation is rarely accurate, more often misrepresenting than correctly representing its object. In such cases it may be difficult to determine why a representation continues to exist in the face of repeated failures. One possibility is the high survival value of correct representation, so that getting things right just once is worth all the trouble of being often wrong. Or perhaps the disvalue of misrepresentation is very small, so that there is little to lose by repeated misfires.9 The point is that the representation has proven itself in some way for the organism, and this past success in indicating its object accounts for its continued production. 6 By contrast, the traditional Cartesian methodology that begins with an examination of the self-conscious mind of a philosopher tends to produce the conviction that consciousness is inexplicably mysterious and magical. 7 As the title Language, Thought and Other Biological Categories (Millikan, 1984) suggests, Millikan takes ‘behavior’ to range quite widely. 8 Cf. Searle’s (1983) discussion of ‘direction of fit’ for beliefs and desires. 9 Millikan regularly stresses that the conditions for proper function of a representation are not necessarily typical or statistically average. 
The importance of this point is marked by capitalization of Normal conditions and Normal explanation (see, e.g., Millikan, 1984; a particularly clear statement of the point is in Millikan, 1989, pp. 281 and 284–285).
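The four elements just described are abstract, so a toy sketch may help fix ideas. The code below is only an illustration under invented assumptions (the detector, the noise level and the success criterion are all hypothetical), not Millikan's theory in executable form: a producer makes a state that covaries with fly position, a consumer uses it to guide a strike, and the representation type is reproduced only if that use has succeeded often enough.

```python
# Toy sketch of the four elements: function, covariation, success, history.
# Everything here (names, noise, thresholds) is invented for illustration.

import random

def produce_representation(fly_position):
    """Producer: output covaries (noisily) with the fly's position."""
    return {"kind": "fly", "position": fly_position + random.gauss(0.0, 0.03)}

def consume_representation(representation):
    """Consumer: the representation's function is to guide a tongue strike."""
    return representation["position"]

def reproduced_after_history(trials=1000, tolerance=0.05, needed_rate=0.5):
    """History: the representation type persists only if use succeeded often enough."""
    successes = 0
    for _ in range(trials):
        fly_position = random.uniform(0.0, 1.0)
        strike_at = consume_representation(produce_representation(fly_position))
        if abs(strike_at - fly_position) < tolerance:   # success: the fly is caught
            successes += 1
    return successes / trials >= needed_rate

if __name__ == "__main__":
    print("fly representation reproduced:", reproduced_after_history())
```

Misrepresentation fits the same scheme: strikes at small, black, moving objects that are not flies would fail the success condition, and a representation type that failed too often would not be reproduced.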


Returning now to the question of time, we can see how a teleofunctional theory of representation accounts for the representation of change. The relation of events as before and after gains meaning only given creatures capable of exploiting this relation. How simple can such a creature be? The key requirement is adaptability; an organism must have some mechanism to detect the change and be able to alter its behavior in some way so as to benefit from the cause–effect relation. An interesting example is the nematode worm Caenorhabditis elegans (C. elegans), discussed in Mandik, Collins, and Vereschagin (2007). These worms navigate the chemical gradient in their environment to move toward nutrients and away from toxins. To explain how these worms accomplish this navigational task, called chemotaxis, researchers hypothesize that the difference in chemical concentrations between two locations is calculated by comparing the concentration at an earlier time with the concentration at a later time. Mandik et al. operationalize this hypothesis by using artificial life software to create a synthetic form of C. elegans. The successful model includes a sensory neuron that encodes information about the current chemical concentration and four interneurons with recurrent connections to one another and to the sensory neuron. Mandik et al. conclude that the recurrent connections constitute a representation of the past chemical concentration that is compared with an incoming sensory representation of the current concentration to determine in which direction to steer. The illuminating feature of this example is how the synthetic C. elegans represents environmental change. In order to move in the correct direction, the system must treat the sensory information as representing a situation that occurs after the situation represented by the network as a whole, and it must initiate movement immediately in order to take advantage of the resulting calculation. Thus, temporal presence is implicit in the system because sensation and action must operate within the same temporal framework to be effective in this example.10 However, nothing in the description of the recurrent network requires that it explicitly encode information about the time a chemical concentration occurred.

The distinction between explicit and implicit representation is widely but inconsistently used.11 While this variation is not surprising in light of the variety of ways the terms can be applied, it calls for some attention to how the distinction is drawn in each case. In this context, an explicit representation fulfills the four elements in the teleofunctional description such that it has been produced in order to covary with the item it represents because this isomorphic relation has been successful in guiding behavior. An implicit representation also bears an isomorphic relation to its represented, but its function is tied to successful explicit representation. Implicit representations are conditions that must be in place in order for explicit representations to function properly. In the case of C. elegans, appropriate timing relations are necessary in order for the calculation of the chemical gradient to accurately represent the location of nutrients, and these timing relations would differ if the environment were different.
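To make the implicit character of this timing concrete, here is a minimal sketch of the comparison the synthetic worm is hypothesized to perform. It is deliberately not Mandik et al.'s model: a single decaying trace stands in for their four recurrent interneurons, and the gradient, parameters and step sizes are invented for illustration.

```python
# Simplified stand-in for the synthetic C. elegans controller: a decaying
# trace of past concentration is compared with the current concentration to
# decide whether to keep heading the same way. All values are invented.

def concentration(x):
    """Toy chemical gradient with a nutrient peak at x = 0."""
    return 1.0 / (1.0 + x * x)

def chemotaxis(start=5.0, steps=200, step_size=0.1, decay=0.7):
    x, heading = start, -1.0
    trace = concentration(x)             # stands in for the recurrent network
    for _ in range(steps):
        x += heading * step_size
        current = concentration(x)
        if current < trace:              # gradient got worse relative to the past
            heading = -heading           # so reverse direction
        trace = decay * trace + (1.0 - decay) * current   # update the "past"
    return x

if __name__ == "__main__":
    print("final position (nutrient peak at 0):", round(chemotaxis(), 2))
```

Nothing in the loop encodes when a concentration was sampled; the comparison works only because the trace update and the movement step run in lockstep, which is just the kind of implicit temporal coincidence described above.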
Say the creature lived in a fluid environment where nutrients flowed toward it; then the appropriate response might involve waiting some time for the nutrients to arrive rather than moving along the gradient toward the nutrients. In both cases, the success of the representation is determined by the content of the explicit representation—the location of nutrients. The timing, though crucial, is incidental. In order for a representational system to function properly, various conditions instrumental to its development must be in place. Generally, we can say that a feature of the environment is implicitly represented when a difference in the environmental feature would require a difference in a feature of the representational system in order for it to function according to design.12

In pushmi-pullyu representations of the sort used by C. elegans, time is represented implicitly in the sequential structure of behaviors appropriate to current conditions. Psychologist Gibson (1986) referred to such perception–action sequences as ‘affordances’ and argued that perception is essentially the ability to appreciate the ways one’s environment allows or ‘affords’ action. A chair affords sitting; flies afford eating. We don’t first see the chair and then consider what to do with it. We see the chair as a sitting affordance.13 Pushmi-pullyu perception includes a representation of appropriate action, not merely as a hypothetical possibility but as a trigger that initiates response. So the representation of flies triggers consumption, which has the function of sating hunger.

Temporal representation at this level is limited to the ability to register environmental markers in relation to appropriate response. In other words, it is the ability to exploit cause–effect relations, to appreciate the difference between before and after. Because causes come before effects, the sensitivity to their temporal order facilitates appropriate action. Time is not explicitly represented in simple cause–effect sequences because action is tied to perception in a one–one relation. The success of action is not dependent on any particular time, only time-relative-to-perception. Distal future states are linked through affordance chains to currently perceived objects, and past states are manifested exclusively in terms of learned associations between events. Likewise, present states are not represented as present either, for this would require distinguishing the present from future and past.

10 Many familiar examples of simple cause–effect representation are tied to the present in just this way. However, there is no principled reason that a delay of some set amount of time might not figure between sensation and action. Though these representations must occur at some time—now, 2 weeks from now—they need not represent time—as now or as 2 weeks from now. The marine Palolo worm Eunice viridis, for example, times its reproductive cycle to a 2-hour period in late Fall. The precise synchronization of thousands of these organisms is attributable to a combination of biologically based oscillations (daily, annually, lunar and tidal) (Gallistel, 1990, p. 236). Biological oscillations provide cues that can extend the phase of time between sensation and action, but they are not, as such, equivalent to a representation of the interval itself.
11 See Dienes and Perner (1999) and Opie and O’Brien (1999) for alternative descriptions of the distinction between explicit and implicit representation. While there are interesting threads of similarity in our descriptions, there are important differences as well. Dienes and Perner, for example, focus on knowledge acquisition whereas the concern here more broadly includes non-conceptual forms of representation. Likewise, connections can be drawn to the literature on implicit memory, although brevity prevents a full comparative analysis. 12 This description of implicit representation roughly follows Millikan’s description of ‘tacit suppositions’ (1993, pp. 104–105). 13 Gibson’s insistence on the necessity of action in perception led him to eschew any talk of representation as a component of perception. I will make use of Gibson’s insightful notion of ‘affordance’ without adopting his restrictions on representation or other more controversial Gibsonian views.


No such distinction is possible because all events are specified simply as later or earlier in a sequence. The fact that a creature is now at one point in the sequence is no more explicitly represented than the fact that the creature is located on the third planet from the sun. Yet both facts are implicitly represented since variation in temporal or spatial location would entail variation in representation.

As an example of the distinction between explicit and implicit representation of time, compare two mechanisms in my dishwasher. The first is the washing mechanism that is designed to pump water through the system for a set period of time. It runs longer on the ‘pots and pans’ setting than on the ‘normal wash’ setting, yet it would be incorrect to say that this mechanism therefore produces a representation of the time it takes to wash pots as opposed to other dishes. When functioning properly, the mechanism varies its operation according to the wash setting, not according to the passage of time. In contrast, the delay mechanism does vary its operation according to the passage of time. When functioning properly, the ‘1 hour delay’ setting will delay the wash for one hour. For the wash settings, the timing is incidental to the function of proper washing; whereas for the delay settings, the timing is essential to the function. Both of these are derived representations, of course, since their representational abilities were designed by the manufacturer, not by evolutionary success. Still, they illustrate the difference between a mechanism that implicitly varies according to time and one that explicitly represents the period of time as its function. There is a parallel difference between the implicit way a creature such as C. elegans utilizes time and the explicit way a creature such as a human represents time.

3. Whither consciousness?

Here we are, a third of the way through, and the advertised connection between consciousness and time has not yet appeared. Indeed, if the argument of the previous section is correct, consciousness is not necessary for temporal representation. Any organism capable of exploiting cause–effect relations is capable of representing events as before and after. But surely we do not want to cast the net of consciousness so widely as to include the nematode worm, much less the synthetic nematode worm. At least, I do not. As noted earlier, consciousness arises with the development of the capacity for explicit representation of time. In particular, conscious states represent events as occurring in the present moment.14 Whereas non-conscious states represent temporal relations implicitly as conditions for successful pushmi-pullyu representation, conscious states represent events explicitly as the current state of affairs. My claim in this section is that explicit temporal representation comes about when a creature acquires the ability to disconnect the descriptive and directive elements of a pushmi-pullyu representation in order to consider alternative possible actions in response to environmental conditions.

Given the ability of pushmi-pullyu representation to appreciate sequential relations, it may seem that implicit temporal representation is all anyone would ever need. So it is useful to consider both the resources and limitations of pushmi-pullyu representation in order to better circumscribe the developmental shift in representational capabilities. Certainly, appropriate responsiveness to a causally structured environment can be quite complex and intricate when cause–effect relations are linked in sequential chains.
As Millikan notes, "The activities of insects may be largely or entirely governed in this way, by hierarchies of perception–action chains, or as ethologists call them, chains of ‘behavior releasers.’" (Millikan, 2004, p. 166) But even a very elaborate chain of pushmi-pullyu representations is constrained by an exclusively backward-looking system. The drawback of a ‘pushmi-pullyu animal’—one guided only by pushmi-pullyu representations—is that it lacks "the ability to recombine various segments of behaviors in its repertoire in new ways so as to achieve new goals. It could achieve new linkages of behavior chains only by reinforcement of accidental connections after the fact, never by inventively looking ahead" (Millikan, 2004, p. 168). Because pullyu goals are linked to specific pushmi facts, the pushmi-pullyu animal is guided by the affordances in the chain as they present themselves rather than by an abstract representation of some ultimate goal. The nematode worm is guided by the chemical gradient toward nutrients and away from toxins so as to survive and reproduce. It does not need to explicitly represent survival and reproduction as its goal for this to be the reason the worm is guided by the chemical gradient.

Despite the remarkable successes of this system, its disadvantage is best illustrated when an affordance chain is interrupted and the pushmi-pullyu animal continues to repeat an action that fails to bring about its goal. Consider the digger wasp which drags its prey to its nest, checks inside, and then emerges to bring in the prey. By moving the prey a few inches away from the door each time the wasp goes inside, the wasp can be sent back to step one: drag, check, emerge, drag, check, emerge, and so on indefinitely.15 The wasp fails to realize that its goals in checking the nest have been successfully fulfilled. Its behavior is tied to a chain of affordances that is probably quite useful in the absence of pesky researchers, but nonetheless strikingly inflexible.

The limitation of the pushmi-pullyu animal is that its behavior is not directed by an explicit goal. The explicit representation of a goal, what Millikan calls a ‘goal state representation’ (2004, p. 198), allows the consideration of alternate means toward the goal and an assessment of success or failure in reaching it. At this level of representation, affordances are possibilities for action in relation to a goal rather than established means-ends patterns. So this developmentally advanced creature—which we can call the ‘practical animal’—utilizes its ability to represent affordances independently from actions in order to evaluate which possibility is most promising. In contrast, a pushmi-pullyu bee cannot evaluate affordances.

14 Conscious memories (representations of present representations of the past) and conscious plans (representations of present representations of the future) are discussed in the final section. 15 Millikan (2004, p. 169) cites Dennett (1984) as the popular source for this example.


It may act either on the fly-to-the-right affordance or the fly-to-the-left affordance in its search for nectar, since both affordances are available and only one can be exploited. The factors that make one or the other direction preferable are determined by means of the bee’s history and that of its species, but importantly, the bee is not able to represent the nectar as a reason for choosing one or another action. Even after a successful nectar search confirms the fly-to-the-right affordance as the best to exploit, the descriptive and directive aspects of the representation remain linked. Such a creature cannot decide one day to see if there might be some nectar on the left, because the goal of finding nectar is an inextricable component of the pushmi-pullyu representation ‘fly-right-to-get-nectar’. On the other hand, if a practical animal such as a bear represents honey as a goal separately from a specific path, then the bear is better able to decide among paths or even to construct new paths toward its goal.16

Here at last—at the evaluative moment—is when consciousness enters. A creature that no longer simply follows the directives of available affordances is a creature that needs conscious states. On my view, conscious states combine sensory and conceptual representations17 to provide a representation of the current state of affairs. That is, unconscious representations are selected and combined to form an explicit representation of the situation ‘now’.18 In teleofunctional terms, conscious representations are produced in order to vary according to current conditions, because the explicit representation of the world at the present moment has successfully guided behavior. Temporal presence is represented explicitly because the decoupling of the descriptive and directive aspects of pushmi-pullyu representation demands the ability to distinguish the present state of affairs from past and future states. The ability for explicit goal representation does not in itself constitute explicit temporal representation; rather, the two representational abilities are developmentally coincidental. Where an explicit goal is in mind, with several possible actions that could be performed toward reaching the goal, each possibility must be assessed by comparison of the present situation with various past situations the creature has encountered. To assess the ongoing success or failure of the chosen route, the creature also needs to compare the present situation with the desired future situation. The practical animal needs to know what is happening now in order to be appropriately guided by an explicit goal.

On my view, a conscious state is a complex representation of environmental features such as colors, shapes, sounds and textures which combine in a representation of the world as it is now.19 This continually updated, complex representation allows for an ongoing evaluation of opportunities and dangers as well as feedback on the effects of a creature’s actions in the environment. The relation between conscious representation and explicit temporal representation is conceptual—conscious representations are representations of temporal presence. The relation between explicit goal representation and explicit temporal representation is pragmatic—the best way to evaluate alternate plans is to represent alternate futures in relation to an explicit representation of how things are now.20 This is not to say that practical animals are capable of utilizing the indexical concept ‘now’ abstractly.
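The functional contrast between the two kinds of creature can also be put in schematic form. The toy sketch below is only an illustration under invented assumptions (the situations, goals and actions are hypothetical, and nothing here is Millikan's formalism): the pushmi-pullyu agent fuses perception with a fixed response, while the practical agent holds an explicit goal and an explicit representation of the present situation and evaluates alternative actions against both.

```python
# Toy contrast between the two kinds of creature. All situations, goals and
# actions are invented; nothing here is Millikan's formalism.

def pushmi_pullyu_agent(percept):
    """Descriptive and directive aspects fused: perceiving just is responding."""
    fixed_responses = {"nectar-right": "fly-right", "nectar-left": "fly-left"}
    return fixed_responses.get(percept, "do-nothing")

def practical_agent(present_situation, goal, possible_actions, predict):
    """Holds an explicit goal and an explicit representation of 'now', and
    evaluates alternative actions by comparing predicted outcomes to the goal."""
    def distance_to_goal(situation):
        return sum(situation.get(k) != v for k, v in goal.items())
    return min(possible_actions,
               key=lambda action: distance_to_goal(predict(present_situation, action)))

if __name__ == "__main__":
    now = {"location": "meadow", "has_honey": False}     # hypothetical bear
    goal = {"has_honey": True}

    def predict(situation, action):
        outcome = dict(situation)
        if action == "follow-bees":                      # assumed to pay off in this toy world
            outcome["has_honey"] = True
        return outcome

    print(pushmi_pullyu_agent("nectar-right"))                                          # always "fly-right"
    print(practical_agent(now, goal, ["climb-tree", "follow-bees", "wait"], predict))   # "follow-bees"
```

The only structural difference that matters for present purposes is that the second agent must maintain and keep updating a representation of the present situation; the first has nothing to compare and so no need for one.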
The practical animal is present-oriented, comparing the present state of affairs with past states toward a particular future goal state. It cannot think of the present moment as a point along a single timeline from the remote past to the remote future. An ability to collect information about places and times completely separately from practical use requires an additional developmental step that may well depend on the conceptual resources of a language.21 Thus, the ability we theoretical animals have to represent goals disconnected from times and vice versa reflects our sophisticated conceptual resources rather than posing an objection to the claim that explicit goal representation is developmentally linked to explicit temporal representation. Likewise, despite the failure of practical animals such as bears and babies to demonstrate these more developed conceptual capacities, their ability to represent goals explicitly—as shown in flexible, problem-solving behavior—is positive indication that they utilize conscious representations.

To dramatize the difference between the non-conscious, implicit temporal representation of the pushmi-pullyu animal and the conscious, explicit representation of the practical animal, consider the differential abilities of each sort of creature to respond to affordances in its environment.

16 An amusing example is Millikan’s description of the elaborate method followed by a squirrel in developing its plan to assault a bird feeder (Millikan, 2004, pp. 204–206). 17 Droege (2003) distinguishes sharply between these two forms of representation. Sensory representations vary in relation to particular forms of stimuli and are dependent on the presence of those stimuli for their consumers to function properly. Conceptual representations allow representation in absence of the item represented, and they admit of generalization, individuation and other features of propositional form. In order to minimize complications in the exposition of the theory, the explananda were limited to conscious sensory representations. Here I expand the explananda because the complications presented by conscious conceptual representations principally involve temporal features which can be accommodated by the proposed theoretical structure. A conscious thought about lunch can be incorporated into a representation of the present moment as easily as a conscious sensation of lunch before me. More difficult are conscious memories and conscious thoughts about future events. I consider these cases at the end of Section 4. 18 Some claim to be able to represent time itself or to achieve a pure non-representational conscious state through meditation or other technique. Such states seem to pose counterexamples to the conditions for consciousness as stated. I am skeptical that these sorts of state exist in that I find it difficult to conceive of time without change or consciousness without content. Even so, such odd cases could be explained in terms of functions that evolved for representational purposes and were adapted to serve non-representational goals. The attainment of such a state undoubtedly marks a sophisticated and unusual achievement and so is appropriately taken as a further stage in the development of conscious ability, to the point that another term such as ‘enlightenment’ might be more appropriate than ‘consciousness’.
19 As noted above, conscious states also represent internal features such as bodily states (conscious pains, emotions and other bodily sensations) and mental states (conscious thoughts, beliefs, and desires). For simplicity of exposition, I revert to representations of features external to the organism. 20 Anderson, Josyula, Okamoto, and Perlis (2002) have argued for a similar hypothesis in the design of decision-making programs that utilize what they call active logics. These are inference rules that incorporate a measure of time that changes with each inference step, a ‘now’. 21 See Millikan (1984, chap. 14–19; 2000, chap. 19) for the argument that abstract thought and language development are mutually interdependent.


The pushmi-pullyu animal, which by hypothesis is not capable of conscious representation, responds to affordances automatically based on past successes and failures. The failure to achieve the benefits accorded to this response in the past can trigger adaptive future behavior—the bee that finds no nectar when it exploits the fly-to-the-left affordance will shift its pattern rightward—but all future behavior continues to be systematically and inflexibly tied to affordances. The practical animal, on the other hand, is able to conceive of more than one means to a particular end. Alternate paths to a goal must be evaluated in relation to a starting position that, in a dynamic world, is continually changing. Consequently, a creature must be able to track current conditions in order to consider which actions done ‘now’ are most likely to bring about a goal in the future. In this way, the present detaches from the future, generating an explicit representation of ‘now’ distinct from ‘then’ and ‘hence’. Since conscious states are the explicit representation of the present moment, creatures capable of this sort of representation have conscious states. These conscious creatures represent the present explicitly in order to assess how past plans and actions are proceeding and to determine future actions in pursuit of their goals.

Given the close pragmatic connection between the capacity to evaluate alternate plans and the explicit temporal representation that constitutes conscious states, a good indicator of consciousness is a creature’s behavioral flexibility. If, like the digger wasp, a creature cannot assess current conditions in light of its goals, then that creature has likely not developed the representational capacities necessary for consciousness. By contrast, the child who happily claps when she has put all the shaped pegs through the appropriate holes clearly indicates conscious representation. The ability of the pushmi-pullyu animal to adapt to cause–effect relations depends only on a representation of before and after. The ability of the practical animal to assess the value of alternative possible actions requires a representation of the present moment as distinct from past and future.

A final word of caution is in order. If consciousness is representational, and representational content is determined teleofunctionally, then we cannot simply read its presence or absence off behavior or brain state. A more complicated story that triangulates arguments from evolution, analogy and physical functional relations will need to be told and reconfigured as various areas of research advance. So my examples of digger wasp as pushmi-pullyu animal and bear as practical animal depend on further empirical evidence. It may turn out that the digger wasp is more clever than it currently seems or the bear is more simple. One consequence of naturalism is accepting the fact that we cannot settle an issue like the nature of consciousness through a priori reasoning. Nor should we expect one indicator to be conclusive.

4. Defining ‘now’

This is not to say that a priori reasoning plays no role in the explanation of consciousness. Reflection from the armchair (with a periodic glance through the empirical window) provides an operational definition of conditions for a representation of ‘now’ within the context of the proposed theory. The teleofunctionalist accounts for representational content in terms of the covariation of a representation with its represented, a covariation that has been successful in guiding behavior often enough to warrant reproduction.
So the question before us is: what does it mean for a representation to covary with ‘presence’? What constitutes a representation of ‘now’? Happily, we need not resolve debates about the metaphysics of time to answer this question. Time may be an external property of the world that our representational system manages to discern, or we may create time by our representation of change. In either case, the argument of the previous section is that, teleofunctionally, the representation of past, present and future is the representation of particular relations among events.22 In this section, I consider which events are included in a representation of presence by looking at how such a representation can be used to track events as current. Because a representation of presence is not just a present representation, not all events presently represented will be included in a representation of present events. To determine which events will be included, we need to specify what period of time counts as ‘present’ and what sort of selection process is involved in representing the present. One result of this description of temporal representation is that it can help solve various puzzles about how we experience objects as persisting through time.

A good place to begin a discussion of the experience of presence is with William James:

Let me try, I will not say to arrest, but to notice or attend to, the present moment of time. One of the most baffling experiences occurs. Where is it, this present? It has melted in our grasp, fled ere we could touch it, gone in the instant of becoming (James, 1890, p. 608).

When we attend to our experience of time, we notice two things. First, the present is fleeting, constantly receding into the past, but second, that it has duration; it forms "a saddle-back, with a certain breadth of its own on which we sit perched, and from which we look in two directions in time" (James, 1890, p. 608). James calls this represented duration the specious present. What we want to know is the contents of the specious present. It seems intuitively correct to say that the present is not represented as punctate but as occupying a duration.23 Time seems to flow from one moment to the next without clear borders or abrupt transitions.

22 In the terminology of metaphysical theories of time, the representation of past, present and future is compatible with either a tensed or a tenseless theory of time. For an argument supporting the more counter-intuitive compatibility of tensed experience and tenseless time, see Mozersky (2006). 23 As James and others have noted, the experience of succession is not equivalent to a succession of experiences (James, 1890, p. 628; Husserl, 1905, p. 12; Tye, 2003, pp. 86–88). Therefore, succession must be represented within the frame of the specious present.


To secure this sense of movement and continuity, the specious present represents a range of events: the motion of a ball, a sequence of notes in a melody, the feeling of waves rushing by. What, then, is the frame of the specious present? How long a span is represented? James suggests the length is variable, but usually falls within a few dozen seconds (James, 1890, p. 613). Neuropsychologist Pockett (2003) endorses the variability of the specious present, allowing a range of 10 ms to several hours in exceptional cases. According to Pockett, the extent of a ‘now-moment’ depends on the frequency with which a subject samples the external world. The greater the number of times that information about the world is updated, the shorter each experienced moment will seem relative to world time. When I check the clock 10 times/min, each represented moment is very short while each clock moment seems to endure for ages.

Others have been more determinate in their estimation of specious present duration. Grush (2005) proposes a range of ±100 ms as the unit of time represented according to the trajectory estimation model he develops to account for the representation of spatial and temporal dynamics. At each moment, sensory information from several sources with various transmission delays must be combined with anticipated movement in order to provide an accurate representation of ongoing events. Grush justifies the range of ±100 ms by noting that 100 ms is the longest sensory latency (proprioceptive feedback from the feet) as well as the approximate delay for motor commands to have their effects. Consequently, at time t of a trajectory estimate, sensory information from the past 100 ms as well as information from efference copies of motor commands anticipating the next 100 ms are both available for calculation.

One virtue of Grush’s trajectory estimation model is its clear temporal borders. While the phenomenological sense of presence may well vary widely as James and Pockett suggest, the biological function of distinguishing the present moment from past and future motivates a more determinate range of represented events. If ‘now’ is represented in order to assess past efforts toward future goals, the present moment should be fairly brief and continually revised. A bear foraging for honey is alert for clues to be sure it is following the correct path and no predators have appeared in the vicinity. It is the privileged beast that can afford to be ‘lost in the moment’ and rapt for seconds or hours in phenomenological stasis. An empirical prediction of this model, then, is that a suite of abilities should be evolutionarily coincident. The ability for explicit goal state representation should coincide with behavioral flexibility, and this in turn places demands on the attentional system to shift, select and maintain resources at differential rates according to task. The ability to maintain attention for long periods of time despite distracters is another indicator of explicit goal state representation, although it is not a necessary precondition.

The next question to ask is what gets included within this representation of ‘now’, given its range of ±100 ms? If conscious representation is the best approximation of the current state of affairs, then we can expect it will include both fewer and more representations than are available within the allotted ±100 ms. On one hand, it will include fewer than all available representations in order to avoid overload and to facilitate interpretation. Our representational systems process vast quantities of sensory and conceptual information, only some of which is relevant to the evaluation of ongoing tasks.
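How a single estimate of ‘now’ might be assembled within that window can be sketched schematically. The code below is a loose illustration of the idea rather than Grush's trajectory estimation model itself; the sampling interval, latency value and constant-velocity assumption are all invented for the example.

```python
# A loose illustration of filling a +/-100 ms window around the time of
# representing, t: times that lag behind the sensory latency are filled in
# from stored samples (postdiction), later times from an efference-copy
# velocity estimate (prediction). Parameters and units are invented.

def estimate_now(t, samples, latency_ms=100, efference_velocity=0.01):
    """Return represented positions for event times in [t - 100, t + 100]."""
    usable = {time: pos for time, pos in samples.items() if time <= t - latency_ms}
    anchor_time = max(usable)            # most recent sample that has arrived
    anchor_pos = usable[anchor_time]
    window = {}
    for event_time in range(t - 100, t + 101, 20):
        if event_time in usable:
            window[event_time] = usable[event_time]                      # postdiction
        else:
            window[event_time] = anchor_pos + efference_velocity * (event_time - anchor_time)  # prediction
    return window

if __name__ == "__main__":
    t = 1000                                                   # time of representing (ms)
    samples = {time: 0.01 * time for time in range(800, 1001, 20)}  # target moving at 0.01 units/ms
    print(estimate_now(t, samples))
```

Event times up to the sensory latency behind the time of representing are filled in postdictively from stored samples; later times, including the anticipated 100 ms, are filled in predictively from the efference-copy velocity.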
Routine or automated responses do not require monitoring, and so the pushmi-pullyu representations involved in these behaviors need not be incorporated into a representation of the present moment. Body position, for example, is usually unconscious. If at some point this information becomes salient in some way—as a result of imbalance or relevance to a non-automatic behavior—then body position representations can be recruited into a representation of ‘now’. On the other hand, conscious representation will include more than available representations as representational content is often confabulated to fit a plausible interpretive scheme. Perceptual illusions provide good examples of the mechanisms designed to predict stable object features such as edges and motion. In the Müller-Lyer illusion, one line is represented as longer than the other, though both are the same length. A white Kanizsa triangle is represented on a page displaying only black shapes. When the visual system has habituated to a downward-moving stimulus, such as a waterfall, and is then shown a still stimulus, the result is a representation of the water moving up.24 In these cases, the representational content of the conscious state is not limited to the physical stimuli. By representing features additional to those conveyed by the stimuli themselves, perceptual processes systematically resolve input into predictable patterns. How things seem to us as conscious creatures—unified, composed of stable, enduring objects—is a consequence of the selective and interpretive processes prior to conscious representation.25 As a structural feature of all perceptual representation, this interpretive aspect is not unique to conscious representation. Nonetheless, the heuristics involved in selecting and combining representational features are precisely those necessary to the distinction between implicit temporal representations (those that occur now) and explicit temporal representations (those that represent ‘now’ as part of their representational content).

If a conscious system has the function of representing the current state of affairs, it must resolve timing discrepancies both internal and external to its own representational system. While internal timing considerations were discussed in the argument for a ±100 ms window for the specious present, representation of external change also demands accurate perception and anticipation of temporal cues. Visual and auditory systems, for example, must be calibrated so that the sound of someone’s words matches the shapes their lips make.26

24 To view these and the flash-lag illusion mentioned in the next paragraph, a good site is Michael Bach’s Optical Illusions and Visual Phenomena at http://www.michaelbach.de/ot/index.html. 25 Smythies (2005) offers an empirical hypothesis for constructing visual representations that articulates some of the technical specifications necessary to balance sensory input with scene interpretation. Additionally, Droege (2003) proposes a mechanism, called a second sense, to serve the selection and coordination function necessary to producing representations of the present moment that are conscious states. 26 The phenomenon known as the McGurk Effect demonstrates that visual perception of the speaker’s lips affects the interpretation of spoken syllables (McGurk & McDonald, 1976).


[Fig. 1. Succession model.]

Also, as the flash-lag illusion shows, the perceptual system adjusts its representational content to compensate for the motion of objects. When a flash is presented directly above a moving bar, subjects will report seeing the bar ahead of the location of the flash. While Nijhawan (1994) argues that the perceptual system predicts the future location of the bar based on its previous rate and direction of motion, Eagleman and Sejnowski (2000) propose the opposite hypothesis: perceptual representation is delayed 80 ms in order to incorporate a range of events in the representation. Grush’s (2005) trajectory estimation model can incorporate both predictive and postdictive sources of information in its ±100 ms range. What is represented as ‘now’ is not simply whatever sensory representations occur in the brain at any given moment. Rather, representations are selected within a temporal window of ±100 ms to best approximate the current state of affairs. This approximation requires the omission and addition of information to effectively interpret and anticipate present conditions.

Given the account of conscious temporal representation in terms of the coordination of representations that vary according to events within a specified time frame, we can now address a puzzle about temporal experience recently raised by Sean Kelly: "How is it possible for us to have experiences as of continuous, dynamic, temporally structured, unified events given that we start with (what at least seems to be) a sequence of independent and static snapshots of the world at a time?" (Kelly, 2005, p. 210) The experience of a tone extending for some time is paradigmatic of this puzzling sort of temporal experience. We now experience the tone as lasting, even though the external stimuli that account for this sense of temporal extent are no longer present. Kelly eventually suggests that perceptual tracking through short-term storage is required, but he is reluctant to call the required capacity ‘memory’ because it does not seem readily assimilable to familiar forms of memory (Kelly, 2005, p. 233). This reluctance is justified on my account, since the tone is represented as now lasting, whereas memory is involved only if a representation of the tone as past is included.27 To put the point in condensed form, representations of tone stimuli that occurred in the past are incorporated into the temporal window of the specious present to represent the tone as ‘now lasting’.

To elaborate, the solution to Kelly’s puzzle is to distinguish the content of conscious representation (as of continuous, dynamic, temporally structured, unified events) from its vehicles (multifarious neural structures and processes). While this proposal is similar to Husserl’s (1905) Retention Theory in accounting for temporal presence in terms of an intentional act that represents a span of time from just past (retention) to just future (protention), Kelly rightly notes that Husserl’s description merely names the problem without solving it. The handicap for a phenomenologist is the inability to conceive of the role of the vehicle in conveying content because, phenomenologically, content is all there is. Consequently, it becomes a mystery how a tone can be represented as having-lasted over a span of time that extends beyond the moment of representing. The mystery dissolves if we allow vehicles to convey (or store) information that can be incorporated into a representation of anything at any time.
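The vehicle/content distinction at work here can be stated in a few lines of code. The sketch below is only an illustration under invented assumptions: a representation is modeled as a record with a vehicle time (when the representing state occurs) and a content that specifies the span of time it represents.

```python
# Toy illustration of the vehicle/content distinction for temporal experience.
# The data structure and numbers are invented; the point is only that the time
# at which a representation occurs need not match the times it represents.

from dataclasses import dataclass

@dataclass
class Representation:
    vehicle_time_ms: int      # when the representing state itself occurs
    content_start_ms: int     # earliest event time specified by the content
    content_end_ms: int       # latest event time specified by the content
    description: str

# A single vehicle occurring at 500 ms can represent a tone as "now lasting"
# over a span that began well before the moment of representing.
lasting_tone = Representation(
    vehicle_time_ms=500,
    content_start_ms=100,
    content_end_ms=500,
    description="tone, represented as now lasting",
)

print(lasting_tone.vehicle_time_ms)                                   # when it is represented
print(lasting_tone.content_end_ms - lasting_tone.content_start_ms)    # 400 ms of represented duration
```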
Since our representational systems are non-random, we tend to represent the world in predictable ways for generally good reasons. When all goes well, the representation of a lasting tone varies isomorphically with tones that do in fact last.

Examination of another problem related to temporal representation further illustrates the importance of the distinction between content and vehicle, and may help brush away any vestiges of mystery from the previous puzzle. The problem is: how are representations of 'now' related to one another? Compare two possibilities: the succession model and the overlap model. In the succession model, each specious present immediately follows the one before it, as in Fig. 1. One problem with the succession model is that border events (at 200 ms, 400 ms, etc.) would likely get lost during the changing of the representational guard. Recall that on the trajectory estimation model the predictive 100 ms is an estimation of future events based on efference copies of motor commands and anticipation of object motion. Such predictions may be very good, but they are not foolproof. Unexpected stimuli could appear at, say, 110 ms that would not be incorporated into a conscious representation until now2 occurred at 300 ms. Since 200 ms goes by fairly quickly, this delay would not affect unconscious, pushmi-pullyu response; evolutionary design triggers action under life-threatening conditions without waiting for conscious comment. The new sensory information would simply be included in the next specious present, appearing a little late, but soon enough for evaluation.

Phenomenologically, however, it seems the loss of information would result in a disjointed experience of the world. How is it that I hear a single tone beginning at 100 ms and ending at 500 ms rather than three separate tones, one for each represented moment? If now1 represents a tone from 0 to 200 ms, now2 represents a tone from 200 to 400 ms, and now3 represents a tone from 400 to 600 ms, we need to explain why there seems to be only one tone rather than three. Here is where careful attention to the difference between vehicle and content of representation proves useful. Though the tone is represented by three separate vehicles (now1, now2, now3) occurring in succession, the content (one continuous tone or three separate tones) depends on whether the tone is represented as the same by each representation. Though a full discussion of the grounds for sameness of represented object is a complicated matter that would take me too far afield of the topic,28 notice that the debate has shifted from a question about the relation among vehicles to a question about the relation among external events—one tone or three. We may marvel at the ability of the auditory system to manage a distinction between one and three, but the representation of continuity does not require continuous representations. Successive ones may do just as well. Or they may not.

Fig. 2. Overlap model.

Due to worries about the continuity problem described above, many theorists have advocated some version of the overlap model as an alternative description of the relation between specious presents.29 On the overlap model, the time represented by each specious present overlaps some portion of the time represented by the previous specious present and the next specious present. In Fig. 2, for example, each 'now' overlaps 100 ms of the time represented by adjacent moments. The overlap model seems to offer a better account of continuity because the intermediate representations serve as a bridge from one moment to the next. The tone that extends from 100 to 500 ms cannot be lost in the gap between representations because each instant is represented by at least two representations. As so often happens, however, the virtue of duplication is also its vice. If each instant is represented twice, it would seem that everything ought to be doubled in experience. A click that occurs at 150 ms would be represented by both now1 and now2, but I would only hear one click. This objection seems especially pressing when the specious present is diagrammed in a way that suggests the representation itself occupies a certain span of time, as in Fig. 3.30

The diagram in Fig. 3 also illustrates the problem with this objection. Overlapping representations would only result in duplicate experiences if one experiences the representations themselves. Again, a careful distinction between content and vehicle resolves the issue: two representations (now1 and now2, vehicles) both represent the click (C, content) as occurring at 150 ms. The multiplication of vehicles does not entail the multiplication of contents. So, as long as the vehicle/content distinction is in place, either the succession model or the overlap model could account for the relation between specious presents. The question is an empirical one: does the evidence favor more or fewer vehicles of representation? Neural architecture may accommodate both models, alternating between them as the task demands: rapid-fire production of new representations allows adjustment to a quickly changing, high-stress environment, and relatively slow, successive representations provide a low-cost assessment of a stable, calm environment. The potential variability in the number of representations of 'now' produced over the course of time highlights the indexical character of conscious representation.
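Before turning to the indexical character of 'now', a minimal sketch may make the vehicle/content point concrete. The window arithmetic below is invented for illustration and is not drawn from the sources cited above: a click at 150 ms is carried by one vehicle on the succession model and by two on the overlap model, yet in both cases the represented content is a single click at a single time.

def succession_windows(width=200, end=600):
    # Specious presents laid end to end: (0, 200), (200, 400), (400, 600).
    return [(t, t + width) for t in range(0, end, width)]

def overlap_windows(width=200, step=100, end=600):
    # Specious presents that each share 100 ms with their neighbours.
    return [(t, t + width) for t in range(0, end - width + step, step)]

CLICK_MS = 150

for label, windows in (("succession", succession_windows()),
                       ("overlap", overlap_windows())):
    # Vehicles: every window whose span includes the click.
    vehicles = [w for w in windows if w[0] <= CLICK_MS < w[1]]
    # Contents: each vehicle represents "a click at 150 ms", and identical
    # contents collapse into a single represented event.
    contents = {("click", CLICK_MS) for _ in vehicles}
    print(label, "vehicles:", len(vehicles), "clicks represented:", len(contents))

# succession vehicles: 1 clicks represented: 1
# overlap vehicles: 2 clicks represented: 1

The number of vehicles varies with the model; the number of contents, individuated by what is represented rather than by which window carries it, does not.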
Linguistically, the function of 'now' is to vary according to the time of its representation; that is, the time picked out by a token of 'now' is determined by the context in which that token appears. Yet 'now' need not refer to the instant of its tokening; the exact moment and span of 'now' is determined pragmatically. Consider the following:

- The answering machine: "I'm not available to take your call right now" refers to the time of the call, not the utterance.
- The narrator of a documentary: "Now is the moment when the treaty is signed" refers to events represented in the footage.
- A parent: "We need to go now" may refer to the moment of utterance, a few moments hence, or some relatively near future time, depending on volume and tone.

Similarly, the moment of time represented by a conscious state is not restricted to items and events occurring during the ±100 ms that constitutes each representation of 'now'. Since we are obviously capable of consciously remembering events from the past and entertaining conscious thoughts about the future, these events must also be included in the specious present. Consider these cases of conscious content:

28 Millikan (2000) develops an argument for representational sameness grounded in perceptual tracking abilities.
29 Among adherents are Broad (1923) and Dainton (2000).
30 Similar diagrams appear in Tye (2003, p. 93) and Gallagher (2003, p. 11).


Fig. 3. Overlapping representations.

- Feel the breeze, think of Spring: refers to events occurring roughly at the time of representation (now).
- Remember the heat at this time last year: refers to an event represented as a past state of affairs in relation to the time of representation (now).
- Think of lunch, imagine a ham sandwich: refers to an event and object represented as a possible future state of affairs in relation to the time of representation (now).

In all of these cases, temporal content—as past or future—is specified explicitly in relation to 'now': at the same time as now, past relative to now or future relative to now. 'Now' is defined as the representation of events within a frame of ±100 ms. A conscious memory is the representation of a past event, and this representation is included among the events represented as 'now'. That is, one of the events I am representing as within this ±100 ms specious present is my representation of the heat at this time last year. Likewise, in the third case, one of the events I am representing as within this specious present is my representation of lunch an hour from now. The representational structure is analogous to indirect discourse (she said that he said that snow is white): I am now thinking that I thought last year that August is hot.

As a final thought, we should consider how dreams figure in relation to the present. Memories and plans may be representations of past and future events incorporated into a representation of the present moment, but dreams represent no events at all. How could there be an evolutionary advantage to dream representations when the environment represented does not exist? There are (at least) two possible answers. First, dreams may function in a way similar to future plans. Revonsuo (2006) has suggested that dreams simulate potential threats as a way to practice skills without the perilous consequences of a real environmental test. In this case, the representational content of dream states is borrowed from successful representational relations formed during previous interactions with the environment. A second possibility is that dreams are simply a form of misrepresentation and serve no representational function at all.31 Just as waking representations sometimes occur in the absence of the item they represent, dreams may be a systematic activation of representational vehicles by means other than the items they have the function to represent. That dreams are systematic, correlated with sleeping patterns and varying in predictable ways across species, suggests that some function is performed by dreams, if not a representational one. The neural activation that generates dreams may serve to purge extraneous connections or consolidate newly acquired ones, depending on whether the random or non-random features of dreams prove to be more significant. In any case, it is useful to remember that a teleofunctional theory of representation differs from a purely causal account in that representational content is tied to evolutionary success, not correct causal source. A representation needs to be right often enough to warrant reproduction, where 'often enough' might be quite rare. Consequently, the possibility that we spend each night in a world of misrepresentation need not undermine the function of those representations so long as fatal error does not result.
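The nesting described above can be pictured with a small data structure. The sketch below is purely illustrative; the names Rep, Now and the relation field are my own labels rather than terminology from this paper. A representation of 'now' contains, among its contents, further representations whose contents are marked as past or future relative to the time of representing.

from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Rep:
    content: Union[str, "Rep"]   # an event description, or an embedded representation
    relation: str = "same"       # 'same', 'past' or 'future' relative to the representing time

@dataclass
class Now:
    window_ms: int = 100         # the +/-100 ms frame of the specious present
    contents: List[Rep] = field(default_factory=list)

# "I am now thinking that I thought last year that August is hot."
breeze = Rep(content="a breeze felt as it occurs")                        # relation defaults to 'same'
memory = Rep(content="the heat at this time last year", relation="past")
plan = Rep(content="a ham sandwich for lunch an hour from now", relation="future")

now = Now(contents=[breeze, Rep(content=memory), Rep(content=plan)])
for rep in now.contents:
    if isinstance(rep.content, Rep):
        inner = rep.content
        print(f"now includes a representation whose content is {inner.relation}: {inner.content}")
    else:
        print(f"now includes: {rep.content}")

The embedded representations themselves occur now; only their contents carry the explicit markers 'past' and 'future', which is the structural point of the indirect-discourse analogy.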
5. Conclusion

To review, the teleofunctional theory of representation neatly connects temporal representation and consciousness. The first level of temporal representation—the ability to track change—occurs when a change internal to the organism has the function of varying in accord with environmental change. At the most primitive level, these representations both describe how the world is and direct the organism to make an appropriate response. Temporality is represented implicitly in the coincidence of change in representation and change in environment, without an explicit representation of time as such.

Temporal representation becomes explicit when a creature is capable of representing future goals separately from the specific routes to achieve them. The consideration of alternate possible actions requires a representation of the future as temporally distinct from the present. Further, the evaluation of alternatives depends on an assessment of past actions in relation to the current situation in order to choose the best course of action. Conscious states represent the present moment in order to facilitate the evaluative function of past assessment and future planning that arises when a creature becomes capable of explicitly representing its goals.

Two features of the theory are especially useful in connecting philosophical work on representation to empirical research on consciousness and cognition. First, the theory offers a developmental approach to the mind which stresses the continuity of representational capacity from simple to sophisticated creatures, and yet articulates critical junctures that mark the acquisition of more complex abilities. This continuity allows for further connections to be drawn between consciousness theory and animal cognition research. Presently, the inability of non-linguistic creatures to report on their experience is an obstacle to defending claims about animal consciousness. By adopting my account, we can look for behavioral flexibility and resources for shifting and maintaining attention as indicators of explicit goal state representation and therefore, by hypothesis, conscious states.

A second valuable feature of the temporal theory of consciousness is its functional approach. A representation of the present moment is constrained in various ways by the structural requirements of the task, as described in Section 4. Multimodal representations of events need to be selected, integrated and adjusted to form the best interpretation of 'now'. This hypothesis is consistent with neuropsychological theories that stress integration and selection as central functions of consciousness, such as the Global Workspace Theory and neural Darwinism.32 Furthermore, the identification of consciousness with the capacity for explicit goal representation provides a potential diagnostic tool for assessing consciousness in patients in a vegetative state. Not only are vegetative patients unable to report on their experience, they exhibit no purposive behavior at all. Yet a recent study by Owen et al. (2006) suggests that vegetative patients are able to imagine behavior in response to instructions. fMRI scans of a patient told to imagine playing tennis matched the scans of healthy volunteers who performed the same mental task. Because imagination of this kind requires voluntary control, the ability of the patient to complete the task shows she is capable of explicit goal representation, and therefore, by hypothesis, she has conscious states.

The most important contribution a philosophical theory can make to the study of consciousness is to show how empirical work on different aspects of the mind applies to consciousness. A focus on the representational transition from pushmi-pullyu representation to explicit goal representation provides a developmental wedge to organize sets of coincident capacities: inflexible/flexible behavior, implicit/explicit temporal representation, unconscious/conscious states. Further empirical research is needed in fields such as animal cognition, neuropsychology, memory and learning theories, and computational/connectionist theory to determine the extent to which these capacities cohere, as well as when and why they come apart. The value of the theory rests in its potential to forge these interdisciplinary connections.

31 Flanagan (2000) develops a version of this position.
32 For Global Workspace Theory see Baars (1997), and for neural Darwinism see Edelman and Tononi (2000).

References

Anderson, M. L., Josyula, D. P., Okamoto, Y. A., & Perlis, D. (2002). Time-situated agency: Active logic and intention formation. Presented at the 25th German conference on artificial intelligence. Available from http://www.cs.umd.edu/~anderson/papers/gcai2002.pdf.
Baars, B. (1997). In the theater of consciousness: The workspace of the mind. Oxford: Oxford University Press.
Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18, 227–247.
Broad, C. D. (1923). Scientific thought. London: Routledge and Kegan Paul.
Chalmers, D. (1996). The conscious mind. Oxford: Oxford University Press.
Dainton, B. (2000). Stream of consciousness: Unity and continuity in conscious experience. London: Routledge.
Dennett, D. (1984). Elbow room. Cambridge, MA: MIT Press.
Dennett, D. (1991). Consciousness explained. Boston: Little Brown & Co.
Dienes, Z., & Perner, J. (1999). A theory of implicit and explicit knowledge. Behavioral and Brain Sciences, 22, 735–808.
Droege, P. (2003). Caging the beast: A theory of sensory consciousness. Amsterdam: John Benjamins Publishing.
Eagleman, D., & Sejnowski, T. (2000). Motion integration and postdiction in visual awareness. Science, 287(5460), 2036.
Edelman, G., & Tononi, G. (2000). Reentry and the dynamic core: Neural correlates of conscious experience. In Thomas Metzinger (Ed.), Neural correlates of consciousness (pp. 139–151). Cambridge, MA: MIT Press.
Flanagan, O. (2000). Dreaming souls: Sleep, dreams and the evolution of the conscious mind. Oxford: Oxford University Press.
Gallagher, S. (2003). Sync-ing in the stream of experience: Time-consciousness in Broad, Husserl, and Dainton. Psyche, 9(10), 1–14.
Gallistel, C. R. (1990). The organization of learning. Cambridge, MA: MIT Press.
Gibson, J. J. (1986). The ecological approach to visual perception. Hillsdale, NJ: Lawrence Erlbaum Assoc.
Grush, R. (2005). Internal models and the construction of time: Generalizing from state estimation to trajectory estimation to address temporal features of perception, including temporal illusions. Journal of Neural Engineering, 2(3), 209–218.
Husserl, E. (1905). On the phenomenology of the consciousness of internal time (1893–1917) (John Barnett Brough, Trans.). Dordrecht: Kluwer Academic.
James, W. (1890). The principles of psychology (Vol. 1). New York: Dover.
Kelly, S. D. (2005). The puzzle of temporal experience. In Andrew Brook & Kathleen Akins (Eds.), Cognition and the brain: The philosophy and neuroscience movement (pp. 208–238). Cambridge: Cambridge University Press.
Mandik, P., Collins, M., & Vereschagin, A. (2007). Evolving artificial minds and brains. In M. Keerthi & D. Hujdh (Eds.), Mental states. Vol. 1: Nature, function, evolution. Amsterdam: John Benjamins.
McGinn, C. (1999). The mysterious flame: Conscious minds in a material world. New York: Basic Books.
McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746–748.
Millikan, R. G. (1984). Language, thought, and other biological categories. Cambridge, MA: MIT Press.
Millikan, R. G. (1989). Biosemantics. Journal of Philosophy, 86(6), 281–297.
Millikan, R. G. (1993). White queen psychology and other essays for Alice. Cambridge, MA: MIT Press.
Millikan, R. G. (2000). On clear and confused ideas. Cambridge: Cambridge University Press.
Millikan, R. G. (2004). Varieties of meaning. Cambridge, MA: MIT Press.
Mozersky, J. M. (2006). A tenseless account of the presence of experience. Philosophical Studies, 129, 441–476.
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–450.
Nijhawan, R. (1994). Motion extrapolation in catching. Nature, 370, 256–257.
Opie, J., & O'Brien, G. (1999). A connectionist theory of phenomenal experience. Behavioral and Brain Sciences, 22, 127–196.
Owen, A., Coleman, M., Boly, M., Davis, M., Laureys, S., & Pickard, J. (2006). Detecting awareness in the vegetative state. Science, 313, 1402.
Pockett, S. (2003). How long is 'now'? Phenomenology and the specious present. Phenomenology and the Cognitive Sciences, 2, 55–68.
Revonsuo, A. (2006). Inner presence: Consciousness as a biological phenomenon. Cambridge, MA: MIT Press.
Rosenthal, D. M. (1993). State consciousness and transitive consciousness. Consciousness and Cognition, 2, 355–363.


Rosenthal, D. M. (1997). A theory of consciousness. In Ned Block, Owen Flanagan, & Güven Güzuldere (Eds.), The nature of consciousness: Philosophical debates (pp. 729–754). Cambridge, MA: MIT Press.
Searle, J. (1983). Intentionality: An essay in the philosophy of mind. Cambridge: Cambridge University Press.
Smythies, J. (2005). How the brain decides what we see. Journal of the Royal Society of Medicine, 98, 18–20.
Tye, M. (2003). Consciousness and persons: Unity and identity. Cambridge, MA: MIT Press.