Untangling two questions about mental representation

William Ramsey, Department of Philosophy, University of Nevada, Las Vegas, 4505 Maryland Parkway, Box 455028, Las Vegas, NV 89154-5028, USA

Keywords: Representation; Cognitive maps; Content; Surrogative reasoning; Functional role

Abstract

In their efforts to provide a naturalistic account of mental representation, both cognitive researchers and philosophers have failed to properly address an important distinction between two core dimensions of representation: the functional role of representing on the one hand, and the content associated with that role on the other hand. Instead, accounts of representation tend to either conflate these two or ignore the functional role aspect altogether. Here it is argued that by properly separating these two dimensions, we can gain a better understanding of the actual challenge we confront in explaining mental representation. Moreover, it is suggested that certain theories that have traditionally been viewed as competing accounts of representation should instead be treated as complementary accounts of these different dimensions. It is shown that by adopting this perspective, we can overcome certain traditional problems and also improve our understanding of empirical models of cognition, such as those that invoke cognitive maps in the hippocampus of animal brains.

1. Introduction

An old rule of thumb is that one should be clear on exactly what the question is before trying to provide an answer. An extension of this principle would be that it is important not to treat answers to different questions as competing answers to the same question. Below, I want to claim that something like this conflation of questions and answers has occurred in our attempts to make sense of the nature of mental representation. In particular, I want to suggest that writers have failed to properly distinguish two different questions about mental representation and, as a consequence, have mistakenly viewed potential answers to these different questions as competing answers to the same question. In other words, theories that perhaps should be treated as mutually supporting hypotheses about different aspects of mental representation have instead been treated as mutually exclusive theories about the same aspect. I'll suggest that we can make at least some progress by untangling these issues and developing a more comprehensive outlook regarding mental representation.

This essay has the following organization. In the next section I'm going to clarify the two dimensions of representation that I believe have not been properly differentiated in the past. These are the functional role of representing on the one hand, and the content associated with such a role on the other hand. After trying to make this distinction clearer, I'll offer some speculative comments on why people may have ignored it in the past. Then, in Section 3, I'll turn my attention to two popular families of theories that have figured prominently in accounts of mental representation. These two can be roughly described as causal/informational accounts and model- or simulation-based accounts. Although these two kinds of theories have normally been regarded as competing accounts of representational content, I'll recommend in Section 4 that we treat them as complementary accounts of the two dimensions of representation that I discuss in Section 2. I'll try to show how the model story provides a sensible proposal about what it is for a brain structure to function as a representation, as long as it can be supplemented with a causal/informational account of content.

By adopting this perspective, we can develop a sort of hybrid view of representation that allows each theory to avoid a central problem traditionally associated with it. In the final Section 5, I'll offer a brief sketch of cognitive science research that makes the most sense when interpreted in this way.

2. Two dimensions of mental representation

The distinction I want to emphasize involves two core dimensions of simple cognitive representations. By "core dimensions", I am referring to aspects that are central to our basic understanding of what a representation actually is. The first of these is the set of conditions that make it the case that something is functioning as a representational state. In other words, it is the set of relations or properties that bestow upon some structure the role of representing. The second is the set of conditions that make it the case that something functioning as a representation has the specific content that it does. In other words, it is the set of relations or features that bestow upon some representational structure its specific representational content. While these two dimensions are certainly very closely related, and may even overlap to some degree or perhaps require one another, they are not the same thing, and below I will reflect a little on the difference.

But first I also need to clarify what I mean by "simple" representations. I mean relatively low-level representational states and structures that might be possessed by both humans and various animals. Here, Dennett's distinction between the personal and sub-personal levels of psychological explanation is helpful. The former is relevant to the discussion of persons and agency, and here we often appeal to folk psychological notions of representation like beliefs and desires to explain why someone did something. Mental representations in this sense are typically consciously accessible and might be used by an agent to justify a certain behavior. By contrast, sub-personal psychological explanations involve a lower, causal-physical level of analysis, including neurological or computational mechanisms invoked to explain various cognitive capacities. Within the so-called "information-processing" paradigm, representational states appear here as well, often as explanatory posits. They may be part of an account of commonsense representational states like beliefs, but they are not conceptually the same, and they play an explanatory role that is mechanistic as well as intentional. By 'simple representations' I am referring to these representational posits that appear in the more sub-personal explanations of cognitive capacities and processes. Hence, more sophisticated mental phenomena, like full-blown personal agency, or the conscious interpretation of a representation, need not be a part of an account of how such representations work.

Returning to the core dimensions, consider the first. Here we are talking about those conditions and features of a state or structure that give rise to its having a representational function (in the teleological sense). Many have argued that representation is a functional kind, and I believe this assessment is correct. As John Haugeland puts it,

"representing is a functional status or role of a certain sort, and to be a representation is to have that status or role" (1991, p. 69). So understanding this dimension means understanding those conditions and features that make it the case that, say, a neurological state or structure functions as a representational state, and not as something else.

With regard to cognitive representation, there are at least two reasons to think that providing such an account in naturalistic terms is going to be tricky business. First, the sorts of roles we ordinarily associate with representation are not easily cashed out in causal-physical terms. When we think of representations, we think of things that perform tasks like "standing for something else" or "informing" or "signifying" and such. Yet it is far from clear just how these sorts of tasks could be implemented in a purely physical system. These are not conventional causal roles like pushing or pulling. Second, as Godfrey-Smith (2006) has plausibly argued, our understanding of representation in the brain appears to be modeled on our experience with non-mental representations out in the world. But in the case of the latter, there is a key component of that role that is not allowed in the case of internal representation; namely, a full-blown interpreting mind. We obviously cannot explain inner representation by invoking a full-blown homuncular agent inside our brains that is cognizant of internal representations and interprets them and uses them in just the way we use external representations. If we did invoke such an internal agent, nothing would be explained and we would obviously invite a regress problem. That is not to say that there cannot be some other sub-personal mechanism or sub-system that functions as, using Millikan's terminology, a representation-consumer. Nor is it to say that the inner representation is not functioning internally, in some way, as a representation for the overall cognitive agent. But the critical point is that the internal functioning cannot reintroduce an inner representation-interpreter/user that has all the capacities that we do.1 In other work, I have referred to this as the "job description challenge". What answering the job description challenge requires is that we explain how something in the brain can function as a representation without something else in the brain functioning as a conscious representation-interpreter. Here, we can simply call this dimension the "functional role" dimension.

Consider now the second dimension: the dimension pertaining to representational content. Here, rather than explaining a certain type of role, we are interested in a certain type of relation; namely, the content relation that exists between a representation and its intentional object. Our question is, what conditions or properties or relations make a representation about what it is about? Proposed answers to this question are often called "theories of content determination" or "naturalistic theories of content". As Von Eckardt puts it, "a [theory of content determination] is a proposed answer to the following question: In virtue of what do our mental representations have the semantic properties they apparently have?" (1993, p. 198).

1 Dennett (1978) has famously argued that inner homunculi are not problematic, as long as they can be properly discharged through functional analysis. However, no one has offered a plausible account of how a symbol-interpreting or meaning-assigning homunculus might be functionally decomposed.

For example, some philosophers have claimed that the content relation between a mental representation and its intentional object is reducible to some sort of informational relation, where information is then cashed out as a type of statistical correlation or nomic dependency. Others have claimed that content is grounded in some type of similarity relation, where the representation shares some sort of structural similarity with the thing represented. As with representational function, here again there are distinct challenges for the naturalist. It is far from clear how all the different aspects of intentionality, such as the specificity of content or the capacity for false representation, can be fully explained in terms of natural conditions. And, once again, this must be done without invoking an interpreting mind as a representation-user – something that, in the case of non-mental representation, allows us to explain at least some aspects of content. We can call this the "content grounding" dimension.

We can better see the distinction I am emphasizing if we consider examples of non-mental representation. Take, for instance, a thermometer. The mercury in the thermometer plays a representational role because we use its position to tell us about the current ambient temperature. We use it as an informer, as something that tells us how hot or cold it is. One can understand this functional role without understanding exactly how the mercury has the content that it does. That is, we can understand how it functions as a temperature informer without understanding the special link it has to temperature that permits such a role. Alternatively, you can know that the position of the mercury is nomically dependent upon ambient temperature without having a clear sense of how the thermometer actually plays a representational role, or even that it is playing such a role. Understanding how a physical structure actually functions as a representation is not the same thing as understanding the nature of the relationship in virtue of which it represents one sort of thing and not something else.

Public language also reveals the sort of distinction I am emphasizing. Linguists and philosophers often put forward one sort of account of how linguistic symbols come to function as representational elements, and a different sort of account of what underlies the reference relation between those same words and the things that they stand for. For example, you might claim that words function as symbols through linguistic convention, whereby members of a linguistic community share in the understanding that particular words stand for particular things. But you might also, quite independently of this theory, embrace one of a variety of different theories of reference. You might, for example, embrace the causal theory of reference, whereby you claim that, along with linguistic conventions, words designate particular things by virtue of some elaborate set of causal relations. These two sorts of accounts complement one another by explaining different aspects of language. No one would think that the causal theory of reference tells us everything about how words actually serve as representations – that is, come to play the role of symbols. The latter story is different from, though very closely related to, the former story.

It should be noted that the distinction I am emphasizing here is somewhat similar to, but nonetheless different from, other distinctions that have received greater philosophical attention. For example, the distinction I am emphasizing is similar to the distinction others have made between the representational vehicle and that vehicle's content. However, in discussions of representational vehicles, the emphasis is typically upon form rather than function. That is, while there are various competing accounts of the sort of structure mental representations have – whether they have, for example, a quasi-linguistic structure or not – there has been less discussion of the more basic question of how some neural state actually comes to play a representational role in the first place. The distinction I'm emphasizing is also different from the distinction between what is often referred to as narrow and wide content. The narrow-wide distinction pertains to different types (or perhaps different aspects) of content. The distinction I'm emphasizing is between matters pertaining to a representation's content (whether narrow or wide) and matters pertaining to the functional role of representing.

When explicitly distinguished in this way, these two dimensions of mental representation reveal two closely connected but nevertheless distinct research questions that we should be trying to answer: 1) What makes it the case that a physical state or structure (such as a neurological state or structure) functions as a representational state (and not something else)? And 2) what makes it the case that something functioning as a representational state has the content it does (and not some other content)? Answering only one of these questions is insufficient for a full and complete understanding of mental representation. We need both an account of how components of our nervous system actually serve as representations and an account of how, when they do so, they come to have the content they have. Without some reason to think otherwise, we should treat the conditions that give rise to the functional role dimension as possibly quite different from the sort of conditions that give rise to the content grounding dimension.

Yet, in various accounts of the nature of mental representation, authors have often failed to do so. Instead, in many accounts of mental representation, the functional role of representing is more or less blurred together with the issue of content, or, alternatively, ignored altogether. It is worth pausing to consider some reasons why this might be so. In the process, I hope to make this oversight – or, more specifically, the fact that there is an oversight – more evident.

One reason philosophers might conflate matters of representational function and content is that they assume that a theory of how a neural state comes to stand in a semantic relation to its intentional object can also be used as a theory of what it is for some neural state to function as a representation. Some even explicitly suggest that the matter of explaining representational status collapses into the matter of explaining content. Fodor, for example, tells us that in computational accounts of cognition, "[i]n effect, what is proposed is a reduction of the problem what makes mental states intentional to the problem what bestows semantic properties on (fixes the interpretation of) a symbol" (1980, p. 431).

Here, Fodor at least implies that the problem of explaining how cognitive states come to serve as representations reduces to the problem of explaining how they get their semantic properties. Yet, as I hope the discussion of non-mental representation makes clear, there are good reasons for thinking that the one problem does not reduce to the other, even though there might be some overlap. You could solve the functional role problem by learning that something functions in a specific way as a detector of something else, but nevertheless remain clueless about what it is a detector of, or how it is linked to the thing it detects. Going the other way, one can learn that something is a representational device of some sort, and that it has a specific content by virtue of certain causal links, but remain uncertain about how it is actually used by the cognitive system it serves. In other words, knowing that something represents X by virtue of content-grounding relation Y does not tell us anything about how that something functions as a representation of X in a given system.

A second reason may be that people assume that the functional role dimension is already fully understood, an assumption motivated by functionalism in the philosophy of mind and the degree to which philosophers have focused upon belief states. Commonsense functionalists claim that all mental states, including different sorts of mental representations, are defined by the functional relations specified by commonsense psychology. Thus, beliefs are defined as states that are generated by certain perceptual conditions, interact with desires to generate various forms of behavior, and so on. If your theorizing about mental representation is dominated by theorizing about beliefs, and if you think that commonsense psychology provides an account of what it is for something to function as a belief, then you might think that questions about what it is for something to function as a mental representation have already been answered by our folk psychology. Insofar as computational systems can replicate these relations, you might also think that computational psychology has captured this functionality in computational terms. Yet the problem with this perspective is that, besides ignoring the wide array of different sorts of mental representations beyond those invoked by commonsense psychology, it focuses on the wrong sort of functional role. What commonsense functionalism provides is the sort of functional role that justifies treating something as a belief type of mental representation as opposed to a different sort of mental representation, or a different sort of mental state. It tells us the roles associated with being a belief state as opposed to, say, a desire state or a pain state. The roles provided by commonsense psychology are those that distinguish different types of mental representations. What we need, and what isn't provided by commonsense psychology, is, more generally, the sort of physical condition that makes something a representational state, period. In functional terms, we would like to know what different types of representations perhaps have in common, qua representation. Neither commonsense psychology nor computationalism tells us much about the sort of causal/physical conditions that bestow upon brain states the functional role of representing (at least not directly).

Another reason why questions about representational function and representational content may be merged together is that various theories of representation deliberately intertwine the two. For example, theories of "functional role semantics" claim that a determinant of content is the functionality or causal role of the representation (Block, 1986). Moreover, so-called "teleosemantic" accounts of content claim that part of what grounds content is the fact that the internal state in question has acquired a specific functional role through some sort of evolutionary or learning process. Consequently, if you are a proponent of one of these views, it might be tempting to think that explaining content and explaining representational function amount to pretty much the same thing. Yet they do not. Insisting that a state's functionality contributes to its content, as these theories do, is not the same as showing that an account of representational content is also an account of representational function. Functional role semantics and teleosemantic accounts are at best theories of content determination. They do not tell us the conditions that explain how a neural state actually functions as a representational state, as such. In fact, teleosemantic theories merely supplement content grounding theories – especially informational ones – so that they can account for misrepresentation. They do this by treating cases of misrepresentation as cases of the relevant structure or state malfunctioning – of responding to something to which it was not selected (by some evolutionary or learning process) to respond. As Millikan puts it, "[w]hat teleological theories have in common is not any view about the nature of representational content; that is, about what makes a mental representation represent something. What they have in common is only a view about how falseness in representations is possible" (2009, p. 394). Functional theories of content typically presuppose that neural states function as representations; they do not explain how this happens.

One final reason people may treat a theory of representational function as the same as a theory of content is that it is difficult to think of one without the other. For example, Millikan asks us to imagine a neural state or structure satisfying the conditions for representational function without satisfying the conditions for representational content. In such a case, as Karen Neander notes, "something could count as a representation without it representing anything, which is nonsense" (2009). However, if this point is taken to show that representational function and content are conceptually inseparable, it strikes me as misguided. For something to actually qualify as a full-blown representation, both the functional role and the content conditions would need to be satisfied; but that doesn't show that the two conditions are the same. Moreover, is it really so counter-intuitive to suppose that something could function as a representation without having any real content? In the imaginary case of Swampman (a physical replica of a real person that just pops into existence), many have claimed to hold the intuition that such a being would have internal states that are functionally similar to normal mental representations, but that would nonetheless lack any external content. For them, it is indeed possible to imagine internal states that meet the conditions for functioning as a representation without meeting the conditions for a key form of content.

Whatever the reason writers have failed to explicitly separate these different dimensions of mental representation, a complete account requires that we do so. A complete theory of mental representation is going to be more complicated than is often assumed. At the very least, it will need to tell a story about how some part of the brain can function as a representation, and also a story about how something functioning in that way comes to have the content it does. The good news is that it is possible to reconceive some pre-existing theories that are normally treated as mutually exclusive theories of content, and instead view them as complementary theories about these different dimensions of cognitive representation. For the remainder of this paper, I will be exploring that possibility. I'll suggest that two of the most popular perspectives on cognitive representation can be viewed not as competing hypotheses about the same thing, but instead as mutually supporting perspectives about the two different aspects of representation we have been examining.

3. Two representational theories

As I suggested earlier, philosophical theories of mental representation are generally modeled upon, or at least inspired by, our experience with non-mental representations. We try to get a sense of how representation might work in the brain by looking at how it works out in the world. Two very popular families of theories of mental representation are modeled on two different sorts of non-mental representations.

First, as we noted above, there are theories inspired by our use of representational devices that exploit some sort of informational relation, where the device's states are somehow reliably or nomically dependent upon whatever it is that is represented. As we saw, thermometers are a classic example of this sort of non-mental representation. The position of the mercury depends upon the temperature, so it is described as "carrying information" about the temperature. The fact that the mercury's position is reliably dependent upon ambient temperature allows us to use the former to acquire knowledge about the latter. Philosophical theories of mental representation inspired by these sorts of things claim that something similar is going on with neural states. For example, neurons in a cognitive agent's perceptual system are often claimed to represent those things in the environment that cause them to fire at a high frequency, such as edges in the visual field or small moving objects. Much like the thermometer's mercury, the neurons' activation levels reliably correspond with specific stimuli, suggesting to many that this is at least one way neuronal states function as representations. Inspirations for this perspective in cognitive science can be found in Hubel and Wiesel (1962) and Barlow (1972). Philosophers who have promoted this way of thinking about representation include Millikan (1984, 1993), Papineau (1984), Tye (1997), and, perhaps most notably, Dretske (1988, 1996).

The second cluster of theories is inspired by our use of things like maps or models, where the representational system stands in some sort of isomorphism relation to the thing it represents, often called the "target". The lines and figures on a map of a town are configured in a way that is proportionally similar to the streets and buildings of the town; this allows us to use those lines and figures as representations of elements of that terrain, say, for navigation. A model plane is structurally similar to a real plane, and this makes it possible to use the model as a proxy, to learn things about how the actual plane will behave in certain conditions. Theories of mental representation in this group suggest that neural states represent in a similar manner, although here the details are often somewhat sketchy. In some way, at some level of analysis, neural structures are claimed to model or map various aspects of the world. Particular neural structures or states are described as playing a representational role because those structures or states are components of some "broader" neural structure or process that is similar to the target in a way that the brain can exploit. The neural element is a representation in the sense of serving as a surrogate or "stand-in" for the relevant component of the target.2 Cognitive researchers adopting this outlook include Gallistel (1998) and O'Keefe and Nadel (1978), while philosophers include Churchland (2012), Cummins (1989), Grush (2004), Ryder (2004), Swoyer (1991), and Waskan (2006).

Each of these families of theories has its own strengths and weaknesses. In the case of informational theories, they gain considerable plausibility from externalist intuitions that suggest intentionality is grounded in causal relations. Putnam's insight that meaning "ain't in the head" (1975, p. 227) stems from the intuition that content is grounded in some sort of causal relation between what is in the head and what is outside it. Informational theories of mental representation that also appeal to causal head-world relations build upon this insight. Moreover, there is a long tradition in philosophy suggesting that a kind of natural meaning occurs in the form of information, and that information arises from correlations or nomic dependency relations. Smoke carries information about fire – or indicates fire – because smoke is reliably dependent upon fire. If mental representation content can be grounded in some sort of causal or informational relation, something that occurs naturally in the world, then content can be shown to be a natural, scientifically respectable phenomenon.

2 Some (see Bickhard & Terveen, 1995) have called into question this notion of a representational stand-in. It is important to note that the term 'stand-in' in this context is not supposed to be read literally – the marks on the page of the map are not literally serving as roads you can drive on and buildings you can enter. It is instead the notion of something serving as a proxy or surrogate in the context of being an element of a model or map or simulation. It is the same sense in which we say that an actor "plays the role of" Lincoln in a play or movie. One of the first uses of this terminology was in John Haugeland's important essay "Representational Genera". As Haugeland puts it: "That which stands in for something else in this way is a representation; that which it stands in for is its content; and its standing in for that content is representing it" (Haugeland, 1991, p. 62).

Please cite this article in press as: Ramsey, W., Untangling two questions about mental representation, New Ideas in Psychology (2015), http://dx.doi.org/10.1016/j.newideapsych.2015.01.004

6

W. Ramsey / New Ideas in Psychology xxx (2015) 1e10

At the same time, as I have explained in detail elsewhere (Ramsey, 2007), when informational theories are offered not just as theories of content, but as full-blown accounts of cognitive representation, they have a tendency to reduce representation to a role that is not recognizably representational in nature. On many informational stories, alleged representations function as reliable responders that have the job of going into a specific state when and only when a given condition obtains (see, for example, Dretske, 1988). But the functional role of reliably responding to specific things, and perhaps then causing something else to happen, while no doubt important, is not (as such) a representational role.3 This is true even if the structure was selected for such a role through evolution or some sort of learning process. My immune system reliably responds to various infections, and it developed in a way that allows us to say it is supposed to respond to those infections, but it is not functioning as a representation of those infections. Of course, when we use the fact that something's states are reliably dependent upon some other condition to learn from the former about the latter, then the former is playing a representational role. But that role depends upon our use of something as an informer – it functions as a representation only because we treat its states as indicating something else. When a robust interpreting mind is removed from the picture (as it must be for mental representation), it is extremely difficult to see how that representational role can be sustained. In virtually all accounts that have been offered, the proper description of the functional role instantiated by the structure in question would be that of a reliable go-between or causal mediator. In engineering parlance, such a structure would be described as performing the role of a "relay circuit" or switch. This is no doubt an important role for neural structures to play in implementing cognitive operations. But it is not a representational role.4

Turning now to theories of representation based upon our use of maps or models, here again there are pluses and minuses. On the one hand, the idea that brains somehow construct models of the world and use these models for reasoning and planning is both plausible and widespread throughout cognitive science (see, for example, Gallistel, 1998; Johnson-Laird, 1983; Palmer, 1978). Moreover, the functional role assigned to neurons on this account is recognizably representational in nature. If neurons serve as proxies or stand-ins for aspects of a target domain, then they are clearly performing a representational job. Indeed, there are various accounts suggesting that such a surrogative role can be implemented even when there is no independent interpreting mind (see, for example, Gallistel, 1998).

3 Others who have argued this point include van Gelder (1995) and Beer (1995).

4 Think about a spark plug in an automobile engine. When performing its proper role, a spark plug is supposed to fire at a much higher rate when and only when the accelerator pedal is depressed, thereby causing increased piston activity. It is thus similar to neural structures in our perceptual system in that it, too, is supposed to become activated by specific conditions. Yet no one thinks the spark plug is playing any sort of representational role. When we treat informational stories as describing the conditions that make neural states into representations, as many authors do, we wind up with a story about the wrong sort of job – that of serving as a relay circuit or causal mediator, not any sort of representation.

In other words, it seems possible for there to be physical systems with no homunculi or internal interpreters that nevertheless employ internal structures that function as maps or models for navigation and problem-solving. When they do so, their elements are playing a recognizably representational role.

On the other hand, there is a notorious problem of content indeterminacy for any account of representation based upon structural similarity. The problem is that isomorphisms are cheap: any given map or model is going to be structurally similar to a very wide range of different things. If I draw a crude map that outlines a path to my house, the lines and figures I draw will no doubt mirror lots of different paths and terrains. Consequently, the specification of what the map is about – a path to my house – cannot be determined by simply looking at what the map mirrors; something else is needed to fix the representational content of the map's elements. When the map is used by full-blown cognitive agents like us, that something else is provided by the interpretive powers of a full-blown mind – something that can assign a specific content to the map. But in the case of mental representations, which are parts of a mind, this solution clearly won't work. For this reason, the map/model account of mental representation is widely regarded as suffering from an intractable problem of content indeterminacy. While elements of models may function as representational proxies during various sorts of cognitive operations, exactly what they represent is impossible to determine by merely focusing on the "structural" properties of the model or map itself.

4. Putting these theories in their proper place

Obviously, there is a lot more that can be and has been said about these two familiar sorts of representational theories. But I would instead like to return to my earlier suggestion that we consider how they might fare as theories about the different dimensions of cognitive representation discussed in Section 2. It is generally assumed that these two accounts are rivals – that if one is right about how representation occurs somewhere in the brain, then the other is almost certainly wrong about that sort of representation. But perhaps a more fruitful perspective would treat them not as competing theories of content or, more generally, as competing theories of representation, but instead as complementary theories about the two different dimensions of mental representation we have emphasized. Rather than treat them as alternative answers to the same question, we can view them as possible answers to the two different questions we have untangled; namely, the question of how some neural structure functions as a representation, and the question of how, when it functions that way, the structure comes to have the representational content it has.

Consider the first question about representational function. As we just saw, informational or causal accounts of representation do poorly as theories of what it is for something to function as a representation. The reason is that the sorts of conditions emphasized by these accounts – conditions like reliably responding to some proximal or distal situation – simply aren't the right kind of conditions that, taken alone, could yield a recognizably representational role for a neurological state.

The problem is that even if we grant that the neural structure has the function of responding to some things and not to other things, properly responding to only those things is hardly the same as representing those things. The function of going into a particular state when and only when a certain condition obtains is not, as such, a representational function. Hence, informational theories provide a poor answer to the question of what it is for something to function as a representation.

By contrast, if we think of mental representations not as indicators but instead as something more like elements of maps, models or simulations, then we can at least get the outlines of a story about how a part of the brain could actually function in a representational manner. Models and maps involve elements that serve as proxies or surrogates for aspects of the target domain. They allow systems to engage in what Swoyer (1991) has called "surrogative reasoning": focusing on the properties and relations in one sort of environment, the map or model environment, and then inferring analogous properties and relations in a relevantly similar environment, the represented environment. As I've argued elsewhere (Ramsey, 2007), representational elements stand for something else by standing in for something else – by playing the role of surrogates. An important but underappreciated finding from artificial intelligence is that complex systems like computers or, more importantly, brains can take advantage of the same surrogative problem-solving principles by treating internal structures as analogues of external environments. This is true even when there is no internal interpreting mind. In other words, mindless automated systems can nonetheless exploit internal structures as maps or models during various tasks, such as navigation. When they do so, the internal structures that make up those maps or models are best seen as functioning as representational stand-ins for things in the target domain. That is, when we look closely at the sort of causal/functional role such structures are playing, the most intuitive, plausible and natural interpretation of their activity is one that treats them as serving a representational function.5

5 In Ramsey (2007) I argued that this sort of interpretation is based upon a judgment call, and that a judgment call in this context is the best we can hope for. So I claim that when the role something plays is a role that, when analyzed, is merely that of a causal relay, then we do not judge it to be a representation. But when the analyzed role is more along the lines of a model or map, of some sort of analogue used in surrogative reasoning (used, say, for guidance purposes while the cognitive system navigates through a terrain), then the role is judged to be representational in nature.
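As a concrete, deliberately toy illustration of this last point, the following sketch (my own, in Python; every name in it is invented and nothing here is drawn from Ramsey, Swoyer, or Gallistel) shows a simple controller that plans a route by operating on an internal graph that stands in for a terrain. No component of the system interprets the map; the graph's elements do their representational work simply by being the structures over which the planning computation runs.

```python
from collections import deque

# Internal "map": each key is a surrogate for a place in the terrain, and
# each edge is a surrogate for a passable route between two places.
INTERNAL_MAP = {
    "den":     ["meadow"],
    "meadow":  ["den", "stream", "thicket"],
    "stream":  ["meadow", "food"],
    "thicket": ["meadow"],
    "food":    ["stream"],
}

def plan_route(start: str, goal: str) -> list[str]:
    """Surrogative reasoning: search over the internal map, not the terrain."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in INTERNAL_MAP[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return []

if __name__ == "__main__":
    # The route is worked out entirely on the internal structure and only
    # afterwards executed in the world; no inner agent "reads" the map.
    print(plan_route("den", "food"))   # ['den', 'meadow', 'stream', 'food']
```

The sketch also makes vivid the indeterminacy worry taken up next: considered purely as a structure, the same graph is isomorphic to indefinitely many terrains, so something beyond its shape must fix what it is a map of.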

Turning now to the question of how something functioning as a representation comes to have a specific content, we have seen that while maps and models may provide a good sense of how something functions as a representation, there is nevertheless a problem of content indeterminacy. How do we specify the target of the broader map/model structure? If all we appeal to in specifying the representational target is the structural similarity relation, then, as we noted earlier, there is the problem that similarity relations are famously non-unique. Any given model or map is, in various ways, similar to a wide array of different things, including things that presumably are not the target. Consequently, the sort of conditions that plausibly explain how some brain state can serve as a representation do not, by themselves, also explain how that brain state serves as a representation with specific content.

However, if we augment the map/model story with the informational/causal account, then it seems this problem can be surmounted. Intuitively, if I draw a map of the path to my house, that map is a map to my house because it is ultimately grounded in causal interactions with that particular path. As we noted earlier, at least one plausible determinant of content specificity is some form of causal relation. Thus, of the many things a model or map might be isomorphic with, the thing that is its real target is the thing to which it is properly causally connected. This might involve an etiological account whereby we appeal to the causal history that generated the relevant map in the first place. In the case of the brain, a neurological map would target that aspect of the world that played a role in its original development. For example, if a rat's internal map was developed in an effort to learn how to navigate a specific maze, then it is that particular maze that is the target. Alternatively, the map's content might be fixed through a process more along the lines of some sort of informational or nomic-dependency story. The map might function during navigation by having constituent elements that become activated whenever specific items or locations are encountered; the elements would thereby stand (in) for those specific items or locations in the environment. In this way, an animal can locate itself relative to various environmental cues. The nomic dependency relation would not be the factor that bestows upon a neural state the status of representation. It would instead, on this proposal, be the factor that makes the neural state functioning as a representation a representation of some specific aspect of the environment rather than something else. It would be the sort of thing that allows us to say what the map or model is a map or model of.6 Below, I will discuss one theory about the hippocampus that embraces this perspective.

What all this suggests is the following hybrid strategy for combining theories of mental representation: neurological (or computational) states come to function as representations by serving as elements of maps or models or simulations that are exploited in various ways by the broader mind/brain. Those models come to be models of specific sorts of things in part by being causally linked to those things. The causal links aren't what make the state in question a representation; what makes it a representation is the fact that it serves as a proxy for an aspect of whatever it is that is being mapped or modeled. But that role, as part of a map or model, doesn't allow us to fully understand the state's specific content. The content is grounded through the broader causal or informational relations between the elements of the map/model and external factors.

6 In Ramsey (2007), I suggested other ways in which the content of an internal model can be specified, such as by appealing to current usage. Here I am merely suggesting that the causal/informational account is another way this problem might be handled.

By distinguishing representational function from representational content, we can now see that both the map/model story and the causal/informational story have the potential to provide a truthful and deeper understanding of mental representation. They can both be right because each serves primarily to explain fundamentally different dimensions of cognitive representation. Of course, this doesn't solve all the problems associated with understanding representation in cognitive systems. But it provides a promising framework or outlook that has not received sufficient attention, because the matter of explaining how something functions as a representation has not been adequately distinguished from the matter of explaining content.7

A further virtue of the approach suggested here is that it provides a relatively robust way of explaining the normative dimension of representation. Famously, accounting for the possibility of error in simple representations – explaining misrepresentation – is a tricky problem.8 For example, with simple informational/causal accounts, whatever causes a tokening of the representation qualifies as its intentional object; thus, although we would like to say a cow representation misrepresents when it is triggered by, say, a distant elk, we are instead forced to say it is really a cow-or-elk representation. For informational theories, this is the well-known problem of delineating which of the many things to which a representation can respond counts as its legitimate intentional object. But with the proposal presented here, where the functional role of a representation is explained by appealing to the map/model notion, error is relatively easy to capture. A map/model can misrepresent in a myriad of ways. It can include an element that does not exist in the target domain (e.g., a building on a map that doesn't exist); it can include an element that does exist, but in a different manner than in the target domain (e.g., representing a street on a map as proportionally longer than it is); it can fail to include elements that exist in the target domain (e.g., a map that leaves off a street); and it can mix elements up (e.g., a map that has a building and a park wrongly juxtaposed). Insofar as we treat the functionality of neurological representations in terms of homomorphism, and not in terms of responsiveness, we can provide an intuitive story about error that is a story of actual misrepresentation (as opposed to mere mis-firing).
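Stated as a toy sketch (my own, with invented names and no pretense of neural realism), these failure modes are just mismatches between a model and its target:

```python
# Target terrain: landmarks and their true coordinates.
TERRAIN = {"house": (0, 0), "park": (2, 0), "shop": (2, 3)}

# Internal model: what the system's map says about the same terrain.
MODEL = {
    "house": (0, 0),
    "park":  (2, 4),   # exists, but is placed in the wrong location
    "tower": (5, 5),   # has no counterpart in the terrain at all
    # "shop" is left off the map entirely
}

def misrepresentations(model: dict, target: dict) -> list[str]:
    """List the ways the model fails to match the terrain it is a model of."""
    errors = []
    for name, pos in model.items():
        if name not in target:
            errors.append(f"'{name}' is represented but does not exist")
        elif pos != target[name]:
            errors.append(f"'{name}' exists but is represented incorrectly")
    for name in target:
        if name not in model:
            errors.append(f"'{name}' exists but is not represented")
    return errors

if __name__ == "__main__":
    for error in misrepresentations(MODEL, TERRAIN):
        print(error)
```

Each failure is defined relative to the target the structure is supposed to map, which is why it counts as genuine misrepresentation rather than the mere mis-firing of a responder.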

7 As one anonymous reviewer has noted, there might be some spillover in terms of what these different accounts explain. Even with the dual explanatory roles I have recommended here, the isomorphism relation might also be a contributing determinant of content, while causal and informational relations may contribute to something's functioning as a representational map or model. My account does not require an absolutely strict division of labor.

8 See, for example, Dretske (1986) and Fodor (1987).

5. Cognitive maps in cognitive science

To illustrate this perspective a little more, I want to briefly discuss how something close to the hybrid stance I've endorsed here already exists, and has existed for some time, in some empirical theories. An important area of cognitive ethology is animal navigation, and within this area there is considerable speculation about the use of cognitive maps and models. The notion of cognitive maps was first introduced by Tolman (1948), and since then the idea has gained steady support. In 1978, John O'Keefe and Lynn Nadel published their important book entitled "The Hippocampus as a Cognitive Map". The core claim of the book is that the hippocampus is used in animal navigation by providing various maps of an organism's environment. O'Keefe and Nadel's account has been subsequently enhanced and supplemented in various ways, and several other researchers have also proposed maps in the hippocampus and other brain structures. The specific details of these different accounts are complex and varied. Fortunately, we need not be overly concerned with the specifics to see just how much these accounts of internal cognitive maps accord with the sort of two-dimensional picture I'm proposing here.

Obviously, the claim that a neurological system uses maps, or that there is something in a brain that is structurally isomorphic to various aspects of the world, needs some qualification and elaboration. No one thinks that there are literally neurons in the brain that are spatially lined up in a manner that forms a little map of some environment. Even if this were the case, there would be no one "in" the skull to read it. So in what sense is there something in an animal's brain that is functioning like a map? The manner in which this happens varies between different accounts, but most theories adopt a more abstract conception of geometric encoding. All of the information we exploit in a cartographic map that is encoded via diagrammatic positioning can be encoded in other ways. For example, we could use some sort of Cartesian coordinate system to encode the specific and relative location of any item on a map with vector coordinates. We could use sets of such vectors in various ways – to plot out different paths, to determine one's current location, to calculate the relative position of new items, and so on. More recently, O'Keefe and others have argued that similar sorts of vector calculations are performed in the rat hippocampus (Burgess & O'Keefe, 2002). Arrays of neural networks are used both to store coordinate positions and to perform vector calculations that ultimately steer the rat through the terrain. Insofar as these neural transformations implement a coordinate geometry during navigation that reflects the structure of the items and properties of the environment, it is perfectly natural and, more importantly, explanatorily beneficial to regard such a system as functioning as a map. Specific elements of the system are thus functioning as representations of features of the target domain.9

One of the most important features of O'Keefe and Nadel's research was the discovery of so-called "place cells" in the hippocampus. These are cells that fire at a very high rate when the rat is in a specific location. Thus, the activity of these individual neurons is correlated with the rat's proximity to these location cues. Consequently, it is natural to regard them as representations of those locations, and O'Keefe and Nadel and various others have adopted this interpretation of their role.

9 For further discussion of how cognitive maps can be utilized, see Rescorla (2009).
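To make the idea of a non-diagrammatic, coordinate-style encoding a little more concrete, here is a crude sketch of my own (all names invented, and not a model of the hippocampus or of Burgess and O'Keefe's account). Locations are stored as coordinate vectors; a "place unit" becomes active when the animal is at the corresponding location, which is the sort of covariation that would ground each unit's content on the hybrid view, while the same stored vectors are used to compute headings during navigation, which is the map-like use that would give those units their representational role.

```python
import math

# The internal "map": each place unit stores a coordinate vector.
PLACE_UNITS = {"nest": (0.0, 0.0), "corner": (3.0, 0.0), "feeder": (3.0, 4.0)}

def active_units(position, radius=0.5):
    """A unit fires when the animal is near its place: the covariation that,
    on the hybrid view, fixes which location each unit is about."""
    x, y = position
    return [name for name, (px, py) in PLACE_UNITS.items()
            if math.hypot(px - x, py - y) <= radius]

def heading_to(position, goal_unit):
    """Map-like use of the same stored vectors: subtracting coordinates gives
    a displacement toward the goal, the role that makes the units map elements."""
    gx, gy = PLACE_UNITS[goal_unit]
    x, y = position
    return (gx - x, gy - y)

if __name__ == "__main__":
    position = (3.0, 0.1)
    print(active_units(position))          # ['corner']  (content via covariation)
    print(heading_to(position, "feeder"))  # (0.0, 3.9)  (role via map-like use)
```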

But this leads to some confusion about the sort of representational story that is being offered. On the one hand, neurons in the hippocampus are supposed to comprise a map of the environment. That suggests the map/model notion of representation is in play – that neurons are representations because they are elements of a map. On the other hand, neurons are described as representing places in the environment because their activation levels co-vary with proximity to those locations. That suggests the causal/informational notion of representation is at work – that neurons are representations because their activity is strongly correlated with environmental cues. So which is it?

I now hope it is clear that one perfectly sensible answer to this question is: both. The place cells are not representations because their activity is nomically dependent upon elements of the environment, as some have suggested. Instead, they are representations because they are functioning as component parts of an encoded map of the environment that the rat is trying to navigate. They are functioning as representations because they are serving as surrogative stand-ins within a broader map-like neural structure. However, it may well be the case that the particular content of this map is grounded in causal relations that exist between the neuronal map and aspects of the environment. Informational content about the rat's location is grounded in nomic dependencies that exist between neurons and specific locations.

In other words, it might be that the hippocampal map functions like certain subway maps. On some subway cars, maps of the subway route are posted that have lights corresponding to the individual stops. When the train reaches a specific subway stop, the spot designating that location on the map lights up. A natural way to look at things is the following: what makes the lighted element a representation is the fact that it is part of a map of a subway system. Were it not part of a map, it would merely be an effect of something the train does. But what makes that lighted element a representation of something on that particular line, and not some other structurally isomorphic line, is that it is causally linked to a stop on that line. When an element of the map lights up, the content of the message is "you are now at this location on the map". In accounts of the rat's hippocampus, neurons represent in much the same way. They function as representations by virtue of serving as constituents of a broader map-like neural structure that is used for navigation. But they function as representations of specific environmental places by virtue of their causal links to those locations.

6. Conclusion

To sum up, there are two distinct dimensions of mental representation that need to be properly distinguished. One dimension is the functional role of serving as a representation, and the other is the specific content that the representation possesses. Explaining one of these dimensions is not the same thing as explaining the other. Moreover, there are currently a number of theories of representation that are normally treated as mutually exclusive competitors, and two major ones focus on either causal/informational relations or some sort of isomorphism relation.

Each account has problems, but if we treat them as complementary theories about the different dimensions of representation – with the map/model story explaining representational function, and the causal/informational story explaining representational content – then at least two of the major problems disappear. Moreover, this sort of hybrid view provides us with a plausible way to interpret certain empirical theories in cognitive science, such as those about neural maps in the hippocampus. By embracing the more comprehensive outlook on mental representation described here, philosophers of psychology and cognitive scientists can see much better how to understand and theorize about this crucial aspect of the mind.

Acknowledgments

Earlier versions of this paper were presented at the Southern Society for Philosophy and Psychology Meeting, April 2010, and at the University of Nevada, Las Vegas Philosophy Colloquium, March 2011. Feedback from these audiences was quite helpful. I am also grateful to Marcin Milkowski, Robert Campbell and two anonymous reviewers for their helpful comments and suggestions.

References

Barlow, H. B. (1972). Single units and sensation: a neuron doctrine for perceptual psychology? Perception, 1, 371–394.
Beer, R. D. (1995). A dynamic systems perspective on agent-environment interaction. Artificial Intelligence, 72, 173–215.
Bickhard, M., & Terveen, L. (1995). Foundational issues in artificial intelligence and cognitive science: Impasse and solution. Amsterdam: North-Holland.
Block, N. (1986). Advertisement for a semantics for psychology. Midwest Studies in Philosophy, 10, 615–678.
Burgess, N., & O'Keefe, J. (2002). Spatial models of the hippocampus. In The handbook of brain theory and neural networks (2nd ed.). Cambridge, MA: MIT Press.
Churchland, P. (2012). Plato's camera: How the physical brain captures a landscape of abstract universals. Cambridge, MA: MIT Press.
Cummins, R. (1989). Meaning and mental representation. Cambridge, MA: MIT Press.
Dennett, D. (1978). Brainstorms. Cambridge, MA: MIT Press.
Dretske, F. (1986). Misrepresentation. In R. Bogdan (Ed.), Belief: Form, content and function (pp. 17–36). Oxford: Clarendon Press.
Dretske, F. (1988). Explaining behavior. Cambridge, MA: MIT Press.
Dretske, F. (1996). Naturalizing the mind. Cambridge, MA: MIT Press.
Fodor, J. A. (1980). Searle on what only brains can do. Behavioral and Brain Sciences, 3, 431–432.
Fodor, J. A. (1987). Psychosemantics. Cambridge, MA: MIT Press.
Gallistel, C. R. (1998). Symbolic processes in the brain: the case of insect navigation. In D. Scarborough & S. Sternberg (Eds.), An invitation to cognitive science: Vol. 4. Methods, models and conceptual issues (2nd ed., pp. 1–51). Cambridge, MA: MIT Press.
van Gelder, T. (1995). What might cognition be, if not computation? The Journal of Philosophy, 91, 345–381.
Godfrey-Smith, P. (2006). Mental representation, naturalism, and teleosemantics. In G. MacDonald & D. Papineau (Eds.), Teleosemantics (pp. 42–68). Oxford: Oxford University Press.
Grush, R. (2004). The emulation theory of representation: motor control, imagery, and perception. Behavioral and Brain Sciences, 27(3), 377–396.
Haugeland, J. (1991). Representational genera. In W. Ramsey, S. Stich, & D. Rumelhart (Eds.), Philosophy and connectionist theory (pp. 61–89). Hillsdale, NJ: Lawrence Erlbaum.
Hubel, D., & Wiesel, T. (1962). Receptive fields, binocular interaction, and functional architecture in the cat's visual cortex. Journal of Physiology, 160, 106–154.

Johnson-Laird, P. (1983). Mental models: Towards a cognitive science of language, inference and consciousness. Cambridge, MA: Harvard University Press.
Millikan, R. (1984). Language, thought and other biological categories. Cambridge, MA: MIT Press.
Millikan, R. (1993). White queen psychology and other essays for Alice. Cambridge, MA: MIT Press.
Millikan, R. (2009). Biosemantics. In B. McLaughlin (Ed.), The Oxford handbook of philosophy of mind (pp. 394–406). Oxford: Oxford University Press.
Neander, K. (2009). Teleological theories of mental content. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy (Winter 2009 ed.). http://plato.stanford.edu/archives/win2009/entries/content-teleological/
O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Oxford: Oxford University Press.
Palmer, S. (1978). Fundamental aspects of cognitive representation. In E. Rosch & E. Lloyd (Eds.), Cognition and categorization (pp. 259–303). Hillsdale, NJ: Lawrence Erlbaum.

Papineau, D. (1984). Representation and explanation. Philosophy of Science, 51(4), 550–572.
Putnam, H. (1975). The meaning of 'meaning'. In Mind, language and reality: Philosophical papers (Vol. 2). Cambridge: Cambridge University Press.
Ramsey, W. (2007). Representation reconsidered. Cambridge: Cambridge University Press.
Rescorla, M. (2009). Cognitive maps and the language of thought. The British Journal for the Philosophy of Science, 60(2), 377–407.
Ryder, D. (2004). SINBAD neurosemantics: a theory of mental representation. Mind and Language, 19(2), 211–240.
Swoyer, C. (1991). Structural representation and surrogative reasoning. Synthese, 87, 449–508.
Tolman, E. (1948). Cognitive maps in rats and men. Psychological Review, 55, 189–208.
Tye, M. (1997). Ten problems of consciousness. Cambridge, MA: MIT Press.
Von Eckardt, B. (1993). What is cognitive science? Cambridge, MA: MIT Press.
Waskan, J. (2006). Models and cognition. Cambridge, MA: MIT Press.
