Physics of Life Reviews
Review
Action semantics: A unifying conceptual framework for the selective use of multimodal and modality-specific object knowledge

Michiel van Elk a,*, Hein van Schie b, Harold Bekkering c

a University of Amsterdam, Department of Social Psychology, The Netherlands
b Behavioural Science Institute, Radboud University Nijmegen, The Netherlands
c Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
Received 8 November 2013; accepted 11 November 2013
Communicated by L. Perlovsky
Abstract

Our capacity to use tools and objects is often considered one of the hallmarks of the human species. Many objects greatly extend our bodily capabilities to act in the physical world, such as when using a hammer or a saw. In addition, humans have the remarkable capability to use objects in a flexible fashion and to combine multiple objects in complex actions. We prepare coffee, cook dinner and drive our car. In this review we propose that humans have developed declarative and procedural knowledge, i.e. action semantics that enables us to use objects in a meaningful way. A state-of-the-art review of research on object use is provided, involving behavioral, developmental, neuropsychological and neuroimaging studies. We show that research in each of these domains is characterized by similar discussions regarding (1) the role of object affordances, (2) the relation between goals and means in object use and (3) the functional and neural organization of action semantics. We propose a novel conceptual framework of action semantics to address these issues and to integrate the previous findings. We argue that action semantics entails both multimodal object representations and modality-specific sub-systems, involving manipulation knowledge, functional knowledge and representations of the sensory and proprioceptive consequences of object use. Furthermore, we argue that action semantics are hierarchically organized and selectively activated and used depending on the action intention of the actor and the current task context. Our framework presents an integrative account of multiple findings and perspectives on object use that may guide future studies in this interdisciplinary domain.

© 2013 Elsevier B.V. All rights reserved.

Keywords: Action semantics; Objects; Conceptual knowledge; Tool use; Affordances; Action planning
* Corresponding author at: University of Amsterdam, Department of Social Psychology, Weesperplein 4, 1018 XA Amsterdam, The Netherlands. Tel.: +31 205256891. E-mail address: [email protected] (M. van Elk).

http://dx.doi.org/10.1016/j.plrev.2013.11.005

Contents

1. Introduction
2. State-of-the-art research on action semantics
   2.1. Behavioral studies
        2.1.1. Object affordances
        2.1.2. Goals and grips in object use
        2.1.3. Action–language interactions
   2.2. Developmental studies
        2.2.1. Pantomimes and motor representations
        2.2.2. Means-end reasoning
        2.2.3. Functional object use
   2.3. Neuropsychological studies
        2.3.1. Apraxia
        2.3.2. Action disorganization syndrome
        2.3.3. Semantic dementia
   2.4. Neuroimaging studies
        2.4.1. Object observation and tool naming
        2.4.2. Hierarchical view of the motor system
        2.4.3. Retrieval of action semantics
3. Current debates in action semantics
   3.1. Automaticity of object affordances
   3.2. Goals and means in action planning
   3.3. Functional and neural organization of action semantics
4. Action semantics: a unifying framework
   4.1. Functional organization of action semantics
   4.2. Neural dynamics of action semantics
   4.3. Relation with current debates in action semantics
        4.3.1. Affordances
        4.3.2. Goals and means
        4.3.3. Organization of action semantics
5. Concluding remarks and future directions
References
1. Introduction

Humans show a remarkable capacity for using tools and objects.1 Parents are often amazed by the speed with which young infants learn to interact with novel objects, like a hammer, an iPad or an iPhone. As adults we continuously surround ourselves with objects that greatly extend our bodily and cognitive capabilities. Driving a car or riding a bike significantly increases our physical action radius; a mobile phone enhances our capacity for long-distance communication; and a calculator offloads the need for mental calculation. In addition, humans often use objects in a flexible and context-dependent way. For instance, a cup can be used for drinking, for storing small object parts or for catching a fly.

An intriguing question is how our ability for the flexible and context-dependent use of objects comes about. Do we rely on long-term stored knowledge about the multiple actions that objects afford? How do we select the relevant knowledge for the task at hand? Is object knowledge represented in a multimodal format, or does it entail distributed representations across modality-specific sub-systems?

Many studies have investigated the development and the neurocognitive basis of tool use from a variety of different perspectives. Developmental studies have focused, for instance, on the question of when and how infants acquire conceptual object knowledge; patient studies have identified the relation between brain damage and specific impairments in object use; and neuroimaging studies have focused on the neural correlates of retrieving object-related information. Here we propose the concept of action semantics as a unifying conceptual framework to integrate these previous findings and to highlight current debates that span different research domains.

1 Previous authors have argued for a conceptual distinction between tools and objects (e.g. [1] Rothi LJG, Ochipa C, Heilman KM. A cognitive neuropsychological model of limb praxis. Cogn Neuropsychol 1991;8:443–58), according to which tools are used to act upon recipient objects in the surrounding world (e.g. using a bottle opener to open a bottle). However, any given object can be used as a tool as well (e.g. opening a bottle with another bottle); accordingly, throughout this review we will use the terms 'tool' and 'object' interchangeably.
Starting from the premise that being able to interact effectively with the surrounding world is crucial for survival, we will argue that humans have developed declarative and procedural knowledge about objects, i.e. action semantics that enables them to use objects in a purposeful and effective manner. We prefer the term 'action semantics' to other terminology (e.g. conceptual knowledge, object knowledge, action-oriented representations) for several interrelated reasons. First, action planning and object use do not only require low-level processes of motor control, but depend on the use of semantic knowledge as well. For instance, patients with semantic dementia are often characterized by a general loss of semantic knowledge and may show selective impairments in their ability to use objects in a meaningful fashion [2,3]. Vice versa, semantic processing taps into the resources of the action system as well: the processing of action-related words, for instance, has been associated with activation in motor-related brain areas involved in the execution of those same actions [4]. Accordingly, the term 'action semantics' implies that action and semantics are strongly interlinked [5]. As such, the term has a general scope, which is reflected in its use by different authors to refer to interrelated domains [1,4,6–20]. It involves not only knowledge about the use of objects, but encompasses action–language interactions and the use of action semantics for understanding others' actions as well.

The main innovation of our conceptual framework of 'action semantics' is that it directly addresses and integrates findings from different research domains, involving behavioral, developmental, neuropsychological and neuroimaging studies. Throughout this review we will argue that each of these domains is characterized by similar discussions regarding the following three topics. First, with respect to object affordances, an important question is to what extent affordances for grasping and interacting with objects are automatically activated or dependent on top-down processing. Second, with respect to the relation between goals and means, discussion focuses on the question of how the temporal and spatial coordination that is characteristic of object-directed actions comes about (i.e. does it entail a hierarchical organization or does it rely on general problem-solving abilities?). Third, a recurring discussion regarding the nature of semantics for action focuses on the question of whether semantics are distributed across modality-specific brain systems or represented in amodal or multimodal association areas.

In this review, we propose that action semantics entails both multimodal object representations and modality-specific sub-systems, representing functional knowledge, manipulation knowledge and the sensory and proprioceptive consequences associated with object use. We will argue that action semantics are hierarchically organized around the goal locations or end postures associated with using objects. Furthermore, we argue that action semantics are selectively activated depending on the action intention of the actor and the context in which the action takes place. Thereby, our framework makes an important contribution to the literature by highlighting the central role of intentions and context in the activation of object knowledge.

In sum, in this review we will first provide a state-of-the-art overview of research on action semantics, involving behavioral, developmental, neuropsychological and neuroimaging studies.
For each of these domains, our review will follow a similar structure in which we discuss relevant findings with respect to (1) the role of object affordances, (2) the relation between goals and means in action planning and (3) the importance and nature of semantics in action. Next, the current debates in the different research domains will be highlighted. Finally, the different research lines will be integrated in a unifying conceptual framework, taking into account the role of context and the importance of action intentions for the selection of action semantics.

2. State-of-the-art research on action semantics

2.1. Behavioral studies

Insight into several important principles underlying action planning and object use has been obtained from behavioral studies. We will discuss relevant findings on (1) object affordances, (2) action goals and grips and (3) action–language interactions.

2.1.1. Object affordances

Object affordances can be defined as the specific possibilities that objects offer for bodily interaction. For instance, a cup affords drinking, a pair of scissors affords cutting, and a comb affords combing one's hair. The notion that objects afford specific actions is well illustrated by patients who display utilization behavior following frontal lobe damage [21,22]. When presented with well-known objects, these patients are compelled to grasp and use the objects, even when they are instructed not to do so.
Lhermitte (1983) argued that this behavior is related to a release of frontal control over parietal areas, resulting in the automatic activation of the motor programs associated with using objects.

In the last 15 years, an important discussion in psychological research has focused on the question of whether object affordances are automatically perceived and activated. Several studies have suggested that the mere observation of objects automatically results in the activation of the motor programs associated with using these objects [23–27]. For instance, it has been found that participants responded faster in a categorization task when the grip type used for responding was congruent with the size of the object being categorized (e.g. faster responding to the presentation of car keys when making a precision grip; [25]). Other studies have shown effects of spatial alignment evoked by the presentation of manipulable objects, reflected in faster responses when a graspable though task-irrelevant feature of the object was presented at the same side as the hand used for responding [28–30]. Furthermore, it has been shown that the observation of objects can automatically potentiate the specific hand gestures associated with both the functional use of an object (i.e. a grip used for meaningful interaction with the object) and the volumetric use of an object (i.e. a grip used for lifting the object; [23,24]). Together these findings are typically interpreted as reflecting that manipulable objects are primarily processed by dorsal stream areas, thereby resulting in the automatic activation of the motor programs associated with using these objects ([31]; see also Section 2.4.1).

Other behavioral studies have provided more direct evidence for the involvement of dorsal stream areas and the importance of motor knowledge for the identification of objects. For instance, a right visual field advantage has been reported for the presentation of words referring to manipulable objects compared to animal words [32], in line with the proposed dominance of the left hemisphere in tool use representations [33]. Furthermore, experiments using advanced visual paradigms (i.e. continuous flash suppression and repetition blindness) have shown that dorsal stream areas rapidly process action affordances following the brief presentation of pictures of manipulable objects, which in turn can facilitate the identification of these objects in the ventral stream [34,35]. Motor interference due to a concurrent unrelated motor task impaired performance in a tool identification task [36], while congruent action plans for a subsequent reaching movement facilitated tool identification [37]. These findings indicate that motor representations have a functional role and can support the identification of objects.

Recently, however, the automaticity of the activation of motor affordances in response to the observation of objects has been called into question. Several studies have suggested that object affordance effects are strongly driven by contextual effects and top-down processing [38,39]. It has been found, for instance, that spatial alignment effects of graspable objects (i.e. faster responding when a graspable object part is presented at the side of responding) are only elicited when the object can be reached by the participant (i.e. within peripersonal space), but not when the object is out of reach (i.e. in extrapersonal space; [40]).
In addition, it was found that dangerous graspable objects do not automatically potentiate grasping affordances [41]. Other studies showed that spatial alignment effects do not occur for simple button press responses, but only when a grasping response is prepared that is congruent with the subsequent functional use of the object [42,43]. Interestingly, it has been found that grasping affordances were only activated when actions were planned based on a simple rule, but not during anticipatory action planning, indicating that more distal action plans may actually overrule the effects of object affordances [44]. Finally, we have shown that objects may activate functional motor programs, resulting in a facilitation of movements towards or away from the body, but only when participants are actually required to retrieve action semantics and not during a perceptual task [45]. Together these findings indicate that the activation of object affordances is strongly dependent on contextual and intentional effects, related to the objects involved and the action intentions of the actor.

2.1.2. Goals and grips in object use

In addition to retrieving an object's graspable affordances (e.g. knowing that a toothbrush should be grasped at the handle and not at the brush side), an important aspect of object use involves knowing what to do with an object (e.g. knowing that a toothbrush is typically brought towards the mouth and used with a swinging movement). In the literature these two types of knowledge have been referred to, respectively, as grip-related vs. goal-related knowledge [45–47], action knowledge vs. functional knowledge [48,49] and praxis (production system) vs. action semantics (conceptual system; [1]; see also Section 2.2.1). Throughout this review we will define the goal of an action as the spatial location towards which an action is directed, whereas a grip is defined as the hand configuration used for grasping an object. Goals can be represented at both a proximal level (e.g. reaching towards a specific object) and a distal level (e.g. moving an object to a specific body part). Grips typically involve the more proximal aspects of object use (i.e. one first has to grasp an object before it can be used).
Several studies have suggested a dominance of action goals over action grips in the planning of object-directed actions [47,50–52]. That is, the way in which an object is grasped depends upon what one intends to do with the object. This notion reflects a more general principle underlying action planning and motor control, according to which movements are typically planned based on the intended end-state or outcome of the action [53]. The 'end-state comfort effect' has been well established in behavioral studies and reflects that the way in which an object is initially grasped is determined by its subsequent use [54–57]. Based on these findings, several authors have proposed a hierarchical view of the motor system, according to which higher-level action goals or outcomes determine the selection of lower-level motor programs [53,58–60]. Applied to the use of objects, this view implies that the selection of basic motor programs and affordances for grasping objects is strongly dependent on the final goal of the action (e.g. grasping a pen to write requires a different grip than grasping a pen to move it).

In support of the hierarchical view of action planning, we have shown that planning actions based on the desired end goal is more efficient than planning an action based on the proximate grip required for grasping, and that this applies both to the use of novel objects [61,62] and to the imagined use of well-known objects [16]. Similarly, it has been found that the initiation of grasping objects for use takes longer than grasping objects for transportation, likely because using objects specifically requires the selection of a functional grasp [63]. Other studies have reported similar findings and also indicate that the selection of initial grasp types applied to everyday objects is strongly dependent upon the subsequent use of the object [64,65]. Finally, studies on action observation have indicated a similar dominance of goals over grips in the processing of object-related actions as observed in actual object use [45,47,66,67]. For instance, participants were faster in attending to goal- compared to grip-related information, and the observation of goal violations (e.g. an object presented at the incorrect goal location) resulted in a stronger interference effect than the observation of grip violations (e.g. observing an object being grasped with an incorrect grip; [47]). Together these studies underline the central importance of knowledge about the associated goal location of objects in both the execution and observation of object-directed actions.

The planning of actions based on intended outcomes likely relies on the use of an internal forward model that makes predictions regarding the future state of limb movements on the basis of efference copies of the motor commands [68–71]. A comparison between the predicted and the actual sensory consequences of one's actions allows monitoring the progress of the action towards its final goal state and adjusting one's movements if necessary. A mismatch between anticipated and observed sensory consequences (i.e. based on visual, tactile or proprioceptive information) results in a 'prediction error' signal, which in turn can be used to adjust one's movements accordingly.
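The comparator scheme just described can be made concrete in a few lines of code. The sketch below is a toy illustration under assumed dynamics and parameters (a one-dimensional limb, a simple proportional command, an arbitrary error threshold), not a model from the reviewed literature: a forward model predicts the sensory consequence of a motor command from its efference copy, the prediction is compared against noisy, occasionally perturbed feedback, and the resulting prediction error drives an online correction.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(position, command, dt=0.1):
    """Predict the next hand position from an efference copy of a
    velocity command (a deliberately minimal plant model)."""
    return position + command * dt

position, goal = 0.0, 1.0
for step in range(60):
    command = goal - position                     # command aimed at the goal state
    predicted = forward_model(position, command)  # predicted sensory consequence
    perturbation = 0.2 if step == 30 else 0.0     # unexpected push to the limb
    actual = predicted + perturbation + rng.normal(0.0, 0.005)  # noisy feedback
    error = actual - predicted                    # 'prediction error' signal
    # A large mismatch triggers an online correction; small noise is ignored.
    position = actual - error if abs(error) > 0.05 else actual

print(f"final position: {position:.3f} (goal = {goal})")
```

The same comparator motif, read as a learning signal rather than an online correction, also underlies the predictive-coding interpretation of the infant EEG findings discussed in Section 2.2.2.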
An important question is how the desired end state of an action is represented. Theories of equilibrium point control of posture and movement have suggested that the brain signals controlling movements do not encode the required forces but rather the positions or postures of the limbs [72]. On this account, movement dynamics emerge from the interaction between limb mechanics and the set equilibrium position, i.e. the position to which one wants to move. In support of this account, it has been found, for instance, that the endpoint of a reaching movement was insensitive to small perturbations applied to the moving hand [73,74]. A view similar to equilibrium point control theory can be found in posture-based accounts of action planning, according to which knowledge about specific body postures is used to plan upcoming movements [54,75]. According to posture-based theories, action plans are evaluated in terms of the efficiency of the postures required to achieve the desired outcome. It has been found, for instance, that adopting an incongruent posture impairs one's ability to judge the plausibility of observed body postures in a behavioral task [76].

Others have suggested that actions are primarily planned and represented in terms of specific action effects, i.e. the sensory consequences of one's actions ([77]; see also Section 2.2.1). That is, according to the ideomotor principle, a representation of the sensory consequences of one's movements is used to select the appropriate motor program for achieving the action. For instance, based on learned action-effect associations (e.g. pressing a specific button is associated with a specific sound), the presentation of the effect (e.g. hearing the sound) results in the retrieval of the motor programs associated with producing it [78].

Both the posture-based account and the ideomotor account of action planning have received substantial support from experimental studies (see for instance [79,80]). In fact, both views may be compatible, as adopting a specific posture is always associated with specific sensory consequences. Whereas posture-based theories put more emphasis on the embodied representation of postures in motor-related brain areas [59], ideomotor accounts focus primarily on the sensory consequences accompanying actions. As such, both theories highlight important aspects of action semantics, as many objects are associated with a specific body posture [83], a specific end-location [11,39,45] and an intended action effect ([67,81,82]; see also Section 2.4.2 and below).
2.1.3. Action–language interactions

Important insights into the central relevance of semantics for action and for successful object use have been obtained from behavioral studies focusing on the bidirectional relation between action and language. On the one hand, many studies have shown that action planning is modulated by concurrently presented semantic information [84–89]. On the other hand, it has been shown that semantic processing (e.g. reading action words) is modulated by motor-related processes [90–92]. In this review we will specifically focus on the involvement of semantics in the use of objects.

Several studies have shown that grasping kinematics are influenced by the semantic properties of distractor words presented during action preparation or execution [84,86,87]. For instance, grasping an object with the word 'large' printed on top of it resulted in an enlarged grip aperture compared to grasping an object labeled with the word 'small' [93]. Similarly, the presentation of action verbs referring to hand or foot actions (e.g. 'pick', 'kick') interfered with the subsequent execution of grasping movements, as revealed by detailed kinematic analysis [94]. More direct evidence for the functional role of semantics in object use was found in a study investigating the conditions under which participants applied a meaningful grasp to an object (i.e. grasping an object in a way that allows its subsequent functional use; [85]). It was found that a concurrent semantic task resulted in fewer functional grasps compared to a concurrent visuospatial task, indicating that the recruitment of semantic resources is necessary in order to use objects meaningfully.

Studies on object observation have also provided evidence for the involvement of action semantics in object use and identification. It has been found, for instance, that action semantics can be more easily retrieved for object pictures than for object words, suggesting a strong and direct link between visual object representations and action semantics [95]. In another study it was found that the observation of incorrect object use (e.g. grasping a hammer in an incorrect way) interfered with making a decision about the action associated with the object but not with simply naming the object, indicating that grip information influenced the retrieval of action semantic information [96]. Other studies have shown that the naming of visually presented objects was facilitated when the target object was preceded by another object associated with a similar action, suggesting a direct role for action representations in object recognition [97]. A follow-up study reported a similar effect and showed that it was specifically the grasp-related component (i.e. whether the object is grasped in a similar way) that underlay the effect [98]. These studies indicate that motoric information can actually facilitate object recognition and object naming.

In a series of behavioral studies we investigated the nature of the semantic representations involved in action planning more closely [11,39,99]. Following the hierarchical view on action planning, it was hypothesized that an important aspect of action semantics involves knowledge about the goal location typically associated with using objects.
To test this hypothesis, participants were instructed to prepare object-directed actions and to initiate the action in response to a visually presented word [11]. A priming effect of action on word processing was observed, reflected in faster responses when the word on the screen was congruent with the goal location of the intended action (e.g. a faster response to the word 'mouth' when grasping a cup). Importantly, this effect was only found when participants prepared a meaningful action and not during a simple finger-lifting response, indicating that only the meaningful use of objects selectively activated goal-relevant semantic information. In a follow-up study it was found that the preparation of unusual actions could overrule long-term semantic associations and activate semantic information relevant to the short-term action goal rather than the long-term goal association of the object (e.g. faster responding to the word 'eye' when preparing to bring a cup to the eye; [39]). Thereby these studies indicate (1) that action semantics involves semantic knowledge about the goal location associated with using objects, (2) that this knowledge shares resources with language processing and (3) that this knowledge can be flexibly updated according to one's current behavioral intentions.

2.2. Developmental studies

Developmental studies have provided important insight into the development of action semantics and the different factors involved in learning to use novel objects. We will discuss studies focusing on (1) object affordances, (2) goals and means processing and (3) functional object use.
2.2.1. Pantomimes and motor representations

An important developmental milestone for infants is their ability to grasp and manipulate objects in their environment [100]. By 22 weeks of gestation, foetuses already display intentional action planning, as reflected in different kinematics depending on the goal of the action (e.g. their mouth or their eye; [101]). Four-month-old infants plan their actions ahead based on the directionality of the target object [102], and 7- to 9-month-old infants pre-orient their hand for efficient object interaction and for self-directed action [103,104]. Furthermore, it has been found that 10-month-old infants show evidence for anticipatory action planning, as their grasping kinematics are influenced by the subsequent goal of the action [105]. Accordingly, within the first year of life infants gradually learn how to grasp and manipulate objects on the basis of the objects' physical properties [106].

Insight into the basic motor representations underlying object use in infants has been obtained from studies on gesturing and pantomimes. Classical developmental studies suggest a development from concrete sensorimotor experiences to symbolic play, reflected in a transition from object-directed actions to imagined object use [107,108]. In line with this suggestion, it has been found that concrete actions precede the development of both gestures and words [109,110]. Several studies indicate that towards the end of the first year, children are well able to use gestures to refer to objects and to produce gestures that entail imagined object representations [111,112]. However, object gestures in children are often imprecise and involve typical errors, such as the replacement of an object by a body part (e.g. using one's finger as a toothbrush instead of displaying imagined grasping of a toothbrush; [113,114]). This finding indicates that young infants are well able to imagine object use, but may have specific difficulties with activating the relevant manipulation knowledge about how an object is typically grasped.

In the last decade, several EEG studies have provided more direct evidence for the notion that young infants have already acquired basic motor representations of objects. These studies have typically focused on desynchronization in the mu- and beta-frequency bands of the EEG as a neural marker of the activation of sensorimotor areas of the infant brain (see for instance [115]). That is, the preparation and execution of movements is typically accompanied by a decrease in power in the mu- and beta-frequency bands of the EEG (i.e. mu- and beta-desynchronization; cf. [116]; see the sketch at the end of this section). Importantly, mu- and beta-desynchronization has also been associated with the observation of others' actions, and as such it provides a neural marker for the activation of rolandic motor areas in the brain [117,118]. Several studies have shown, for instance, that the observation of object-directed actions by 9-month-old infants results in mu-desynchronization, reflecting motor-related activation [81,82,119–123]. It has been found that motor activation in infants is only observed when the goal of the action can be inferred [121] and occurs irrespective of the effector used for grasping the object [120]. Furthermore, it has been found that action experience with a novel sound-making tool results in subsequent motor activation when the child is presented with the sound of the tool [82]. Interestingly, such novel action-sound associations can even be established in the absence of physical learning, by having the child simply observe a parent using the tool [81]. Finally, in a recent study it was shown that after a short period of training, 14-month-old infants showed differential mu-desynchronization in response to the perception of actions with heavy compared to light objects [124]. Together these studies provide direct support for the notion that the observation of objects, the perception of action effects associated with object use and the observation of object-directed actions result in the automatic retrieval of the motor programs associated with object grasping and object use in young infants.
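The (de)synchronization measure on which these infant EEG studies rest is straightforward to state computationally. The following sketch is a minimal illustration under assumed parameters (sampling rate and band limits are our choices; note that the infant mu rhythm is usually reported at lower frequencies, roughly 6-9 Hz, than the adult 8-13 Hz band): event-related desynchronization is the percent change in band power during an observation epoch relative to a pre-stimulus baseline, with negative values indicating desynchronization, i.e. sensorimotor activation.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Average spectral power of signal x in the [lo, hi] Hz band,
    estimated with Welch's method."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def erd(baseline_epoch, event_epoch, fs=500, band=(6.0, 9.0)):
    """Event-related desynchronization in percent: the change in band
    power during the event relative to baseline. Values below zero
    reflect a power decrease, the marker of sensorimotor activation
    used in the infant studies discussed above."""
    p_base = band_power(baseline_epoch, fs, *band)
    p_event = band_power(event_epoch, fs, *band)
    return 100.0 * (p_event - p_base) / p_base

# Example: a 1-s baseline and a 1-s observation epoch from one channel.
# erd(eeg[ch, :500], eeg[ch, 500:1000]) -> e.g. -25.0, a 25% power drop.
```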
2.2.2. Means-end reasoning

In addition to being able to grasp an object correctly, infants have to learn about the functional use of objects as well (e.g. knowing that a key is used to open a door). According to Piaget, an important aspect of functional tool use consists in the ability to apply means-end reasoning to tool use problems [108,125]. That is, children need to learn that a specific action (e.g. pulling a cloth) can serve as a means to obtain a more remote action outcome (e.g. obtaining a toy). Several studies have shown that the ability to apply means-end tool use gradually improves during the first year of life [126–129]. For instance, whereas 6-month-old infants' actions were not affected by the presence or absence of a toy on a cloth, 7- and 8-month-old infants produced means-end sequences only when a toy was present, indicating a basic understanding of the relation between their actions and the outcomes [129]. Furthermore, it has been found that means-end reasoning is more difficult for children if the tool and the goal are perceptually alike (e.g. when the tool and the goal object have the same color), suggesting a strong effect of bottom-up task-related information on tool use in infants [130]. Another study indicated that 10- and 13-month-old infants were able to extract the causal means-end relations necessary to perform a tool use task, and to extend the solution to novel problems [131].
Next to the abstract problem-solving view of tool use, other theoretical accounts of the relation between action means and end processing in infants have been proposed as well. For instance, Gergely and Csibra [132] have suggested that infants' reasoning is subject to the 'rationality principle', on the basis of which infants determine which actions and means are most rational given a certain context. Others have pointed out that the development of grammatical rules in language use and sequential effects in object use show striking similarities [133]. Accordingly, it has been suggested that tool use in infants relies on the acquisition of an 'action grammar' that specifies hierarchically organized action sequences [133], and that Broca's area subserves a general function in structuring abstract hierarchical behavior across different domains, like action, language and tool use. An alternative view can be found in the ideomotor principle, according to which actions are primarily represented in terms of the effects that they produce [78,134]. On this account, through a process of associative learning infants acquire specific response-effect associations, which in turn can guide action planning. Thus, instead of using abstract rules, action planning and object use rely on concrete sensorimotor representations of the intended action effects that guide the selection of appropriate action plans to achieve the outcome. In support of this view, it has been found, for instance, that subsequent to the observation of actions that either did or did not elicit a certain effect (e.g. producing a sound), 10-month-old infants more often performed the action that elicited the effect [135,136]. Similarly, it has been found that a novel tool that elicited an action effect facilitated 12-month-old infants' ability to perceive the action as goal-directed (see also [82,137]).

These different studies and theoretical perspectives indicate that an important skill underlying tool use in infants is the ability to perceive actions involving objects as goal-directed (i.e. involving a specific way of grasping in order to accomplish a certain goal or action outcome). Developmental studies indicate that the ability to represent the goal of actions is already present at the end of the first year of life (e.g. [138]). For instance, 12-month-old infants showed a preference for looking at indirect reaching movements only when an object was present, but not in the absence of an object [139]. Furthermore, 14-month-old infants tended to imitate intended but not accidental acts [140], and 18-month-old infants could infer the goal of uncompleted human action sequences [141]. In a series of habituation studies, infants' ability to distinguish goal-related from means-related information was directly investigated [142,143]. 9- and 5-month-old infants were presented with actions in which either the goal of the action was repeated (i.e. a hand grasping the same object) or the trajectory of the action was repeated (i.e. a hand moving along the same path). It was found that infants selectively represented the goal, but not the means, of the action, and this effect was most pronounced for the 9-month-old infants [143]. In a follow-up study, the younger infants were first given a short training with grasping, by attaching a mitten to their arm that allowed them to grasp objects.
The short experience with grasping objects strongly affected the infants' perception of subsequently presented actions, such that they showed a sensitivity to changes in goals relative to means similar to that of the older infants [142]. These studies underline the importance of the infant's own action experience and attention for the ability to infer and represent action goals (however, for a critical view see [144]). In addition to these habituation studies, it has been found that 12- but not 6-month-old infants can use action-related knowledge in a predictive way, by showing anticipatory eye movements towards the end-location of an object-directed action [145]. Together, these studies show that toward the end of their first year of life infants are well able to represent goal-related information about objects.

Studies using eye tracking and EEG measures provide further support for the notion that during the first year of life, infants acquire basic knowledge about the prototypical goal location of objects [123,146–149]. For instance, it has been found that 6-month-old infants already predict the end-location of object-directed actions, as reflected in anticipatory eye movements to the associated goal location of objects ([146]; see also [147]). In an EEG study it was found that the observation of objects moving to an incorrect goal location resulted in stronger mu-desynchronization in 9-month-old infants [123]. These findings were interpreted in terms of the framework of predictive coding [150], according to which the observation of an unexpected action results in a prediction error, reflected in stronger motor activation. Other EEG studies provide converging evidence for the idea that young infants already have semantic knowledge about the functional use of objects. For instance, 8-month-old infants showed an increase in gamma power for the observation of incomplete compared to complete action sequences [148]. Furthermore, 9-month-old but not 7-month-old infants showed an N400-like response for the observation of incorrect compared to correct actions [151]. The N400 has been associated with the detection of semantic violations in language research and thereby provides a direct measure of semantic processing in the human brain ([152]; see also Section 2.3.3). Together these studies indicate that infants already have action semantic knowledge, even before they are able to use objects themselves.
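For reference, the N400-like effect reported in these studies amounts to a condition difference in mean ERP amplitude within a latency window around 400 ms. A minimal sketch, with the window and array layout as illustrative assumptions:

```python
import numpy as np

def mean_amplitude(erp, times, window=(0.3, 0.5)):
    """Mean amplitude of an averaged waveform within a latency window
    (in seconds); 300-500 ms is a commonly used N400 window."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

def n400_effect(incorrect_trials, correct_trials, times):
    """Difference-wave statistic: mean amplitude for observing incorrect
    minus correct actions. A reliably more negative value corresponds to
    the 'N400-like response' reported for 9-month-olds."""
    erp_incorrect = incorrect_trials.mean(axis=0)  # trials x samples -> ERP
    erp_correct = correct_trials.mean(axis=0)
    return (mean_amplitude(erp_incorrect, times)
            - mean_amplitude(erp_correct, times))
```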
2.2.3. Functional object use

Although children's ability to recognize action goals and to represent goal-means relationships appears to be well established within the first year of life, the actual use of means-end relations in object interaction remains rather limited until the second year of life. It is only during the second year of life that children learn to grasp everyday objects in a meaningful way (e.g. grasping a spoon to eat; [153]). For example, 9-month-old infants always grasped a spoon with their dominant hand, irrespective of the spoon's orientation, ending up with an awkward grip when moving the spoon towards the mouth. Older infants anticipated the desired end posture (i.e. moving the food to the mouth) and adjusted their grip depending on the object's orientation. Similarly, it has been found that 12-month-old infants tended to grasp a spoon in a conventional way, even though this grasp actually hindered task performance [154]. These findings suggest that at the end of the first year infants have acquired basic semantic knowledge about the appropriate use of objects (i.e. knowing that a spoon should be brought towards the mouth), which they tend, however, to apply in a rigid and inflexible way.

As described above, in the second year of life children spontaneously begin to display a basic mastery of semantic knowledge about objects through gesturing and pretend play (e.g. pushing a car or drinking from a cup; [155]). This advancing semantic knowledge about objects becomes apparent in the planning of actions and in the imitation of object-directed actions as well. It has been found, for instance, that 9- to 20-month-old infants show better manual planning when required to grasp objects in a functional fashion [156]. Furthermore, 16-month-old infants were more likely to imitate actions that involved appropriate (e.g. drinking from a cup) compared to inappropriate (e.g. drinking from a toy car) objects [157]. These findings suggest an advanced understanding of objects and the ability to use them in a functional way in the second year of life.

2.3. Neuropsychological studies

In the last century, many important insights about the relation between brain and behavior have been obtained by studying patients who show selective behavioral deficits following brain damage. With respect to action semantics, the study of patients with (1) apraxia, (2) the action disorganization syndrome and (3) semantic dementia is highly relevant, as these patients often show selective deficits in object use and in understanding object-related actions.

2.3.1. Apraxia

Apraxia is a disorder of high-level motor control that is often observed following left-sided brain lesions [158,159]. Typically, three interrelated types of apraxic errors are described: errors in the imitation of gestures, in the performance of meaningful gestures and in the use of tools and objects [160]. Different theoretical models have been proposed to account for apraxia (see Table 1 for an overview).

An influential model of apraxia was proposed by Hugo Karl Liepmann, who argued that object use relies on the creation of a 'movement formula' which is transferred to the motor cortex [161]. The 'movement formula' was considered to be a visual mental image of the intended movement and was supposed to be instantiated in the posterior brain in the visual association cortex (i.e. in the occipital lobe).
Damage to parietal areas results in an interruption of the transfer from the mental image to the motor cortex and a subsequent impairment in the guidance of movements by the internal image. This is exemplified, for instance, by a selective impairment in generating meaningful gestures in response to verbal commands. In contrast, movements that are guided by external constraints are not impaired, and the interaction with external objects may compensate for the loss of the mental image. This condition was described by Liepmann as 'ideo-kinetic apraxia', because it represents a specific impairment in the performance of idea-guided movements [161].

A comparable account of apraxia can be found in the work of Norman Geschwind, who proposed a similar posterior-to-anterior processing stream of action control, but placed the origin of the 'movement formula' in the verbal commands that are processed by Wernicke's area in the superior temporal gyrus [158]. Damage to the transfer of verbal to motor commands due to an interruption of the arcuate fasciculus results in a failure to produce meaningful gestures based on verbal commands, which Geschwind considered to be the key characteristic of apraxia.

Kenneth Heilman criticized the Geschwind model because it accounted only for selective deficits in the performance of meaningful gestures on verbal command [159,162]. Heilman introduced the notion of 'visuo-kinesthetic engrams' (gesture engrams), which consist of stored knowledge of motor skills and which he localized to the left inferior parietal cortex. The meaningful use of objects relies on the retrieval of this action knowledge, rather than the construction of motor programs de novo each time an action is performed.
Table 1
Overview of the different theoretical models that have been proposed to account for apraxia, semantic dementia and the action disorganization syndrome.

Author                   | Key deficit                        | Associated disorder              | Neural correlates
Liepmann (1908)          | Movement Formulae                  | Visuo-kinesthetic Apraxia        | Visual Cortex
Geschwind (1985)         | Verbal Commands                    | –                                | Wernicke's area
Heilman (1982)           | Gesture Engrams                    | –                                | Inferior Parietal Lobe
Rothi (1991)             | Production System                  | Ideomotor Apraxia                | –
                         | Conceptual System                  | Ideational Apraxia               | –
Buxbaum (2001)           | Dorsal system                      | Dynamic Apraxia                  | Superior Parietal Lobe
                         | Ventral system                     | Ventral Apraxia                  | Temporal Lobe
                         | Central Praxis system              | Representational Apraxia         | Left Inferior Parietal Lobe
Goldenberg (2003)        | Mechanical Problem Solving         | –                                | Left Parietal Lobe
Goldenberg (2009)        | Coding of spatial relations        | –                                | Left Parietal Lobe
Schwartz (1991)          | Contention Scheduling              | Action Disorganization Syndrome  | Frontal Lobe
Norman and Cooper (2000) | Contention Scheduling              | Action Disorganization Syndrome  | Frontal Lobe
Warrington (1975)        | General Loss of Semantic Knowledge | Semantic Dementia                | Anterior Temporal Lobe
Patterson (2007)         | Semantic Hub                       | Semantic Dementia                | Anterior Temporal Lobe
In support of this account, damage to the inferior parietal lobe has been shown to result in difficulties with gesture discrimination, whereas more anterior lesions are associated with action planning deficits [162].

A similar proposal can be found in the cognitive model of limb praxis, according to which praxis processing involves two different systems: an action production system and a conceptual action system [1]. Damage to the production system, involving the gesture engrams, results in ideomotor apraxia (IMA), which is characterized by difficulties with producing the correct movements and gestures associated with object use. For instance, a patient with IMA showed a selective impairment in gestural behavior, such as inserting the wrong fingers in a pair of scissors [163]. In contrast, damage to the conceptual system (labeled 'action semantics' by [1]) results in ideational apraxia (IA), which is associated with impairments in recognizing the appropriate use of an object. For instance, a patient with IA showed inappropriate use of objects in natural settings, such as trying to eat with a toothbrush or to brush his teeth with a spoon [164]. On this account, damage to the action production system results in impaired knowledge of how to manipulate objects (e.g. knowing that a hammer is used with an oscillating gesture), whereas damage to the conceptual system results in impaired knowledge about the function of objects (e.g. knowing that a hammer is used for driving nails into hard material). In line with this suggestion, Buxbaum and Saffran [48] found that ideomotor apraxic patients were more strongly impaired in manipulation knowledge (e.g. knowing that using a computer keyboard and playing a piano both involve finger tapping movements) than in functional knowledge (e.g. knowing that a piano and a flute are both used for making music).

Interestingly, Buxbaum et al. also observed that apraxic patients were characterized by deficits in body part knowledge, suggesting that the tool use deficits of IMA patients may also be partly related to impairments in the body schema. This finding fits well with the observation that apraxic patients are typically impaired not only in the production of tool-related actions, but also in the imitation of meaningless gestures that do not involve the retrieval of manipulation knowledge (e.g. [165,166]). Accordingly, an updated version of the Rothi et al. [1] model was proposed, involving a dorsal system that supports a dynamic representation of the body, a ventral system that represents the functional knowledge of tools and a central praxis system containing the gesture engrams [167]. Damage to each of these systems is associated with specific functional deficits and different types of apraxia (see Table 1). For instance, damage to the dorsal system may lead to dynamic apraxia, reflected in specific impairments in the imitation of meaningless gestures [168].

Together, these different models have in common (1) that successful object use requires specific knowledge about the manipulation and function of objects, (2) that this knowledge is represented in a motoric format in the parietal lobe and (3) that apraxia is characterized by impaired (access to) manipulation or functional knowledge. These views have been challenged by an alternative account of the existing data, according to which the key deficit in apraxia lies in the ability to apply problem-solving skills to tool use problems [169] and, more specifically, to code the spatial relations between objects and the body [160].
It is argued that tool use often does not rely on the simple execution of a predefined motor plan and that the left parietal lobe, instead of representing stored knowledge about hand-object interactions, is primarily involved in determining the spatial relation between multiple objects and the body. For instance, the movements and grip type associated with using a regular screwdriver or a clockmaker's screwdriver are quite different (i.e. a full grip vs. a pincer grip, respectively), and it is unlikely that one common motor representation underlies these different actions. Goldenberg [61] suggests that selecting the optimal grip for tool use depends on a process of spatial analysis, involving the categorical apprehension of the relation between different objects and the body, which in turn determines the mechanical interaction. However, it should be noted that the 'coding of spatial relations' view selectively accounts for failures of apraxic patients to imitate meaningless gestures, while spared imitation of meaningful gestures suggests preserved action semantic knowledge in these patients [170–172].

In addition, the 'problem solving' account of apraxia relates to a more general discussion in psychology and neuroscience regarding the format of conceptual representations. On the one hand, it has been suggested that conceptual representations are modality-specific and that knowledge about objects and tools is represented in a motoric format [173–175]. On the other hand, it has been argued that conceptual knowledge is amodal [176–179] and that knowledge about tools and objects is represented in multimodal association areas [2,180,181]. We will come back to this debate when discussing neuroimaging studies on tool use, but we will first provide further evidence for specific impairments in action semantics as found in studies on the action disorganization syndrome.
2.3.2. Action disorganization syndrome

Whereas apraxia can be characterized by an impaired ability to use specific objects, the action disorganization syndrome (ADS) involves impairments in performing multi-step actions [182]. Many everyday naturalistic actions involve the manipulation and use of multiple objects in a specific sequence (e.g. making coffee, doing the laundry, preparing dinner). Patients with ADS often perform actions in the wrong order (i.e. a sequence error) or fail to include a necessary element in a chain of actions (i.e. an omission error; [183,184]). Importantly, the errors observed in ADS cannot be attributed to motor incapacity or a lack of conceptual object knowledge, as patients may still be able to produce the correct gestures associated with using objects [185,186].

The specific neurological origin of ADS is not well understood, and ADS patients have been reported with different types of aetiology, such as closed head injury [184,187], dementia [185], stroke [182,183,186] and carbon monoxide poisoning [183]. Furthermore, unlike apraxia, which is strongly associated with damage to the left hemisphere, ADS is observed following both left- and right-sided brain lesions [185,188]. ADS has often been associated with frontal lobe lesions, resulting in general impairments in planning, coordinating and controlling actions [182,183,186,189]. Accordingly, it has been suggested that patients with ADS are characterized by a general decrease in cognitive resources, evidenced in difficulties with maintaining all the information relevant for task performance [187] and with ignoring task-irrelevant distractors [190].

On the basis of the finding that ADS patients often display action errors in a selective subset of multi-step actions, several authors have proposed the notion of a hierarchical organization of action knowledge [183,191–193]. The 'contention scheduling system' (CSS) is proposed as a mechanism for the control of routine actions that monitors the execution of hierarchically organized action schemas ([191,192]; see the sketch at the end of this section). For instance, preparing a cup of tea involves multiple sub-actions (e.g. grasping a cup, pouring water, grasping a teabag, pouring a spoon of sugar), each of which consists of different action primitives (e.g. making a full grip, moving one's arm to the right, etc.). Impairments in specific nodes of this hierarchically organized network could underlie the deficits typically observed in ADS. Similar hierarchical accounts of action knowledge have been proposed by other authors as well [183,193,194], and these approaches fit well with the hierarchical view of the motor system [53,58]. However, it should be noted that the neural correlates of the action hierarchy are still not well understood (see also Section 2.4) and that alternative accounts suggesting a recurrent connectionist network underlying the action deficits in ADS have also been proposed (e.g. [195]).
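The hierarchically organized schemas assumed by contention-scheduling accounts can be captured in a small data structure. The sketch below is our illustrative rendering, not the Norman and Cooper implementation: schemas decompose into sub-schemas that bottom out in motor primitives, and 'lesioning' a single node produces the omission errors characteristic of ADS while leaving the rest of the sequence intact.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Schema:
    """A node in a hierarchically organized action schema: high-level
    schemas decompose into sub-schemas, which bottom out in primitives."""
    name: str
    children: List["Schema"] = field(default_factory=list)
    impaired: bool = False  # lesioning a node silences its whole subtree

    def execute(self, trace=None):
        trace = [] if trace is None else trace
        if self.impaired:          # an 'omission error': the subtree is skipped
            return trace
        if not self.children:      # a leaf is a motor primitive
            trace.append(self.name)
        for child in self.children:
            child.execute(trace)
        return trace

make_tea = Schema("make tea", [
    Schema("boil water", [Schema("fill kettle"), Schema("switch on kettle")]),
    Schema("prepare cup", [Schema("grasp cup"), Schema("add teabag")]),
    Schema("pour water"),
])
make_tea.children[1].children[1].impaired = True  # lesion the 'add teabag' node
print(make_tea.execute())
# ['fill kettle', 'switch on kettle', 'grasp cup', 'pour water']
# The necessary 'add teabag' step is omitted, as in an ADS omission error.
```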
2.3.3. Semantic dementia
Semantic dementia (SD) is a neurodegenerative disorder that is characterized by a gradual loss of conceptual knowledge, while other cognitive functions remain largely intact [2,196]. Patients with SD often show impaired conceptual knowledge in different domains, involving written and spoken words, pictures, sounds, smells and tastes. In some cases patients with SD also show a selective deficit in object use and may display semantic errors with objects (e.g. using a match as a cigarette; [8]). For instance, a co-occurrence of loss of conceptual knowledge and impairment in object use in SD has been reported [197]. Similarly, it was found that impairments in object use were associated with difficulties in naming these same objects [198]. However, it should be noted that SD is not always characterized by an impaired ability to correctly use well-known objects [185,199]. Furthermore, in some cases patients showed a spared ability to use specific objects despite having impaired semantic knowledge about these objects, which could be explained by the affordances of the object and the problem solving skills of the patient [198]. This suggestion is further supported by a study showing a dissociation between problem solving skills and the use of well-known objects in SD [8]. Although patients with SD showed selective impairments in using well-known objects in a meaningful way, they performed well on a task involving novel tools. Loss of conceptual knowledge in SD has been associated with cortical atrophy and hypometabolism in the ventrolateral anterior temporal lobes [200,201]. Accordingly, it has been suggested that the loss of semantic knowledge across different modality-specific domains is related to damage to this specific area, which functions as a 'semantic hub' that integrates modality-specific information [2,3,202]. According to the 'hub-and-spokes' theory of conceptual knowledge, modality-specific information regarding concepts is represented in different sensory and motor brain regions. For instance, the color and shape of objects are coded in visual brain areas [203–205], whereas the actions associated with using objects are represented in fronto-parietal brain areas [48,49,206]. These different modality-specific representations are integrated in a semantic hub, instantiated in the anterior temporal lobe, and damage to this area results in impaired access to the different modality-specific representations, which is a key characteristic of SD [2].

2.4. Neuroimaging studies
Ever since brain imaging techniques were first introduced to study the neural correlates of human cognition, researchers have investigated the neural basis of conceptual object representations [204,206,207]. This work was partly inspired by studies in neuropsychological patients, indicating that conceptual knowledge is represented across a set of modality-specific brain systems, each dedicated to the representation of one type of sensory or motor information [175,208]. Because subject movement had to be strongly restricted to avoid movement-related artifacts, early studies using PET and fMRI primarily identified the brain areas involved in object observation and covert object naming. More recently, it has become possible to study the neural correlates of grasping and object-directed actions inside the scanner, thereby providing direct insight into the brain areas supporting real object use.
Furthermore, although EEG is also highly sensitive to movement-related artifacts, new insights into the neural dynamics of object use have been obtained by focusing on the action preparation interval and the post-movement interval. In this section we will discuss neuroimaging studies on (1) object observation and tool naming, (2) the processing of action goals and means and (3) the retrieval of action semantic information (for an overview, see Table 2).2

2 Please note that we do not include studies on action observation in this review, which typically focus on identifying the existence of an 'action-observation network' or a 'mirror neuron system' in the brain. For a recent review on this topic, see: [209] Caspers S, Zilles K, Laird AR, Eickhoff SB. ALE meta-analysis of action observation and imitation in the human brain. Neuroimage 2010;50:1148–67.

2.4.1. Object observation and tool naming
Many studies on object observation have presented participants with pictures of manipulable objects and with pictures from a different semantic category (e.g. animals or fruits). Participants are asked either to passively observe the object, to name the object or to imagine using the object. Similarly, in studies on tool naming participants are presented with words referring to manipulable objects and are asked to perform an unrelated task (e.g. making a lexical decision) or to imagine using the object. Several studies have shown that the mere observation of pictures or words referring to manipulable objects results in activation of the premotor cortex (PMC) and in some cases of the supplementary motor area (SMA) [31,206,207,210–213].
Table 2
Overview of the different brain areas associated with object observation, the execution of tool gestures and the retrieval of action semantics.

Object observation:
- Premotor Cortex (PMC)/Supplementary Motor Area (SMA): object affordances
- Inferior Parietal Lobe (IPL)/Posterior Parietal Cortex (PPC): coding of hand–object interactions
- Left posterior Middle Temporal Gyrus (pMTG): visual object motion
- Ventral visual stream areas (e.g. fusiform gyrus): detailed visual analysis

Tool use/object pantomime:
- Left Inferior Parietal Lobe (IPL): retrieval of manipulation knowledge

Retrieval of action semantics:
- Inferior Parietal Lobe (IPL)/Premotor Cortex (PMC): retrieval of manipulation knowledge
- Middle Temporal Gyrus (MTG): visual object motion
- Supramarginal Gyrus (SMG): retrieval of somatosensory/proprioceptive knowledge
- Anterior Temporal Lobe (ATL): pan-modal semantic hub
In an early meta-analysis of different neuroimaging studies on object observation, the PMC and the SMA were found to be consistently activated, in addition to activations in the occipito-temporal cortex [214]. These findings are typically interpreted as reflecting the activation of object affordances for grasping and/or the retrieval of the action knowledge associated with using objects (e.g. [31]). Some studies have reported activation in parietal areas, such as the inferior parietal lobule (IPL) and the posterior parietal cortex (PPC), during the observation of tools as well [31,211,213,215,216]. Single-cell studies have shown that neurons in the monkey homologue of the IPL show a strong specificity for hand-shape in relation to the manipulation of objects [217,218]. Furthermore, clinical observations indicate that damage to these areas is often associated with ideomotor apraxia and a loss of knowledge about the appropriate motor programs for using objects [160]. Accordingly, it has been suggested that the activation of these areas in response to object observation reflects the automatic coding of hand–object interactions (see also Section 2.4.2). In addition to activation in these motor-related areas (the PMC, SMA, IPL and PPC, often commonly referred to as the fronto-parietal motor network), many studies have reported that the posterior middle temporal gyrus (pMTG) is activated during object observation [207,213,215,216,219–223]. The pMTG is a higher-order visual area that has been associated with the processing of visual motion [224]. It has been suggested that the activation of this area in response to object observation is related to the retrieval of visual motion information regarding the use of the tool [225]. Recent studies provide further evidence for the involvement of higher-order visual areas in the processing of object-related information [226–228]. Although object observation is typically considered to be mediated by dorsal stream areas, such as the IPL and the PMC [31], it was found that ventral stream areas, such as the left medial fusiform gyrus, showed a stimulus-specific reduction in activation following the repeated presentation of pictures representing manipulable objects [226]. Furthermore, it was found that disruption of the left MTG through transcranial magnetic stimulation (TMS) interfered with making judgments about the manipulability of objects [229]. These findings indicate that determining the graspable and action-relevant properties of objects requires a detailed visual analysis in higher-order visual areas. This notion is supported by the existence of direct projections from the ventral temporal cortex to the left IPL [230,231]. Interestingly, in a recent meta-analysis involving the activation-likelihood-estimation (ALE) method, it was found that the processing of action concepts resulted in a selective recruitment of areas in the vicinity of the pMTG only, and not in the activation of motor-related areas [232]. In contrast, other authors have underlined the central importance of motor-related areas for action semantic processing [174,233,234]. It could well be that whereas pMTG activation reflects an automatic response based on learned associations between objects and visual motion, activation in the fronto-parietal motor network is driven more strongly by processes supporting the top-down retrieval of action semantics (see also Section 2.4.2).
2.4.2. Hierarchical view of the motor system
In addition to investigating the neural correlates of object observation, neuroimaging studies have elucidated a fronto-parietal network underlying actual object use [235–237]. Following the hierarchical view of the motor system [53,58] and the observed clinical dissociation between manipulation knowledge (i.e. how an object should be grasped:
object means) and functional knowledge (i.e. what one should do with an object: object goals; cf. [238]), several researchers have attempted to investigate the organization of object goals and means in the fronto-parietal motor network. Supportive evidence for the notion that the motor system is organized in a hierarchical fashion was obtained from micro-stimulation studies in monkeys [239]. It was found that prolonged stimulation of the premotor cortex evoked complex movement sequences directed at a fixed end location (e.g. a sequence consisting of grasping, bringing to the mouth and opening the mouth). Based on these findings it has been suggested that the behavioral repertoire is represented as basic action primitives in premotor areas. Neuroimaging studies in humans have underlined the role of frontal motor regions, such as the PMC and the superior frontal gyrus, in representing action outcomes [61,240–243]. For instance, planning an object-directed action based on its end-location rather than on the grip-type required for grasping resulted in stronger activation in the superior frontal gyrus [61], and the transport of objects towards the final goal location was accompanied by frontal slow-wave effects that were localized to the anterior prefrontal cortex [62]. In addition to these frontal motor regions, parietal motor areas such as the IPL, the PPC and the SPL likely support both goal-related and grip-related aspects of object use. Several studies have indicated that action goals are represented in the posterior parietal cortex [61,244–246]. For instance, neurons in the posterior parietal cortex represent the monkey's intention to move [245] and respond to the goal of an action [244]. Similarly, in humans the IPS was found to be involved when subjects repeatedly observed an actor performing actions with the same goal [246,247]. Furthermore, by using multi-voxel pattern classification techniques (MVPA), it was found that activation patterns across parietal and premotor areas could actually be used to predict the upcoming action goals of object-directed movements [248] (see the sketch below for the basic decoding logic). However, in addition to representing object goals, parietal areas are also involved in representing grip-related aspects of object use. For instance, several studies have investigated the neural correlates of tool use pantomimes, by instructing subjects to perform the hand movements associated with using tools in response to visually presented words or pictures [249–254]. Brain areas found to be commonly activated during the performance of tool use pantomimes are the left IPL, the left ventral PMC and the left SMG. These findings are in line with the view that the left IPL and the PMC store the hand postures required for object interaction [168] and suggest a possible role for the left SMG in representing proprioceptive information related to tool use. Further evidence for the specific involvement of the left IPL in representing grip-related information can be found in neuroimaging studies in which participants were actually required to reach to and grasp real objects [248,255–257]. It was found, for instance, that grasping-related activation could be dissociated from a region in the intraparietal sulcus specifically dedicated to tool manipulation knowledge [256] and a region in the anterior SMG that was related to tool kinematics [255].
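To illustrate the decoding logic behind such MVPA studies [248], the following Python sketch trains a linear classifier to predict a binary action goal from multi-voxel activation patterns. The data are synthetic and all parameters are arbitrary; this is a schematic illustration of the method, not a reconstruction of the cited analysis.

```python
# Minimal sketch of MVPA-style goal decoding, in the spirit of [248].
# Synthetic data; voxel patterns and goal labels are hypothetical.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# Two action goals (e.g. two different end locations), each evoking a
# slightly different multi-voxel activation pattern.
labels = rng.integers(0, 2, n_trials)
goal_pattern = rng.normal(0, 0.5, n_voxels)              # goal-specific pattern
patterns = np.outer(labels, goal_pattern)                # signal per trial
patterns += rng.normal(0, 1.0, (n_trials, n_voxels))     # plus measurement noise

clf = make_pipeline(StandardScaler(), LinearSVC())
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")   # above chance (0.5)
```

The point is simply that goal information can be read out linearly from a distributed activation pattern even when no single voxel is diagnostic, which is the logic behind the decoding results reported above.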
Furthermore, by using repetition suppression it was found that the fronto-parietal motor network showed reduced activation when observed actions were followed by actually executed actions, but only when the actions involved familiar rather than arbitrary tools, again indicating that parietal areas selectively code for stored hand–object interactions [257].

2.4.3. Retrieval of action semantics
Several studies have investigated the brain areas associated with the explicit retrieval of action semantics in response to object pictures [49,258–261]. It has been found, for instance, that the left IPL is selectively activated during the retrieval of manipulation knowledge compared to functional knowledge [49,258,259]. Other studies have shown activation of the dorsal PMC in relation to the retrieval of action knowledge [260,261]. These findings indicate that motor-related areas are selectively activated only when participants are required to retrieve action semantic information (see also [38]). In a meta-analysis of 120 different neuroimaging studies, a broad network of brain areas associated with general semantic processing has been identified, involving the supramarginal gyrus (SMG) and the MTG [262]. It was found that specific sub-regions of this semantic network were involved in processing action knowledge and information about manipulable artifacts, notably the left pMTG and the ventral left SMG. The MTG may be specifically involved in coding the visual attributes of actions [225], whereas the SMG is a multisensory association area in close proximity to the somatosensory cortex that may represent somatosensory and proprioceptive knowledge associated with using objects. Another area associated with semantic processing is the anterior temporal lobe (ATL; [2]). Damage to this area typically results in general impairments in semantic knowledge, as observed in semantic dementia. However, individual
studies on action semantics often fail to report activation in this area, which may be related to the fact that this area is very sensitive to distortions in the magnetic field [263]. Still, a meta-analysis across different studies indicated that the ATL was significantly activated during semantic processing, irrespective of the sensory modality in which the stimuli were presented [180]. Accordingly, it has been suggested that the ATL functions as a 'semantic hub' that integrates information from different sensory modalities (for critical discussion, however, see [264,265]). Further evidence for the involvement of semantics in action processing can be found in EEG studies focusing on the N400 component [99,149,266–273]. In language research the N400 has been associated with the detection of semantic violations and the integration of semantic information within the preceding context [152]. Studies with patients with intracranial electrodes indicate that the N400 originates from the anterior temporal lobe [274,275]. Several studies have shown that the processing of semantic object violations (i.e. using an object in an inappropriate fashion) results in a stronger N400 component, indicating the involvement of semantics in the processing of these actions [149,267,270]. Furthermore, in two studies it was found that the preparation of meaningful actions with objects (e.g. bringing a cup towards the mouth) compared to meaningless actions (e.g. bringing a cup towards the eye) also modulated the N400 component in response to words or pictures presented prior to the action [99,273]. Together these studies indicate that the retrieval of action semantics is reflected in neural markers indicative of semantic processing, likely reflecting activation in left temporal areas. Action experience and familiarity likely have a strong influence on the ease with which action semantic information about objects can be retrieved. For instance, several studies have shown that a short training with a novel tool can result in effects of tool observation and action semantic retrieval comparable to those observed with well-known objects [233,276–278]. For instance, a short training in making movements towards visually presented objects resulted in a subsequent re-activation of motor programs upon the mere presentation of the objects [233]. Other studies have indicated that training in using novel objects results in the activation of the left IPL when subsequently observing the same tool [276,278]. These findings indicate a strong plasticity and a capacity for learning in the human tool use system. It has been found that the observation of familiar tools results in stronger activation in the SMG and the IPL, whereas the observation of unfamiliar tools results in increased activation in temporo-occipital areas, likely related to a more detailed visual analysis of these objects [228,279]. In contrast, making action inferences about the use of unfamiliar compared to familiar objects resulted in increased activation in the superior parietal lobe, which was probably related to a more effortful motor imagery process for unfamiliar objects [16]. In another study it was found that the observation of unfamiliar effector–object interactions (e.g. a hand near a football) compared to familiar effector–object interactions (e.g.
a hand near a tennis ball) resulted in a stronger activation of the left SMG, likely reflecting the detection of a mismatch between stored and observed proprioceptive and somatosensory consequences of object use [280]. Interestingly, in a recent EEG study participants were required to adopt either a familiar or an unfamiliar end-posture with objects [83]. It was found that the familiarity of the end-posture had a strong effect on the activation of premotor and parietal motor areas, as reflected in a stronger beta-desynchronization and a subsequent rebound for unfamiliar actions. Together these findings indicate that familiarity and frequency of object use have a strong effect on the extent to which action semantics become established in the motor system.

3. Current debates in action semantics
The different research fields on action semantics are characterized by similar discussions regarding (1) the automaticity of object affordances, (2) the processing of goals and means and (3) the functional and neural organization of action semantic representations. In this section we will highlight these discussions and indicate similarities and differences between the research fields.

3.1. Automaticity of object affordances
An important discussion in research on action semantics focuses on the question of whether object affordances are automatically perceived or whether affordances require top-down processing involving the explicit retrieval of action semantics. Developmental, behavioral and neuroimaging studies provide some evidence suggesting that the grasping affordances of objects are automatically perceived and result in the retrieval of motor-related information. For instance, in infants object perception has been associated with the automatic activation of the motor programs associated with
using objects [82,124]. Behavioral studies have shown that object perception facilitates grasping responses towards objects [23–25,27] and neuroimaging studies have shown that the mere perception of manipulable objects results in the activation of motor-related brain areas [31,206,207]. These studies indicate that the motor programs associated with grasping objects are automatically activated when perceiving objects. However, other studies have shown that the activation of motor-related information is strongly dependent on the context, the intentions of the actor and the type of objects involved. For instance, behavioral studies have shown that grasping affordances are only activated when objects are within reach [40] and when subjects actually plan to use the objects in a functional fashion [37,42]. Neuroimaging studies have indicated that motor-related areas are only activated when participants are required to retrieve manipulation but not functional knowledge about objects [49,259]. Meta-analyses of neuroimaging data also indicate that object observation does not automatically result in the activation of premotor and parietal areas [180,232]. These studies challenge the view that object affordances are automatically perceived. Accordingly, a central question in the field is how these two views can be reconciled.

3.2. Goals and means in action planning
Several theoretical accounts have been proposed in different domains to explain how people perform goal-directed actions with objects. A central and recurrent idea is that actions are hierarchically organized, such that higher-level action goals determine the selection of lower-level action means [53,58,281], and that this hierarchical organization is reflected at a neural level as well [59,60]. Developmental, behavioral and neuroimaging studies have provided (indirect) support for the hierarchical view of action and motor control. For instance, it has been found that infants selectively attend to goal-related rather than means-related information during action observation and imitation [52,142,143]. In adults, a similar dominance of goal- over grip-related information has been observed, both in action planning and in action observation [47,61]. Furthermore, neuroimaging studies indicate that the brain has specialized neural circuits for processing action goals and means [61,246,282]. Finally, the dissociation between ideomotor and ideational apraxia suggests that knowledge about object goals and means can be selectively impaired [48,167]. Together these findings fit well with the notion of an action hierarchy, reflected at both the behavioral and the neural level of investigation. However, the hierarchical view of action control has been challenged on both conceptual and theoretical grounds (for an overview, see for instance [195,283,284]). An important conceptual challenge for hierarchical views of action concerns the definition of action goals and means, which can be defined at multiple levels of complexity (i.e. high- vs. low-level goals) and at different timescales (i.e. proximate vs. distal goals; [283,285]). Within this review we have focused on goals as reflecting spatial aspects of action planning (i.e. the object that is grasped or the end-location towards which the object is moved) and on means as the specific grip applied to objects. However, in addition to these concrete movement-related goals, everyday action planning involves higher-level conceptual goals as well, such as intending to have a drink or to go to the movies.
A challenge for future research is to elucidate how the hierarchical view of the motor system can account for these high-level action goals.

3.3. Functional and neural organization of action semantics
A central discussion in research on action semantics focuses on the functional and neural organization of action semantics. According to an embodied view of cognition, action semantics are represented in modality-specific brain areas [225,286]. On this account, the activation of action semantics involves the re-enactment of previous experiences with object use, resulting in the activation of modality-specific brain areas. In line with this suggestion, many neuroimaging studies have shown that the retrieval of action semantics is associated with activation in motor-related brain areas and in areas involved in the processing of visual motion [225]. Furthermore, studies with apraxic patients indicate that damage to motor-related areas such as the parietal cortex results in a strong impairment in the ability to use objects meaningfully and to retrieve the names of tools [48,167]. In contrast, others have argued that activation in modality-specific brain areas is neither necessary nor sufficient for action semantics [2,160,178]. For instance, meta-analyses of brain imaging data provide no strong evidence for the activation of modality-specific brain areas (i.e. motor areas or areas involved in visual motion processing) in association with the retrieval of action semantics [180,232]. Instead, different neuroimaging studies point towards a core region involved in action semantic processing that is located in the anterior temporal lobe [180]. Furthermore, there is no clear one-to-one relation between damage to motor-related areas in neuropsychological patients and loss of action
semantic knowledge [178]. Instead, studies with patients with semantic dementia indicate that damage to temporal areas results in a general loss of semantic knowledge that may involve action semantics as well [2]. Based on these findings a complementary view of action semantic representations has been proposed, known as the 'hub-and-spokes' model, according to which semantics are represented across modality-specific 'spokes' that converge in a multimodal association area (i.e. the 'semantic hub') that provides an amodal representation of semantic information [2,287]. The embodied view of cognition may be compatible with the 'semantic hub' view of action semantics, insofar as the semantic hub hypothesis allows for the possibility of modality-specific representations in action semantics (i.e. as the 'spokes' of the wheel). However, an important difference between these two views is that, according to the 'semantic hub' view, activation in modality-specific brain areas is not constitutive of action semantics. A similar theoretical positioning can be observed in research on language, in which the embodied view of language implies that modality-specific brain processing is necessary for linguistic understanding [288], whereas an amodal view of language suggests that modality-specific effects in language are merely epiphenomenal and may reflect post-lexical mental imagery [178]. More recently, a hybrid model has been proposed, according to which language processing involves both modality-specific and amodal processing (e.g. see [289,290]). A comparable account has been proposed for the processing of visual semantics, according to which the processing of words referring to concrete objects relies on both lexical-semantic and visual information [291]. On these accounts, amodal processes function as a 'heuristic' that allows fast and multimodal access to semantic representations. A similar mechanism could play a role in the representation of action semantics, in which an amodal semantic hub provides direct access to modality-specific action semantic representations [2]. Recently the 'hub-and-spokes' model of action semantics has been criticized, because a careful analysis of the available neuropsychological data indicates that a general impairment in semantic knowledge is only observed in the advanced stages of semantic dementia [264]. The mild and intermediate stages of semantic dementia are often characterized by modality-specific semantic impairments, such as a loss of pictorial knowledge. Accordingly, rather than reflecting an amodal semantic hub, the anterior temporal lobe could be considered a higher-order convergence zone [292], integrating multisensory information [265] and enabling the re-enactment of sensorimotor experiences [293]. The crucial difference between this 'convergence zone' view and the 'semantic hub' account is that, rather than reflecting amodal processing, the anterior temporal lobes are primarily involved in integrating modality-specific information. As such, the discussion on the semantic hub view may be primarily related to a definitional question regarding the label applied to higher-order association areas (e.g. 'amodal', 'modality-unspecific' or 'multimodal').

4. Action semantics: a unifying framework
4.1. Functional organization of action semantics
In addition to providing a review of research on action semantics, a main aim of this review is to introduce a novel conceptual framework that integrates previous findings from different research domains and that resolves discussions regarding (1) the role of object affordances, (2) the relation between goals and means in action planning and (3) the nature of semantic representations. We propose a conceptual framework in which action semantics consists of multimodal object representations and modality-specific sub-systems involving functional knowledge and manipulation knowledge (see Fig. 1). Functional knowledge concerns knowledge about the object's meaningful use (i.e. what to do with an object and where to move it) and manipulation knowledge involves motor representations regarding the bodily interaction with the object (i.e. how to grasp an object). In addition, action semantics involves a representation of the proprioceptive consequences associated with object use (i.e. the relative position of one's body parts with respect to the object) and of the sensory consequences of object use (i.e. the visual, auditory, gustatory and olfactory effects associated with using objects). Action semantics are hierarchically organized, such that long-term functional knowledge is typically used to select manipulation knowledge and to generate predictions regarding the proprioceptive and sensory consequences of object use. Top-down effects of action intentions that represent a desired action outcome drive the selection of relevant action goals. Action intentions also determine the relative importance of the different modality-specific sub-systems (e.g. whether proprioceptive or sensory consequences are more relevant to the task at hand). Action intentions are in turn determined to a strong extent by the context in which the action takes place, and the output of the action semantic system allows changing the context through the motor system. Furthermore, the context can also exert a direct influence on the activation and use of action semantics, by allowing automatic effects of
object observation on the activation of affordances, as observed for instance in patients displaying utilization behavior [21,22].

Fig. 1. Conceptual framework of action semantics. Action semantics consists of multimodal object representations that are associated with modality-specific sub-systems involving functional knowledge, manipulation knowledge and representations of the proprioceptive and the sensory consequences of object use. The selection of action outcomes and of the relevant sub-systems is driven by action intentions, which are in turn determined by the context in which the action takes place. Action semantics are organized in an action hierarchy, and a control hierarchy allows the implementation of high-level action intentions.

Two simple examples may be used to illustrate the conceptual framework of action semantics. In a specific context (e.g. being thirsty) one may form the intention to have a drink, involving a representation of the desired outcome of the action (i.e. satisfying one's thirst). Based on one's action intentions the relevant functional knowledge is activated (i.e. using a glass to drink), which in turn results in the retrieval of the relevant motor programs for grasping and moving, and a prediction is generated about the consequences of one's actions (e.g. expecting to see the glass near one's mouth and to taste a drink). Following the successful execution of the action, which is monitored by using a forward model aimed at minimizing the prediction error, the context is changed (one is no longer thirsty), resulting in a different action intention (e.g. going back to work). Now imagine that at a campground one needs a hammer for hammering tent pegs, but in the absence of a hammer one may use a shoe instead. In this case, based on one's action intention, one needs to select the manipulation knowledge for grasping shoes, but also the functional knowledge about using hammers (e.g. making a hammering movement). Here the long-term associations between functional and manipulation knowledge need to be inhibited and two different though complementary movement programs need to be selected instead. Again, based on predictions about the proprioceptive and sensory consequences of one's movements (e.g. expecting to see the tent peg move into the ground), the progress of the action is monitored, resulting in a change in context.
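The selection cascade in the first example can be summarized in a small Python sketch: an intention selects functional knowledge, which selects manipulation knowledge and generates predictions that a forward model then checks against the observed consequences. This is merely a schematic restatement of the framework; all data structures and names are hypothetical, and no claim is made that the framework is computationally implemented this way.

```python
# Schematic sketch of the proposed selection cascade: intention ->
# functional knowledge -> manipulation knowledge -> predicted consequences,
# followed by a forward-model check. Illustrative only; entries hypothetical.

ACTION_SEMANTICS = {
    "glass": {
        "functional": "bring to mouth and drink",     # object goal (what/where)
        "manipulation": "cylindrical grip",           # object means (how)
        "prediction": {"proprioceptive": "hand at mouth",
                       "sensory": "taste of drink"},
    },
}

def plan(intention, obj):
    """Select goal, grip and predicted consequences for an intended use."""
    k = ACTION_SEMANTICS[obj]
    return {"intention": intention, "goal": k["functional"],
            "grip": k["manipulation"], "prediction": k["prediction"]}

def forward_model_check(p, observed):
    """Compare predicted and observed consequences; a mismatch signals
    prediction error and triggers adjustment of the ongoing action."""
    return {m: p["prediction"][m] == o for m, o in observed.items()}

p = plan("satisfy thirst", "glass")
print(p["goal"], "via", p["grip"])
print(forward_model_check(p, {"proprioceptive": "hand at mouth",
                              "sensory": "taste of drink"}))
```

The campground example would then correspond to overriding the default functional-to-manipulation association: the grip is selected from the shoe's entry while the goal is taken from the hammer's entry.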
An important theoretical assumption underlying our framework is that the interaction with the world relies on learned knowledge, which can be conceived of in terms of both knowing-how and knowing-that [294]. Action semantics as 'knowing-how' consists of the procedural or manipulation knowledge that enables us to grasp objects in a correct fashion and to use objects in a meaningful way (e.g. knowing that a hammer is grasped at the handle and used with a back-and-forth swinging movement). Action semantics as 'knowing-that' encompasses the functional knowledge regarding the use of objects (e.g. knowing that a hammer is used for hammering). Following studies in neuropsychological patients showing a dissociation between both types of knowledge, we argue that both aspects of action semantics are important for the meaningful use of objects. Furthermore, our view is compatible with the idea of affordances, conceived of as possibilities for action that can be acted upon when using the appropriate set of sensorimotor skills [295].

The proposed conceptual framework bears similarities with other accounts of object knowledge and action planning (e.g. [225,281]). However, it is unique in its combination of different elements, involving (1) the notion that action semantics and motor programs are hierarchically organized [53,58,60,281], (2) the notion that action semantics involves modality-specific sub-systems in addition to multimodal object representations [2,225,265], (3) the importance of forward models for action control [70,71] and (4) the emphasis on the role of context and action intentions in the selection of action semantics and of relevant modality-specific information [38,39,283,296]. A critical question for our model is how high-level action intentions can influence the selection of action semantics. We suggest that an important distinction in this respect is the difference between an action hierarchy and a control hierarchy (see Fig. 1; cf. [284]). The action hierarchy is primarily reflected at a motor level, in which actions tend to be organized around the end postures or end locations that in turn determine the selection of grips and reaching movements [53,54,75]. Within our framework the action hierarchy is reflected in the hierarchical relation between functional and manipulation knowledge (see Fig. 1). In addition to the action hierarchy, a control hierarchy has been proposed that is involved in the production of complex hierarchically structured behavior [60,297,298]. More specifically, it has been argued that the prefrontal cortex shows a rostro-caudal organization related to the temporal organization of behavior, supporting the maintenance of context and goal-related information at multiple hierarchically organized levels [298]. Accordingly, we propose that the control hierarchy is involved with the higher-level aspects of action planning (i.e. representing desired action outcomes), whereas the action hierarchy is involved with the physical aspects of action planning (i.e. performing the correct movements). In this review we have selectively discussed research on the action hierarchy, but recent findings indicate that high-level action representations can directly influence low-level motor representations [283,296]. The interaction between the action and the control hierarchy is also supported by neural evidence, indicating a close interaction between prefrontal areas involved in high-level control and motor-related areas involved in action planning and execution [299]. The combination of a control and an action hierarchy allows the flexible use of objects, while still maintaining a hierarchical view on action planning (see also Section 4.3.2). A minimal sketch of this division of labor is given below.
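In the sketch, the control hierarchy maintains increasingly concrete intentions over time and hands its most concrete level to the action hierarchy, which expands an end posture into a grip and a movement sequence. The levels and labels are hypothetical illustrations of the distinction, not a published model.

```python
# Sketch of the control hierarchy / action hierarchy distinction.
# Hypothetical labels; an illustration of the division of labor only.

# Control hierarchy: increasingly concrete intentions (cf. the proposed
# rostro-caudal organization of prefrontal cortex [298]).
control_hierarchy = [
    "satisfy thirst",           # abstract intention (temporally extended)
    "have a drink of water",    # task set
    "drink from the glass",     # concrete task handed to the motor system
]

# Action hierarchy: an end posture/location selects a grip, which in turn
# selects the movement sequence.
action_hierarchy = {
    "drink from the glass": {
        "end_posture": "glass at mouth",           # goal level
        "grip": "cylindrical grip on glass",       # means level
        "movements": ["reach", "grasp", "lift"],   # kinematic level
    },
}

def implement(intentions, repertoire):
    """The most concrete control level selects a plan in the action hierarchy."""
    task = intentions[-1]
    return repertoire[task]

print(implement(control_hierarchy, action_hierarchy))
```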
4.2. Neural dynamics of action semantics
Following the conceptual framework of action semantics, and integrating the results from neuropsychological and neuroimaging studies, we propose an integrative account of the neural dynamics of action semantics (see Fig. 2). Action semantics are represented across modality-specific brain areas that are involved in coding the manipulation and functional aspects of object use and in representing the proprioceptive and sensory consequences of object use. More specifically, ventral stream areas are involved in coding complex object features (the inferotemporal cortex; IT; cf. [226]) and in representing the sensory consequences of object use (the left posterior Middle Temporal Gyrus; left pMTG; cf. [225]). Dorsal stream areas represent motor-related and proprioceptive information about object use, such as the hand postures required for grasping and interacting with objects (the left Inferior Parietal Lobe; IPL; cf. [256,257]), the final spatial goals of upcoming movements [246,248], the body postures associated with object use (Premotor Cortex; PMC; cf. [59,83]), basic movement sequences associated with moving objects (Supplementary Motor Area; SMA; cf. [300]) and the proprioceptive information related to the use of objects (Supramarginal Gyrus; SMG; cf. [229,250]). These modality-specific areas project to the anterior temporal lobe (ATL), which functions as a multimodal association area, integrating information from different sensory modalities. Neural evidence for the involvement of the ATL in multimodal processing comes from neuropsychological studies with semantic dementia patients, indicating that general semantic deficits are often associated with damage to this area [301]. Furthermore, primate studies have shown direct projections from sensory cortices to the anterior temporal lobe [302], and similar projections from the occipital cortex to the ATL have been described in humans using diffusion tensor imaging (DTI) [303]. Furthermore, strong bidirectional connections exist between parietal and frontal motor-related areas, directly supporting the planning and control of grasping actions and goal-directed movements [304]. Importantly, the associative connections between modality-specific and multimodal association areas are influenced in a top-down fashion by prefrontal areas, such as the dorsolateral prefrontal cortex (DLPFC), which is involved in maintaining action goals in working memory [298], and the frontopolar cortex (FPC), which is involved in representing task context [305]. Neuroanatomical evidence for such feedback connections from frontal to temporal regions has been provided by tracing studies in non-human primates [306], as well as by studies using brain tractography techniques such as DTI in humans [307]. The sketch below illustrates the intended dynamics.
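As a schematic illustration, the following Python fragment treats the ATL as a hub that sums gain-weighted input from modality-specific spokes and feeds activation back to them, with prefrontal context modeled as the gain vector. The numbers, spoke labels and the linear rule are arbitrary illustrations of the proposed dynamics, not a fitted neural model.

```python
# Schematic hub-and-spokes dynamics with prefrontal gain control.
# Arbitrary numbers; a linear toy illustration, not a neural model.
import numpy as np

spokes = ["manipulation (IPL/PMC)", "visual motion (pMTG)",
          "proprioception (SMG)", "object form (IT)"]
bottom_up = np.array([0.4, 0.6, 0.3, 0.8])    # stimulus-driven spoke activation

def retrieve(bottom_up, gain):
    """Hub (ATL) activation = gain-weighted sum of spoke inputs; the hub
    then re-activates the spokes via bidirectional associations."""
    hub = float(gain @ bottom_up)
    feedback = hub * gain                      # top-down selective re-activation
    return hub, dict(zip(spokes, np.round(feedback, 2)))

# Passive viewing: uniform, weak gain -> shallow multimodal retrieval.
print(retrieve(bottom_up, np.full(4, 0.25)))
# Intention to grasp: prefrontal context boosts the manipulation spoke.
print(retrieve(bottom_up, np.array([0.7, 0.1, 0.1, 0.1])))
```

With a uniform gain the hub performs shallow multimodal retrieval, whereas boosting the gain on the manipulation spoke selectively re-activates motor representations, which is the pattern described next.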
In a specific context, action-relevant information may become more relevant, resulting in a top-down modulation of activation in the ATL, thereby facilitating the retrieval of motor representations in association with the retrieval of action semantics. On this account, it is possible that the retrieval of action semantics is associated both with automatic bottom-up effects, related to the automatic activation of bidirectional associations between the ATL and modality-specific brain areas, and with (stronger) intentional and contextual effects on modality-specific activations, due to top-down influences from prefrontal areas on the ATL.

Fig. 2. Neural dynamics of action semantics. Action semantics involves processing in both the dorsal and the ventral stream. Different modality-specific brain areas are associated with a multimodal association area involved in the retrieval of action semantics. Frontal areas modulate activation in dorsal stream areas involved in action semantics and in the ATL. (FPC = Frontopolar Cortex; DLPFC = Dorsolateral Prefrontal Cortex; SMA = Supplementary Motor Area; PMC = Premotor Cortex; left SMG = left Supramarginal Gyrus; PPC = Posterior Parietal Cortex; left IPL = left Inferior Parietal Lobe; IT = Inferotemporal Cortex; left pMTG = left posterior Middle Temporal Gyrus.)

High-level goal representations in prefrontal areas also allow the flexible and context-dependent use of objects. Action intentions and the maintenance of goal-related information are characterized by temporal abstraction, in which a single representation is used to unite a sequence of temporally extended events. Furthermore, the task context may represent the highest level of the control hierarchy, supporting a general representation of the task and situation in which the action takes place. Prefrontal areas show a rostro-caudal organization reflected in increasingly abstract levels of action and task representation [60,298]. As such, these areas may be primarily involved in maintaining high-level goal representations in working memory and may in turn support the selection of the appropriate motor programs in the action hierarchy that are required to achieve the intended goal [297].

4.3. Relation with current debates in action semantics
We will now briefly discuss the implications of our proposed framework for the current debates in action semantics, regarding (1) the role of affordances, (2) the relation between goals and means and (3) the functional and neural organization of action semantics.

4.3.1. Affordances
As indicated in the third section, an important discussion focuses on the question to what extent objects' affordances for action are automatically activated or driven by top-down influences. A possible answer to this question can be found by making a comparison with recent debates in research on language and on action observation. Language research is characterized by a similar discussion regarding the automaticity of modality-specific activations during word processing [90,286,288,308]. For instance, some authors have suggested that the processing of action verbs results in the automatic activation of motor-related brain areas, suggesting that modality-specific activation actually supports the understanding of word meaning [4,234]. In contrast, others have argued that these effects are not automatic and can be explained in terms of post-lexical simulation processes [178]. Similarly, in the domain of action observation it has been found that the observation of familiar actions results in the automatic activation of motor-
related brain areas, suggesting that these so-called 'mirror neurons' support action understanding [309]. In contrast, others have argued that motor simulation cannot be constitutive of social cognition, for instance because many actions are beyond our own motor repertoire [310]. In recent years, a more nuanced view has been proposed, according to which modality-specific activation in language and action observation actually has a functional role, by supporting a 'deep understanding' of the actions involved [4,311]. That is, motor activation during action observation allows the observer to understand the action from his own perspective, and motor activation during language processing may ground the meaning of words in one's own action experience (e.g. [173]). Motor activation in relation to object observation could play a functional role as well, for instance by supporting the identification and categorization of objects [34,36,45]. On this account, the presumed automatic effects of object affordances may actually be driven by the experimental task that is used (e.g. contrasting manipulable and non-manipulable objects) and by the explicit or implicit task representations of participants during task execution (e.g. categorizing objects as man-made or as natural). This account fits well with the available data, indicating that task requirements have a strong effect on the activation of motor areas during the processing of object-related information [38,49], and with the finding that motor interference actually hampers object identification [36,312]. Thus, our framework implies that the processing of objects' affordances for action is determined to a strong extent by top-down influences related to one's action intention and the action context. For instance, different affordances may become activated depending on whether one intends to grasp or to point towards an object [43,313], or when an object is presented in a visual as compared to an action context [38,111,314]. The selection of object affordances is likely mediated by top-down effects of prefrontal areas, modulating the activation of modality-specific brain areas through projections to the temporal lobe. At the same time, our framework also allows for the possibility that affordances are automatically activated during object processing, due to supra-threshold activations of the associative connections between multimodal object association areas and modality-specific brain areas. In this way, the framework of action semantics allows for both automatic effects of object affordances and context- and intentionally driven effects.

4.3.2. Goals and means
Several theories have been proposed to account for the human ability to plan goal- and grip-related aspects of object use. For instance, in infant research it has been proposed that tool use in infants relies on general problem solving skills, such as means-end reasoning or the rationality principle, rather than on motor-related knowledge [125,132]. Similarly, it has been argued that tool use deficits in patients with apraxia are primarily related to impairments in perceiving the spatial relations between the body and objects rather than to specific impairments in goal- or grip-related knowledge [160]. These accounts of action planning and object use are proposed as a solution to the question of how people are able to use objects in a flexible fashion (e.g. using objects in a non-conventional way or performing meaningless gestures with objects).
For instance, in many cases object use does not rely on a simple one-to-one relation between the object and a specific bodily movement (e.g. using a precision clockmaker-type screwdriver involves different kinematics than using a conventional screwdriver). Accordingly, it has been suggested that general problem solving skills allow the flexible use of objects, which cannot be accounted for by a rigid hierarchical framework of action planning (e.g. [160,169]). However, alternative theories of action planning (e.g. the problem-solving view) are often limited in scope and focus on providing an account of a specific phenomenon (e.g. imitation in infants; imitation of meaningless gestures in apraxic patients). Throughout the different domains we have shown that the hierarchical view of action planning accounts for a variety of phenomena, observed in infants, patients and healthy adults alike, and studied with both behavioral and neuroimaging techniques. As such, the hierarchical view of action planning has a broader explanatory scope than alternative accounts of action and motor control. Furthermore, we argue that these alternative theoretical models overlook the central importance of action intentions and action context for the planning of actions. That is, one can still maintain a hierarchical view of action planning, while allowing flexibility in the coupling between action means and action goals, driven by the context in which the action takes place and by the intentions of the actor. Recent findings support the idea that high-level action intentions modulate the coupling between goals and means [296] and that action intentions can overrule the default goal-associations of objects [39,273]. For instance, when participants prepared a meaningless action with an object (e.g. bringing a cup to the eye), they were faster to initiate the action in response to a word congruent with the short-term goal of the action (i.e. 'eye') than to a word congruent with the long-term goal association of the object (i.e. 'mouth'; [39]). In another study it was found that the flexibility of perception–action
coupling cannot be accounted for by a process of associative learning, but rather reflects the top-down selection of relevant task representations [315]. Furthermore, other findings highlight the influence of contextual information on the activation and processing of goal- and grip-related information about objects [38,314,316]. Together these findings indicate that a hierarchical view of action planning can well account for the flexible use of objects. As indicated in Section 3.3, an important question is how the hierarchical view of the motor system can account for high-level action goals related to abstract intentions. Recent findings indicate that the hierarchical framework can account for higher-level conceptual goals as well [283,296,316]. For instance, it has been found that high-level conceptual intentions (e.g. whether to move to a high or a low number) modulate low-level motor intentions (e.g. whether to move to the left or the right side) in a hierarchical fashion [296]. Similar findings have been reported with respect to the planning of joint actions, in which the task context (i.e. planning an imitative or a complementary action) can overrule learned object–action associations [317]. Furthermore, a recent fMRI study indicates that high-level conceptual expectations about actions have a strong top-down influence on the processing of object-related actions in the fronto-parietal motor network [316]. Together these studies show that action intentions at a higher level automatically modulate processing at lower levels, even when this information is task-irrelevant.

4.3.3. Organization of action semantics
An important discussion in research on action semantics has focused on the functional organization of action semantic representations, ranging from embodied views (modality-specific representations) and amodal views (amodal representations) to hybrid views (the 'hub-and-spokes' account of semantics). Each of these accounts is faced with empirical and conceptual difficulties, which were discussed in detail in Section 3.3. Making a comparison with similar discussions in research on language and action observation [178,311], we argue that the most plausible model of action semantics is the hybrid view (i.e. the 'hub-and-spokes' model of action semantics; cf. [2]). This view acknowledges that the brain has multiple systems to represent both multimodal and modality-specific aspects of semantic processing. An important argument for the 'hub-and-spokes' model of action semantics is that it accounts both for modality-specific effects in relation to conceptual processing and for more general semantic impairments, as observed following damage to the anterior temporal lobe in semantic dementia. We argue that an important issue regarding the representation of action semantics concerns the question of when people recruit specific modality-specific activations and when they rely on more general multisensory object representations. Two important factors that play a role in the retrieval of action semantics are the action intention of the actor and the context in which the action is performed. Several studies indicate that modality-specific representations are only activated when relevant to the task at hand and when elicited by the preceding context or one's action intentions [38,45,314,318,319].
In other cases, participants may rely on a shallower processing of action semantics, which does not result in strong activation of modality-specific brain areas (for a similar view on language, see [289]). This view thereby makes an important addition to the 'hub-and-spokes' model of action semantics, by acknowledging the importance of context and action intentions for modality-specific processing.

5. Concluding remarks and future directions
In this review we have argued that humans have developed action semantics that enable them to use objects in a flexible and meaningful way. We have provided an integrative review of research on action semantics, involving research with neuropsychological patients, developmental studies with infants and behavioral and neuroimaging studies. Each domain can be characterized by similar discussions regarding (1) the automaticity of object affordances, (2) the relation between goals and means in action processing and the neural mechanisms involved and (3) the functional and neural organization of action semantics. We have proposed a conceptual framework of action semantics, according to which (1) action semantics involves both multimodal object representations and different modality-specific sub-systems (i.e. manipulation knowledge, functional knowledge and representations of the proprioceptive and sensory consequences of object use), (2) action semantics and the motor representations supporting object use are hierarchically organized and (3) action semantics are selected and modality-specific information is activated based on top-down influences related to action intentions and action context. This view thereby accounts for a wide range of findings in interrelated domains of research. At the same time, the framework of action semantics allows the generation of novel and testable predictions with which to empirically investigate the key claims of the model. Here we propose two important predictions that could be validated in
future research. First, whereas several studies have shown that modality-specific representations are selectively activated depending on task context, less is known about the role of multimodal object representations. It could be that multimodal representations function as a heuristic to provide fast access to modality-specific representations [290]. Alternatively, it could be that the default mode for processing object-related information is to rely on modality-specific features, and that multimodal processing is only relevant when an integrated representation of the object is required [174]. Accordingly, it is important to obtain more insight into the specific temporal dynamics of action semantic processing (e.g. does ATL activation precede activation in modality-specific brain areas, or does the ATL instead integrate modality-specific information after modality-specific effects have occurred), for instance by using EEG studies or by manipulating the temporal aspects of task demands (for a similar approach with respect to the processing of visual semantics, see [291]). Another important issue concerns the functional and neural dynamics underlying semantic selection for action. We have argued for an interaction between a control hierarchy, involved in high-level aspects of action planning, and an action hierarchy, directly supporting the planning of bodily movements. However, the precise relation between these systems and the neural networks involved is not well understood. This lack of interaction is reflected in the literature as well, as research on high-level control mechanisms has typically focused on relatively abstract planning tasks [60,297,298], whereas research on action planning and motor control has typically used concrete object manipulation tasks [320]. An interesting question is whether and how the control and action systems interact, for instance by asking participants to use well-known objects in an unconventional way (e.g. using a phone to hammer or using a knife as a screwdriver). In sum, the framework of action semantics provides an integrative account of the functional and neural mechanisms supporting the meaningful use of objects. This framework will give a new impetus to discussions in the field, by integrating previous findings in different research domains, by indicating how apparently conflicting views can be integrated into a coherent framework and by proposing exciting new avenues for future research.

References
[1] Rothi LJG, Ochipa C, Heilman KM. A cognitive neuropsychological model of limb praxis. Cogn Neuropsychol 1991;8:443–58.
[2] Patterson K, Nestor PJ, Rogers TT. Where do you know what you know? The representation of semantic knowledge in the human brain. Nat Rev Neurosci 2007;8:976–87.
[3] Pobric G, Jefferies E, Ralph MAL. Amodal semantic representations depend on both anterior temporal lobes: evidence from repetitive transcranial magnetic stimulation. Neuropsychologia 2010;48:1336–42.
[4] Pulvermüller F. How neurons make meaning: brain mechanisms for embodied and abstract-symbolic systems. Trends Cogn Sci 2013.
[5] Rueschemeyer SA, Lindemann O, van Elk M, Bekkering H. Embodied cognition: the interplay between automatic resonance and selection-for-action mechanisms. Eur J Soc Psychol 2009;39:1180–7.
[6] Cesario J, Plaks JE, Hagiwara N, Navarrete CD, Higgins ET. The ecology of automaticity: how situational contingencies shape action semantics and social behavior. Psychol Sci 2010;21:1311–7.
[7] Heilman KM, Maher LH, Greenwald ML, Rothi LJG. Conceptual apraxia from lateralized lesions. Neurology 1995;5.
References

[1] Rothi LJG, Ochipa C, Heilman KM. A cognitive neuropsychological model of limb praxis. Cogn Neuropsychol 1991;8:443–58.
[2] Patterson K, Nestor PJ, Rogers TT. Where do you know what you know? The representation of semantic knowledge in the human brain. Nat Rev Neurosci 2007;8:976–87.
[3] Pobric G, Jefferies E, Ralph MAL. Amodal semantic representations depend on both anterior temporal lobes: evidence from repetitive transcranial magnetic stimulation. Neuropsychologia 2010;48:1336–42.
[4] Pulvermüller F. How neurons make meaning: brain mechanisms for embodied and abstract-symbolic systems. Trends Cogn Sci 2013.
[5] Rueschemeyer SA, Lindemann O, van Elk M, Bekkering H. Embodied cognition: the interplay between automatic resonance and selection-for-action mechanisms. Eur J Soc Psychol 2009;39:1180–7.
[6] Cesario J, Plaks JE, Hagiwara N, Navarrete CD, Higgins ET. The ecology of automaticity: how situational contingencies shape action semantics and social behavior. Psychol Sci 2010;21:1311–7.
[7] Heilman KM, Maher LH, Greenwald ML, Rothi LJG. Conceptual apraxia from lateralized lesions. Neurology 1995;5.
[8] Hodges JR, Spatt J, Patterson K. “What” and “how”: evidence for the dissociation of object knowledge and mechanical problem-solving skills in the human brain. Proc Natl Acad Sci USA 1999;96:9444–8.
[9] Hoeren M, Kaller CP, Glauche V, Vry MS, Rijntjes M, Hamzei F, et al. Action semantics and movement characteristics engage distinct processing streams during the observation of tool use. Exp Brain Res 2013;229:243–60.
[10] Labruna L, Fernandez-del-Olmo M, Landau A, Duque J, Ivry RB. Modulation of the motor system during visual and auditory language processing. Exp Brain Res 2011;211:243–50.
[11] Lindemann O. Semantic activation in action planning. J Exp Psychol Hum 2006;32:633–43.
[12] Noppeney U. The neural systems of tool and action semantics: a perspective from functional imaging. J Physiol 2008;102:40–9.
[13] Noppeney U, Price CJ. Retrieval of visual, auditory, and abstract semantics. Neuroimage 2002;15(4):917–26.
[14] Schwartz RL, Adair JC, Raymer AM, Williamson DJG, Crosson B, Rothi LJG, et al. Conceptual apraxia in probable Alzheimer’s disease as demonstrated by the Florida action recall test. J Int Neuropsychol Soc 2000;6:265–70.
[15] Springer A, Prinz W. Action semantics modulate action prediction. Q J Exp Psychol 2010;63:2141–58.
[16] van Elk M, Viswanathan S, van Schie HT, Bekkering H, Grafton ST. Pouring or chilling a bottle of wine: an fMRI study on the prospective planning of object-directed actions. Exp Brain Res 2012;218:189–200.
[17] Vingerhoets G, Vandekerckhove E, Honore P, Vandemaele P, Achten E. Neural correlates of pantomiming familiar and unfamiliar tools: action semantics versus mechanical problem solving? Hum Brain Mapp 2011;32:905–18.
[18] Willems RM, Casasanto D. Flexibility in embodied language understanding. Front Psychol 2011;2:116.
[19] Yamazaki Y, Yokochi H, Tanaka M, Okanoya K, Iriki A. Potential role of monkey inferior parietal neurons coding action semantic equivalences as precursors of parts of speech. Soc Neurosci 2010;5:105–17.
[20] Yoon EY, Humphreys GW, Riddoch MJ. Action naming with impaired semantics: neuropsychological evidence contrasting naming and reading for objects and verbs. Cogn Neuropsychol 2005;22:753–67.
[21] Lhermitte F. Utilization behavior and its relation to lesions of the frontal lobes. Brain 1983;106:237–55.
[22] Shallice T, Burgess PW, Schon F, Baxter DM. The origins of utilization behavior. Brain 1989;112:1587–98.
[23] Bub DN, Masson ME, Cree GS. Evocation of functional and volumetric gestural knowledge by objects and words. Cognition 2007.
[24] Bub DN, Masson MEJ, Cree GS. Evocation of functional and volumetric gestural knowledge by objects and words. Cognition 2008;106:27–58.
[25] Ellis R, Tucker M. Micro-affordance: the potentiation of components of action by seen objects. Br J Psychol 2000;91(4):451–71.
[26] Klatzky RL, Pellegrino JW, McCloskey BP, Doherty S. Can you squeeze a tomato? The role of motor representations in semantic sensibility judgments. J Mem Lang 1989;28:56–77.
[27] Tucker M, Ellis R. The potentiation of grasp types during visual object categorization. Vis Cogn 2001;8:769–800.
[28] Phillips JC, Ward R. S-R correspondence effects of irrelevant visual affordance: time course and specificity of response activation. Vis Cogn 2002;9:540–58.
[29] Riddoch MJ, Edwards MG, Humphreys GW, West R, Heafield T. Visual affordances direct action: neuropsychological evidence from manual interference. Cogn Neuropsychol 1998;15:645–83.
[30] Tipper SP, Howard LA, Houghton G. Action-based mechanisms of attention. Philos Trans R Soc B 1998;353:1385–93.
[31] Chao LL, Martin A. Representation of manipulable man-made objects in the dorsal stream. Neuroimage 2000;12:478–84.
[32] Garcea FE, Almeida J, Mahon BZ. A right visual field advantage for visual processing of manipulable objects. Cogn Affect Behav Neurosci 2012;12:813–25.
[33] Johnson-Frey SH. The neural bases of complex tool use in humans. Trends Cogn Sci 2004;8:71–8.
[34] Almeida J, Mahon BZ, Caramazza A. The role of the dorsal visual processing stream in tool identification. Psychol Sci 2010;21:772–8.
[35] Harris IM, Murray AM, Hayward WG, O’Callaghan C, Andrews S. Repetition blindness reveals differences between the representations of manipulable and nonmanipulable objects. J Exp Psychol Hum 2012;38:1228–41.
[36] Witt JK, Kemmerer D, Linkenauger SA, Culham J. A functional role for motor simulation in identifying tools. Psychol Sci 2010;21:1215–9.
[37] Bub DN, Masson MEJ, Lin T. Features of planned hand actions influence identification of graspable objects. Psychol Sci 2013;24:1269–76.
[38] van Dam WO, van Dijk M, Bekkering H, Rueschemeyer SA. Flexibility in embodied lexical-semantic representations. Hum Brain Mapp 2012;33:2322–33.
[39] van Elk M, van Schie HT, Bekkering H. Short-term action intentions overrule long-term semantic knowledge. Cognition 2009;111:72–83.
[40] Costantini M, Ambrosini E, Tieri G, Sinigaglia C, Committeri G. Where does an object trigger an action? An investigation about affordances in space. Exp Brain Res 2010;207:95–103.
[41] Anelli F, Borghi AM, Nicoletti R. Grasping the pain: motor resonance with dangerous affordances. Conscious Cogn 2012;21:1627–39.
[42] Bub DN, Masson MEJ. Grasping beer mugs: on the dynamics of alignment effects induced by handled objects. J Exp Psychol Hum 2010;36:341–58.
[43] Masson MEJ, Bub DN, Breuer AT. Priming of reach and grasp actions by handled objects. J Exp Psychol Hum 2011;37:1470–84.
[44] Randerath J, Martin KR, Frey SH. Are tool properties always processed automatically? The role of tool use context and task complexity. Cortex 2013;49:1679–93.
[45] van Elk M, van Schie HT, Bekkering H. Action semantic knowledge about objects is supported by functional motor activation. J Exp Psychol Hum Percept Perform 2009;35:1118–28.
[46] van Elk M, Bousardt R, Bekkering H, van Schie HT. Using goal- and grip-related information for understanding the correctness of other’s actions: an ERP study. PLoS ONE 2012;7:e36450.
[47] van Elk M, van Schie HT, Bekkering H. Conceptual knowledge for understanding other’s actions is organized primarily around action goals. Exp Brain Res 2008;189:99–107.
[48] Buxbaum LJ, Saffran EM. Knowledge of object manipulation and object function: dissociations in apraxic and nonapraxic subjects. Brain Lang 2002;82:179–99.
[49] Kellenbach ML, Brett M, Patterson K. Actions speak louder than functions: the importance of manipulability and action in tool representation. J Cogn Neurosci 2003;15:30–46.
[50] Bekkering H, Wohlschlager A, Gattis M. Imitation of gestures in children is goal-directed. Q J Exp Psychol, A Hum Exp Psychol 2000;53:153–64.
[51] van Elk M, van Schie HT, Bekkering H. Imitation of hand and tool actions is effector-independent. Exp Brain Res 2011;214:539–47.
[52] Wohlschlager A, Gattis M, Bekkering H. Action generation and action perception in imitation: an instance of the ideomotor principle. Philos Trans R Soc B 2003;358:501–15.
[53] Rosenbaum DA, Cohen RG, Jax SA, Weiss DJ, van der Wel R. The problem of serial order in behavior: Lashley’s legacy. Hum Mov Sci 2007;26:525–54.
[54] Rosenbaum DA, Loukopoulos LD, Meulenbroek RGJ, Vaughan J, et al. Planning reaches by evaluating stored postures. Psychol Rev 1995;102:28–67.
[55] Rosenbaum DA, Vaughan J, Barnes HJ, Jorgensen MJ. Time course of movement planning: selection of handgrips for object manipulation. J Exp Psychol Learn Mem Cogn 1992;18:1058–73.
[56] Cohen RG, Rosenbaum DA. Where grasps are made reveals how grasps are planned: generation and recall of motor plans. Exp Brain Res 2004;157:486–95.
[57] Marteniuk RG, MacKenzie CL, Jeannerod M, Athenes S, Dugas C. Constraints on human arm movement trajectories. Can J Psychol 1987;41:365–78.
[58] Grafton ST, Hamilton AF. Evidence for a distributed hierarchy of action representation in the brain. Hum Mov Sci 2007;26:590–616.
[59] Graziano M. The organization of behavioral repertoire in motor cortex. Annu Rev Neurosci 2006;29:105–34.
[60] Botvinick M. Hierarchical models of behavior and prefrontal function. Trends Cogn Sci 2008;12:201–8.
[61] Majdandzic J, Grol MJ, van Schie HT, Verhagen L, Toni I, Bekkering H. The role of immediate and final goals in action planning: an fMRI study. Neuroimage 2007;37:589–98.
[62] van Schie HT, Bekkering H. Neural mechanisms underlying immediate and final action goals in object use reflected by slow wave brain potentials. Brain Res 2007;1148:183–97.
[63] Jax SA, Buxbaum LJ. Response interference between functional and structural actions linked to the same familiar object. Cognition 2010;115:350–5.
[64] Osiurak F, Roche K, Ramone J, Chainay H. Handing a tool to someone can take more time than using it. Cognition 2013;128:76–81.
[65] Valyear KF, Chapman CS, Gallivan JP, Mark RS, Culham JC. To use or to move: goal-set modulates priming when grasping real tools. Exp Brain Res 2011;212:125–42.
[66] Bach P, Knoblich G, Gunter T, Friederici AD, Prinz W. Action comprehension: deriving spatial and functional relations. J Exp Psychol Hum Percept Perform 2005;31:465–79.
[67] van Elk M, Paulus M, Pfeiffer C, van Schie HT, Bekkering H. Learning to use novel objects: a training study on the acquisition of novel action representations. Conscious Cogn 2011;20:1304–14.
[68] Kawato M. Internal models for motor control and trajectory planning. Curr Opin Neurobiol 1999;9:718–27.
[69] Scott SH, Norman KE. Computational approaches to motor control and their potential role for interpreting motor dysfunction. Curr Opin Neurol 2003;16:693–8.
[70] Wolpert DM. Computational approaches to motor control. Trends Cogn Sci 1997;1:209–16.
[71] Wolpert DM, Kawato M. Multiple paired forward and inverse models for motor control. Neural Netw 1998;11:1317–29.
[72] Feldman AG. Functional tuning of the nervous system during control of movement or maintenance of a steady posture – II. Controllable parameters of the muscle. Biophysics 1966;11:565–78.
[73] Gottlieb GL. The generation of the efferent command and the importance of joint compliance in fast elbow movements. Exp Brain Res 1994;97:545–50.
[74] Shadmehr R, Mussa-Ivaldi FA. Adaptive representation of dynamics during learning of a motor task. J Neurosci 1994;14:3208–24.
[75] Rosenbaum DA, Meulenbroek RJ, Vaughan J, Jansen C. Posture-based motion planning: applications to grasping. Psychol Rev 2001;108:709–34.
[76] Sirigu A, Duhamel JR. Motor and visual imagery as two complementary but neurally dissociable mental processes. J Cogn Neurosci 2001;13:910–9.
[77] Hommel B, Musseler J, Aschersleben G, Prinz W. The theory of event coding (TEC): a framework for perception and action planning. Behav Brain Sci 2001;24:849–78 [discussion 878–937].
[78] Elsner B, Hommel B. Effect anticipation and action control. J Exp Psychol Hum 2001;27:229–40.
[79] Hommel B. Ideomotor action control: on the perceptual grounding of voluntary actions and agents. In: Prinz W, Beisert M, Herwig A, editors. Action science: foundations of an emerging discipline. MIT Press; 2013. p. 113–36.
[80] Rosenbaum DA, Chapman KM, Weigelt M, Weiss DJ, van der Wel R. Cognition, action, and object manipulation. Psychol Bull 2012;138:924–46.
[81] Paulus M, Hunnius S, Bekkering H. Neurocognitive mechanisms underlying social learning in infancy: infants’ neural processing of the effects of others’ actions. Soc Cogn Affect Neurosci 2013.
[82] Paulus M, Hunnius S, van Elk M, Bekkering H. How learning to shake a rattle affects 8-month-old infants’ perception of the rattle’s sound: electrophysiological evidence for action-effect binding in infancy. Dev Cogn Neuros-Neth 2012;2:90–6.
[83] van Elk M, van Schie HT, van den Heuvel R, Bekkering H. Semantics in the motor system: motor-cortical beta oscillations reflect semantic knowledge of end-postures for object use. Front Human Neurosci 2010;4:8.
[84] Boulenger V, Roy AC, Paulignan Y, Deprez V, Jeannerod M, Nazir TA. Cross-talk between language processes and overt motor behavior in the first 200 msec of processing. J Cogn Neurosci 2006;18:1607–15.
[85] Creem SH, Proffitt DR. Grasping objects by their handles: a necessary interaction between cognition and action. J Exp Psychol Hum 2001;27:218–28.
[86] Gentilucci M, Gangitano M. Influence of automatic word reading on motor control. Eur J Neurosci 1998;10:752–6.
[87] Glover S, Dixon P. Semantics affect the planning but not control of grasping. Exp Brain Res 2002;146:383–7.
[88] Glover S, Rosenbaum DA, Graham J, Dixon P. Grasping the meaning of words. Exp Brain Res 2004;154:103–8.
[89] Nazir TA, Boulenger V, Roy A, Silber B, Jeannerod M, Paulignan Y. Language-induced motor perturbations during the execution of a reaching movement. Q J Exp Psychol 2008;61:933–43.
[90] Fischer MH, Zwaan RA. Embodied language: a review of the role of the motor system in language comprehension. Q J Exp Psychol 2008;61:825–50.
[91] Glenberg AM, Kaschak MP. Grounding language in action. Psychon Bull Rev 2002;9:558–65.
[92] Zwaan RA, Taylor LJ. Seeing, acting, understanding: motor resonance in language comprehension. J Exp Psychol Gen 2006;135:1–11.
[93] Glover S, Dixon P. Semantics affect the planning but not control of grasping. Exp Brain Res 2002;146:383–7.
[94] Boulenger V, Roy AC, Paulignan Y, Deprez V, Jeannerod M, Nazir TA. Cross-talk between language processes and overt motor behavior in the first 200 ms of processing. J Cogn Neurosci 2006;18:1607–15.
[95] Chainay H, Humphreys GW. Privileged access to action for objects relative to words. Psychon Bull Rev 2002;9:348–55.
[96] Yoon EY, Humphreys GW. Direct and indirect effects of action on object classification. Mem Cogn 2005;33:1131–46.
[97] Helbig HB, Graf M, Kiefer M. The role of action representations in visual object recognition. Exp Brain Res 2006;174:221–8.
[98] McNair NA, Harris IM. Disentangling the contributions of grasp and action representations in the recognition of manipulable objects. Exp Brain Res 2012;220:71–7.
[99] van Elk M, van Schie HT, Bekkering H. The N400-concreteness effect reflects the retrieval of semantic information during the preparation of meaningful actions. Biol Psychol 2010;85:134–42.
[100] Gallagher S. How the body shapes the mind. New York: Oxford University Press; 2006.
[101] Zoia S, Blason L, D’Ottavio G, Bulgheroni M, Pezzetta E, Scabar A, et al. Evidence of early development of action planning in the human foetus: a kinematic study. Exp Brain Res 2007;176:217–26.
[102] Fagard J, Spelke E, von Hofsten C. Reaching and grasping a moving object in 6-, 8-, and 10-month-old infants: laterality and performance. Infant Behav Dev 2009;32:137–46.
[103] McCarty ME, Clifton RK, Ashmead DH, Lee P, Goubet N. How infants use vision for grasping objects. Child Dev 2001;72:973–87.
[104] McCarty ME, Clifton RK, Collard RR. The beginnings of tool use by infants and toddlers. Infancy 2001;2:233–56.
[105] Claxton LJ, Keen R, McCarty ME. Evidence of motor planning in infant reaching behavior. Psychol Sci 2003;14:354–6.
[106] Bourgeois KS, Khawar AW, Neal SA, Lockman JJ. Infant manual exploration of objects, surfaces, and their interrelations. Infancy 2005;8:233–52.
[107] Fein G. Pretend play in childhood: an integrated review. Child Dev 1981;52:1095–118.
[108] Piaget J. The origin of intelligence in the child. London: Routledge & Kegan Paul; 1953.
[109] Capirci O, Contaldo A, Caselli MC, Volterra V. From action to language through gesture: a longitudinal perspective. Gesture 2005;5:155–77.
[110] Iverson JM, Goldin-Meadow S. Gesture paves the way for language development. Psychol Sci 2005;16:367–71.
[111] Acredolo L, Goodwyn S. Symbolic gesturing in normal infants. Child Dev 1988;59:450–66.
[112] Werner H, Kaplan B. Symbol formation. New York: Wiley & Sons; 1963.
[113] O’Reilly AW. Using representations – comprehension and production of actions with imagined objects. Child Dev 1995;66:999–1010.
[114] Overton WF, Jackson JP. The representation of imagined objects in action sequences: a developmental study. Child Dev 1973;44:309–14.
[115] Marshall PJ, Meltzoff AN. Neural mirroring systems: exploring the EEG mu rhythm in human infancy. Dev Cogn Neuros-Neth 2011;1:110–23.
[116] Pfurtscheller G, da Silva FHL. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol 1999;110:1842–57.
[117] Caetano G, Jousmaki V, Hari R. Actor’s and observer’s primary motor cortices stabilize similarly after seen or heard motor actions. Proc Natl Acad Sci USA 2007;104:9058–62.
[118] van Elk M, van Schie HT, Hunnius S, Vesper C, Bekkering H. You’ll never crawl alone: neurophysiological evidence for experience-dependent motor resonance in infancy. Neuroimage 2008;43:808–14.
[119] Nystrom P, Ljunghammar T, Rosander K, von Hofsten C. Using mu rhythm desynchronization to measure mirror neuron activity in infants. Dev Sci 2011;14:327–35.
[120] Southgate V, Begus K. Motor activation during the prediction of nonexecutable actions in infants. Psychol Sci 2013;24:828–35.
[121] Southgate V, Johnson MH, El Karoui I, Csibra G. Motor system activation reveals infants’ on-line prediction of others’ goals. Psychol Sci 2010;21:355–9.
[122] Southgate V, Johnson MH, Osborne T, Csibra G. Predictive motor activation during action observation in human infants. Biol Lett 2009;5:769–72.
[123] Stapel JC, Hunnius S, van Elk M, Bekkering H. Motor activation during observation of unusual versus ordinary actions in infancy. Soc Neurosci 2010;5:451–60.
[124] Marshall PJ, Saby JN, Meltzoff AN. Infant brain responses to object weight: exploring goal-directed actions and self-experience. Infancy 2013:1–19.
[125] Koslowski B, Bruner JS. Learning to use a lever. Child Dev 1972;43:790–9.
[126] Willatts P. Development and rapid adjustment of means-ends behavior in infants aged 6-months to 8-months. Can Psychol Cogn 1985;5:248–9.
[127] Willatts P. Adjustment of means-ends coordination and the representation of spatial relations in the production of search errors by infants. Br J Dev Psychol 1985;3:259–72.
[128] Willatts P. Adjustment of means-ends coordination by stage-IV infants on tasks involving the use of supports. B Br Psychol Soc 1983;36:259–72.
[129] Willatts P. Development of means-end behavior in young infants: pulling a support to retrieve a distant object. Dev Psychol 1999;35:651–67.
[130] Bates E, Carlson-Luden V, Bretherton I. Perceptual aspects of tool using in infancy. Infant Behav Dev 1980;3:127–40.
[131] Chen Z, Sanchez RP, Campbell T. From beyond to within their grasp: the rudiments of analogical problem solving in 10- and 13-month-olds. Dev Psychol 1997;33:790–801.
[132] Gergely G, Csibra G. Teleological reasoning in infancy: the naive theory of rational action. Trends Cogn Sci 2003;7:287–92.
[133] Greenfield PM. Language, tools, and brain – the ontogeny and phylogeny of hierarchically organized sequential behavior. Behav Brain Sci 1991;14:531–50.
[134] Elsner B, Hommel B. Contiguity and contingency in action-effect learning. Psychol Res 2004;68:138–54.
[135] Hauf P, Aschersleben G. Action-effect anticipation in infant action control. Psychol Res 2008;72:203–10.
[136] Hauf P, Elsner B, Aschersleben G. The role of action effects in infants’ action control. Psychol Res 2004;68:115–25.
[137] Hofer T, Hauf P, Aschersleben G. Infant’s perception of goal-directed actions performed by a mechanical device. Infant Behav Dev 2005;28:466–80.
[138] Gergely G, Nadasdy Z, Csibra G, Biro S. Taking the intentional stance at 12 months of age. Cognition 1995;56:165–93.
[139] Phillips AT, Wellman HM. Infants’ understanding of object-directed action. Cognition 2005;98:137–55.
[140] Meltzoff AN. Understanding the intentions of others: re-enactment of intended acts by 18-month-old children. Dev Psychol 1995;31:1–16.
[141] Carpenter M, Akhtar N, Tomasello M. Fourteen- through 18-month-old infants differentially imitate intentional and accidental actions. Infant Behav Dev 1998:315–30.
[142] Sommerville JA, Woodward AL, Needham A. Action experience alters 3-month-old infants’ perception of others’ actions. Cognition 2005;96:B1–11.
[143] Woodward AL. Infants selectively encode the goal object of an actor’s reach. Cognition 1998;69:1–34.
[144] Biro S, Leslie AM. Infants’ perception of goal-directed actions: development through cue-based bootstrapping. Dev Sci 2007;10:379–98.
[145] Falck-Ytter T, Gredeback G, von Hofsten C. Infants predict other people’s action goals. Nat Neurosci 2006;9:878–9.
[146] Hunnius S, Bekkering H. The early development of object knowledge: a study of infants’ visual anticipations during action observation. Dev Psychol 2010;46:446–54.
[147] Kochukhova O, Gredeback G. Preverbal infants anticipate that food will be brought to the mouth: an eye tracking study of manual feeding and flying spoons. Child Dev 2010;81:1729–38.
[148] Reid VM, Csibra G, Belsky J, Johnson MH. Neural correlates of the perception of goal-directed action in infants. Acta Psychol 2007;124:129–38.
[149] Reid VM, Striano T. N400 involvement in the processing of action sequences. Neurosci Lett 2008;433:93–7.
[150] Kilner J, Friston KJ, Frith CD. Predictive coding: an account of the mirror neuron system. Cogn Process 2007;8:159–66.
[151] Reid VM, Hoehl S, Grigutsch M, Groendahl A, Parise E, Striano T. The neural correlates of infant and adult goal prediction: evidence for semantic processing systems. Dev Psychol 2009;45:620–9.
[152] Kutas M, Federmeier KD. Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annu Rev Psychol 2011;62:621–47.
[153] McCarty ME, Clifton RK, Collard RR. Problem solving in infancy: the emergence of an action plan. Dev Psychol 1999;35:1091–101.
[154] Barrett TM, Davis EF, Needham A. Learning about tools in infancy. Dev Psychol 2007;43:352–68.
[155] McCune-Nicolich L. Toward symbolic functioning – structure of early pretend games and potential parallels with language. Child Dev 1981;52:785–97.
[156] Contaldo A, Cola E, Minichilli F, Crecchi A, Carboncini MC, Rossi B, et al. Object use affects motor planning in infant prehension. Hum Mov Sci 2013;32:498–510.
[157] Killen M, Uzgiris IC. Imitation of actions with objects – the role of social meaning. J Genet Psychol 1981;138:219–29.
[158] Geschwind N, Damasio AR. Apraxia. In: Frederiks JAM, editor. Handbook of clinical neurology. Amsterdam: Elsevier; 1985. p. 423–32.
[159] Heilman KM, Rothi LJG. Apraxia. In: Heilman KM, Valenstein E, editors. Clinical neuropsychology. New York: Oxford University Press; 1993. p. 141–64.
[160] Goldenberg G. Apraxia and the parietal lobes. Neuropsychologia 2009;47:1449–59.
[161] Liepmann H. Drei Aufsätze aus dem Apraxiegebiet. Berlin: Karger; 1908.
[162] Heilman KM, Rothi LJ, Valenstein E. Two forms of ideomotor apraxia. Neurology 1982;32:342–6.
[163] Sirigu A, Cohen L, Duhamel JR, Pillon B, Dubois B, Agid Y. A selective impairment of hand posture for object utilization in apraxia. Cortex 1995;31:41–55.
[164] Ochipa C, Rothi LJG, Heilman KM. Ideational apraxia – a deficit in tool selection and use. Ann Neurol 1989;25:190–3.
[165] Belanger SA, Duffy RJ, Coelho CA. The assessment of limb apraxia: an investigation of task effects and their cause. Brain Cogn 1996;32:384–404.
[166] Schnider A, Hanlon RE, Alexander DN, Benson DF. Ideomotor apraxia: behavioral dimensions and neuroanatomical basis. Brain Lang 1997;58:125–36.
[167] Buxbaum LJ. Ideomotor apraxia: a call to action. Neurocase 2001;7:445–58.
[168] Buxbaum LJ, Kyle K, Grossman M, Coslett HB. Left inferior parietal representations for skilled hand–object interactions: evidence from stroke and corticobasal degeneration. Cortex 2007;43:411–23.
[169] Goldenberg G. Apraxia and beyond: life and work of Hugo Liepmann. Cortex 2003;39:509–24.
[170] Goldenberg G, Hagmann S. The meaning of meaningless gestures: a study of visuo-imitative apraxia. Neuropsychologia 1997;35:333–41.
[171] Tessari A, Canessa N, Ukmar M, Rumiati RI. Neuropsychological evidence for a strategic control of multiple routes in imitation. Brain 2007;130:1111–26.
[172] Tessari A, Rumiati RI. The strategic control of multiple routes in imitation of actions. J Exp Psychol Hum 2004;30:1107–16.
[173] Barsalou LW. Simulation, situated conceptualization, and prediction. Philos Trans R Soc B 2009;364:1281–9.
[174] Kiefer M, Pulvermuller F. Conceptual representations in mind and brain: theoretical developments, current evidence and future directions. Cortex 2012;48:805–25.
[175] Warrington EK, Shallice T. Category specific semantic impairments. Brain 1984;107:829–54.
[176] Fodor JA. The language of thought. Cambridge, MA: Harvard University Press; 1975.
[177] Kintsch W. The representation of knowledge in minds and machines. Int J Psychol 1998;33:411–20.
[178] Mahon BZ, Caramazza A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. J Physiol 2008;102:59–70.
[179] Shallice T. From neuropsychology to mental structure. Cambridge: Cambridge University Press; 1988.
[180] Visser M, Jefferies E, Ralph MAL. Semantic processing in the anterior temporal lobes: a meta-analysis of the functional neuroimaging literature. J Cogn Neurosci 2010;22:1083–94.
[181] Wong C, Gallate J. The function of the anterior temporal lobe: a review of the empirical evidence. Brain Res 2012;1449:94–116.
[182] Schwartz MF, Reed ES, Montgomery M, Palmer C, Mayer NH. The quantitative description of action disorganization after brain damage – a case study. Cogn Neuropsychol 1991;8:381–414.
[183] Humphreys GW, Forde EME. Disordered action schema and action disorganisation syndrome. Cogn Neuropsychol 1998;15:771–811.
[184] Schwartz MF, Montgomery MW, Buxbaum LJ, Lee SS, Carew TG, Coslett HB, et al. Naturalistic action impairment in closed head injury. Neuropsychology 1998;12:13–28.
[185] Buxbaum LJ, Schwartz MF, Carew TG. The role of semantic memory in object use. Cogn Neuropsychol 1997;14:219–54.
[186] Forde EME, Humphreys GW. The role of semantic knowledge and working memory in everyday tasks. Brain Cogn 2000;44:214–52.
[187] Schwartz MF. Re-examining the role of executive functions in routine action production. Struct Funct Hum Prefront Cortex 1995;769:321–35.
[188] Hartmann K, Goldenberg G, Daumuller M, Hermsdorfer J. It takes the whole brain to make a cup of coffee: the neuropsychology of naturalistic actions involving technical devices. Neuropsychologia 2005;43:625–37.
[189] Luria AR. Higher cortical functions in man. New York: Basic Books; 1966.
[190] Morady K, Humphreys GW. Comparing action disorganization syndrome and dual-task load on normal performance in everyday action tasks. Neurocase 2009;15:1–12.
[191] Cooper RP, Shallice T. Contention scheduling and the control of routine activities. Cogn Neuropsychol 2000;17:297–338.
[192] Norman DA, Shallice T. Attention to action – willed and automatic control of behavior. Bull Psychon Soc 1983;21:354.
[193] Zalla T, Plassiart C, Pillon B, Grafman J, Sirigu A. Action planning in a virtual context after prefrontal cortex damage. Neuropsychologia 2001;39:759–70.
[194] Humphreys GW, Forde EME. Hierarchies, similarity, and interactivity in object recognition: “category-specific” neuropsychological deficits. Behav Brain Sci 2001;24:453.
[195] Botvinick M, Plaut DC. Doing without schema hierarchies: a recurrent connectionist approach to normal and impaired routine sequential action. Psychol Rev 2004;111:395–429.
[196] Warrington EK. The selective impairment of semantic memory. Q J Exp Psychol 1975;27:635–57.
[197] Hamanaka T, Matsui A, Yoshida S, Nakanishi M, Fujita K, Banno T, et al. Cerebral laterality and category-specificity in cases of semantic memory impairment with PET-findings and associated with identification amnesia for familiar persons. Brain Cogn 1996;30:368–72.
[198] Hodges JR, Bozeat S, Lambon Ralph MA, Patterson K, Spatt J. The role of conceptual knowledge in object use: evidence from semantic dementia. Brain 2000;123(Pt 9):1913–25.
[199] Lauro-Grotto R, Piccini C, Shallice T. Modality-specific operations in semantic dementia. Cortex 1997;33:593–622.
[200] Mion M, Patterson K, Acosta-Cabronero J, Pengas G, Izquierdo-Garcia D, Hong YT, et al. What the left and right anterior fusiform gyri tell us about semantic memory. Brain 2010;133:3256–68.
[201] Nestor PJ, Fryer TD, Hodges JR. Declarative memory impairments in Alzheimer’s disease and semantic dementia. Neuroimage 2006;30:1010–20.
[202] Rogers TT, Lambon Ralph MA, Garrard P, Bozeat S, McClelland JL, Hodges JR, et al. Structure and deterioration of semantic memory: a neuropsychological and computational investigation. Psychol Rev 2004;111:205–35.
[203] Chao LL, Haxby JV, Martin A. Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nat Neurosci 1999;2:913–9.
[204] Martin A, Haxby JV, Lalonde FM, Wiggs CL, Ungerleider LG. Discrete cortical regions associated with knowledge of color and knowledge of action. Science 1995;270:102–5.
[205] Thompson-Schill SL, Aguirre GK, D’Esposito M, Farah MJ. A neural basis for category and modality specificity of semantic knowledge. Neuropsychologia 1999;37:671–6.
[206] Grafton ST, Fadiga L, Arbib MA, Rizzolatti G. Premotor cortex activation during observation and naming of familiar tools. Neuroimage 1997;6:231–6.
[207] Martin A, Wiggs CL, Ungerleider LG, Haxby JV. Neural correlates of category-specific knowledge. Nature 1996;379:649–52.
[208] Allport DA. Distributed memory, modular systems and dysphasia. In: Newman SK, Epstein R, editors. Current perspectives in dysphasia. Edinburgh: Churchill Livingstone; 1985.
[209] Caspers S, Zilles K, Laird AR, Eickhoff SB. ALE meta-analysis of action observation and imitation in the human brain. Neuroimage 2010;50:1148–67.
[210] Grabowski TJ, Damasio H, Damasio AR. Premotor and prefrontal correlates of category-related lexical retrieval. Neuroimage 1998;7:232–43.
[211] Grezes J, Decety J. Does visual perception of object afford action? Evidence from a neuroimaging study. Neuropsychologia 2002;40:212–22.
[212] Grezes J, Tucker M, Armony J, Ellis R, Passingham RE. Objects automatically potentiate action: an fMRI study of implicit processing. Eur J Neurosci 2003;17:2735–40.
[213] Creem-Regehr SH, Lee JN. Neural representations of graspable objects: are tools special? Cogn Brain Res 2005;22:457–69.
[214] Price CJ, Devlin JT, Moore CJ, Morton C, Laird AR. Meta-analyses of object naming: effect of baseline. Hum Brain Mapp 2005;25:70–82.
[215] Noppeney U, Josephs O, Kiebel S, Friston KJ, Price CJ. Action selectivity in parietal and temporal cortex. Cogn Brain Res 2005;25:641–9.
[216] Okada T, Tanaka S, Nakai T, Nishizawa S, Inui T, Sadato N, et al. Naming of animals and tools: a functional magnetic resonance imaging study of categorical differences in the human brain areas commonly used for naming visually presented objects. Neurosci Lett 2000;296:33–6.
[217] Murata A, Gallese V, Luppino G, Kaseda M, Sakata H. Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. J Neurophysiol 2000;83:2580–601.
[218] Sakata H, Taira M, Murata A, Mine S. Neural mechanisms of visual guidance of hand action in the parietal cortex of the monkey. Cereb Cortex 1995;5:429–38.
[219] Beauchamp MS, Lee K, Haxby J, Martin A. Parallel visual motion processing streams in lateral temporal cortex for manipulable objects and human movements. J Cogn Neurosci 2002:92–3.
[220] Beauchamp MS, Lee KE, Haxby JV, Martin A. fMRI responses to video and point-light displays of moving humans and manipulable objects. J Cogn Neurosci 2003;15:991–1001.
[221] Kable J, Chatterjee A. The role of lateral occipitotemporal cortex in action recognition: evidence from repetition suppression using fMRI. J Cogn Neurosci 2005:116.
[222] Kable J, Wilson A, Chatterjee A. Neural substrates of verb knowledge: motion and movement. J Cogn Neurosci 2002:108–9.
[223] Phillips JA, Humphreys GW, Price CJ. The neural substrates of action retrieval: an examination of semantic and visual routes to action. J Cogn Neurosci 2000:124.
[224] Born RT, Bradley DC. Structure and function of visual area MT. Annu Rev Neurosci 2005;28:157–89.
[225] Beauchamp MS, Martin A. Grounding object concepts in perception and action: evidence from fMRI studies of tools. Cortex 2007;43:461–8.
[226] Mahon BZ, Milleville SC, Negri GA, Rumiati RI, Caramazza A, Martin A. Action-related properties shape object representations in the ventral stream. Neuron 2007;55:507–20.
[227] Mizelle JC, Kelly RL, Wheaton LA. Ventral encoding of functional affordances: a neural pathway for identifying errors in action. Brain Cogn 2013;82:274–82.
[228] Vingerhoets G, Acke F, Vandemaele P, Achten E. Tool responsive regions in the posterior parietal cortex: effect of differences in motor goal and target object during imagined transitive movements. Neuroimage 2009;47:1832–43.
[229] Pelgrims B, Olivier E, Andres M. Dissociation between manipulation and conceptual knowledge of object use in the supramarginalis gyrus. Hum Brain Mapp 2011;32:1802–10.
[230] Rushworth MFS, Behrens TEJ, Johansen-Berg H. Connection patterns distinguish 3 regions of human parietal cortex. Cereb Cortex 2006;16:1418–30.
[231] Zhong YM, Rockland KS. Inferior parietal lobule projections to anterior inferotemporal cortex (area TE) in macaque monkey. Cereb Cortex 2003;13:527–40.
[232] Watson CE, Cardillo ER, Ianni GR, Chatterjee A. Action concepts in the brain: an activation likelihood estimation meta-analysis. J Cogn Neurosci 2013;25:1191–205.
[233] Kiefer M, Sim EJ, Liebich S, Hauk O, Tanaka J. Experience-dependent plasticity of conceptual representations in human sensory-motor areas. J Cogn Neurosci 2007;19:525–42.
[234] Pulvermuller F. Semantic embodiment, disembodiment or misembodiment? In search of meaning in modules and neuron circuits. Brain Lang 2013.
[235] Binkofski F, Buccino G, Posse S, Seitz RJ, Rizzolatti G, Freund HJ. A fronto-parietal circuit for object manipulation in man: evidence from an fMRI-study. Eur J Neurosci 1999;11:3276–86.
[236] Binkofski F, Buccino G, Stephan KM, Rizzolatti G, Seitz RJ, Freund HJ. A parieto-premotor network for object manipulation: evidence from neuroimaging. Exp Brain Res 1999;128:210–3.
[237] Binkofski F, Dohle C, Posse S, Stephan KM, Hefter H, Seitz RJ, et al. Human anterior intraparietal area subserves prehension – a combined lesion and functional MRI activation study. Neurology 1998;50:1253–9.
[238] Kalenine S, Shapiro AD, Buxbaum LJ. Dissociations of action means and outcome processing in left-hemisphere stroke. Neuropsychologia 2013;51:1224–33.
[239] Graziano MS, Aflalo TN. Mapping behavioral repertoire onto the cortex. Neuron 2007;56:239–51.
[240] Mushiake H, Saito N, Sakamoto K, Itoyama Y, Tanji J. Activity in the lateral prefrontal cortex reflects multiple steps of future events in action plans. Neuron 2006;50:631–41.
[241] Shima K, Isoda M, Mushiake H, Tanji J. Categorization of behavioural sequences in the prefrontal cortex. Nature 2007;445:315–8.
[242] Shima K, Tanji J. Neuronal activity in the supplementary and presupplementary motor areas for temporal organization of multiple movements. J Neurophysiol 2000;84:2148–60.
[243] Majdandzic J, Bekkering H, van Schie HT, Toni I. Movement-specific repetition suppression in ventral and dorsal premotor cortex during action observation. Cereb Cortex 2009;19:2736–45.
[244] Fogassi L, Ferrari PF, Gesierich B, Rozzi S, Chersi F, Rizzolatti G. Parietal lobe: from action organization to intention understanding. Science 2005;308:662–7.
[245] Snyder LH, Batista AP, Andersen RA. Intention-related activity in the posterior parietal cortex: a review. Vis Res 2000;40:1433–41.
[246] Hamilton AF, Grafton ST. Goal representation in human anterior intraparietal sulcus. J Neurosci 2006;26:1133–7.
[247] Hamilton AF, Grafton ST. Action outcomes are represented in human inferior frontoparietal cortex. Cereb Cortex 2008;18:1160–8.
[248] Gallivan J, McLean A, Valyear K, Culham J. Decoding the neural mechanisms of human tool use. Can J Exp Psychol 2012;66:323.
[249] Choi SH, Na DL, Kang E, Lee KM, Lee SW, Na DG. Functional magnetic resonance imaging during pantomiming tool-use gestures. Exp Brain Res 2001;139:311–7.
[250] Hermsdorfer J, Terlinden G, Muhlau M, Goldenberg G, Wohlschlager AM. Neural representations of pantomimed and actual tool use: evidence from an event-related fMRI study. Neuroimage 2007;36:T109–T118.
[251] Johnson-Frey SH, Newman-Norlund R, Grafton ST. A distributed left hemisphere network active during planning of everyday tool use skills. Cereb Cortex 2005;15:681–95.
[252] Kroliczak G, Frey SH. A common network in the left cerebral hemisphere represents planning of tool use pantomimes and familiar intransitive gestures at the hand-independent level. Cereb Cortex 2009;19:2396–410.
[253] Moll J, de Oliveira-Souza R, Passman LJ, Cunha FC, Souza-Lima F, Andreiuolo PA. Functional MRI correlates of real and imagined tool-use pantomimes. Neurology 2000;54:1331–6.
[254] Ohgami Y, Matsuo K, Uchida N, Nakai T. An fMRI study of tool-use gestures: body part as object and pantomime. NeuroReport 2004;15:1903–6.
[255] Peeters RR, Rizzolatti G, Orban GA. Functional properties of the left parietal tool use region. Neuroimage 2013;78:83–93.
[256] Valyear KF, Cavina-Pratesi C, Stiglick AJ, Culham JC. Does tool-related fMRI activity within the intraparietal sulcus reflect the plan to grasp? Neuroimage 2007;36:T94–108.
[257] Valyear KF, Gallivan JP, McLean DA, Culham JC. fMRI repetition suppression for familiar but not arbitrary actions with tools. J Neurosci 2012;32:4247–59.
[258] Boronat CB, Buxbaum LJ, Coslett HB, Tang K, Saffran EM, Kimberg DY, et al. Distinctions between manipulation and function knowledge of objects: evidence from functional magnetic resonance imaging. Cogn Brain Res 2005;23:361–73.
[259] Canessa N, Borgo F, Cappa SF, Perani D, Falini A, Buccino G, et al. The different neural correlates of action and functional knowledge in semantic memory: an fMRI study. Cereb Cortex 2008;18:740–51.
[260] Ebisch SJH, Babiloni C, Del Gratta C, Ferretti A, Perrucci MG, Caulo M, et al. Human neural systems for conceptual knowledge of proper object use: a functional magnetic resonance imaging study. Cereb Cortex 2007;17:2744–51.
[261] Kan IP, Kable JW, Van Scoyoc A, Chatterjee A, Thompson-Schill SL. Fractionating the left frontal response to tools: dissociable effects of motor experience and lexical competition. J Cogn Neurosci 2006;18:267–77.
[262] Binder JR, Desai RH, Graves WW, Conant LL. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb Cortex 2009;19:2767–96.
[263] Visser M, Embleton KV, Jefferies E, Parker GJ, Ralph MAL. The inferior, anterior temporal lobes and semantic memory clarified: novel evidence from distortion-corrected fMRI. Neuropsychologia 2010;48:1689–96.
[264] Gainotti G. The format of conceptual representations disrupted in semantic dementia: a position paper. Cortex 2012;48:521–9.
[265] Gainotti G. The organization and dissolution of semantic-conceptual knowledge: is the ‘amodal hub’ the only plausible model? Brain Cogn 2011;75:299–309.
[266] Amoruso L, Gelormini C, Aboitiz F, Gonzalez MA, Manes F, Cardona JF, et al. N400 ERPs for actions: building meaning in context. Front Human Neurosci 2013;7.
[267] Bach P, Gunter TC, Knoblich G, Prinz W, Friederici AD. N400-like negativities in action perception reflect the activation of two components of an action representation. Soc Neurosci 2009;4:212–32.
[268] Balconi M, Caldiroli C. Semantic violation effect on object-related action comprehension. N400-like event-related potentials for unusual and incorrect use. Neuroscience 2011;197:191–9.
[269] Balconi M, Vitaloni S. The tDCS effect on alpha brain oscillation for correct vs. incorrect object use. The contribution of the left DLPFC. Neurosci Lett 2012;517:25–9.
[270] Proverbio AM, Riva F. RP and N400 ERP components reflect semantic violations in visual processing of human actions. Neurosci Lett 2009;459:142–6.
[271] Sitnikova T, Holcomb PJ, Kiyonaga KA, Kuperberg GR. Two neurocognitive mechanisms of semantic integration during the comprehension of visual real-world events. J Cogn Neurosci 2008;20:2037–57.
[272] Sitnikova T, Kuperberg G, Holcomb PJ. Semantic integration in videos of real-world events: an electrophysiological investigation. Psychophysiology 2003;40:160–4.
[273] van Elk M, van Schie HT, Bekkering H. Semantics in action: an electrophysiological study on the use of semantic knowledge for action. J Physiol 2008;102:95–100.
[274] Nobre AC, Allison T, McCarthy G. Word recognition in the human inferior temporal lobe. Nature 1994;372:260–3.
[275] Nobre AC, McCarthy G. Language-related field potentials in the anterior-medial temporal lobe. II. Effects of word type and semantic priming. J Neurosci 1995;15:1090–8.
[276] Bellebaum C, Tettamanti M, Marchetta E, Della Rosa P, Rizzo G, Daum I, et al. Neural representations of unfamiliar objects are modulated by sensorimotor experience. Cortex 2013;49:1110–25.
[277] Cross ES, Cohen NR, Hamilton AF, Ramsey R, Wolford G, Grafton ST. Physical experience leads to enhanced object perception in parietal cortex: insights from knot tying. Neuropsychologia 2012;50:3207–17.
[278] Weisberg J, van Turennout M, Martin A. A neural system for learning about object function. Cereb Cortex 2007;17:513–21.
[279] Vingerhoets G. Knowing about tools: neural correlates of tool familiarity and experience. Neuroimage 2008;40:1380–91.
[280] Newman-Norlund R, van Schie HT, van Hoek MEC, Cuijpers RH, Bekkering H. The role of inferior frontal and parietal areas in differentiating meaningful and meaningless object-directed actions. Brain Res 2010;1315:63–74.
[281] Cooper RP, Shallice T. Hierarchical schemas and goals in the control of sequential behavior. Psychol Rev 2006;113:887–916.
[282] de Lange FP, Spronk M, Willems RM, Toni I, Bekkering H. Complementary systems for understanding action intentions. Curr Biol 2008;18:454–7.
[283] Ondobaka S, Bekkering H. Hierarchy of idea-guided action and perception-guided movement. Front Psychol 2012;3:1–5.
[284] Uithol S, van Rooij I, Bekkering H, Haselager P. Hierarchies in action and motor control. J Cogn Neurosci 2012;24:1077–86.
[285] Pacherie E. The phenomenology of action: a conceptual framework. Cognition 2008;107:179–217.
[286] Barsalou LW. Grounded cognition. Annu Rev Psychol 2008;59:617–45.
[287] Ralph MAL, Patterson K. Generalization and differentiation in semantic memory – insights from semantic dementia. Year Cogn Neurosci 2008;1124:61–76.
[288] Pulvermuller F. Brain mechanisms linking language and action. Nat Rev Neurosci 2005;6:576–82.
[289] Louwerse MM. Symbol interdependency in symbolic and embodied cognition. Top Cogn Sci 2011;3:273–302.
[290] Louwerse MM, Jeuniaux P. The linguistic and embodied nature of conceptual processing. Cognition 2010;114:96–104.
[291] van Schie HT, Wijers AA, Mars RB, Benjamins JS, Stowe LA. Processing of visual semantic information to concrete words: temporal dynamics and neural mechanisms indicated by event-related brain potentials. Cogn Neuropsychol 2005;22:364–86.
[292] Damasio AR. Category-related recognition defects as a clue to the neural substrates of knowledge. Trends Neurosci 1990;13:95–8.
[293] Barsalou LW. Perceptual symbol systems. Behav Brain Sci 1999;22:577.
[294] Ryle G. Knowing how and knowing that. New York: Barnes and Noble; 1971.
[295] Wheeler M. Reconstructing the cognitive world: the next step. Cambridge, MA: MIT Press; 2005.
[296] Ondobaka S, de Lange FP, Newman-Norlund RD, Wiemers M, Bekkering H. Interplay between action and movement intentions during social interaction. Psychol Sci 2012;23:30–5.
[297] Badre D. Cognitive control, hierarchy, and the rostro-caudal organization of the frontal lobes. Trends Cogn Sci 2008;12:193–200.
[298] Fuster JM. Upper processing stages of the perception–action cycle. Trends Cogn Sci 2004;8:143–5.
[299] Petrides M. Lateral prefrontal cortex: architectonic and functional organization. Philos Trans R Soc B 2005;360:781–95.
[300] Gerloff C, Corwell B, Chen R, Hallett M, Cohen LG. Stimulation over the human supplementary motor area interferes with the organization of future elements in complex motor sequences. Brain 1997;120:1587–602.
[301] Mummery CJ, Patterson K, Wise RJS, Vandenbergh R, Price CJ, Hodges JR. Disrupted temporal lobe connections in semantic dementia. Brain 1999;122:61–73.
[302] Gloor P. The temporal lobe and the limbic system. Oxford: Oxford University Press; 1997.
[303] Catani M, Jones DK, Donato R, ffytche DH. Occipito-temporal connections in the human brain. Brain 2003;126:2093–107.
[304] Rizzolatti G, Luppino G, Matelli M. The organization of the cortical motor system: new concepts. Electroencephalogr Clin Neurophysiol 1998;106:283–96.
[305] Reynolds JR, O’Reilly RC. Developing PFC representations using reinforcement learning. Cognition 2009;113:281–92.
[306] Rempel-Clower NL, Barbas H. The laminar pattern of connections between prefrontal and anterior temporal cortices in the rhesus monkey is related to cortical structure and function. Cereb Cortex 2000;10:851–65.
[307] Friederici AD. Pathways to language: fiber tracts in the human brain. Trends Cogn Sci 2009;13:175–81.
[308] Barsalou LW. Grounded cognition: past, present, and future. Top Cogn Sci 2010;2:716–24.
[309] Rizzolatti G, Fabbri-Destro M. The mirror system and its role in social cognition. Curr Opin Neurobiol 2008;18:179–84.
[310] Jacob P. The tuning-fork model of human social cognition: a critique. Conscious Cogn 2009;18:229–43.
[311] Rizzolatti G, Sinigaglia C. The functional role of the parieto-frontal mirror circuit: interpretations and misinterpretations. Nat Rev Neurosci 2010;11:264–74.
[312] Rueschemeyer SA, Lindemann O, van Rooij D, van Dam W, Bekkering H. Effects of intentional motor actions on embodied language processing. Exp Psychol 2010;57:260–6.
[313] van Elk M, van Schie HT, Neggers SF, Bekkering H. Neural and temporal dynamics underlying visual selection for action. J Neurophysiol 2010;104:972–83.
[314] van Dam WO, van Dongen EV, Bekkering H, Rueschemeyer SA. Context-dependent changes in functional connectivity of auditory cortices during the perception of object words. J Cogn Neurosci 2012;24:2108–19.
[315] Poljac E, van Schie HT, Bekkering H. Understanding the flexibility of action–perception coupling. Psychol Res 2009;73:578–86.
[316] Ondobaka S, Wittmann M, de Lange F, Bekkering H. Context-dependent expectations influence neural processing of observed goal-directed action. J Cogn Neurosci 2013:262.
[317] van Schie HT, van Waterschoot BM, Bekkering H. Understanding action beyond imitation: reversed compatibility effects of action observation in imitation and joint action. J Exp Psychol Hum Percept Perform 2008;34:1493–500.
[318] Sakreida K, Scorolli C, Menz MM, Heim S, Borghi AM, Binkofski F. Are abstract action words embodied? An fMRI investigation at the interface between language and motor cognition. Front Human Neurosci 2013;7.
[319] Schuil KDI, Smits M, Zwaan RA. Sentential context modulates the involvement of the motor cortex in action language processing: an fMRI study. Front Human Neurosci 2013;7.
[320] Andersen RA, Cui H. Intention, action planning, and decision making in parietal-frontal circuits. Neuron 2009;63:568–83.