
Update Monitor

Vision helps touch

Previous studies have demonstrated that orienting towards the site of somatosensory stimulation can facilitate perception of a tactile stimulus. As facilitation can occur even when visual information about the site of stimulation is unavailable, such as when stimuli are delivered in the dark, these effects have been attributed to the influence of proprioception on somatosensory processing. A recent study by Tipper and colleagues1, however, provides compelling evidence for the independent effects of vision on somatosensation. Tipper et al. examined subjects’ ability to detect and report tactile stimulation applied, with equal probability, to one hand or the other. They demonstrate that when proprioceptive biases are eliminated, perception of a tactile stimulus is enhanced by vision of the site of stimulation (i.e. the hand). Importantly, they show that this is the case even when visual cues are indirect, such as when the visual image of the stimulated hand is presented at a different location from the site of stimulation (by means of a video camera that projected the image onto a monitor at the subject’s midline), or when the view of the stimulated hand is one that might be obtained by a third party viewing the hand. The authors admit that it is not clear whether the results represent a facilitation of tactile processing by vision or an inhibition in its absence, but add that it is also unclear what would constitute a true control condition to determine which of these was occurring. Further work comparing detection of visual and tactile stimuli at the location of an occluded hand will build on these results, and advance our understanding of the integration of sensory inputs.

Reference
1 Tipper, S.P. et al. (1998) Vision influences tactile perception without proprioceptive orienting NeuroReport 9, 1741–1744

Constraining aphasia

A common view of aphasia is that Broca’s aphasics have difficulty with grammatical analysis, whereas Wernicke’s aphasics have a lexical or interpretative deficit. Now, however, Grodzinsky and Finkel1 show that both groups of aphasics are impaired with respect to grammatical analysis. Furthermore, this syntactic deficit is restricted in nature. Grodzinsky and Finkel asked a group of four Broca’s aphasics and a group of seven Wernicke’s aphasics to make grammaticality judgments about grammatical sentences and about sentences in which various kinds of syntactic constraint were violated. Both Broca’s and Wernicke’s aphasics performed poorly on the task. However, the errors mostly concerned a particular type of syntactic constraint: sentences violating constraints on syntactic movement of a full phrase [e.g. ‘Which woman did David think that saw John’ (cf. ‘…that John saw’; trace effect of the word ‘that’) or ‘I don’t know what who saw’ (cf. ‘I don’t know who saw what’; superiority effects)] were incorrectly judged as grammatical. On the other hand, the aphasics performed better on constraints on movement of a smaller syntactic category (verb or negation) [e.g. ‘Have they could leave town?’ (cf. ‘Could they have left town?’); ‘John sat not’ (cf. ‘John didn’t sit’)], and on violations that do not involve movement [such as ‘Who did John see Joe?’ (cf. ‘Who did John see?’)]. Although both Broca’s and Wernicke’s aphasics showed this pattern of performance, the Broca’s aphasics were more selectively impaired with respect to the first type of constraint. Grodzinsky and Finkel are the first to use a grammaticality-judgment task with aphasics. The fact that their results tie in with earlier comprehension studies suggests that the aphasics’ impairment is not dependent on the kind of task, but is a general linguistic impairment. On the other hand, the finding that the impairment concerns only a specific type of constraint suggests that the aphasics’ deficit is structure-dependent, and hence functionally more restricted than has previously been assumed.

Reference
1 Grodzinsky, Y. and Finkel, L. (1998) The neurology of empty categories: aphasics’ failure to detect ungrammaticality J. Cogn. Neurosci. 10, 281–292

Learning virtually

The medial temporal region, and in particular the hippocampus, has been associated with memory for the spatial layout of familiar environments, and this association has been supported by experiments in rats and in non-human primates. Cognitive models of space representation assume that salient landmarks play a key role. A recent human PET study1 examined brain activity while subjects explored a virtual-reality environment. Learning to explore an environment containing various objects in different rooms activated a network of occipital, parietal and occipito-temporal brain regions, including the right parahippocampal gyrus. This region was not activated, however, when subjects explored an environment of empty rooms of different shapes. These results suggest that, in humans, encoding specifically the locations of objects in topographical memory requires parahippocampal involvement.

Reference
1 Maguire, E.A. et al. (1998) Knowing where things are: parahippocampal involvement in encoding object locations in virtual large-scale space J. Cogn. Neurosci. 10, 61–76

‘What’ and ‘where’

Many studies on primates and on human patients with brain lesions have supported the notion that visual recognition and spatial localization of objects are processed in distinct temporal and parietal areas, the ‘what’ and ‘where’ pathways, respectively (e.g. Ref. 1). Now, Sereno and Maunsell2 have reported that neurons in macaque posterior parietal cortex (part of the ‘where’ pathway) discriminate between different 2-D geometrical shapes. Unlike previous studies, this did not involve any visually guided reaching for, or manipulation of, test objects, but only a visual matching task. Monkeys fixated a central spot while a shape was presented within the (peripheral) area to which the neuron being recorded responded. Many neurons showed different firing rates to the different shapes presented. Such shape selectivity is comparable to that previously shown in the temporal pathway, and suggests that posterior parietal cortex contributes to pure object recognition as well as to localization; in other words, ‘what’ and ‘where’ functions are integrated in the same brain area. It may be that the shape selectivity in the posterior parietal cortex is more specialized than that in the temporal areas; for example, because the parietal cortex is known to be important for hand–eye coordination and object manipulation, it would not be too surprising if the shapes to which parietal neurons responded were restricted to those a monkey could grasp and manipulate. Further research will show whether this is in fact the case. However, it is already clear that a strict dichotomy of visual functional specialization into ‘what’ and ‘where’ is probably too simple.

References
1 Ungerleider, L.G. and Haxby, J.V. (1993) ‘What’ and ‘where’ in the human brain Curr. Opin. Neurobiol. 4, 157–165
2 Sereno, A.B. and Maunsell, J.H.R. (1998) Shape selectivity in primate lateral intraparietal cortex Nature 395, 500–503

