Visual and motor constraints on trajectory planning in pointing movements




Neuroscience Letters 372 (2004) 235–239

R. Palluel-Germain a, F. Boy a, J.P. Orliaguet a,∗, Y. Coello b

a Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Pierre Mendès France, Grenoble II, 1251 Avenue Centrale, BP 47, 38040 Grenoble Cedex 9, France
b Laboratoire URECA, Université Charles de Gaulle, Lille, France
Received 8 July 2004; received in revised form 13 September 2004; accepted 21 September 2004
∗ Corresponding author. Tel.: +33 4 76 82 56 73; fax: +33 4 76 82 78 34. E-mail address: [email protected] (J.P. Orliaguet).
doi:10.1016/j.neulet.2004.09.045

Abstract

The aim of the present study was to show that planning and controlling the trajectory of a pointing movement is influenced not solely by physical constraints but also by visual constraints. Subjects were required to point towards targets located at 20◦, 40◦, 60◦ and 80◦ of eccentricity. Movements were either constrained (i.e. two-dimensional movements) or unconstrained (i.e. three-dimensional movements). Furthermore, movements were carried out under either direct or remote visual control (use of a video system). Results revealed that trajectories of constrained movements were nearly straight whatever the eccentricity of the target and the type of visual control. A different pattern was found for unconstrained movements: under direct vision, trajectory curvature increased with eccentricity, whereas under indirect vision, trajectories remained nearly straight whatever the eccentricity of the target. Thus, movements controlled through remote visual feedback appear to be planned in extrinsic space, as constrained movements are.

© 2004 Elsevier Ireland Ltd. All rights reserved.

Keywords: Pointing movements; Visual constraint; Trajectory planning

In order to understand the fundamental principles of movement planning, behavioural studies rest on the postulate that the spatiotemporal invariances observed for the same movement executed in different situations reflect the nature of movement generation (for a review see ref. [3]). For instance, more than two decades ago Morasso [11] reported that when subjects carried out pointing movements in various directions, trajectories were variable in joint coordinates whereas hand paths were consistently straight, irrespective of the initial and final location of the hand. This extrinsic stability was also reported in other studies and led several authors to the conclusion that goal-directed movements are planned in extrinsic space [6,16,17]. According to this theory, a straight path is selected in extrinsic space between the initial position of the hand and the goal of the movement, and this intended trajectory is then transformed into body coordinates. It has recently been suggested, however, that this conclusion holds merely for a certain class of movements [2,12].


For instance, Desmurget et al. [2] showed that external forces may induce changes in movement planning. They observed that when pointing movements were constrained (i.e. executed in two dimensions) the hand path was straight, and concluded that under this condition the movement was planned in extrinsic space. For movements executed with no constraint (i.e. executed in three dimensions), they found that hand path curvature depended on movement direction. These results suggest that unconstrained movements are planned in intrinsic space, in which the control parameter is the position of each of the joints contributing to the movement [1,7,9]. These data indicate that movement planning depends to some extent on physical constraints. In addition, several studies show that visual constraints may also influence movement production. For instance, when visual feedback of the movement is presented on a screen, movement accuracy decreases [10], movement time increases [4,15] and velocity profiles are modified [4]. Moreover, Pennel et al. [13] demonstrated that conditions of remote control modify the way spatial information is coded.



In this task, though proprioceptive information was found to influence the origin of the reference system used, the hand-to-target vector was mainly coded on the basis of visual information. Therefore, we may hypothesize that the visual constraints generated in situations of remote visual control could affect trajectory planning in the same way as physical constraints. In this view, unconstrained movements executed with a two-dimensional visual feedback (remote visual control) might exhibit straight paths irrespective of movement direction. The aim of the present study was to test this hypothesis by asking healthy adults to carry out spontaneous or constrained pointing movements controlled from either direct or indirect visual feedback.

Twenty subjects participated in the experiment (age 18–35 years, students at the University of Grenoble). All subjects were right handed and naive as to the purpose of the experiment. None of them had any visual or neurological deficit. The experimental device is presented in Fig. 1. Subjects sat in front of an untextured white table positioned above the navel. They were asked to perform pointing movements towards four coloured dots (19 mm circles: red, blue, yellow and green) located at 20◦, 40◦, 60◦ and 80◦ to the right with respect to the sagittal axis and at 20 cm from the starting point (see Fig. 1C). Each subject performed pointing movements in either an indirect or a direct visual feedback condition (IVF and DVF conditions, respectively; see Fig. 1A and B). In the IVF condition, a Sony video camera placed 1.10 m above the table recorded arm displacements (1:1 spatial relationship) and transmitted the movement images continuously and in real time to the video screen. Targets were thus displayed on a colour video screen (0.50 m × 0.40 m) located approximately 0.85 m from the head. The use of the video system ensured a purely two-dimensional visual perception of the movement. The starting position and an additional target placed at 0◦ of eccentricity were aligned with the subject's mid-sagittal axis. Direct vision of the hand was precluded by an occluding board, but the starting position of the hand, the target locations and the displacement of the hand were continuously available on the video screen. In the DVF condition (i.e. in the absence of the occluder), subjects directly viewed a similar arrangement of the workspace. All subjects performed the IVF and DVF conditions but were assigned to either the constrained (C) or the unconstrained (U) condition. In the C condition, subjects had to perform the pointing movements towards the targets while maintaining the index fingertip on the table; in order to minimize friction, the table was coated with liquid paraffin. In the U condition, subjects were informed that contact of the index fingertip with the table was required only at the start and the end of the movement. Whatever the condition, subjects initially positioned their right index fingertip on the starting location and were instructed to look at the additional target (0◦). They were then told which target to point to as quickly and accurately as possible. No instruction was given about the form of the trajectory.

Fig. 1. Schematic representation of the experimental setup. (A) Experimental situation. Targets, hand position and movement were visible on the video screen. (B) Control situation. Targets were directly visible in the workspace. (C) View from above of the workspace. The fixation point was at 0◦; the targets were located 20◦, 40◦, 60◦ or 80◦ to the right of the sagittal axis, at a distance of 20 cm from the starting position.

In each group (U or C), subjects performed movements in both IVF and DVF conditions, in two independent sessions that were counterbalanced across participants. Ten pointing movements were performed towards each target. The order of target presentation was varied randomly to prevent motor learning. The x, y, z displacements of an ultrasound-emitting diode placed on the index fingertip were recorded at 100 Hz (spatial accuracy = 0.1 mm) with a movement registration system (ZEBRIS®, Isny, Germany). Data were then processed and analysed with MATLAB 6.5 (MathWorks®). Positional data were filtered using a second-order Butterworth dual-pass filter (cut-off frequency: 10 Hz).


Movement onset was defined as the time when the tangential velocity of the index finger first exceeded 3 cm/s. Likewise, movement end was defined as the first time the tangential velocity fell below the 3 cm/s threshold. In order to assess movement curvature, we used the path curvature index (PCI) [1]: for each trial, the maximum distance between the trajectory and the straight line joining the starting and end points in the xy plane was divided by the amplitude of the movement. This measure therefore increases with curvature (a straight-line path would have a PCI of zero). To calculate average paths in each condition, each trajectory was first resampled at 30 evenly spaced points over the duration of the movement; the mean of these resampled trajectories defined the average path. A three-way analysis of variance (ANOVA) was performed with repeated measures on visual feedback (IVF and DVF) and target eccentricity (20◦, 40◦, 60◦ and 80◦), and with independent measures on execution constraint (U or C).
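
For illustration, the processing steps just described (zero-lag Butterworth filtering, velocity-threshold segmentation, computation of the path curvature index and time-normalised resampling) can be sketched as follows. The original analysis was carried out in MATLAB 6.5; this Python/NumPy version is only a schematic reconstruction under stated assumptions, and every function name, array layout and helper choice is ours rather than the authors'.

# Schematic reconstruction of the kinematic analysis described in the text.
# Assumption: positions are given in cm as an (n_samples, 3) array sampled at 100 Hz.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0            # sampling rate of the recordings (Hz)
VEL_THRESHOLD = 3.0   # onset/offset threshold on tangential velocity (cm/s)

def lowpass(pos, fs=FS, cutoff=10.0, order=2):
    # Second-order Butterworth, applied forward and backward (dual-pass, zero lag).
    b, a = butter(order, cutoff / (fs / 2.0))
    return filtfilt(b, a, pos, axis=0)

def segment(pos, fs=FS, threshold=VEL_THRESHOLD):
    # Onset: first sample where tangential velocity exceeds the threshold.
    # Offset: first subsequent sample where it falls below the threshold again.
    vel = np.gradient(pos, 1.0 / fs, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    onset = int(np.flatnonzero(speed > threshold)[0])
    below = np.flatnonzero(speed[onset:] < threshold)
    offset = onset + int(below[0]) if below.size else len(speed) - 1
    return onset, offset

def path_curvature_index(xy):
    # PCI: maximum distance between the path and the straight line joining the
    # start and end points (xy plane), divided by the movement amplitude.
    start, end = xy[0], xy[-1]
    chord = end - start
    amplitude = np.linalg.norm(chord)
    dist = np.abs(chord[0] * (xy[:, 1] - start[1]) -
                  chord[1] * (xy[:, 0] - start[0])) / amplitude
    return dist.max() / amplitude

def resample(xy, n_points=30):
    # Resample at n_points evenly spaced over movement duration so that
    # trajectories can be averaged across trials and subjects.
    t_old = np.linspace(0.0, 1.0, len(xy))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.column_stack([np.interp(t_new, t_old, xy[:, k]) for k in range(2)])

In such a sketch, a trial would be processed by filtering the raw positions, segmenting the movement between onset and offset, and computing the curvature index on the xy coordinates of that segment.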


Results concerning hand path curvature are shown in Fig. 2. Fig. 2A shows the average hand paths for each condition in the xy plane. Hand path curvature in the U-DVF condition increases with target eccentricity, whereas in the other conditions trajectories are straighter and appear less affected by target eccentricity. These observations were confirmed by statistical analysis (see Fig. 2B). The path curvature index in the xy plane showed a significant interaction between execution constraint, visual feedback and target eccentricity, F(3,54) = 11.47, p < 0.001; the Greenhouse-Geisser correction (ε = 0.59) did not change the significance of this effect. This three-way interaction indicates that target eccentricity influenced the movement path only for unconstrained movements performed with direct visual feedback. Indeed, in the C condition, no significant difference was observed between the DVF and IVF conditions, F(1,18) = 1.49, p > 0.05, and no interaction between visual feedback and target eccentricity was observed, F(3,54) = 0.373, p > 0.7.

Fig. 2. Hand path curvature. (A) Mean hand paths according to the experimental session. The straight dashed lines represent the lines joining the starting and end positions of the hand. Each trajectory is shown in the xy plane. (B) Mean hand path curvature index and standard deviations for each experimental condition.



Furthermore, in the C condition, linear and quadratic trend analyses on target eccentricity showed no significant effect (linear trend: F(1,18) = 3.95, p > 0.05; quadratic trend: F(1,18) = 3.45, p > 0.05). By contrast, a significant interaction between visual feedback and target eccentricity was observed in the U condition, F(3,54) = 19.04, p < 0.001; the Greenhouse-Geisser correction (ε = 0.51) did not change the significance of this effect. A significant difference was observed between U-DVF and U-IVF, F(1,18) = 13.71, p < 0.002, and a significant interaction was found between visual feedback and the linear trend on target eccentricity (linear trend: F(1,18) = 52.63, p < 0.001; quadratic trend: F(1,18) = 1.87, p > 0.18). Whereas the linear trend was significant for U-DVF (linear trend: F(1,18) = 89.96, p < 0.001; quadratic trend: F(1,18) = 0.82, p > 0.35), no effect of target eccentricity was found in U-IVF (linear trend: F(1,18) = 2.64, p > 0.1; quadratic trend: F(1,18) = 2.72, p > 0.1). Therefore, path curvature was influenced by target eccentricity only in the U-DVF condition. Finally, it is worth noting that no significant difference was observed between the C-IVF and U-IVF conditions, F(1,18) = 2.01, p > 0.05. In addition, movement time (MT) was shorter in the U condition (418 ms) than in the C condition (543 ms), F(1,18) = 16.16, p < 0.001. A significant difference was also observed between IVF (515 ms) and DVF (446 ms), F(1,18) = 21.16, p < 0.001.

In the present study, we investigated whether a visual constraint (use of a video system to guide the movement) could affect trajectory planning in the same way as a physical constraint. Concerning physical constraints, our results confirm those obtained by Desmurget et al. [2]: under direct vision, unconstrained movements showed a curvature that increased as a function of target eccentricity. If one accepts that hand path shape, or rather the changes in hand trajectories observed in movement production, gives insight into how visually guided movements are planned and controlled [3,11], one may consider that these movements are planned and controlled in intrinsic space. Indeed, numerous previous studies concluded that curved trajectories in Cartesian space are the result of movement planning processes [2,9,12]. Furthermore, we observed that for constrained movements hand paths were consistently straight irrespective of movement direction. As suggested by several authors [6,11,16], this extrinsic stability tends to demonstrate that goal-directed arm movements are planned in extrinsic space. It should be noted that, unlike in Desmurget et al.'s study [2], visual feedback was available during the entire movement. Therefore, our data confirm the influence of the execution context on movement planning and the increase in movement duration for two-dimensional movements, and further show that in constrained movements the whole trajectory is defined and controlled in task space.
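
The linear and quadratic trend tests reported above can be illustrated, for a single condition, with orthogonal polynomial contrasts across the four equally spaced eccentricities. The following sketch is ours and only shows the logic of such contrasts (one contrast score per subject, tested against zero, with F = t² for a one-degree-of-freedom contrast); it is not the analysis script used in the study, and the data layout is an assumption.

# Illustrative trend contrasts on the path curvature index (PCI).
# Hypothetical layout: pci is an (n_subjects, 4) array of per-subject mean PCI
# values at 20, 40, 60 and 80 degrees for one condition (e.g. U-DVF).
import numpy as np
from scipy.stats import ttest_1samp

def trend_tests(pci):
    linear = np.array([-3.0, -1.0, 1.0, 3.0])      # orthogonal polynomial contrasts
    quadratic = np.array([1.0, -1.0, -1.0, 1.0])   # for four equally spaced levels
    results = {}
    for name, c in (("linear", linear), ("quadratic", quadratic)):
        scores = pci @ c                 # one contrast score per subject
        t, p = ttest_1samp(scores, 0.0)  # does the trend differ from zero?
        results[name] = {"t": float(t), "p": float(p), "F": float(t) ** 2}
    return results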

The present data further extend Desmurget et al.'s findings [2] by showing that imposing either a visual constraint (two-dimensional visual feedback) or a physical constraint (performing a two-dimensional movement) can have a similar effect on the shape of the trajectory. Indeed, for unconstrained movements carried out under indirect visual feedback, the trajectory tended to be straight whatever the direction of the movement. Thus, movements in this condition do not show the curvature evident in unconstrained movements carried out under direct vision. Biomechanical constraints are unlikely to explain these results because the starting position, the target positions and the type of movement (constrained versus unconstrained) were kept constant across the conditions of visual feedback (direct versus indirect). Therefore, we may suppose that unconstrained movements executed with an indirect visual feedback are planned and controlled in the same way as those performed in two dimensions: when the visual feedback of the movement is provided in two dimensions, movements are planned and controlled in extrinsic space whatever the execution constraints. The fact that trajectories are straight in a remote-control situation, even with unconstrained movements, shows that subjects do not map the visual information on the screen into a centrally generated estimate of target location in the limb's workspace. Rather, such straightness suggests that subjects estimate the desired trajectory from the hand and target positions displayed on the screen. The idea of a visual, hand-centred perception of target position is in agreement with studies on the frame of reference used in adaptation to directional bias in video-controlled reaching tasks [13,14]. For instance, in such a situation, Pennel et al. [13] showed that for most subjects the initial orientation of the hand trajectory in the first trial corresponded to the directional bias introduced. The authors concluded that the movement vector was mainly determined from the information displayed on the video screen, without taking the rotation of the visual information into account. This is in agreement with our data and further confirms that in remote-controlled reaching movements the motor system uses vectorial planning: information about the target is combined with information about hand position to form a simplified hand-centred plan of the intended movement trajectory in extrinsic space [5,6,8,17]. The fact that trajectory planning in extrinsic space concerns not only planar movements but also unconstrained movements controlled via a video screen indicates that movement planning integrates not only the execution constraints but also the visual constraints imposed by the situation.

References

[1] C.G. Atkeson, J.M. Hollerbach, Kinematic features of unrestrained vertical arm movements, J. Neurosci. 5 (1985) 2318–2330.
[2] M. Desmurget, M. Jordan, C. Prablanc, M. Jeannerod, Constrained and unconstrained movements involve different control strategies, J. Neurophysiol. 77 (1997) 1644–1650.

[3] M. Desmurget, D. Pelisson, Y. Rossetti, C. Prablanc, From eye to hand: planning goal-directed movements, Neurosci. Biobehav. Rev. 22 (1998) 761–788.
[4] C. Ferrel, D. Leifflen, J.P. Orliaguet, Y. Coello, Pointing movement visually controlled through a video display: adaptation to scale change, Ergonomics 43 (2000) 461–473.
[5] M.F. Ghilardi, J. Gordon, C. Ghez, Learning a visuomotor transformation in a local area of workspace produces directional biases in other areas, J. Neurophysiol. 73 (1995) 2535–2539.
[6] J. Gordon, M.F. Ghilardi, C. Ghez, Accuracy of planar reaching movements. I. Independence of direction and extent variability, Exp. Brain Res. 99 (1994) 97–111.
[7] H. Grea, M. Desmurget, C. Prablanc, Postural invariance in three-dimensional reaching and grasping movements, Exp. Brain Res. 134 (2000) 155–162.
[8] J.W. Krakauer, Z.M. Pine, M.F. Ghilardi, C. Ghez, Learning of visuomotor transformations for vectorial planning of reaching trajectories, J. Neurosci. 20 (2000) 8916–8924.
[9] F. Lacquaniti, J.F. Soechting, S.A. Terzuolo, Path constraints on point-to-point arm movements in three-dimensional space, Neuroscience 17 (1986) 313–324.
[10] J. Messier, J.F. Kalaska, Differential effect of task conditions on errors of direction and extent of reaching movements, Exp. Brain Res. 115 (1997) 469–478.


[11] P. Morasso, Spatial control of arm movements, Exp. Brain Res. 42 (1981) 223–227.
[12] R. Osu, Y. Uno, Y. Koike, M. Kawato, Possible explanations for trajectory curvature in multijoint arm movements, J. Exp. Psychol. Hum. Percept. Perform. 23 (1997) 890–913.
[13] I. Pennel, Y. Coello, J.P. Orliaguet, Frame of reference and adaptation to directional bias in a video-controlled reaching task, Ergonomics 45 (2002) 1047–1077.
[14] I. Pennel, Y. Coello, J.P. Orliaguet, Visuokinesthetic realignment in a video-controlled reaching task, J. Mot. Behav. 35 (2003) 274–284.
[15] T. Smith, K. Smith, Human factors of workstation telepresence, in: S. Griffin (Ed.), Proceedings of the Third Annual Workshop on SOAR'89, NASA Conference Publication 3059, Houston, 1990, pp. 235–250.
[16] P. Vindras, P. Viviani, Frames of reference and control parameters in visuomanual pointing, J. Exp. Psychol. Hum. Percept. Perform. 24 (1998) 569–591.
[17] P. Vindras, P. Viviani, Altering the visuomotor gain: evidence that motor plans deal with vector quantities, Exp. Brain Res. 147 (2002) 280–295.