Assessing automaticity in the audiovisual integration of motion
Acta Psychologica 118 (2005) 71–92 www.elsevier.com/locate/actpsy

Salvador Soto-Faraco a,*, Charles Spence b, Alan Kingstone c

a Departament de Psicologia Bàsica, Universitat de Barcelona, Pg. Vall d'Hebrón 171, 08035 Barcelona, Spain
b Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
c Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, B.C., Canada V6T 1Z4

Available online 23 November 2004

Abstract

This study examined the multisensory integration of visual and auditory motion information using a methodology designed to single out perceptual integration processes from post-perceptual influences. We assessed the threshold stimulus onset asynchrony (SOA) at which the relative directions (same vs. different) of simultaneously presented visual and auditory apparent motion streams could no longer be discriminated (Experiment 1). This threshold was higher than the upper threshold for direction discrimination (left vs. right) of each individual modality when presented in isolation (Experiment 2). The poorer performance observed in bimodal displays was interpreted as a consequence of automatic multisensory integration of motion information. Experiment 3 supported this interpretation by ruling out task differences as the explanation for the higher threshold in Experiment 1. Together these data provide empirical support for the view that multisensory integration of motion signals can occur at a perceptual level.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: Multisensory integration; Motion; Auditory; Visual; Perception; Attention

* Corresponding author. Tel.: +34 93 312 5158; fax: +34 93 402 1363. E-mail address: [email protected] (S. Soto-Faraco).

0001-6918/$ - see front matter © 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.actpsy.2004.10.008


1. Introduction

1.1. Background

Brain mechanisms of multisensory integration exploit regularities across inputs from different sensory modalities to achieve accurate representations of the environment (Spence & Driver, 2004; Stein & Meredith, 1993). The consequences of multisensory integration in the spatial domain have frequently been investigated at a behavioral level in situations of intersensory conflict. For example, when people are presented with synchronized sounds and light flashes at disparate locations, they often judge the location of the sounds as being closer to the visual events than is actually the case (the so-called "ventriloquist" illusion; Howard & Templeton, 1966; see also Frissen, Vroomen, de Gelder, & Bertelson, 2005). Research into the role of multisensory integration in perception has a long history, but in contrast to the dynamic nature of real environments, laboratory research has traditionally focused on the perception of static events rather than on moving stimuli.

Current research using behavioral measures suggests the existence of strong congruency effects when motion in different sensory modalities is presented in conflicting directions. For instance, when asked about the direction in which a target sound appears to move, judgments can be dramatically influenced by the direction of irrelevant motion in vision (e.g., Anstis, 1973; Kitagawa & Ichihara, 2002; Meyer & Wuerger, 2001; Soto-Faraco, Lyons, Gazzaniga, Spence, & Kingstone, 2002; Zapparoli & Reatto, 1969) and even touch (Soto-Faraco, Kingstone, & Spence, 2003; see Soto-Faraco & Kingstone, 2004; Soto-Faraco, Spence, Lloyd, & Kingstone, 2004a, for reviews). However, the actual processing stage (or stages) at which these effects occur is still a matter of some debate. One possibility is that, when two motion signals are presented simultaneously to different senses, the two sources of directional information are combined during perception in an automatic fashion.
One consequence of this would be that the integration of motion cues occurs in an obligatory and unavoidable manner, such that each unimodal motion signal would not be available individually for conscious inspection. An alternative explanation for many of the strong congruency effects found in the literature is that the two sources of directional information are combined at a later processing stage, after each motion signal has been processed and identified within its corresponding sensory modality. According to this post-perceptual explanation, even if the two motion signals were perceived independently and became available for conscious inspection separately, interactions at later processing stages, such as decision making and/or response execution, might produce any congruency effects observed (e.g., Choe, Welch, Gilford, & Juola, 1975; Welch, 1999). Importantly, perceptual and post-perceptual mechanisms need not be mutually exclusive. Automatic integration could occur on certain occasions or to some degree, and yet post-perceptual processes could still influence one's response (e.g., see Wohlschläger, 2000, for one such example). However, while researchers generally agree on the role that post-perceptual mechanisms may play in determining performance in multisensory motion integration experiments (e.g., Meyer & Wuerger, 2001), current evidence for (or against) the involvement of early perceptual mechanisms in the integration of motion information across different sensory modalities is weak.

The goal of the present study was to address the perceptual basis of the integration of visual and auditory motion information. One limiting factor present in many of the previous studies arises when there is a compatibility relationship between irrelevant motion information (e.g., a visual stimulus moving left-to-right or right-to-left) and the available responses in the experiment (e.g., left or right). The finding of a congruency effect under these conditions can be explained by the post-perceptual influence of motion cues present in the irrelevant sensory modality on the response/decision mechanisms applied to the target modality. In such situations, the conclusion that irrelevant information affects the perception of target information is not necessarily justified. Indeed, because a post-perceptual effect can express itself in the form of criterion shifts regarding different response alternatives, particularly when the target stimulus is ambiguous, it may influence the outcome even when using adaptation after-effects (see Choe et al., 1975, for a discussion). One solution to circumvent this potential confound is to use a response dimension that is orthogonal to the information contained in the displays (e.g., Spence & Driver, 1997), so that irrelevant information cannot provide obvious cues toward one response or the other. A second, perhaps more subtle, form of post-perceptual influence is still possible even in the absence of a direct compatibility relationship between irrelevant information and the response alternatives.
For instance, some decisional biases may arise merely from the fact that the observer becomes aware that one information source (i.e., a stimulus signal in one modality) will sometimes be congruent and sometimes incongruent with the other information source (i.e., a stimulus signal in another modality; see Bertelson, 1998; Bertelson & Aschersleben, 1998; Choe et al., 1975; Welch, 1999, on this point). In this case, the observer might use a different response strategy for each of the two different types of display (congruent or incongruent), leading to response-based explanations of performance.

Mateeff, Hohnsbein, and Noack (1985) studied multisensory integration of motion signals using a methodology that was designed to control for this second type of post-perceptual influence. In their study, participants were asked to judge the direction in which a sound appeared to move (either to the left or right), while looking at or tracking with their eyes a visual target that moved from left-to-right or right-to-left. The sound was presented briefly just as the visual target traversed the centre of its trajectory. Mateeff et al. used psychophysical staircases to demonstrate that the sound direction was ambiguous (i.e., eliciting as many left responses as right responses) when it was presented in the opposite direction at 25–50% of the velocity of the visual stimulus. This use of a psychophysical staircase methodology allowed Mateeff et al. to measure the point of perceptual uncertainty, that is, the point at which observers could not tell whether the sound was moving to the left or to the right. It is in this region of uncertainty that information about the conflict between the direction of sounds and the direction of lights is unavailable to observers. Hence, by definition, the presence or absence of conflict cannot influence their decisional
processes under such conditions. It is worth noting, however, that although Mateeff et al.'s study dealt with the second type of post-perceptual confound mentioned above, the presence of the visual stimulus, which was clearly moving to the left or to the right,¹ could have influenced the participants' responses to the direction of the auditory target stimulus, because the response task was also a left/right decision (i.e., the first type of post-perceptual confound mentioned). Therefore, Mateeff et al.'s study still remains vulnerable to a post-perceptual interpretation.

1.2. Scope of the present study

We addressed the perceptual basis of the multisensory integration of motion information while attempting to overcome these potential post-perceptual confounds. We used the method of psychophysical staircases (e.g., Cornsweet, 1962; see Bertelson & Aschersleben, 1998; Caclin, Soto-Faraco, Kingstone, & Spence, 2002, for applications in the study of static ventriloquism effects) based on the gradual modification of the stimulus onset asynchrony (SOA) between two stimuli presented successively at two locations. The displays contained two such sequences (one visual and the other auditory) presented concurrently and with the same SOA, and participants were required to make a two-alternative forced choice decision regarding whether the direction of the sequence in one modality was the same as or different from the direction of the sequence in the other sensory modality. Note that responses (same vs. different direction) were orthogonal to the direction of the stimuli (left vs. right), and therefore this manipulation does not contain the response-compatibility confound that compromised the interpretation of earlier studies (e.g., Kitagawa & Ichihara, 2002; Mateeff et al., 1985; Soto-Faraco et al., 2004a). The SOA between the two elements of each of the sequences was adjusted from trial to trial after the participant's responses, according to psychophysical staircases.
Since the SOA of the apparent motion streams used in previous studies where a clear crossmodal dynamic capture effect has been reported is 150 ms (Soto-Faraco et al., 2002; Soto-Faraco, Spence, & Kingstone, 2004b), we decided to use staircases with SOAs ranging between 75 ms and 1000 ms. This range included the ‘‘typical’’ value for crossmodal dynamic capture, as well as SOAs at which apparent motion was unlikely to be experienced by participants. The staircases were designed to home in on the point at which participants responded with each of the two possible alternatives (same vs. different) with an equal probability (otherwise known as the point of perceptual uncertainty). At this point, knowledge concerning the congruent vs. incongruent nature of the displays was, by definition, unavailable to the observers, and therefore the use of different response criteria to each type of display was controlled. In this way, the second type of post-perceptual confound, discussed above, could also be eliminated.

¹ Note that despite the fact that in some of their experiments participants were tracking the visual stimulus with their eyes, and therefore the location of the stimulus on the retina may have been more or less fixed, proprioceptive cues presumably still provided a clear sense of a moving visual target.


In a baseline experiment (Experiment 2), we measured the threshold SOA for direction discrimination of visual and auditory apparent motion streams when each was presented in isolation, using the same stimulus values as in the bimodal condition described above. If, as suggested by previous research, observers in Experiment 1 erroneously perceived concurrent motion signals that physically moved in opposite directions as moving in the same direction, then the threshold SOA for discriminating same- vs. different-direction streams presented simultaneously (Experiment 1) should be much higher than the threshold SOA for direction discriminations made to each modality presented in isolation (Experiment 2). In Experiment 3, we addressed the possibility that a higher threshold in Experiment 1 might occur simply because of task differences (i.e., same/different judgments vs. left/right judgments).

2. Experiment 1

The goal of Experiment 1 was to assess the threshold SOA for direction discrimination when auditory and visual motion streams were presented simultaneously. Participants judged whether concurrent auditory and visual streams of stimuli moved in the same or opposite directions. If multisensory integration between motion signals occurs in an automatic fashion, one would expect the thresholds in this task to be higher than the threshold at which participants are usually able to determine the direction of stimuli in each modality when presented in isolation. This is because conflicting motion signals would be incorrectly perceived as moving in the same direction. Moreover, according to previous findings (Soto-Faraco et al., 2002, 2004a, 2004b), this "capture" phenomenon should occur only (or mainly) at SOAs at which the sequence of stimuli is perceived as being in apparent motion. That is, no such capture is expected to occur between the isolated components of the streams when they are perceived as isolated static events (i.e., when the SOA is long enough to eliminate the impression of motion). Note again that, contrary to the majority of studies designed to investigate the multisensory integration of motion information, response-compatibility concerns were not an issue in the present experiment, because the response dimension (same/different) was orthogonal to the left/right directions of motion. That is, under the assumption that participants were actually able to segregate the motion cues independently in each sensory modality (i.e., if there was no perceptual integration), the direction of each unimodal motion signal (right or left) would not provide any hint as to one response or the other (same or different).
In addition, because the staircases were designed to home in on the point of perceptual uncertainty, when the SOA was around threshold values (once the staircase had reached asymptotic behavior), observers were not aware of whether the stimuli were in conflict or not (as in Mateeff et al.'s, 1985, study). Thus, possible confounds stemming from decisional criteria based on display congruency were controlled for in the present study.


2.1. Method

2.1.1. Participants

Twelve undergraduate students from the University of British Columbia participated in this experiment in exchange for course credit. All had normal hearing and normal or corrected-to-normal vision.

2.1.2. Apparatus and materials

The auditory stimuli were presented through two loudspeaker cones separated by 30 cm and arranged horizontally. One orange LED (64.3 cd/m²) was positioned in front of each loudspeaker cone. Each auditory apparent motion stream consisted of two 50 ms tones (1750 Hz; 84 dB(A) SPL) presented in succession from alternate locations. Each visual apparent motion stream consisted of two 50 ms flashes presented in succession from alternate locations. A computer controlled the LEDs and loudspeaker cones through a custom-made relay box, and responses were collected via the PC keyboard.

2.1.3. Procedure

Participants rested their chin on a chin-rest placed 40 cm from the center of the set-up, i.e., between the two loudspeakers/LEDs. The room remained dark throughout the experiment. Each trial contained a visual stream consisting of two successive lights that moved from left to right or right to left, and an auditory stream consisting of two successive sounds that moved from left to right or right to left. The two streams (one in each sensory modality) were presented at the same time and with the same SOA (that is, each component element of the visual sequence occurred at the same time as each component element of the auditory sequence). The participants judged whether the lights and the sounds moved in the same direction or not (using the keys 'Y' for SAME and 'N' for DIFFERENT). The staircases controlling the displays were divided into 32 steps determining a gradation of SOAs (between the first and second event in the stream) ranging from −1000 ms (right-to-left direction) to +1000 ms (left-to-right direction).
The step size in the staircases was variable, with smaller steps around the center and bigger steps at the extremes: steps of 25 ms from ±75 ms to ±150 ms, steps of 50 ms from ±150 ms to ±500 ms, and steps of 100 ms from ±500 ms to ±1000 ms (see Fig. 1a). Half of the auditory streams within one staircase had one direction (for example, left-to-right from −1000 ms to −75 ms SOA), while the other half of the auditory streams moved in the other direction (for example, right-to-left from +75 ms to +1000 ms SOA). The visual streams accompanying the auditory streams were arranged so that their direction was the same as the sounds in half of the staircase steps, and opposite in the other half. The visual streams were always presented at the same absolute SOA as the sounds (in either the same or opposite spatial sequences, depending on their location within the staircase). Four staircases were used in this experiment, combining the relative direction of the visual stream (left-to-right vs. right-to-left) and the auditory stream (left-to-right or right-to-left) at the starting point of the staircase. According to these combinations, two of the staircases started with same-direction displays, and two of the staircases started with opposite-direction displays.

The staircases starting at the same-direction extreme (same-direction staircases from now on) moved one step down after each 'SAME' response and one step up after each 'DIFFERENT' response. The staircases starting at the opposite-direction extreme (opposite-direction staircases, henceforth) moved one step down after each 'DIFFERENT' response and one step up after each 'SAME' response. The initial modification rule remained the same throughout a given staircase. The four staircases were intermixed randomly from trial to trial to avoid potential biases introduced by the participant's history of previous responses (Cornsweet, 1962), and each continued for 60 trials. Before the experiment, participants completed a training block in which the four staircases ran for 10 trials each.

Fig. 1. (a) Schematic outline of two of the staircases used in Experiment 1 (in the other two staircases, the directions of motion in each modality were reversed). Auditory motion is represented by black arrows and visual motion is represented by gray arrows. The arrowheads indicate the direction of motion, and the length of the lines indicates the SOA (the slight spatial mismatch in the figure has been introduced for clarity, as there was no spatial or temporal mismatch in the actual displays presented). The curved arrows link the starting point of each staircase with the corresponding line in the graphic representation of the results in graph b. (b) Individual performance of one participant in Experiment 1. The SOA (y-axis) at successive trials (x-axis) is depicted for each of the four staircases separately. Reversals can be clearly seen as peaks in the sequence of trials. Some of them happened quite early in the sequence. However, the measure adopted in this study was to average the SOA points at which reversals occurred once the staircase reached the asymptote (last six reversals). In this way, we were able to minimize the influence of typing errors or momentary distractions on a participant's responses.

2.1.4. Data analysis

To estimate the point of perceptual uncertainty, we used the SOAs at which the last six response reversals had occurred in each staircase. A reversal occurred when
the response to a given staircase was different from the response given on the previous trial of that staircase. In this and the following experiments, all staircases had reached asymptote by the last six reversals (we submitted the SOA values at which the last six reversals had occurred to an ANOVA with reversal number as a factor, and found no significant effects). Thus the thresholds provided a measure of perceptual uncertainty that can be compared across staircases. We reversed the sign of the SOA values in staircases ending in negative SOA regions, in order to prevent positive and negative SOAs from staircases starting at opposite ends from canceling each other out.

2.2. Results

The discrimination threshold for staircases starting at the opposite-direction extreme (opposite-direction staircases) was 346 ms (SD = 165). In these staircases, participants started to use the 'SAME' response well before the staircase had entered the region where the two sequences were effectively presented in the same direction (see Fig. 1b for an individual example). The threshold SOA for the staircases starting at the same-direction extreme (same-direction staircases) was 279 ms (SD = 171). Note that in the same-direction staircases, participants consistently judged the two streams as having the same direction well after the staircases had entered the range of SOAs where the lights and sounds had begun to move in conflicting directions (see Fig. 1b for an individual example). Finally, there was no difference between the absolute threshold of the same-direction staircases and the threshold of the opposite-direction staircases [F(1, 11) = 2.1, p = .179], indicating that the two types of staircases converged on the same threshold value (average SOA of 312 ms), despite starting at opposite extremes (see Fig. 2).
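The staircase ladder described in Section 2.1.3 and the reversal-based threshold estimate of Section 2.1.4 can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' analysis code; the function names and the toy response sequence are our own:

```python
def build_soa_ladder():
    """The 32 signed SOA steps (ms) described in Section 2.1.3:
    25 ms steps from +/-75 to +/-150, 50 ms steps from +/-150 to +/-500,
    and 100 ms steps from +/-500 to +/-1000 (the sign encodes direction)."""
    half = (list(range(75, 150, 25))        # 75, 100, 125
            + list(range(150, 500, 50))     # 150, 200, ..., 450
            + list(range(500, 1001, 100)))  # 500, 600, ..., 1000
    return sorted(-s for s in half) + half  # 16 negative + 16 positive steps

def threshold_from_reversals(trials, n_last=6):
    """Estimate the point of perceptual uncertainty for one staircase.
    `trials` is a list of (soa_ms, response) pairs in presentation order.
    A reversal is a trial whose response differs from the previous trial of
    the same staircase; the threshold is the mean absolute SOA of the last
    `n_last` reversals (taking absolute values prevents staircases ending in
    the negative SOA region from canceling out positive ones)."""
    reversal_soas = [soa for (soa, resp), (_, prev) in
                     zip(trials[1:], trials[:-1]) if resp != prev]
    last = reversal_soas[-n_last:]
    return sum(abs(s) for s in last) / len(last)
```

For example, a staircase whose responses end up alternating between the 250 and 300 ms steps yields a threshold estimate of 275 ms.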

[Figure 2: bar graph of threshold SOA (ms) for each condition on the horizontal axis: (a) bimodal same/different task (Experiment 1); (b) unimodal auditory and (c) unimodal visual left/right tasks (Experiment 2); (d) bimodal same/different task, spatially misaligned (Experiment 3); horizontal reference lines mark the visual and auditory apparent motion thresholds.]

Fig. 2. Average 50% discrimination thresholds (+S.E.) in Experiments 1–3, together with a description of the type of display and task used in each experiment (horizontal axis). The thresholds are averaged across same- and opposite-direction staircases in Experiments 1 and 3, and across staircases starting on the left and right in Experiment 2. The gray horizontal lines represent the 50% threshold of visual (thin dashed line) and auditory (thick dotted line) apparent motion perception for the stimuli used in Experiments 1–3 (see methods in Appendix A). The perception of apparent motion was weak or absent at SOAs above these lines.


2.3. Discussion

The directional discrimination thresholds found in Experiment 1 under bimodal conditions are much larger than would be expected if participants had, in fact, been able to evaluate the direction of each modality separately. When the SOA of the two sequences was below 312 ms, it seems that multisensory integration led to the experience that sounds and lights moved in the same direction, even when objectively they did not. Nowhere is this interpretation more strongly suggested than in the same-direction staircases. Here, the sounds and the lights were presented in the same direction at shorter and shorter SOAs until the midpoint was crossed (i.e., from −75 ms SOA to +75 ms SOA). At this point, the auditory and visual motion streams began to be presented in opposite directions at longer and longer SOAs; however, participants continued to judge these two streams as having the same direction² until the SOA approached nearly 300 ms. As already described, this is the threshold at which the participants were uncertain as to whether auditory and visual streams had the same or opposite directions. In this situation, decisions cannot be contaminated by post-perceptual factors associated with an awareness of the conflicting or congruent nature of the displays, which may have been present in previous investigations addressing the multisensory integration of motion information. This erroneous judgment regarding the direction of different-direction streams can be attributed to an automatic perceptual integration process that resulted in the participants' inability to evaluate the motion signal in one sensory modality independently from that presented in the other modality. Indeed, an apparent motion control (reported in Appendix A) demonstrated that the thresholds obtained in Experiment 1 encapsulated and converged with the range of SOAs at which participants experienced apparent motion in each modality when presented individually (see Fig. 2).
In particular, the bimodal threshold in Experiment 1 was neither significantly higher nor lower than the upper thresholds for apparent motion (see Appendix A): for vision, F < 1 for both types of staircases; for audition, F < 1 for same-direction staircases and F(1, 36) = 1.6, p = .213, for opposite-direction staircases.

² According to the inverse effectiveness rule, originally described in the context of single-cell neurophysiological studies in animals (e.g., Stein & Meredith, 1993), the weaker the signals, the stronger the effects of multisensory integration should be. In light of recent human studies suggesting that this rule may not apply to humans (Odgaard, Arieh, & Marks, 2003; Stein, London, Wilkinson, & Price, 1996) or even to animals under all conditions (Populin & Yin, 2002), it would be interesting to address in future studies whether the integration of motion conforms to the inverse effectiveness rule. However, this is not the intention of the present study. Moreover, it is difficult to see how it would even be possible to undertake this test, given that there is no information available regarding the strength of directional information at the different SOAs used here.

3. Experiment 2

Experiment 1 revealed two important findings. First, people will judge two stimuli as moving in the same direction when they are moving in opposite directions, even in the absence of distractor-response compatibility issues. Second, the SOA range over which this capture starts to occur (approximately ±300 ms) dovetails with the SOA range over which visual and auditory motion is experienced. The implication of these data, which converge with the phenomenological experiences reported by the participants in previous studies (e.g., Zapparoli & Reatto, 1969), is that within this critical SOA range, auditory motion was perceived to occur in the same direction as the visual motion even when the streams in the two sensory modalities were, in fact, moving in opposite directions. The goal of Experiment 2 was to confirm that the threshold at which participants failed to discriminate the relative directions of streams in bimodal displays was actually higher (i.e., discrimination was poorer) than the threshold at which directional judgments could be made in unimodal displays.

3.1. Methods

3.1.1. Participants

Twenty-six undergraduates from the University of British Columbia with normal hearing and normal or corrected vision were tested. They participated in exchange for course credit and were naïve as to the purpose of the experiment. These participants were also tested in an experiment assessing the apparent motion thresholds for the visual and auditory stimuli used in this study (see Appendix A). The order in which participants completed these two experiments was counterbalanced across participants.

3.1.2. Materials and procedure

The methods were the same as those used in Experiment 1, except for the following. Each trial contained either an auditory or a visual two-stimulus sequence, and participants responded to its direction (left or right) using the keys '<' or '>' with their preferred hand. The sequence of SOAs within a staircase contained 50% streams in each direction, with the change in the sign of the SOA across the midpoint corresponding to a switch in the direction of the stimulus (see Fig. 3a).
Participants were tested with two staircases (one starting at the +1000 ms SOA and the other starting at the −1000 ms SOA) for each modality (auditory and visual), yielding four staircases in total. Staircases starting at the +1000 ms SOA extreme (right-to-left sequence) moved one step down (toward the opposite extreme) after a 'LEFT' response, and one step up (back toward the +1000 ms SOA extreme) after a 'RIGHT' response. The complementary modification rule was applied to staircases starting at the −1000 ms SOA point (left-to-right extreme). Note that, as in Experiment 1, the modification rule for a given staircase was maintained throughout, even if the staircase happened to cross the midpoint. For instance, in the +1000 ms SOA staircase for lights, an observer might correctly respond 'LEFT' all the way down to the +75 ms SOA. A 'LEFT' response on the next trial for that staircase would move the SOA across the midpoint to the −75 ms SOA, at which point the lights would move from left-to-right at the next step of that staircase. If the observer correctly made a 'RIGHT' response on that trial, the staircase moved one step back, i.e., to the +75 ms SOA.
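The modification rule above can be made concrete with a small sketch (illustrative only; the representation of staircase position as an index into the signed SOA ladder is our own assumption, chosen so that crossing the midpoint flips the direction of motion automatically):

```python
# Signed SOA ladder as in Experiment 1 (the sign encodes direction of motion).
HALF = (list(range(75, 150, 25)) + list(range(150, 500, 50))
        + list(range(500, 1001, 100)))
LADDER = sorted(-s for s in HALF) + HALF  # index 0 = -1000 ms ... 31 = +1000 ms

def next_step(index, response, start_extreme='+1000'):
    """One-step-up/one-step-down rule of Experiment 2. For a staircase that
    starts at the +1000 ms extreme, a 'LEFT' response moves one step toward
    the -1000 ms extreme and a 'RIGHT' response moves one step back; the
    complementary rule applies to staircases starting at -1000 ms. The rule
    stays fixed for the whole staircase, even after crossing the midpoint."""
    toward_minus = ((response == 'LEFT') if start_extreme == '+1000'
                    else (response == 'RIGHT'))
    new = index - 1 if toward_minus else index + 1
    # Clamp at the extremes (an assumption; the published procedure does not
    # state what happens if a staircase reaches an end of the ladder).
    return max(0, min(len(LADDER) - 1, new))

# The worked example from the text: at +75 ms (index 16), a further 'LEFT'
# response crosses the midpoint to -75 ms; a correct 'RIGHT' response on the
# next trial moves the staircase one step back to +75 ms.
i = LADDER.index(75)
i = next_step(i, 'LEFT')   # LADDER[i] is now -75
i = next_step(i, 'RIGHT')  # LADDER[i] is back to +75
```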


Fig. 3. (a) Schematic outline of the staircase procedure used in Experiment 2. The arrows represent each step of the staircases, where the length is proportional to the SOA at that step and the arrowhead indicates the direction of motion (note that the steps are smaller at shorter SOAs, and the direction of motion switches at the change of sign). Each staircase began from one of the two extremes, and initially moved step-by-step toward the opposite extreme, as long as the same response was maintained over successive trials in that staircase. When a response to a trial was different from the previous response in that staircase (i.e., a response reversal occurred), the staircase moved one step up (back toward the original extreme) and continued moving in the same direction until a new response reversal occurred. (b) shows the individual performance of one participant in the visual staircases of Experiment 2, and (c) shows the same participant's performance in the auditory staircases of Experiment 2. The SOA at successive trials (y-axis) is plotted as a function of the trial number (x-axis) for each of the auditory (top graph) and the visual staircases (bottom graph).

3.2. Results and discussion

The average threshold for auditory direction discrimination was 59 ms (SD = 21), and the average threshold for visual direction discrimination was 60 ms (SD = 27). All staircases descended rapidly toward the central region and settled around the central values (Fig. 3b shows one individual example), indicating that unimodal direction discrimination here was considerably more accurate than in the bimodal displays of Experiment 1 (see Fig. 2). In particular, both the auditory and the visual unimodal thresholds were significantly lower


than the bimodal thresholds observed in the opposite-direction [t(36) = 8.9, p < .001 (auditory); and t(36) = 9.0, p < .001 (visual)] and the same-direction [t(36) = 6.6, p < .001 (auditory); and t(36) = 6.7, p < .001 (visual)] staircases of Experiment 1. The participants were clearly able to discriminate direction in each modality even at the shortest SOA used (i.e., 75 ms), and so by the end of each staircase the values alternated around the midpoint from one step to the next. Thus, the thresholds obtained in this experiment were probably limited by the minimum SOA used in the study (note that much lower thresholds have typically been obtained in studies addressing this particular question with a finer gradation of SOAs; e.g., Perrott & Strybel, 1994). It must be noted, however, that it was not the goal of the present experiment to establish exact unimodal thresholds, nor was it possible to do so without including finer steps in the staircases (and therefore changing the parameters with respect to those used in Experiment 1). Crucially, the present data suffice to show that the thresholds for the bimodal displays of Experiment 1 were higher (i.e., discrimination was poorer) than the performance levels obtained with unimodal direction judgments.

Our argument is that the poorer directional discrimination observed in Experiment 1 was caused by the perceptual nature of multisensory integration of motion signals in the two sensory modalities. Before considering the full implications of this finding, however, an alternative explanation must be addressed: the increased threshold observed in Experiment 1 might merely reflect the fact that the same/different discrimination used there was not identical to, and perhaps more difficult than, the left/right discrimination used in Experiment 2.
To address this issue, we conducted a final experiment that retained the same/different task but modified the set-up so as to reduce the likelihood of multisensory integration taking place.
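The excerpt does not specify exactly how the asymptotic threshold of each staircase was computed. A common convention in staircase psychophysics (cf. Cornsweet, 1962, cited in the references) is to average the absolute stimulus values at the last several response reversals; the sketch below illustrates that generic convention only, and the number of reversals averaged (`n_last`) is an arbitrary choice here, not a value taken from the paper.

```python
def estimate_threshold(reversal_soas, n_last=6):
    """Average the absolute SOA (ms) over the last n response reversals.

    A generic staircase convention, not necessarily the exact rule used
    by Soto-Faraco et al.; reversal_soas holds the SOA at each reversal.
    """
    tail = reversal_soas[-n_last:]
    return sum(abs(soa) for soa in tail) / len(tail)
```

For a staircase that ends up alternating between +75 and −75 ms (as the unimodal staircases of Experiment 2 did), this estimate converges on 75 ms, the floor imposed by the smallest SOA in the ladder.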

4. Experiment 3

The goal of Experiment 3 was to examine whether the same/different task could, by itself, account for the increased directional discrimination threshold observed in Experiment 1. We reproduced the conditions of Experiment 1 in terms of task and response mapping, but reduced the likelihood of multisensory integration by spatially misaligning the locations from which the stimuli in each sensory modality were presented. By doing so, we introduced cues (such as spatial location and speed of motion) that helped to segregate the two signals and thereby greatly diminished the opportunity for multisensory integration to occur. For instance, the relative spatial location of stimuli presented in different sensory modalities has often been shown to modulate the extent of multisensory integration at both the behavioral and single-cell levels (e.g., Bertelson, 1998; Lewald & Guski, 2003; Slutsky & Recanzone, 2001; Soto-Faraco et al., 2002; Stein & Meredith, 1993; Welch & Warren, 1986; Zampini, Guest, Shore, &


Spence, in press; Zampini, Shore, & Spence, 2003). Indeed, Soto-Faraco et al. (2004a, 2004b) recently showed that this modulation also applies in the particular case of motion processing (see also Meyer & Wuerger, 2001; Sanabria, Soto-Faraco, Chan, & Spence, 2004). If it was the same/different task that produced the increased bimodal thresholds in Experiment 1 relative to Experiment 2, then Experiment 3 should produce a threshold similar to that of Experiment 1, because participants were again required to make same/different judgments about concurrent motion stimuli presented in different sensory modalities. If, however, the threshold increase in Experiment 1 was due to auditory and visual motion being integrated, then misaligning the auditory and visual inputs in Experiment 3 should reduce multisensory integration because of the extra segregation cues present, and we should consequently see a lower threshold than in Experiment 1 (reflecting a greater ability for directional discrimination).

4.1. Method

4.1.1. Participants
Twelve new undergraduate students from the University of British Columbia participated in this experiment in exchange for course credit. All reported normal hearing and normal or corrected-to-normal vision.

4.1.2. Materials, apparatus and procedure
All aspects of the method were exactly as in Experiment 1 except for the location of the LEDs used to generate the visual apparent motion streams. In this experiment, the two LEDs were placed next to each other (about 1 cm apart; 1.33° of visual angle) at the center of the setup, midway between the two loudspeaker cones, which were placed 30 cm (40° of visual angle) from each other, as in the previous experiments.

4.2. Results

The threshold obtained with the opposite-direction staircases was 112 ms (SD = 115), and the threshold obtained with the same-direction staircases was 145 ms (SD = 131).
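As a consistency check on the geometry just reported: the viewing distance is not stated in this excerpt, but a distance of roughly 43 cm (an inferred, illustrative value, not a figure from the paper) makes the small-angle conversion reproduce both reported values (30 cm separation ≈ 40°, 1 cm separation ≈ 1.33°).

```python
import math

def visual_angle_deg(separation_cm, distance_cm):
    """Small-angle approximation: angle (deg) subtended by a separation.

    The 43 cm viewing distance used below is an assumption inferred from
    the values reported in the text, not stated by the authors.
    """
    return math.degrees(separation_cm / distance_cm)

# 30 cm loudspeaker separation -> ~40 deg; 1 cm LED separation -> ~1.33 deg
```

With `distance_cm = 43`, the loudspeaker separation comes out at just under 40° and the LED separation at 1.33°, matching the figures in the Method section.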
There was no difference between the thresholds obtained with the same-direction staircases and those obtained with the opposite-direction staircases [F(1, 11) = 1.5, p = .249], indicating that the two types of staircase converged on approximately the same region. The data from Experiment 3 (mismatching spatial location) and Experiment 1 (matching spatial location) were submitted to an ANOVA with the between-participants factor of spatial match (matching vs. mismatching) and the within-participants factor of staircase type (same- vs. opposite-direction). Critically, the main effect of spatial match was significant [F(1, 22) = 13.1, p = .002], reflecting the


fact that the average threshold obtained in Experiment 3 (129 ms) was significantly lower (i.e., performance was more accurate) than that obtained in Experiment 1 (312 ms; see Fig. 2). Neither the main effect of staircase type [F < 1] nor the interaction between these two factors [F(1, 22) = 3.4, p = .077] was significant (but see Footnote 3). The thresholds obtained in Experiment 3 were also compared to the unimodal thresholds of Experiment 2. For the opposite-direction staircases, the threshold in Experiment 3 was significantly higher than the unimodal thresholds obtained in Experiment 2 [t(36) = 2.5, p < .05 (auditory); and t(36) = 2.6, p < .05 (visual)]. This was also the case for the same-direction staircases [t(36) = 3.7, p = .001 (auditory); and t(36) = 3.8, p < .001 (visual)].

4.3. Discussion

In Experiment 3, the nature of the stimuli (two simultaneous streams in different modalities) and the task demands (same/different judgments) were the same as in Experiment 1. Yet there was a significant reduction in the threshold (i.e., an increase in directional discrimination accuracy) compared to Experiment 1. This reduction is best understood as reflecting the fact that less than optimal conditions for multisensory integration were present in Experiment 3 (i.e., the auditory and visual stimuli were presented from different spatial locations, thus providing the basis for more efficient segregation of the two modalities in terms of spatial mismatch and speed of motion; see also Sanabria et al., 2004). The present results therefore strongly suggest that multisensory integration, rather than the task per se, is primarily responsible for the increase in threshold observed when Experiment 1 is compared to Experiment 2.
Another manipulation that may help address the role of task difficulty in the differences between Experiments 1 and 2 would be to use the left/right discrimination task for one modality with bimodal displays like those of Experiment 1 (i.e., participants would focus on the auditory motion signal while ignoring the visual motion signal, or vice versa). We carried out precisely this manipulation in another control experiment not reported here (see Soto-Faraco & Kingstone, 2004) and found a threshold of 386 ms (SD = 246) when participants made auditory direction judgments while attempting to ignore visual motion, and of 82 ms (SD = 8.1) when they made visual direction judgments while attempting to ignore auditory motion. These data reinforce the claim that the left/right judgment task can be just as difficult as the same/different judgment task, provided that the necessary conditions for multisensory integration apply. The asymmetry found (a strong effect of vision on

Footnote 3. Because the interaction between spatial match and staircase type was close to significance, we performed individual analyses for each type of staircase in order to confirm that the reduction in threshold from Experiment 1 to Experiment 3 was reliable for each type of staircase individually. As expected, the reduction was significant for each type of staircase when analyzed separately [t(22) = 4.2, p < .001, for opposite-direction staircases; and t(22) = 2.2, p < .05, for same-direction staircases].


auditory judgments but no effect of sound on visual judgments) is the usual result in studies of crossmodal dynamic capture (see Soto-Faraco et al., 2004a, 2004b; see also Footnote 4). It should be noted that the threshold under the spatially misaligned conditions of Experiment 3 was still slightly higher than the unimodal thresholds obtained in Experiment 2, where participants made left/right judgments for the auditory and visual sequences presented in isolation. There are several potential explanations for this result. First, the same/different task may indeed have had some influence on the thresholds, although, as noted above, this influence is unlikely to have been sufficient to explain the difference between Experiments 1 and 2. Second, because the spatial coincidence rule is not a discrete, all-or-nothing criterion, some residual (albeit reduced) multisensory integration may still have occurred in Experiment 3 (see Meyer & Wuerger, 2001; Soto-Faraco et al., 2002, for a similar view). Indeed, it is interesting to note that in Mateeff et al.'s (1985) study, the visual motion stimulus traversed a longer trajectory than the auditory motion stimulus, suggesting that their experiment may have underestimated the true extent of multisensory integration (see also Sanabria et al., 2004, on this issue). In any case, because the difference in thresholds between Experiments 1 and 3 was clear, the main point stands: task changes alone cannot explain the dramatically higher motion discrimination thresholds observed in Experiment 1 as compared to Experiments 2 and 3.

One could perhaps argue that there is another potential difference between Experiments 1 and 3 in terms of attentional costs.
Despite every effort being made to ensure that the two experiments placed exactly the same demands on dividing attention across sensory modalities, it might be argued that the spatial segregation cues available in Experiment 3 made the division of attention between the auditory and visual motion streams easier. Note, however, that it is precisely the fact that auditory and visual cues are readily integrated into a single percept that ultimately underlies the difficulty of segregating the two sources of information in the spatially aligned conditions of Experiment 1. Therefore, even if the difference in results between Experiments 1 and 3 were to be explained in terms of attentional costs, one would still need to invoke multisensory integration of the audiovisual cues in order to mount such an explanation.

Footnote 4. Unfortunately, the design of this follow-up experiment necessarily fails to control for one of the confounds we have attempted to rule out in this study (i.e., there is a compatibility relationship between the distractor and the response set). Therefore, within the context of this study its interpretation must be limited to serving as a secondary control for the change in task.

5. General discussion

Taken together, the results of the experiments reported here highlight the importance of perceptual factors in the integration of motion information across sensory modalities. In Experiment 1, we showed the difficulty that people experience when trying to discriminate the relative directions of two apparent motion streams presented concurrently in different modalities. Specifically, participants tended to judge


streams that were moving in opposite directions as moving in the same direction at SOAs a full order of magnitude higher than the thresholds obtained for directional discrimination of the same sequences presented in isolation (Experiment 2). Experiment 3 ruled out task differences as the major cause of this decrement. Taken together, these data indicate that when concurrent auditory and visual streams are presented at SOAs of approximately 300 ms or less, auditory stimuli are frequently perceived to move in the same direction as the visual stimuli, even when the two streams physically move in opposite directions. These results demonstrate that perceptual processes play a significant role in the visual capture of the perceived direction of auditory motion, because potential response biases related to stimulus-response compatibility, and potential criterion shifts related to awareness of the conflict situation, were ruled out in our study.

5.1. Static vs. dynamic cues

One point of contention running through previous research in this area has been whether crossmodal effects during the processing of motion information, especially when apparent motion streams are used, could be accounted for by processes occurring between static events (see Allen & Kolers, 1981, on this point). Soto-Faraco et al. (2002, 2004b) have recently provided evidence supporting the view that interactions between dynamic properties (i.e., direction of motion) exist over and above any interactions between static events (i.e., spatial location), such as ventriloquism. The present results provide further empirical support for this conclusion. In the present study, crossmodal capture (in Experiment 1) occurred within a restricted range of SOAs that fell within the boundaries of the thresholds at which apparent motion was experienced by our participants (see Fig. 2).
An explanation based purely on static ventriloquism effects, by contrast, would predict that each individual sound should be mislocalized toward the position of the corresponding light flash regardless of the SOA between the first and second pairs of sound/light events. The present results are therefore consistent with the idea that the experience of motion is necessary for crossmodal dynamic capture to occur.

Determining the temporal order in which two stimuli are presented becomes increasingly difficult as the SOA between them decreases (see Spence, Shore, & Klein, 2001, for a review). One might therefore wonder whether in Experiment 1 participants may actually have been performing a temporal order judgment (TOJ) regarding the side on which each modality was presented first (at least at some of the longer SOAs in the experiment), with errors becoming more and more frequent as the SOA decreased. This potential argument is weakened by several considerations, however. First, it cannot easily account for the results of Experiment 3, where the spatial distance between the two stimulus locations was dramatically reduced. This type of spatial manipulation has been shown to make TOJs harder, at least in crossmodal tasks (e.g., Spence et al., 2001; Swisher & Hirsh, 1972; Zampini et al., 2003), not easier, as proved to be the case in our study. Second, there is no principled reason to assume that errors committed because of an inability to establish temporal order would be systematic. Yet it is clear that in same-direction


staircases participants systematically responded "same" to different-direction trials (i.e., once the staircase had crossed the midpoint) up to an SOA of approximately 300 ms. Finally, it is important to note that the inferences drawn from the present data are based on what happens at the SOAs of perceptual uncertainty. Thus, although the analysis of what happens at other SOAs may be of interest, performance in the region of perceptual uncertainty should, in principle, be relatively independent of whatever strategies participants based their responses on at the larger SOAs.

5.2. Levels of processing

Investigators have used several research strategies to address the nature of the perceptual crossmodal biases obtained in intersensory conflict situations. These include the use of adaptation after-effects (e.g., Botvinick & Cohen, 1998; Kitagawa & Ichihara, 2002; Radeau & Bertelson, 1974), the desynchronization of the cues presented to each sensory modality (e.g., Guest, Catmur, Lloyd, & Spence, 2002; Soto-Faraco et al., 2002, 2004b), and the use of subjective reports (e.g., Bertelson & Radeau, 1976; Zapparoli & Reatto, 1969). However, potential confounds arise when participants are exposed to situations of stimulus-response compatibility and/or are aware of the presence of obviously conflicting cues across sensory modalities. In such cases, explanations based on output interference or cognitive factors can be tailored to account for the data, thereby compromising any purely perceptual account (e.g., Bertelson & Aschersleben, 1998; Choe et al., 1975). These potential confounds can be present even when adaptation after-effects are used.
For example, in their recent demonstration of multisensory effects in motion perception, Kitagawa and Ichihara (2002) found that participants' judgments of whether a sound was decreasing or increasing in intensity depended on the direction of a visual adaptor that could be either looming (increasing in size) or receding (decreasing in size). In a recent study, Vroomen and de Gelder (2003) used the contingent auditory motion after-effect (see Dong, Swindale, & Cynader, 1999) to study multisensory integration effects. They found that this auditory after-effect (left or right) was contingent upon the direction (left or right) of the visual component of a bimodal motion adaptor, rather than upon the auditory component of the adapting stimulus, suggesting that the visual stimulus modulated the perceived direction of the auditory adaptor. These methods still contain a compatibility relationship between the adaptor displays and the response set used to evaluate the direction of the target stimulus. Therefore, their data may be subject to the same post-perceptual explanations highlighted by several authors in the past (e.g., see Bertelson & Aschersleben, 1998; Choe et al., 1975; Welch, 1999, for discussions of this issue). There is, however, one piece of data in Vroomen and de Gelder's study that suggests a genuinely perceptual basis for at least some of their effects: a condition in which the adaptor displays contained stationary sounds combined with moving visual stimuli did not lead to any adaptation after-effects. Because the visual displays still contained response-compatible/incompatible information, it would appear that the role of response compatibility was probably quite small in their experiment.


The use of an orthogonal response dimension in Experiments 1 and 3 allowed us to obtain a measure of intersensory bias independent of this kind of stimulus-response compatibility effect. In addition, in these two critical experiments several staircases with different starting points were intermixed, and the thresholds were measured at the point of perceptual uncertainty about whether a trial contained conflicting or congruent apparent motion streams in audition and vision. That is, the asymptotes of the staircases reflected the stimulus parameters at which participants could no longer classify the streams as either conflicting or congruent; thus, by definition, biases based on knowledge of the conflicting or congruent nature of the displays could not have affected performance.

Even if no biases regarding response compatibility or conflict awareness could have operated in Experiment 1, one could still argue that the present results are open to the concern that participants tended to use the SAME response more often than the DIFFERENT response. However, Experiment 3 provides an independent empirical control for this possibility. In Experiment 3, all conditions were equivalent to those of Experiment 1 except for the spatial arrangement of the stimuli, and yet the thresholds obtained were dramatically lower (i.e., performance was more accurate). Given that the only change from Experiment 1 to Experiment 3 was the spatial arrangement of the stimuli, the difference in results cannot be attributed to a "respond-same" bias, which must have been equivalent across these two experimental manipulations. The present results therefore provide strong support for the view that crossmodal dynamic capture effects reflect the consequences of automatic perceptual integration.
This is consistent with recent results demonstrating the perceptual nature of other, static crossmodal phenomena, such as the audiovisual (Bertelson & Aschersleben, 1998; see also Frissen et al., 2005) and audiotactile ventriloquism effects (Caclin et al., 2002). It is important to note, however, that the present results do not rule out the possibility that post-perceptual factors also play a role in the crossmodal integration of motion; indeed, recent studies suggest that they do (e.g., see Meyer & Wuerger, 2001). The data obtained in the present study nevertheless demonstrate that perceptual processes play a significant role in the integration of motion across different sensory modalities.

5.3. Conclusions

The classic subject of inquiry regarding multisensory contributions to spatial information processing has revolved around the perception of static events but, as highlighted in Section 1 of the present study, it is also important to consider multisensory contributions to motion processing. The introduction of motion into our investigation is key to understanding spatial processing in real environments, where many objects move with respect to one another or with respect to the observer. Here, we have pointed out the critical, as well as difficult-to-draw, distinction between the integration of motion-direction information at a perceptual level and that occurring at later stages, perhaps reflecting decisional processes not necessarily involved in motion processing. The present findings provide strong support for the


hypothesis that multisensory integration of motion signals can occur at a perceptual level of processing (in addition to the effects already documented at later stages). We have addressed two important sources of confound that have rendered many previous studies inconclusive with respect to the dissociation between perceptual and post-perceptual processing stages. Finally, the present results support a dissociation between the integration of perceived location for spatially static events and the integration of perceived direction of motion for dynamic events.

Acknowledgments This work was supported by a postdoctoral award from the Killam Trust to S.S.-F., and by grants from the National Science and Engineering Research Council of Canada, the Human Frontier Science Programme, and the Michael Smith Foundation for Medical Sciences to A.K. We thank Jessica Lyons for help in the testing phase of the project.

Appendix A. Apparent motion thresholds

In a separate experiment, we assessed the SOA at which participants started to experience apparent motion (i.e., the apparent motion threshold) for the visual and auditory streams used in the present study. The phenomenon of apparent motion has been studied primarily in the visual modality (e.g., Kolers, 1964; Wertheimer, 1912), but it has been known for many years that it also occurs in other modalities, such as audition (e.g., Burtt, 1917a) and touch (e.g., Burtt, 1917b). Moreover, apparent motion phenomena in different modalities seem to follow similar spatiotemporal constraints (e.g., Lakatos & Shepard, 1997).

The participants in this manipulation also took part in Experiment 2, with the order of experiments counterbalanced across individuals. Participants judged whether the stream presented on a given trial produced an impression of motion or not (using the keys 'Y' and 'N'). We used the same continua of stimuli as in Experiment 2, except that the 0 ms SOA step was also included (i.e., with the two events, tones or flashes depending on the condition, presented in synchrony). The experiment was divided into two blocks of trials (one auditory and the other visual) containing four staircases each. One of the staircases in each modality started at +1000 ms SOA, another at −1000 ms SOA, and two more at 0 ms SOA. The staircases starting at ±1000 ms descended one step toward the opposite extreme after each 'NO' response, and ascended one step after each 'YES' response. The staircases starting at 0 ms SOA moved away from 0 ms (one toward the +1000 ms extreme and the other toward the −1000 ms extreme) after each 'NO' response, and stepped back toward 0 ms after each 'YES' response. Each staircase was stopped after 12 response reversals or after 50 trials. Before testing, the experimenter described the phenomenon of apparent motion to the participants. All remaining aspects of the design were as described in Experiment 1.
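The stopping rule for these staircases (12 response reversals or 50 trials, whichever comes first) can be sketched as below. This is an illustrative reconstruction; the response and SOA-update functions are placeholders standing in for the experimental procedure described above.

```python
def run_staircase(get_response, update, start_soa,
                  max_reversals=12, max_trials=50):
    """Run one staircase until 12 response reversals or 50 trials.

    get_response(soa) -> 'YES' or 'NO' (did the stream appear to move?);
    update(soa, resp) -> next SOA. Both are placeholders for the
    procedure described in the appendix, not the authors' code.
    """
    soa, prev, reversals, history = start_soa, None, 0, []
    for _ in range(max_trials):
        resp = get_response(soa)
        history.append((soa, resp))
        if prev is not None and resp != prev:
            reversals += 1
            if reversals >= max_reversals:
                break  # 12th reversal reached: stop this staircase
        prev = resp
        soa = update(soa, resp)
    return history
```

With strictly alternating responses the run stops on the 13th trial (the 12th reversal); with a constant response it runs out the full 50 trials, matching the two halves of the stopping rule.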


The apparent motion threshold for staircases starting from ±1000 ms SOA (the upper apparent motion threshold) was 291 ms (SD = 234) in the auditory block and 352 ms (SD = 192) in the visual block. This result establishes the upper boundary of SOAs over which motion is perceived for our particular set-up and materials. Although we did not attempt to differentiate between the distinct possible categories of apparent motion in this experiment (we simply wanted to ascertain the boundary SOA at which participants start to experience motion), the previous literature suggests that at these SOA values our participants would have experienced "broken" motion, rather than the continuous apparent motion seen when stimuli are presented at smaller SOAs and from smaller spatial separations (e.g., Kolers, 1964; Strybel & Neale, 1994). The apparent motion thresholds for staircases starting at 0 ms SOA (the lower apparent motion thresholds) were 29 ms (SD = 32) and 48 ms (SD = 37) for the auditory and visual streams, respectively. These data indicate that the impression of motion persisted even when the stimuli were presented at the shortest non-zero SOAs (−75 ms and +75 ms). Note, though, that because the range of SOAs used had to coincide with the values used in the main experiment, the lower threshold for apparent motion cannot be assessed with precision here.

References

Allen, P. G., & Kolers, P. A. (1981). Sensory specificity of apparent motion. Journal of Experimental Psychology: Human Perception and Performance, 7, 1318–1326.

Anstis, S. M. (1973). Hearing with the hands. Perception, 2, 337–341.

Bertelson, P. (1998). Starting from the ventriloquist: The perception of multimodal events. In M. Sabourin, F. I. M. Craik, & M. Robert (Eds.), Advances in psychological science: Biological and cognitive aspects (Vol. 2, pp. 419–439). Hove, England: Psychology Press.

Bertelson, P., & Aschersleben, G. (1998). Automatic visual bias of perceived auditory location. Psychonomic Bulletin and Review, 5, 482–489.

Bertelson, P., & Radeau, M. (1976). Ventriloquism, sensory interaction, and response bias: Remarks on the paper by Choe, Welch, Gilford, and Juola. Perception and Psychophysics, 19, 531–535.

Botvinick, M., & Cohen, J. (1998). Rubber hands 'feel' touch that eyes see. Nature, 391, 756.

Burtt, H. E. (1917a). Auditory illusions of movement: A preliminary study. Journal of Experimental Psychology, 2, 63–75.

Burtt, H. E. (1917b). Tactile illusions of movement. Journal of Experimental Psychology, 2, 371–385.

Caclin, A., Soto-Faraco, S., Kingstone, A., & Spence, C. (2002). Tactile 'capture' of audition. Perception and Psychophysics, 64, 616–630.

Choe, C. S., Welch, R. B., Gilford, R. M., & Juola, J. F. (1975). The 'ventriloquist effect': Visual dominance or response bias? Perception and Psychophysics, 18, 55–60.

Cornsweet, T. N. (1962). The staircase method in psychophysics. American Journal of Psychology, 75, 485–491.

Dong, C., Swindale, N. V., & Cynader, M. S. (1999). A contingent aftereffect in the auditory system. Nature Neuroscience, 2, 863–865.

Frissen, I., Vroomen, J., de Gelder, B., & Bertelson, P. (2005). The aftereffects of ventriloquism: Generalization across sound-frequencies. Acta Psychologica, 118, 93–100.

Guest, S., Catmur, C., Lloyd, D., & Spence, C. (2002). Audiotactile interactions in roughness perception. Experimental Brain Research, 146, 161–171.

Howard, I. P., & Templeton, W. B. (1966). Human spatial orientation. New York: Wiley.


Kitagawa, N., & Ichihara, S. (2002). Hearing visual motion in depth. Nature, 416, 172–174.

Kolers, P. A. (1964). The illusion of movement. Scientific American, 211(4), 98–106.

Lakatos, S., & Shepard, R. N. (1997). Constraints common to apparent motion in visual, tactile and auditory space. Journal of Experimental Psychology: Human Perception and Performance, 23, 1050–1060.

Lewald, J., & Guski, R. (2003). Cross-modal perceptual integration of spatially and temporally disparate auditory and visual stimuli. Cognitive Brain Research, 16, 468–478.

Mateeff, S., Hohnsbein, J., & Noack, T. (1985). Dynamic visual capture: Apparent auditory motion induced by a moving visual target. Perception, 14, 721–727.

Meyer, G. F., & Wuerger, M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12, 2557–2560.

Odgaard, E. C., Arieh, Y., & Marks, L. E. (2003). Cross-modal enhancement of perceived brightness: Sensory interaction versus response bias. Perception and Psychophysics, 65, 123–132.

Perrott, D. R., & Strybel, T. Z. (1994). Some observations regarding motion-without-direction. In T. Gilkey & R. Anderson (Eds.), Binaural and spatial hearing in real and virtual environments.

Populin, L. C., & Yin, T. C. (2002). Bimodal interactions in the superior colliculus of the behaving cat. Journal of Neuroscience, 22, 2826–2834.

Radeau, M., & Bertelson, P. (1974). The after-effects of ventriloquism. Quarterly Journal of Experimental Psychology, 26, 63–71.

Sanabria, D., Soto-Faraco, S., Chan, J. S., & Spence, C. (2004). When does visual perceptual grouping affect multisensory integration? Cognitive, Affective, and Behavioral Neuroscience, 4, 218–229.

Slutsky, D. A., & Recanzone, G. H. (2001). Temporal and spatial dependency of the ventriloquism effect. Neuroreport, 12, 7–10.

Soto-Faraco, S., & Kingstone, A. (2004). Multisensory integration of dynamic information. In G. Calvert, C. Spence, & B. E. Stein (Eds.), The handbook of multisensory processes (pp. 49–67). Cambridge, MA: MIT Press.

Soto-Faraco, S., Kingstone, A., & Spence, C. (2003). Multisensory contributions to the perception of motion. Neuropsychologia, 41, 1847–1862.

Soto-Faraco, S., Lyons, J., Gazzaniga, M. S., Spence, C., & Kingstone, A. (2002). The ventriloquist in motion: Illusory capture of dynamic information across sensory modalities. Cognitive Brain Research, 14, 139–146.

Soto-Faraco, S., Spence, C., Lloyd, D., & Kingstone, A. (2004a). Moving multisensory research along: Motion perception across sensory modalities. Current Directions in Psychological Science, 13, 29–32.

Soto-Faraco, S., Spence, C., & Kingstone, A. (2004b). Crossmodal dynamic capture: Congruency effects in the perception of motion across sensory modalities. Journal of Experimental Psychology: Human Perception and Performance, 30, 330–345.

Spence, C., & Driver, J. (1997). Audiovisual links in exogenous covert spatial orienting. Perception and Psychophysics, 59, 1–22.

Spence, C., & Driver, J. (Eds.). (2004). Crossmodal space and crossmodal attention. Oxford, UK: Oxford University Press.

Spence, C., Shore, D. I., & Klein, R. M. (2001). Multisensory prior entry. Journal of Experimental Psychology: General, 130, 799–832.

Stein, B. E., & Meredith, M. A. (1993). The merging of the senses. Cambridge, MA: MIT Press.

Stein, B. E., London, N., Wilkinson, L. K., & Price, D. D. (1996). Enhancement of perceived visual intensity by auditory stimuli: A psychophysical analysis. Journal of Cognitive Neuroscience, 8, 497–506.

Strybel, T. Z., & Neale, W. (1994). The effect of burst duration, interstimulus onset interval, and loudspeaker arrangement on auditory apparent motion in the free field. Journal of the Acoustical Society of America, 96, 3463–3475.

Swisher, L., & Hirsh, I. J. (1972). Brain damage and the ordering of two temporally successive stimuli. Neuropsychologia, 10, 137–152.

Vroomen, J., & de Gelder, B. (2003). Visual motion influences the contingent auditory motion aftereffect. Psychological Science, 14, 357–361.


Welch, R. B. (1999). Meaning, attention, and the 'Unity Assumption' in the intersensory bias of spatial and temporal perceptions. In G. Aschersleben, T. Bachmann, & J. Müsseler (Eds.), Cognitive contributions to the perception of spatial and temporal events (pp. 371–393). Amsterdam, Netherlands: Elsevier Science B.V.
Welch, R. B., & Warren, D. H. (1986). Intersensory interactions. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance: Vol. 1. Sensory processes and perception (pp. 25.1–25.36). New York: Wiley.
Wertheimer, M. (1912). Experimentelle Studien über das Sehen von Bewegung [Experimental studies on the visual perception of movement]. Zeitschrift für Psychologie, 61, 161–265.
Wohlschläger, A. (2000). Visual motion priming by invisible actions. Vision Research, 40, 925–930.
Zampini, M., Guest, S., Shore, D. I., & Spence, C. (in press). Audiovisual simultaneity judgments. Perception and Psychophysics.
Zampini, M., Shore, D. I., & Spence, C. (2003). Audiovisual temporal order judgments. Experimental Brain Research, 152, 198–210.
Zapparoli, G. C., & Reatto, L. L. (1969). The apparent movement between visual and acoustic stimulus and the problem of intermodal relations. Acta Psychologica, 29, 256–267.