Deconvolution of magnetic acoustic change complex (mACC)

Fabrice Bardy a,b,c,d,e,*, Catherine M. McMahon a,b,e, Shu Hui Yau d,e, Blake W. Johnson d,e

a HEARing Co-operative Research Centre, VIC, Australia
b Department of Linguistics, Macquarie University, NSW, Australia
c National Acoustic Laboratories, NSW, Australia
d Department of Cognitive Science, Macquarie University, NSW, Australia
e ARC Centre of Excellence in Cognition and its Disorders, Australia

Article info

Article history: Accepted 4 March 2014. Available online xxxx.

Keywords: Magnetoencephalography; Overlapping responses; Least-squares deconvolution; Rapid acoustic change complex

Highlights

• We developed a novel experimental approach to objectively measure discrimination of rapidly changing sounds.
• We investigated the feasibility of disentangling three sorts of overlapping cortical responses elicited by synthesized speech sounds in normal hearing adults.
• Cortical responses recovered using the LS deconvolution can potentially be used as a biomarker of spectro-temporal processing mechanisms at the level of the auditory cortex.

Abstract

Objective: The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds.

Methods: Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train, using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only.

Results: Comparison between the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes were different for the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus.

Conclusions: The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of the cortical auditory evoked responses for rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of the cortical neurons to rapidly presented sounds.

Significance: This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds and offers a potential new biomarker of discrimination of rapid transitions of sound.

Crown Copyright © 2014 Published by Elsevier Ireland Ltd. on behalf of International Federation of Clinical Neurophysiology. All rights reserved.

* Corresponding author at: Australian Hearing Hub, 16 University Avenue, Macquarie University, NSW 2109, Australia. Tel.: +61 2 94 12 68 14; fax: +61 2 94 12 67 69. E-mail address: [email protected] (F. Bardy). http://dx.doi.org/10.1016/j.clinph.2014.03.003


1. Introduction

The ability of neurons in the auditory cortex to decode rapidly changing acoustic stimuli is assumed to be necessary for the acquisition of language. Deficiencies in processing rapid auditory information have been linked to disorders of language, presumably because these result in less precise phonological representations of speech (Hämäläinen et al., 2012). Deficits in the perception of time-varying sounds (representing spectro-temporal changes in speech) are commonly observed in a variety of clinical populations, including individuals with developmental dyslexia or hearing impairment, cochlear implant users, and children with specific language impairment (SLI) (Dawes and Bishop, 2009; Hämäläinen et al., 2012; Steinbrink et al., 2009). Further investigation is needed to understand which underlying neural mechanisms contribute to inefficient cortical encoding of rapidly presented or changing acoustic signals (Weber-Fox et al., 2010).

The neural bases of speech discrimination, including the perception of consonant–vowels (Dimitrijevic et al., 2012; Ganapathy et al., 2013), can be studied using objective and non-invasive methods such as event-related field/potential techniques. Brain responses recorded using electroencephalography (EEG) or magnetoencephalography (MEG) are especially well suited for studying auditory perception because of the excellent temporal resolution of these techniques. Moreover, auditory responses can be measured without requiring participants to generate responses or focus their attention on a specific task. These passively measured responses are practical, especially for children, infants, and clinical populations, where behavioral testing of auditory discrimination is difficult and can be confounded by language and attentional difficulties.

EEG and MEG can be used to measure auditory cortical responses elicited by the onset and offset of a sound, as well as by changes in the properties of a continuous sound. Responses to the latter are termed the acoustic change complex (ACC), and have been found to be elicited by changes in sound intensity, pitch, timbre, phase and/or spectrum (Dimitrijevic et al., 2008; He et al., 2012; Jones and Perez, 2001; Martin and Boothroyd, 1999; Martin and Boothroyd, 2000; Ross et al., 2007). The auditory cortical responses recorded from the human scalp 50–350 ms after the stimulus onset/offset/acoustic change are characterized by the P50–N100–P200 complex when measured using EEG, and the P50m–N100m–P200m complex when measured using MEG. This largely reflects the neural encoding of the sound signal in the primary auditory cortex, although it represents the sum of responses from spatially separate generators (Yamashiro et al., 2011).

Significant correlations between electrophysiological and psychophysical thresholds suggest that the ACC has the potential to be used as an index of auditory discrimination ability (He et al., 2012; Ross et al., 2007). Specifically, changes in the ACC amplitude correlate with changes in the acoustic properties of the stimulus, such as intensity, frequency, and gap length, making this response a neurophysiological marker of perceived sound changes (He et al., 2012; Witton et al., 2012).
The better reliability, sensitivity and efficiency of this measure at an individual level compared to the mismatch negativity (MMN) component, an alternative objective measure of the brain's ability to discriminate sounds, give this technique greater potential to be used clinically (Bishop and Hardiman, 2010; Martin and Boothroyd, 1999; Tremblay et al., 2003).

The majority of studies investigating the ACC have used long-duration stimuli, typically longer than the length of a cortical auditory evoked response, which is approximately 350 ms (Dimitrijevic et al., 2008; Martin and Boothroyd, 2000; Martin et al., 2010). Recently, there has been increased interest in investigating how rapidly changing stimuli are represented in the auditory cortex (Dimitrijevic et al., 2012; Ganapathy et al., 2013). However, as highlighted by previous studies, temporal overlap of onset and ACC responses to rapidly presented stimuli complicates their analysis and interpretation. For example, Ganapathy et al. (2013) used tonal and speech stimuli to investigate the minimum time necessary to generate an ACC. In these experiments, the time between the stimulus onset and the subsequent acoustic change ranged between 50 and 150 ms. Visual inspection of their data for transitions ranging between 110 and 150 ms shows a clear overlap of the onset response (OnR) and the ACC for tonal and speech stimuli, characterized by a "w-shaped" wave morphology. Similar findings were reported by Tremblay et al. (2003), using naturally-produced consonant–vowel (CV) syllables /shi/ and /si/, and by Pihko et al. (2008), using the CV syllable /su/. Dimitrijevic et al. (2012) used CV and VCV stimuli to investigate the cortical representation of rapid sound transitions; here too, the overlap of multiple responses in the recording complicates the interpretation of the results. As acknowledged by these authors, a technique to separate overlapping responses is needed to allow a more appropriate interpretation of the results of these studies.

This study aims to investigate the morphology of cortical responses elicited by long and short stimulus onset asynchronies (SOAs) for formant transitions of neighboring vowels such as those observed in everyday conversational speech. The responses recorded for short SOAs have the potential to be used as an objective measure of spectro-temporal discrimination. Responses were recorded from normal-hearing subjects using semi-synthesized vowel transitions for a long SOA of 1500 ms or a short SOA jittered between 100 and 170 ms. For the short SOA condition, we applied the least squares (LS) deconvolution technique to three different types of overlapping cortical responses (Bardy et al., 2014a). The three cases of overlap consist of: (1) an overlap of an OnR with the magnetic counterpart of the ACC (called the mACC); (2) an overlap of two mACCs; and (3) an overlap of a mACC with an offset response (OffR). Given the success of the deconvolution technique on these responses, we also explored the effects of short SOA on the morphology of cortical responses to better understand the adaptation properties of these responses. Further, as evidence exists in the literature for lateralization of the magnetic field amplitude of auditory cortical responses (e.g. Howard and Poeppel, 2009; Johnson et al., 2010), we also explored whether differences in cerebral lateralization can be measured for long and short SOAs. We hypothesized that the underlying neural mechanisms for the OnR, mACCs and OffR utilize common neural structures. If this hypothesis is true, deconvolution of the three different types of overlapping responses should show morphological similarities with one another as well as with the overlap of two OnRs, which has already been investigated (Bardy et al., 2014b).
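To make the overlap problem concrete before describing the methods, the toy MATLAB sketch below superimposes two synthetic response templates separated by a 135 ms SOA. The waveform shapes, latencies and amplitudes are assumed purely for illustration and are not taken from the recorded data.

```matlab
% Toy illustration (assumed waveform shapes): at short SOA, conventional averaging
% returns the superposition of the onset response and the ACC, not either one alone.
fs  = 1000;                         % sampling rate (Hz)
t   = (0:600)/fs;                   % 0-600 ms time axis (s)
tpl = @(lat) -exp(-((t - lat - 0.100)/0.030).^2) ...   % N100-like trough at lat + 100 ms
      + 0.6*exp(-((t - lat - 0.200)/0.045).^2);        % P200-like peak at lat + 200 ms
onR = tpl(0);                       % onset response, stimulus onset at t = 0
acc = 0.8*tpl(0.135);               % ACC to an acoustic change occurring 135 ms later
avg = onR + acc;                    % what a conventional average would contain
plot(t*1000, onR, t*1000, acc, t*1000, avg);
xlabel('Time (ms)'); legend('onset response', 'ACC', 'overlapped average');
```

The summed trace shows the "w-shaped" morphology described above, which is why the individual responses cannot be read directly off a short-SOA average.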

2. Materials and methods

2.1. Subjects and experimental condition

Seventeen healthy right-handed (Oldfield, 1971) adults with no reported history of hearing loss or neurological problems participated in this study (6 males, 11 females; mean age 28.2 years, SD 8.0). The data of one subject had to be rejected because of the presence of large artifacts in the recording. All subjects passed audiometric screening with hearing levels <20 dB at pure-tone octave frequencies from 500 to 4000 Hz. Psychophysical tests of frequency pattern and temporal order discrimination were administered, and all subjects showed normal abilities in all tasks (Grassi and Soranzo, 2009; McArthur and Bishop, 2005). During recording, the subjects were instructed to ignore the stimuli and watch a subtitled silent movie. The alertness of the subject was monitored by continuous observation and short conversations every 15 min between the sound sequences.


This study was approved by and conducted under the oversight of the Macquarie University Human Research Ethics Committee. All participants signed a consent form and were paid a nominal amount for their participation.

2.2. Stimuli

For the objective test paradigm, sound sequences were developed to provide a spectro-temporal structure similar to the formant transitions of neighboring vowels observed in natural speech. Three synthetic vowels were generated in Praat software (Boersma, 2002). The first two stimuli were /e/ vowels with different fundamental frequencies (/e/-low = 125 Hz and /e/-high = 138 Hz) and with formants F1, F2 and F3 set at 280, 2620 and 3380 Hz. The third stimulus was a /u/ vowel with the same fundamental frequency as /e/-low (F0 = 125 Hz, F1 = 280 Hz, F2 = 920 Hz and F3 = 2200 Hz). Three sound sequences were played to the participants in random order. The first two sound sequences each consisted of five concatenated synthetic vowel stimuli and were separated by a silent gap of 1500 ms (see Fig. 1). The third sequence was an MMF oddball paradigm sequence which was analyzed in a related study; its results are not shown here.

The order of the sounds in the sequences was as follows: /e/-low, /e/-high, /e/-low, /u/, /e/-low. This allowed recording of an OnR to /e/-low, mACCs to the transitions /e/-low to /e/-high (annotated mACCF0↑), /e/-high to /e/-low (annotated mACCF0↓), /e/-low to /u/ (annotated mACCF2↓) and /u/ to /e/-low (annotated mACCF2↑), and an OffR to /e/-low (see Fig. 1). Note that the annotation "mACCF2" is used to represent the F2 and F3 formant transition. Response waveforms were time-locked to the 6 triggers (referred to as Trig. 1–6), each of which was associated with a different sound stimulus. In sequence 1, the SOA was long and constant at 1500 ms, whereas in sequence 2, the SOA of the /e/-low sounds (Trig. 1, 3 and 5 in the sound sequence) was short and jittered between 100 and 172 ms to allow the LS deconvolution technique to be performed (see Fig. 1). For the short SOA condition, the duration of the sound (i.e. between 100 and 172 ms) was long enough to avoid the risk that participants would perceive the formant transitions as stop consonants (Van Son, 1993). Moreover, to avoid spectral splatter and produce a smooth transition, each stimulus was windowed with a 10 ms rise-fall ramp and the stimuli were concatenated with 5 ms of overlap.

The recording time for each of the three sequences was 15 min, for a total recording time of 45 min, which enabled the presentation of 96 sound sequences for the long SOA condition and 183 sound sequences for the short SOA condition. Stimuli were calibrated at 75 dB SPL and delivered binaurally through custom insert earphones, using pneumatic tubes to deliver sound to the subject, with a frequency response relatively flat between 500 Hz and 8 kHz and an approximate 10 dB/octave roll-off for frequencies below 500 Hz (Raicevich et al., 2010).
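As a rough illustration of how a vowel train such as Seq. 2 could be assembled (the actual stimuli were synthesized in Praat; the vowel waveforms eLow, eHigh and u, the ramp implementation and the jitter draw below are assumptions made for the sketch), the following MATLAB code applies 10 ms raised-cosine ramps, shortens and jitters the /e/-low segments between 100 and 172 ms, and concatenates the segments with 5 ms of overlap:

```matlab
% Sketch: build one 5-vowel train with 10 ms ramps, 5 ms overlap and jittered /e/-low durations.
fs      = 44100;                                  % audio sampling rate (assumed)
vowels  = {eLow, eHigh, eLow, u, eLow};           % column-vector vowel waveforms, >= 1.5 s each (assumed)
isShort = [true false true false true];           % /e/-low at positions 1, 3 and 5 is shortened (Seq. 2)
nRamp   = round(0.010*fs);                        % 10 ms rise-fall ramp length
ramp    = 0.5*(1 - cos(pi*(0:nRamp-1)'/nRamp));   % raised-cosine onset ramp
ovl     = round(0.005*fs);                        % 5 ms overlap between consecutive segments
seq     = []; trig = zeros(1, numel(vowels));     % trig(k) = onset sample of segment k
for k = 1:numel(vowels)
    if isShort(k)
        dur = round((0.100 + 0.072*rand)*fs);     % jittered duration, 100-172 ms
    else
        dur = round(1.500*fs);                    % long segments keep 1500 ms
    end
    s = vowels{k}(1:dur);
    s(1:nRamp)         = s(1:nRamp) .* ramp;                  % fade in
    s(end-nRamp+1:end) = s(end-nRamp+1:end) .* flipud(ramp);  % fade out
    if isempty(seq)
        trig(k) = 1;
        seq = s;
    else
        trig(k) = numel(seq) - ovl + 1;                       % new segment starts 5 ms before the previous ends
        seq(end-ovl+1:end) = seq(end-ovl+1:end) + s(1:ovl);   % overlap-add across the transition
        seq = [seq; s(ovl+1:end)];
    end
end
```

The onset samples collected in trig would then define the trigger times used for averaging and, in sequence 2, for the deconvolution described in Section 2.4.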
2.3. MEG recordings

The MEG data were acquired using a whole-head MEG system (Model PQ1160R-N2, KIT, Kanazawa, Japan) consisting of 160 coaxial first-order gradiometers with a 50 mm baseline (Kado et al., 1999; Uehara et al., 2003). Data were acquired at a sampling rate of 1000 Hz with a bandpass filter of 0.1–200 Hz and a 50 Hz notch filter. The MEG data were spatially co-registered using five marker coils placed on the participants' heads. The participants' head shape was measured with a pen digitizer (Polhemus Fastrack, Colchester, VT). Head position was measured by energizing the marker coils in the MEG dewar immediately before and after the recording session.


2.4. MEG data processing and analysis

Correction of the raw data for eye-blink artifact was achieved using the artifact scan tool in BESA 5.3 (MEGIS Software GmbH, Gräfelfing, Germany) by taking into account the blink topographies for each participant. Additional artifact rejection excluded MEG signals exceeding amplitude (>2700 fT/cm) and magnetic gradient (>800 fT/cm/sample) criteria (Yetkin et al., 2004). Averaging and band-pass filtering between 3 Hz (6 dB/octave, forward) and 30 Hz (48 dB/octave, zero-phase) were performed for each trigger condition using the non-contaminated epochs. The high-pass filtering at 3 Hz is equivalent to subtracting the baseline from the raw data.

In sequence 2, where the SOA was short and responses overlapped, corrections were applied using the LS deconvolution technique (Bardy et al., 2014a). The LS deconvolution relies on the timing characteristics of the stimulus sequence: the acoustic transitions were jittered, which resulted in an unequal spacing of the sound transitions. Another main characteristic of the deconvolution is the use of a least-squares error approach (Bardy et al., 2014a). The accepted epochs after artifact rejection were exported from BESA 5.3 into MATLAB (MathWorks) and downsampled to 100 Hz. Deconvolution was performed for each of the 160 sensors using the jitter in each condition to disentangle the overlapping responses. For each condition, recovered responses were defined by epochs from 100 ms pre-stimulus to 380 ms post-stimulus.

For source analysis, downsampled and deconvolved data were re-imported into BESA 5.3 and modeled with two symmetric equivalent current dipoles (ECDs). The location of each ECD was fitted individually for each condition around the peak of the N100m, from about 80 to 120 ms relative to stimulus onset. For the offset response, due to low SNR values, the source location was strongly influenced by noise and was less reliable; therefore, the dipole location of the OnR was used for this condition and the orientation of the dipole was fitted again. We calculated the (x, y, z) coordinates of the dipoles and extracted the source waveforms, which contain the dipole moment of the sources at each time point.

From the resulting source waveforms, we extracted the peak maxima of the N100m and P200m for each subject, hemisphere and condition using an automatic peak detection algorithm implemented in MATLAB (MathWorks). Before statistical analysis, we applied the asinh transform to the amplitude data to stabilize the variance across conditions (Matysiak et al., 2013). Firstly, the algorithm identified the N100m and P200m latencies of the grand mean waveform for each condition of the experiment (indicated in Fig. 2). Secondly, the N100m and P200m peaks for each individual were identified as the most negative and most positive peaks in a time window of −50 to +55 ms around the grand mean latency of each condition. The peaks were confirmed by visual inspection in all conditions by two highly experienced electrophysiologists. For statistical analysis, repeated-measures three-way analyses of variance (ANOVA) were performed on the peak latencies, the transformed peak amplitudes of the dipole moments of the N100m and P200m, and the RMS amplitude of the source waveforms in a 0–300 ms time interval.
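For readers wanting the gist of the deconvolution step, the following is a minimal MATLAB sketch of a least-squares deconvolution applied to one channel. The variable names (y, onsets) are illustrative assumptions; the published implementation and its conditions on the jitter are described in Bardy et al. (2014a).

```matlab
% Sketch: least-squares deconvolution of overlapping responses for one sensor (or source waveform).
% y         : continuous, artifact-free, filtered and downsampled recording (column vector, 100 Hz)
% onsets{c} : trigger sample indices for condition c (the jittered SOAs keep the problem well conditioned)
fs = 100; pre = round(0.100*fs); post = round(0.380*fs);
L  = pre + post + 1;                       % samples per recovered response (-100 to +380 ms)
nC = numel(onsets);
ii = []; jj = [];                          % (row, column) entries of the design matrix
for c = 1:nC
    for k = 1:numel(onsets{c})
        rows = onsets{c}(k) + (-pre:post); % recording samples covered by this event's epoch
        ok   = rows >= 1 & rows <= numel(y);
        ii   = [ii, rows(ok)];
        jj   = [jj, (c-1)*L + find(ok)];   % one column per latency of each condition's response
    end
end
M    = sparse(ii, jj, ones(size(ii)), numel(y), nC*L);  % each event "stamps" its epoch into the recording
b    = M \ y;                              % least-squares estimate of all responses simultaneously
resp = reshape(b, L, nC);                  % column c = deconvolved response for condition c
```

Because every event contributes to all the recording samples its epoch covers, responses that overlap in time are estimated jointly rather than smeared into one another, which is the property exploited in sequence 2.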

3. Results

3.1. Source location of extracted source waveforms

The grand-mean source locations of the equivalent current dipoles (ECDs) fitted on the N100m are shown for both sequence 1 (Fig. 2a) and sequence 2 (Fig. 2b). Only the right hemisphere is represented, as a symmetry constraint was used to stabilize the fits of the dipoles. They are plotted in the horizontal and vertical planes, with the X-coordinate running in the medial-to-lateral direction, the Y-coordinate in the posterior-to-anterior direction and the Z-coordinate in the inferior-to-superior direction. Source localization was not the focus of this study and the statistical analysis is therefore not reported.


[Fig. 1 appears here: spectrogram-style schematics of Seq. 1 (long SOA only) and Seq. 2 (long & short SOAs), showing the vowel train /e/-low, /e/-high, /e/-low, /u/, /e/-low, the formant values (F0 = 125/138 Hz, F1 = 280 Hz, F2 = 920/2620 Hz, F3 = 2200/3380 Hz) and triggers 1–6 with their SOAs.]

Fig. 1. Schematic representation of the two stimulus sequences, each composed of 5 vowels interleaved by a silence interval of 1500 ms. In both sequences, the order of the sounds was the same: /e/-low, /e/-high, /e/-low, /u/, /e/-low. The responses recorded were an onset response (OnR), mACCs for a pitch increase (mACCF0↑), pitch decrease (mACCF0↓), F2 formant decrease (mACCF2↓) and F2 formant increase (mACCF2↑), and finally an offset response (OffR). In Seq. 1 (long SOA only), sound transitions between vowels occurred every 1.5 s. In Seq. 2 (long & short SOAs), the length of the /e/-vowel stimulus at Trig. 1, 3 and 5 was shortened and jittered between 100 and 172 ms, resulting in faster sound transitions.

3.2. Grand average source waveforms

The grand average source waveforms for the two sequences, sequence 1 (long SOA only) (Fig. 3a) and sequence 2 (long & short SOAs) (Fig. 3b), are shown for the right hemisphere (upper graphs) and left hemisphere (lower graphs). These are characterized by three peaks (P50m, N100m and P200m) for the OnR to /e/-low and for the four mACCs (/e/-low to /e/-high (mACCF0↑), /e/-high to /e/-low (mACCF0↓), /e/-low to /u/ (mACCF2↓), /u/ to /e/-low (mACCF2↑)). The OffR in most cases only showed a double peak characterized by earlier latencies (which we have named N80m and P150m). The data were analyzed in three phases: (i) comparison of the responses to the stimulus /e/-low for both long and short sound durations across three different preceding stimuli (silence preceding Trig. 1, /e/-high preceding Trig. 3 and /u/ preceding Trig. 5), to understand the effect of a preceding sound on the evoked response; (ii) comparison of mACCF0↑ (Trig. 2) and mACCF2↓ (Trig. 4) in the long and short SOA conditions, to understand the effects of slow and rapid F0 and F2–F3 formant transitions on the adaptation properties of the response; and (iii) comparison of the offset response (Trig. 6) in the two conditions, for a long and a short sound duration, to understand the effect of the duration of a preceding stimulus on the offset response.

3.3. Effect of different preceding stimuli on the /e/-low auditory evoked responses

The difference in the morphology of the transient P50m–N100m–P200m response to the stimulus /e/-low at Trig. 1 (OnR), Trig. 3 (mACCF0↓) and Trig. 5 (mACCF2↑) in both sound sequences (see Fig. 3) was investigated using a 2 (SOA) × 2 (HEM) × 3 (Position) ANOVA. It is important to note that the only difference between sequences 1 and 2 is the length of the stimulus /e/-low; in sequence 2, this resulted in a rapid acoustic change between the initial /e/-low and another vowel. Peak amplitudes and latencies are displayed in Fig. 4. Statistical analysis of the RMS amplitude shows that responses were significantly larger in sequence 1 compared to sequence 2 (F(1,15) = 31.4, p < 0.0001). This trend was also significant for the amplitude of the P200m (F(1,15) = 9.24, p < 0.01). As demonstrated in other studies, this observation shows that the amplitude of the cortical response was not only affected by the last preceding SOA in the stimulus sequence but also by the history of stimulation (Zacharias et al., 2012). Interestingly, statistical analysis of the peak latencies shows that P200m latencies were significantly shorter in sequence 2 (F(1,15) = 5.20, p < 0.05).


Fig. 2. Projection of the grand-mean Talairach coordinates of equivalent current dipoles (ECD) on X, Y and Z spatial axes of N100m peak for (a) Seq. 1 (long SOA only), (b) Seq. 2 (long & short SOAs), for the 6 conditions (1 OnR, 4 mACCs, 1 OffR). Only the right hemisphere is represented as dipoles in both hemispheres were fitted with a symmetrical constraint. The lines denote standard errors between participants.

Moreover, the latencies of the N100m and P200m were significantly different depending on the type of preceding stimulus (N100m: F(2,30) = 19.69, p < 0.0001; P200m: F(2,30) = 17.34, p < 0.0001). Overall, planned comparisons show significantly longer latencies for the mACCF0↓ compared to the OnR and mACCF2↑ for both the N100m and P200m (p < 0.001) (see Fig. 4). Finally, a significant interaction between the preceding stimulus and hemisphere was observed for the P200m. Planned comparisons showed significantly longer latencies in the left hemisphere for the mACCF2↑ (p < 0.01).

3.4. mACCs for rapid and slow F0 and F2–F3 formant transitions

The difference in morphology of the transient responses to the transitions /e/-low to /e/-high (mACCF0↑) and /e/-high to /u/ (mACCF2↓) was investigated in both sequence 1 (long SOA) and sequence 2 (short SOA) using a 2 (SOA) × 2 (HEM) × 2 (mACC) ANOVA. In sequence 2, the aim of the deconvolution was to disentangle: (i) the overlapping responses evoked by the onset and the mACC; and (ii) the overlapping responses of two mACCs. The amplitudes of the N100m and P200m, as well as the RMS amplitude, were significantly larger in sequence 1, when preceded by the long SOA of 1500 ms (RMS: F(1,15) = 50.43, p < 0.00001; N100m: F(1,15) = 27.25, p < 0.001; P200m: F(1,15) = 34.86, p < 0.001) (Fig. 5). An interesting observation was a significant interaction between SOA and mACC for the N100m: a larger decrease of the N100m amplitude was observed from slow to rapid transitions for the mACCF0↑ compared to the mACCF2↓ (N100m: F(1,15) = 9.55, p < 0.01). This interaction was also found for the latencies of both the N100m and P200m (N100m: F(1,15) = 11.40, p < 0.01; P200m: F(1,15) = 17.38, p < 0.001) (Fig. 5). For the N100m latency, a main effect of hemisphere (F(1,15) = 8.21, p < 0.05) was observed: the latency of the response was significantly shorter in the right hemisphere. Moreover, a significant interaction was found between SOA and hemisphere (F(1,15) = 10.99, p < 0.01), resulting in more symmetric latencies in the rapid mACC condition (i.e. sequence 2). There was a trend towards significance for the SOA × hemisphere interaction for the P200m amplitude (F(1,15) = 4.36, p = 0.054); with a larger sample size, this interaction could well have been significant. These findings support the hypothesis that hemispheric differences exist in the encoding of rapid and slow acoustic changes in the brain.

3.5. Auditory evoked offset responses (OffR)

The differences in morphology and peak latency of the OffR compared with the OnR and mACC responses suggest that it is not appropriate to compare all conditions in the same ANOVA. Instead, a 2 (SOA) × 2 (HEM) ANOVA was performed to analyze the OffR alone. The source waveform showed two peaks with shorter latency in sequence 1 (see Fig. 3). The identification of the peaks was more difficult in sequence 2, where the time between the last mACC and the OffR was only 100–172 ms. As shown in Fig. 6, significantly larger amplitudes of the N80m, P150m and RMS value in sequence 1 were demonstrated by the ANOVA (RMS: F(1,15) = 19.71, p < 0.001; N80m: F(1,15) = 13.42, p < 0.01; P150m: F(1,15) = 34.86, p < 0.001).


Fig. 3. Grand mean source waveforms for (a) Seq. 1 (long SOA only), (b) Seq. 2 (long & short SOAs), for right and left hemisphere and for the 6 conditions (1 OnR, 4 mACCs, 1 OffR). Short SOA conditions in Seq. 2 are denoted with asterisks. Latencies of the grand mean source waveforms for the N100m and P200m peak are indicated for each condition.


Fig. 4. Grand mean amplitudes and latencies of dipole moments of N100m and P200m components. Results are shown for /e/-low (i.e. Trig. 1, 3 and 5) across 16 subjects for the long SOA condition of (a) Seq. 1 and (b) Seq. 2. The lines denote standard deviations between participants.

This suggests that the adaptation of the OffR depends on the duration between the last mACC and the offset of the sound stimulus. Moreover, a larger amplitude in the right hemisphere was found for the N80m and the RMS value (RMS: F(1,15) = 6.30, p < 0.001; N80m: F(1,15) = 6.73, p < 0.05). No significant differences were found for the N80m and P150m latencies.
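As an illustration of how the repeated-measures ANOVAs reported above could be set up (a sketch only; the variable amp and the factor coding are assumptions, and the paper does not state which software was used for the statistics), the 2 (SOA) × 2 (hemisphere) × 3 (preceding stimulus) design on N100m amplitudes might look as follows in MATLAB:

```matlab
% Sketch: 2 (SOA) x 2 (hemisphere) x 3 (preceding stimulus) repeated-measures ANOVA.
% Requires the Statistics and Machine Learning Toolbox.
% amp: 16 x 12 matrix of asinh-transformed N100m dipole-moment amplitudes (assumed),
% with columns ordered SOA (long/short) x hemisphere (L/R) x preceding stimulus (1-3).
cellNames = arrayfun(@(k) sprintf('c%d', k), 1:12, 'UniformOutput', false);
tbl = array2table(amp, 'VariableNames', cellNames);
w   = table(categorical(repelem([1 2], 6))', ...                  % SOA
            categorical(repmat(repelem([1 2], 3), 1, 2))', ...    % hemisphere
            categorical(repmat([1 2 3], 1, 4))', ...              % preceding stimulus
            'VariableNames', {'SOA', 'Hem', 'Pos'});
rm  = fitrm(tbl, 'c1-c12 ~ 1', 'WithinDesign', w);                % all cells as repeated measures
ranova(rm, 'WithinModel', 'SOA*Hem*Pos')                          % F tests for main effects and interactions
```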

4. Discussion

The purpose of this study was to investigate how the human auditory cortex processes rapid changes in the acoustic signal and to examine the feasibility of disentangling three different types of overlapping auditory cortical responses using the LS deconvolution technique. The three types of overlap consist of: (1) overlap of an OnR and a mACC; (2) overlap of two mACCs; and (3) overlap of a mACC with an OffR. The second aim of the study was to compare the morphology of the extracted cortical responses for rapid sound transitions (short SOA of approximately 135 ms) to responses evoked using a longer SOA of 1500 ms.

Comparison of the cortical responses generated by long and short SOAs, for the three combinations of overlapping responses, showed similar components and morphology, suggesting that the LS deconvolution technique can be used on MEG signals. Both spectral transitions (F0 and F2) in the short SOA range evoked a distinct auditory response with N100m and P200m field distributions (Fig. 3), but these were smaller in size compared to those evoked by a long SOA. Only a limited number of studies have attempted to investigate the characteristics of the ACC using a short delay between transitions (Ganapathy et al., 2013), although none have used correction techniques to account for response overlap, limiting the interpretation of the results. The significantly smaller responses recorded for short transition times might reflect the adaptation properties of the generators involved in the response.

In addition to the sensitivity of the response to the timing of the previous transition, a significant interaction between SOA and mACC for the N100m and P200m amplitudes was identified. It is possible that these results demonstrate that F0 pitch transitions are encoded more optimally at slow transition rates while F2–F3 formant transitions are better suited to fast transition rates, although no current literature exists to support this. Moreover, a change in the lateralization of the response (discussed later) was observed for these mACCs when comparing the long versus short SOA in the stimulus sequence, presumably reflecting the characteristics of human auditory cortical processing of speech-like acoustic stimuli.

Using MEG, these results provide further evidence that the N100m–P200m reflects the detection of an acoustic change in the auditory environment and can be recorded for rapid transitions (Eggermont and Ponton, 2002). The synchronized activity within certain neuronal populations recorded using non-invasive methods represents physiological mechanisms likely to be necessary for discrimination of rapid spectro-temporal changes.


Fig. 5. Grand mean amplitudes and latencies of dipole moments of N100m and P200m components. Results are shown for /e/-high and /u/ (i.e. Trig. 2 and 4) across 16 subjects for the (a) long SOA condition (i.e. Seq. 1) and (b) short SOA condition (i.e. Seq. 2). The lines denote standard deviations between participants.

These findings gain further relevance when considered alongside studies that have used intra-cranial recordings to investigate temporal response patterns in the auditory cortex of humans and primates (Liégeois-Chauvel et al., 1999; Steinschneider et al., 2005; Trébuchon-Da Fonseca et al., 2005). These studies have reported that the coding of fast transitions in the auditory cortex, such as stop consonants or transitions within syllables (e.g. voice-onset time), is based on a temporal processing mechanism of time-locked firing of cortical neurons. The temporal code was found to be accurately maintained for both speech and non-speech stimuli (Liégeois-Chauvel et al., 1999; Steinschneider et al., 2005). By investigating the time-locked activity in the auditory cortex to consonant release and voicing onset, Steinschneider et al. (2005) support the idea that a temporal code in the primary auditory cortex might be a mechanism for transferring information to the secondary auditory areas for more complex auditory processing. In their study, the averaged neural activity across electrodes was correlated with phonetic perception of cues such as voice-onset time (Steinschneider et al., 2005). Importantly, good agreement exists between the obligatory temporal responses recorded using invasive intra-cranial and non-invasive surface electrophysiological recordings in normal hearing subjects (Trébuchon-Da Fonseca et al., 2005). It is therefore likely that the use of the LS deconvolution technique will enable further studies to be conducted with rapidly presented speech-like sounds, such as syllables, which have SOAs of around 100–200 ms (Weismer and Hesketh, 1993).

Our experiment is, to our knowledge, the first to directly show the possibility of disentangling these three different kinds of obligatory responses. Used in conjunction with electrophysiological techniques with fine temporal resolution, such as MEG, this approach allows for a better understanding and tracking of the neural mechanisms underpinning spectro-temporal transitions in speech-like stimuli.

4.1. Lateralization of hemisphere processing/hemisphere asymmetry

We assume that the earliest stage of cortical speech processing involves some form of spectro-temporal analysis, conducted in the bilateral auditory cortex in the supra-temporal plane. Previous experiments using positron emission tomography (PET) (Zatorre and Belin, 2001) suggest a better sensitivity of the left hemisphere for temporal processing and of the right for spectral processing. The present results showed bilateral symmetric activation for most of the experimental conditions. A right-biased asymmetry was observed for the N100m latencies of the mACCs (for Trig. 2: mACCF0↑ and Trig. 4: mACCF2↓), characterized by shorter latencies in the right hemisphere. Such rightward hemispheric lateralization to diotic presentation of vowels, tones, clicks, noise and certain syllables has been consistently reported (Howard and Poeppel, 2009; Shaw et al., 2013). The right-greater-than-left asymmetry is in agreement with the spectral composition of the stimulus, which has been shown to be preferentially processed in the right auditory cortex (Boemio et al., 2005). Our study also showed a significant interaction between hemisphere and SOA (for Trig. 2: mACCF0↑ and Trig. 4: mACCF2↓) for the latency of the N100m component, suggesting that there is an advantage for the right hemisphere in processing slow frequency transitions (long SOA) but a more symmetric activation when processing syllabic rates (short SOA).


Fig. 6. Grand mean amplitudes and latencies of dipole moments of N100m and P200m components. Results are shown for OffR /e/-low (i.e. Trig. 6) across 16 subjects for the (a) long SOA condition (i.e. Seq. 1) and (b) short SOA condition (i.e. Seq. 2). The lines denote standard deviations between participants.

Left and right auditory cortices may have distinct temporal integration windows. The asymmetric sampling in time (AST) hypothesis by Poeppel (2003) posits a time-based view of hemispheric differences, such that the left hemisphere is biased towards extracting information over shorter temporal windows (<25 ms) than the right hemisphere. Vowel sounds are spectrally rich, and it might therefore seem reasonable to expect lateralization towards regions of the auditory cortex specialized in spectral processing. However, we demonstrated here that varying the duration of formant transitions can drive lateralization. Therefore, the change in hemispheric lateralization might be due to the spectro-temporal structure of the stimulus, which is similar to the vowel–vowel speech syllables found in running speech. A leftward shift of hemispheric balance for rapid transitions is likely to be tied to the large volume of cortical connecting fibers in the left auditory cortex (Penhune et al., 1996). Experiments similar to the current study might provide critical information about the spectro-temporal processing of acoustic stimuli occurring on the timescale of speech syllables (~150 ms). Because semi-synthesized speech was used in this experiment, there is a risk of losing some natural speech features; an aim of another study would be to investigate whether there is any difference in hemispheric laterality between natural and semi-synthesized speech sounds.

While differences between hemispheres can be observed, it is important to note that, in general, there seem to be more similarities than differences in how sound information is processed in each auditory cortex (Poeppel, 2003). At the group level, hemispheric differences were observed; however, these were not consistently observed at the individual level. That is, of the 16 subjects tested, 8 showed balanced activation in each hemisphere, 5 showed a rightward prominence while 3 showed left hemisphere dominance. Given that all individuals tested had normal speech perception abilities and were all right-handed, this may represent typical inter-hemispheric variation in cortical architecture and topography in the population (Shaw et al., 2013). Therefore, group analyses of data may not provide sufficiently sensitive information about individuals.

4.2. Lack of mACC amplitude enhancement for short SOA

The morphology of the auditory cortical response in humans using short SOA ranges has been reported in only a few studies. For the OnR, these studies showed a non-monotonic variation of the response amplitude with SOA (Bardy et al., 2014b; Budd and Michie, 1994; Loveless et al., 1989; Loveless et al., 1996; Wang et al., 2008). A decrease of the response amplitude was observed when decreasing the SOA from 1000 to 400 ms, followed by a response enhancement for shorter SOAs between 400 and 100 ms. Using the same technique for correction of overlapping responses as in this study (i.e. LS deconvolution), we have demonstrated similar results in a paired paradigm using EEG (Bardy et al., 2014b). This suggests that the neural response to a tone can be enhanced by previous stimulation in a paired paradigm. The response enhancement was affected by the temporal separation (i.e. SOA) and the frequency of the first tone of the pair (Bardy et al., 2014b).


The response to the second tone was largest when the SOA was jittered around 150 ms and when the frequency of the second tone differed from the frequency of the first tone (in this case 500 Hz compared with 2000 Hz). These findings are in agreement with animal studies, where response enhancement for short SOAs was reported in two-thirds of neurons recorded from the auditory cortex of the macaque monkey (Brosch et al., 1999). The absence of mACC amplitude enhancement for short SOAs in the present study suggests that the ongoing sound and/or the absence of a silent gap have an influence on the magnitude of adaptation of the response. This is in agreement with recent findings (Lanting et al., 2013) in which the authors suggest that response adaptation is influenced by the ongoing sound activity. However, it is important to note that the significant amplitude increase from the short to the long SOA condition argues against the idea that the adaptation is principally caused by ongoing sound activity. Further studies are needed to investigate the change in the temporal properties of adaptation of the mACC.

4.3. Shorter latency for short transitions

Another interesting finding in this study is the significantly shorter latencies and reduced amplitudes of the P200m in the short SOA compared to the long SOA condition (for Trig. 2: mACCF0↑ and Trig. 4: mACCF2↓). The significant decrease of the P200m latencies in the short SOA condition probably reflects a short-term synaptic enhancement of cortical neurons involved in the processing of the temporal structure of the sound (Metherate and Ashe, 1994). However, the neural mechanisms underlying this synaptic enhancement are unclear at this stage.

4.4. Morphology difference between OnR and mACC responses

The question of whether the responses elicited by an OnR or a mACC are mediated by the same or different physiological mechanisms is controversial in the auditory evoked response literature. One hypothesis is that they represent a common mechanism that registers changes in the acoustical environment (Hillyard and Picton, 1978). The dipole localization represented in Fig. 2 shows that the neural generators of the N100m for the OnR and the mACCs are located in neighboring areas (Pantev et al., 1996; Yamashiro et al., 2011). This suggests that slightly different neural populations located in the primary and secondary auditory cortex encode changes in the acoustic properties of a sound. The results of the current study show that the difference in neural generators is also reflected in the slight difference in morphology of the auditory responses. The difference in transition frequency between the mACCF0↓ at Trig. 3 (a 10% pitch decrease from 138 to 125 Hz) and the mACCF2↑ at Trig. 5, where the F2–F3 formant transition consisted of a spectral change from 920 to 2620 Hz (F2) and from 2200 to 3380 Hz (F3), is probably the reason for the change in morphology between the two responses (Elberling et al., 1982). That is, longer latencies were observed for Trig. 3 (mACCF0↓) compared to Trig. 5 (mACCF2↑). These findings are in agreement with a recent EEG study investigating frequency increments for low- and high-frequency tone bursts (Dimitrijevic et al., 2008). The authors report delayed N100 latencies for a low-frequency shift compared to a high-frequency shift and highlight the potential relationship between the spectral components of the sound and the tonotopic organization of the neural structures encoding it.
Generally, the morphological similarity of the responses to the sound /e/-low, whether it evoked an OnR or a mACC, supports the hypothesis that these evoked responses are generated by similar cortical neurons. It also indicates that the temporal properties of the sound sequence (i.e. how quickly sounds are presented) govern the magnitude of the response amplitude. Statistical analysis shows that the amplitudes of the cortical responses for Trig. 1 (OnR), Trig. 3 (mACCF0↓) and Trig. 5 (mACCF2↑) were significantly smaller in sequence 2 (short SOA), even though the time preceding an OnR or a mACC was similar in both sequences. There are two possible reasons for this effect. Firstly, the duration of the stimuli (changing from 1500 ms to 100–172 ms) could potentially influence the response amplitude. However, in another study investigating the OnR, Ostroff et al. (1998) showed that cortical responses to a 25 ms and a 2 s tone were comparable in amplitude; it is therefore unlikely that the duration of the stimulus had an influence on the response amplitude. More likely, these results are in agreement with recent research showing the influence of the history of stimulation on the response strength. Zacharias et al. (2012) showed that the decline in the amplitude of the cortical response with increasing average stimulus number per minute in the sequence can represent the build-up of adaptation of the auditory cortex to transient responses. This supports the view that the response amplitude is determined not only by the time interval from the immediately preceding sound transition but also by the entire sound presentation history. The long-term adaptation is believed to originate principally from synaptic depression (Asari and Zador, 2009; Wehr and Zador, 2005) and slow after-hyperpolarization (Faber and Sah, 2003; Schwindt et al., 1988) in the auditory cortex.

4.5. Morphology of offset response

The number of studies investigating the morphology of the OffR has remained negligible compared to the obligatory OnR. In addition to previous findings (Hari et al., 1987; Lanting et al., 2013; Ostroff et al., 1998; Pantev et al., 1996), which demonstrate that OnR and OffR responses are not physiologically independent and that the amplitude of the OffR depends on the duration of the stimulus, the present results demonstrated that the OffR has smaller amplitudes and shorter latencies than the OnR. Moreover, the OffR is not only dependent on the duration of the sound but, more specifically, on the time since the last change in the acoustic environment. An example is the lack of an identifiable OffR in sequence 2 for the majority of subjects; it can be suggested that the short duration since the last mACC is insufficient to elicit an OffR. Another interesting finding in the current study concerns the rightward lateralization of the OffR as well as the interaction between sequence and hemisphere (see Fig. 6). Using whole-cell recordings of primary auditory cortical neurons in anesthetized rats, Scholl et al. (2010) demonstrated that OffRs are not generated by a rebound from sustained synaptic inhibition and that different synapse populations in the auditory cortex encode OnRs and OffRs. The latency difference between the OffR, which tends to increase with reduced SOA, and the mACC might be attributed to the different excitatory-inhibitory balance of offset-type neurons compared with onset-type neurons (Scholl et al., 2010). If the mACC is considered to be the superposition of an OffR and an OnR, the significantly larger mACC amplitudes in the long SOA compared to the short SOA condition support the idea that OffRs do not cause adaptation of OnRs, which are themselves believed to cause transient inhibition (Scholl et al., 2010).

4.6. Clinical implications

Previous research has shown that an abnormal ability to process the rapidly changing acoustic signals present in everyday speech might be an underlying factor limiting the development of the phonological representations that are needed for language acquisition (Pantev et al., 1986) (Rapid Auditory Temporal Processing, RATP; Tallal, 2004; Tallal and Piercy, 1973). A bottom-up auditory processing disorder has been supported by behavioral data for children with SLI (Wright et al., 1997) and dyslexia (Farmer and Klein, 1995).


Evidence of a link between deficits in the processing of rapidly changing stimuli and language development in infants, children and adolescents is supported by EEG studies (Bishop and McArthur, 2004; Pihko et al., 2008; Teder et al., 1993). The responses of the SLI children reported by Pihko et al. (2008) are characterized by broader waveforms (with a single peak for the /pa/ sound stimulus) and lower amplitudes. This could potentially represent a deficit in the ability of neural populations in the auditory cortex to synchronize with the rapidly changing temporal cues of the auditory stimuli, or be the result of smearing of multiple temporally overlapping responses. Another hypothesis is that the poor response morphology could be the result of incomplete maturation of the central auditory system (Bishop and McArthur, 2004). Once again, the cortical waveforms in these different studies displayed a high degree of overlap of multiple responses for which no correction was applied. Therefore, more research is needed using correction techniques such as the LS deconvolution to better understand these responses.

The mACC response is believed to index neural detection of change, which is typically reflected in a simultaneous behavioral capability of change detection (He et al., 2012). Both frequency and temporal changes of formant transitions are essential for the identification of consonant–vowel and vowel–consonant transitions. The auditory contrasts tested in this paradigm (F0 and F2–F3 transitions) are important features for both stream segregation and speech discrimination. The fundamental frequency of a talker's speech conveys pitch information and contributes to perceived gender and age (Mackersie et al., 2011). These cues are also important for segregating different acoustic streams and for understanding speech in noise (Cameron and Dillon, 2007). If auditory temporal processing development plays an important role in language and reading proficiency, then one might expect these two measures to be correlated. Future studies are needed to experimentally assess the correlation between individual behavioral and neurophysiological measures in populations including language-impaired and/or hearing-impaired children, to characterize the neural mechanisms underlying speech discrimination.

5. Conclusions

The lack of objective methods to evaluate a person's ability to discriminate rapidly changing sounds shows the need to design new methodologies. When reduced SOAs are used, the scalp-recorded response elicited by a change in the spectral content of synthesized vowels represents a process of automatic sensory discrimination which may be important for analyzing the complex sound environment. Using the LS deconvolution on MEG data, our study shows that it is possible to separate three different types of overlapping obligatory cortical responses evoked by rapid spectral changes of semi-synthesized speech stimuli.

Acknowledgments

This work was supported in part by the HEARing CRC, established and supported under the Australian Cooperative Research Centres Program, an Australian Government Initiative, by the Oticon Foundation, by the Australian Research Council and by the Centre of Excellence in Cognition and its Disorders. The authors gratefully thank Yatin Mahajan and Jon Brock for their help during the pilot experiment, as well as Robert Cowan for his suggestions for improving the manuscript and for his meticulous editing work. All authors declared no conflict of interest.
References

Asari H, Zador AM. Long-lasting context dependence constrains neural encoding models in rodent auditory cortex. J Neurophysiol 2009;102:2638–56.


Bardy F, Dillon H, Van Dun B. Least-squares deconvolution of evoked potentials and sequence optimization for multiple stimuli under low-jitter conditions. Clin Neurophysiol 2014a;125:727–37. Bardy F, Van Dun B, Dillon H, McMahon CM. Deconvolution of overlapping cortical auditory evoked potentials (CAEPs) recorded using short stimulus onsetasynchrony (SOA) ranges. Clin Neurophysiol 2014b;125:814–26. Bishop DV, Hardiman M. Measurement of mismatch negativity in individuals: a study using single-trial analysis. Psychophysiol 2010;47:697–705. Bishop DV, McArthur G. Immature cortical responses to auditory stimuli in specific language impairment: evidence from ERPs to rapid tone sequences. Dev Sci 2004;7:F8–F11. Boemio A, Fromm S, Braun A, Poeppel D. Hierarchical and asymmetric temporal sensitivity in human auditory cortices. Nat Neurosci 2005;8:389–95. Boersma P. Praat, a system for doing phonetics by computer. Glot international 2002;5:341–5. Brosch M, Schulz A, Scheich H. Processing of sound sequences in macaque auditory cortex: response enhancement. J Neurophysiol 1999;82:1542–59. Budd TW, Michie PT. Facilitation of the N1 peak of the auditory ERP at short stimulus intervals. Neuroreport 1994;5:2513–6. Cameron S, Dillon H. The listening in spatialized noise-sentences test (LISN-S): testretest reliability study. Int J Audiol 2007;46:145–53. Dawes P, Bishop DMV. Auditory processing disorder in relation to developmental disorders of language, communication and attention: a review and critique. Int J Lang Commun Disord 2009;44:440–65. Dimitrijevic A, Michalewski HJ, Zeng F-G, Pratt H, Starr A. Frequency changes in a continuous tone: auditory cortical potentials. Clin Neurophysiol 2008;119: 2111–24. Dimitrijevic A, Pratt H, Starr A. Auditory cortical activity in normal hearing subjects to consonant vowels presented in quiet and in noise. Clin Neurophysiol 2012;124:1204–15. Eggermont JJ, Ponton CW. The neurophysiology of auditory perception: from single units to evoked potentials. Audiol Neurootol 2002;7:71–99. Elberling C, Bak C, Kofoed B, Lebech J, Soermark K. Auditory magnetic fields: source location and ‘tonotopical organization’ in the right hemisphere of the human brain. Scand Audiol 1982;11:61–5. Faber EL, Sah P. Calcium-activated potassium channels: multiple contributions to neuronal function. Neuroscientist 2003;9:181–94. Farmer ME, Klein RM. The evidence for a temporal processing deficit linked to dyslexia: a review. Psychon Bull Rev 1995;2:460–93. Ganapathy M, Narne VK, Kalaiah MK, Manjula P. Effect of pre-transition stimulus duration on acoustic change complex. Int J Audiol 2013;52:350–9. Grassi M, Soranzo A. MLP: a MATLAB toolbox for rapid and reliable auditory threshold estimation. Behav Res Methods 2009;41:20–8. Hämäläinen JA, Salminen HK, Leppänen PHT. Basic auditory processing deficits in dyslexia: systematic review of the behavioral and event-related potential/field evidence. J Learn Disabil 2012;46:413–27. Hari R, Pelizzone M, Mäkelä J, Hällström J, Leinonen L, Lounasmaa O. Neuromagnetic responses of the human auditory cortex to on-and offsets of noise bursts. Int J Audiol 1987;26:31–43. He S, Grose JH, Buchman CA. Auditory discrimination: the relationship between psychophysical and electrophysiological measures. Int J Audiol 2012;51:771–82. Hillyard SA, Picton TW. On and off components in the auditory evoked potential. Atten Percept Psychophys 1978;24:391–8. Howard MF, Poeppel D. Hemispheric asymmetry in mid and long latency neuromagnetic responses to single clicks. 
Hear Res 2009;257:41–52. Johnson BW, Crain S, Thornton R, Tesan G, Reid M. Measurement of brain function in pre-school children using a custom sized whole-head MEG sensor array. Clin Neurophysiol 2010;121:340–9. Jones S, Perez N. The auditory ‘C-process’: analyzing the spectral envelope of complex sounds. Clin Neurophysiol 2001;112:965–75. Kado H, Higuchi M, Shimogawara M, Haruta Y, Adachi Y, Kawai J, et al. Magnetoencephalogram systems developed at KIT. IEEE Trans Appl Supercond 1999;9:4057–62. Lanting CP, Briley PM, Sumner CJ, Krumbholz K. Mechanisms of adaptation in human auditory cortex. J Neurophysiol 2013;110:973–83. Liégeois-Chauvel C, De Graaf JB, Laguitton V, Chauvel P. Specialization of left auditory cortex for speech perception in man depends on temporal coding. Cereb Cortex 1999;9:484–96. Loveless N, Hari R, Tiihonen J. Evoked responses of human auditory cortex may be enhanced by preceding stimuli. Electroencephalogr Clin Neurophysiol 1989;74:217–27. Loveless N, Levänen S, Jousmäki V, Sams M, Hari R. Temporal integration in auditory sensory memory: neuromagnetic evidence. Electroencephalogr Clin Neurophysiol 1996;100:220–8. Mackersie CL, Dewey J, Guthrie LA. Effects of fundamental frequency and vocal-tract length cues on sentence segregation by listeners with hearing loss. J Acoust Soc Am 2011;130:1006. Martin BA, Boothroyd A. Cortical auditory event related potentials in response to periodic and aperiodic stimuli with same spectral envelope. Ear Hear 1999;20:33–44. Martin BA, Boothroyd A. Cortical auditory evoked potentials in response to changes of spectrum and amplitude. J Acoust Soc Am 2000;107:2155–61. Martin BA, Boothroyd A, Ali D, Leach-Berth T. Stimulus presentation strategies for eliciting the acoustic change complex: increasing efficiency. Ear Hear 2010;31:356.


