Symposium E5

Exploring the coding of emotional valence of faces using multivoxel pattern analysis (MVPA)
Maria A. Bobes León a, Agustín Lage-Castellanos a, Marlis Ontiveiro Ortega a, Pedro M. Guerra b, Alicia Sanchez-Adam b, Jaime Vila b, Mitchell Valdes-Sosa a
a Cuban Neurosciences Center, Havana, Cuba
b University of Granada, Spain
Interpersonal relationships modulate the affective valence assigned to faces. In contrast, the valence of unfamiliar faces can depend solely on their physical appearance. It is not clear whether the processing of the affective valence of familiar faces engages a specific functional network, or whether it relies on a more general face-emotion system that could also process attributes such as attractiveness or trustworthiness. Here we contrasted these two options in an fMRI MVPA study. We obtained high-spatial-resolution fMRI responses to four face conditions resulting from the crossing of two factors: familiarity (familiar vs. unfamiliar) and emotional valence (agreeable vs. disagreeable). Peripheral measures and subjects' ratings were used to validate the categorization of the stimuli. MVPA was performed on functional ROIs previously described as part of the face-processing networks. Results showed that unfamiliar-face valence was decoded in early visual areas and core face areas (OFA, FFA and pSTS), whereas familiar-face valence was decoded in areas within the ventrolateral frontal cortex and insula. The analysis of response-pattern dissimilarity matrices corroborated the presence of different types of ROIs: those responding to all face stimuli, located in the occipito-temporal lobes; those coding familiarity, such as CP; and those, such as AC and mOF, that distinguish both the familiarity and the valence of faces. Our findings suggest that the valence associated with personal significance and the valence associated with the physical appearance of faces are processed in largely dissociable neural systems.
doi:10.1016/j.ijpsycho.2016.07.148
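For readers less familiar with MVPA, the sketch below illustrates the general logic of ROI-based decoding used in studies of this kind: a classifier is trained on trial-wise voxel response patterns from a region of interest, and its accuracy in separating the two valence categories is estimated by cross-validation. This is a minimal illustration on randomly generated stand-in data, not the authors' analysis pipeline; the trial and voxel counts, the linear SVM and the 5-fold cross-validation are assumptions chosen only for demonstration.

# Minimal sketch of ROI-based MVPA decoding (illustrative only, not the authors' pipeline).
# Assumes trial-wise voxel patterns from one ROI and binary valence labels.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200                 # hypothetical trial and voxel counts
X = rng.normal(size=(n_trials, n_voxels))    # trial x voxel response patterns (stand-in data)
y = rng.integers(0, 2, size=n_trials)        # 0 = disagreeable, 1 = agreeable (valence labels)

# Linear SVM with voxel-wise standardization, evaluated by 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean decoding accuracy: {scores.mean():.2f}")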
Crossmodal reorganization of face-voice interactions in cochlear-implanted deaf patients
Pascal Barone
Cerveau & Cognition (CNRS UMR 5549), Toulouse, France
The integration of information from the face and the voice is important for our social communication, as both carry non-speech identity information that allows an internal supramodal representation of the person. Any dysfunction of voice or face recognition can therefore have a large negative impact on social communication, as is the case in profoundly deaf patients with a cochlear implant (CIP). Here we provide evidence that CIP present strong deficits in processing voice attributes, leading to a crossmodal bias in face-voice interactions. When the auditory signal is ambiguous and insufficient to allow a correct perceptual decision, CI deaf patients rely strongly on the visual information. Our brain-imaging study suggests that this crossmodal influence on face-voice interactions is probably supported by the colonization of the temporal voice area (TVA) by speech-related visual functions, as a consequence of a long period of auditory deprivation. Further, the level of crossmodal reorganization of the TVA is inversely correlated with CI outcomes for speech comprehension. Altogether, in addition to the technical limitations of the CI, our results provide strong evidence of the impact of brain plasticity on the recovery of sensory function through a neuroprosthesis.
doi:10.1016/j.ijpsycho.2016.07.149
Language, music and the brain: from theoretical to clinical approaches
Organizers: Mireille R. Besson (France), Lutz Jäncke (Switzerland), Stefan Elmer (Switzerland), Yun Nan (China), Mari Tervaniemi (Finland) & Eva M. Dittinger (France)
Whether music and language, two complex human abilities, involve similar or different computations and share neural resources is a debated issue in the literature. Results obtained using different methodologies (i.e., fMRI, DTI and EEG data) will be reviewed, as well as their implications for the rehabilitation of children and adults with neurological problems. L. Jäncke will present an historical view of the comparative approach between music and language.
doi:10.1016/j.ijpsycho.2016.07.150
Word learning in children and adult musicians
Eva M. Dittinger a,b, Mireille R. Besson a
a Laboratoire de Neurosciences Cognitives, CNRS & Aix-Marseille University, Marseille, France
b Laboratoire Parole et Langage, CNRS & Aix-Marseille University, Marseille, France
Results of many experiments have demonstrated that music training enhances auditory perception not only of musical sounds but also of speech sounds. More recent findings have also shown better auditory attention and auditory short-term memory in musicians than in non-musicians (e.g., George & Coch, 2011; Strait et al., 2010, 2015; Ho et al., 2003). To go a step further, we conducted a series of experiments using both behavioral and electrophysiological (ERP) measures to test the hypothesis that music training also positively influences novel word learning, a task based on sound perception, attention, associative learning and memory. We will describe results in children and adults, with and without music training, showing that the percentage of correct responses in a semantic task, as well as the N400 effect (N400 to unrelated minus N400 to related words), was enhanced by music training. These results provide evidence that the influence of musical expertise extends beyond auditory perception and can facilitate the learning of novel words. These findings have a potentially strong impact on the learning of foreign languages. In addition, the procedure that we used can easily be adapted to various patient populations (e.g., dyslexic children, Alzheimer patients…) to pinpoint the origin of their learning deficits.
doi:10.1016/j.ijpsycho.2016.07.151
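As a purely illustrative aside, the N400 effect referred to above is a difference measure: the mean ERP amplitude elicited by unrelated words minus that elicited by related words, usually taken over a centro-parietal time window around 300-500 ms post-stimulus. The sketch below shows this arithmetic on stand-in data; the sampling rate, time window and trial counts are assumptions and do not come from the study.

# Illustrative computation of an N400 effect (unrelated minus related mean amplitude).
# Stand-in data: trials x time-points arrays of ERP amplitudes in microvolts.
import numpy as np

sfreq = 250.0                                   # assumed sampling rate (Hz)
times = np.arange(-0.2, 0.8, 1 / sfreq)         # epoch from -200 ms to 800 ms
rng = np.random.default_rng(1)
erp_related = rng.normal(0.0, 1.0, size=(40, times.size))     # 40 trials, related words
erp_unrelated = rng.normal(-1.0, 1.0, size=(40, times.size))  # 40 trials, unrelated words

# Average across trials, then take the mean amplitude in an assumed 300-500 ms window.
window = (times >= 0.3) & (times <= 0.5)
mean_related = erp_related.mean(axis=0)[window].mean()
mean_unrelated = erp_unrelated.mean(axis=0)[window].mean()
n400_effect = mean_unrelated - mean_related     # more negative = larger N400 effect
print(f"N400 effect: {n400_effect:.2f} microvolts")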