About

A friend speaking, a bird chirping, a piano playing: any living being or non-living object that vibrates generates acoustic waveforms, which we call sounds. Auditory perception and cognition (or simply audition) refers to the capability of humans and animals to analyse and interpret the sounds in their environment and to adapt their behaviour to these external auditory events. Audition starts in the ears, where the acoustic waveforms are collected. However, it is in the brain, and more precisely in the auditory cortex, that neurobiological processes transform these acoustic waveforms into meaningful descriptions of the external world and, ultimately, into our subjective perception and cognition.

Audition enriches our daily life in many ways. It enables us to interact with each other through speech and to perceive and communicate each other's emotions (e.g. through laughter, crying or music). Audition also helps us interact with the environment, especially when we cannot rely on vision (e.g. in the dark or through walls). People who have lost their auditory function experience a much impoverished world and a feeling of isolation. Neuroscience research on audition is thus of great relevance, as it may help preserve speech, music and all other sounds in those for whom the world is falling silent.

Research

The Auditory Cognition Research Group conducts multidisciplinary research on the neural basis of auditory perception and cognition. We investigate the anatomical and functional organization of the auditory neural pathway, especially the auditory cortex, and the neural mechanisms underlying the analysis of spectro-temporal and spatial features of sounds. We also study the perception of complex, real-life sounds, including voice, speech and music, and we examine the neural mechanisms of attentive listening and auditory scene analysis.

Our experimental approach integrates:
1) non-invasive measurements of brain activity at high spatial (fMRI at 3T, 7T and 9.4T) and high temporal (EEG/MEG) resolution, combined with psychophysics and non-invasive brain stimulation (TES) paradigms in standard and virtual reality settings;
2) computational modelling of auditory processing;
3) advanced data analysis techniques, such as statistical pattern recognition and machine learning methods (see the illustrative sketch below).
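
To give a concrete flavour of the pattern-recognition analyses mentioned in point 3, the sketch below runs a cross-validated decoding ("MVPA"-style) analysis that asks whether a sound category can be read out from fMRI response patterns, using scikit-learn. This is purely illustrative: the data are simulated, and the trial counts, voxel counts and category labels are hypothetical rather than taken from our experiments.

```python
# Illustrative sketch only: decoding a sound category from simulated
# fMRI response patterns with a cross-validated linear classifier.
# All data, dimensions and labels here are hypothetical.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Simulated experiment: 120 trials x 500 voxels, three sound categories
# (e.g. voice, music, environmental sounds), with a weak category signal.
n_trials, n_voxels = 120, 500
y = np.repeat([0, 1, 2], n_trials // 3)              # category label per trial
X = rng.standard_normal((n_trials, n_voxels))        # noise patterns
X += 0.3 * rng.standard_normal((3, n_voxels))[y]     # add a category-specific pattern

# Linear classifier with voxel-wise standardisation, evaluated by
# stratified 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(
    clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0)
)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```

In a real analysis, the simulated matrix X would be replaced by trial-wise response estimates from auditory cortex, and decoding accuracy would typically be assessed against a permutation-based null distribution rather than nominal chance level.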