Auditory Cognition
Research Group

Department of Cognitive Neuroscience
Maastricht University


NWO project “Aud2Sem: Acoustic to semantic transformations in Human Auditory Cortex” started

The NWO-funded project "Aud2Sem: Acoustic to semantic transformations in Human Auditory Cortex", led by E. Formisano, officially started on October 1, 2021. Find below the project abstract and stay tuned for further information.

A bird chirping, a glass breaking, an ambulance passing by. Listening to sounds helps us recognize events and objects even when they are out of sight, for example in the dark or behind a wall. Despite rapid progress in the field of auditory cognition, we know very little about how the brain transforms acoustic sound representations into meaningful source representations. To fill this gap, this project develops a neurobiologically grounded computational model of sound recognition by combining advanced methodologies from information science, artificial intelligence, and cognitive neuroscience. First, we develop "Sounds", an ontology that characterizes a large number of everyday sounds and their taxonomic relations in terms of sound-generating mechanisms and properties of the corresponding sources. Second, we develop novel ontology-based deep neural networks (DNNs) that combine acoustic sound analysis with high-level information about the sound sources and learn to perform sound recognition (SR) tasks at different abstraction levels. In parallel, we measure brain responses with sub-millimeter functional MRI (fMRI) and intracranial electroencephalography (iEEG) in human listeners as they perform the same SR tasks. We expect that the ontology-based DNNs will explain the measured laminar-specific (fMRI) and spectro-temporally resolved (iEEG) neural responses in auditory cortex better than existing acoustic and semantic models. Results of this research will provide mechanistic insights into the acoustic-to-semantic transformations in the human brain. Furthermore, the ontology of everyday sounds and the methods developed for embedding ontological information in DNNs will be of broad academic interest and relevant to the rapidly expanding societal applications of artificial hearing.

See the NWO website.

New paper by Erb et al. in Cerebral Cortex: Human but not monkey auditory cortex is tuned to slow temporal modulations.

In a new comparative fMRI study just published in Cerebral Cortex, we and our collaborators in the Vanduffel lab (KU Leuven) provide novel insights into speech evolution. The data by Erb et al. reveal homologies and differences in natural sound encoding in human and non-human primate cortex. From the abstract: "Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: While decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans had highest sensitivity to ~3 Hz, a relevant rate for speech analysis. These findings suggest that characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language." The paper is available here: