Do you have a genuine interest in auditory perception? Are you familiar with neuroimaging acquisition and analysis methods? Apply now for a PhD student position in Cognitive Neuroscience and join our team! (closed)
Are you passionate about combining AI research and neuroscience to find out how the brain recognizes sounds? During the coming year, Ph.D. student positions will become available in Maastricht (1 position, 4 years) and Marseille (2 positions, 3 years), giving you the opportunity to join our team. We will post information and links to the official applications here, so stay tuned...
The NWO-funded project "Aud2Sem: Acoustic to semantic transformations in Human Auditory Cortex", led by E. Formisano, officially started on October 1st, 2021. Find the project abstract below and stay tuned for further information... A bird chirping, a glass breaking, an ambulance passing by. Listening to sounds helps us recognize events and objects even when they are out of sight, for example in the dark or behind a wall. Despite rapid progress in the field of auditory cognition, we know very little about how the brain transforms acoustic sound representations into meaningful source representations. To fill this gap, this project develops a neurobiologically grounded computational model of sound recognition by combining advanced methodologies from information science, artificial intelligence, and cognitive neuroscience. First, we develop "Sounds", an ontology that characterizes a large number of everyday sounds and their taxonomic relations in terms of sound-generating mechanisms and properties of the corresponding sources. Second, we develop novel ontology-based deep neural networks (DNNs) that combine acoustic sound analysis with high-level information about the sound sources and learn to perform sound recognition (SR) tasks at different levels of abstraction. In parallel, we measure brain responses with sub-millimeter functional MRI (fMRI) and intracranial electroencephalography (iEEG) in human listeners as they perform the same SR tasks. We expect that the ontology-based DNNs will explain measured laminar-specific (fMRI) and spectro-temporally resolved (iEEG) neural responses in auditory cortex better than existing acoustic and semantic models. The results of this research will provide mechanistic insights into the acoustic-to-semantic transformations in the human brain.
Furthermore, the ontology of everyday sounds and the methods developed for embedding ontological information in DNNs will be of broad academic interest and relevant to the rapidly expanding societal applications of artificial hearing. See the NWO website.
In a new comparative fMRI study just published in Cerebral Cortex, we and our collaborators in the Vanduffel lab (KU Leuven) provide novel insights into speech evolution. These data by Erb et al. reveal homologies and differences in the encoding of natural sounds in human and nonhuman primate cortex. From the Abstract: “Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: While decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans had highest sensitivity to ~3 Hz, a relevant rate for speech analysis. These findings suggest that characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.” The paper is available here: https://academic.oup.com/cercor/advance-article/doi/10.1093/cercor/bhy243/5158236#123814557
We recently published a paper in collaboration with the Zatorre lab at McGill University, introducing and validating a new paradigm for studying auditory scene analysis with polyphonic music.
The local press release of the paper is here. Jonathan Peelle (Washington University) has written a nice Dispatch on the paper, read it here. Interview with Lars in Technisch Weekblad.
Earlier this month, NWO announced the new recipients of the coveted VENI grant. Two of the names on the list are researchers at the Faculty of Psychology and Neuroscience (FPN): Dr. Lotte Lemmens and Dr. Lars Hausfeld. We sat down with them to talk about their work. Read more here: https://www.maastrichtuniversity.nl/news/two-fpn-researchers-receive-nwo-veni-grants