AuditoRy Cognition in Humans and MachInEs (ARCHIE) is a research initiative led by Elia Formisano (Maastricht University) in collaboration with Bruno Giordano (Institut de Neurosciences de La Timone, Marseille). ARCHIE combines research in cognitive psychology and neuroscience with advanced methodologies from information science and artificial intelligence. Our aim is to develop and test neurobiologically grounded computational models of sound recognition.

At present, ARCHIE comprises two funded research programs, Aud2Sem (funding: NWO Open Competition SSH, started October 2021) and SoundBrainSem (funding: ANR AAPG2021, started March 2022), as well as other related research projects.

Examples of research and ongoing work include:

– “Sounds” ontology development (in collaboration with Prof. M. Dumontier, IDS): We are developing “Sounds”, an ontology that characterizes a large number of everyday sounds and their taxonomic relations in terms of their acoustics, their sound-generating mechanisms, and the semantic properties of the corresponding sources.
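As a rough illustration of the kind of structure such an ontology can encode, the sketch below links each sound concept to a taxonomic parent, a sound-generating mechanism, and semantic source properties. All class names, relations, and example entries are hypothetical, chosen for illustration; they are not taken from the actual “Sounds” ontology.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class SoundConcept:
    """Hypothetical, simplified ontology node (illustrative only)."""
    name: str
    parent: SoundConcept | None = None        # taxonomic (is-a) relation
    mechanism: str | None = None              # sound-generating mechanism
    source_properties: dict = field(default_factory=dict)  # semantics of the source

    def ancestors(self) -> list[str]:
        """Walk up the is-a hierarchy to the root."""
        node, chain = self.parent, []
        while node is not None:
            chain.append(node.name)
            node = node.parent
        return chain

# A tiny illustrative taxonomy: dog bark -> animal vocalization -> sound event.
event = SoundConcept("sound event")
vocalization = SoundConcept("animal vocalization", parent=event)
bark = SoundConcept(
    "dog bark",
    parent=vocalization,
    mechanism="vocal-fold vibration",
    source_properties={"source": "dog", "animacy": "animate"},
)

print(bark.ancestors())  # ['animal vocalization', 'sound event']
```

In a full ontology these relations would typically be expressed in a standard formalism such as OWL/RDF rather than in code; the sketch only conveys the three kinds of information (acoustics-level class, mechanism, source semantics) mentioned above.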

– Deep neural network (DNN) development: We are developing ontology-based DNNs that combine acoustic sound analysis with high-level information about the sound sources and learn to perform sound recognition tasks at different abstraction levels.
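One simple way an ontology can connect recognition across abstraction levels is to aggregate a classifier's fine-grained output probabilities into superordinate-category probabilities via the taxonomy. The sketch below shows this idea only; the labels are hypothetical and this is not the actual ARCHIE architecture, which involves trained DNNs rather than a fixed lookup.

```python
# Hypothetical taxonomy: fine-grained class -> superordinate category.
TAXONOMY = {
    "dog bark": "animal vocalization",
    "cat meow": "animal vocalization",
    "car horn": "mechanical sound",
    "engine idling": "mechanical sound",
}

def coarse_probs(fine_probs: dict) -> dict:
    """Sum fine-grained class probabilities within each superordinate category."""
    out = {}
    for label, p in fine_probs.items():
        category = TAXONOMY[label]
        out[category] = out.get(category, 0.0) + p
    return out

# Pretend these probabilities came from a classifier's softmax output.
fine = {"dog bark": 0.6, "cat meow": 0.1, "car horn": 0.2, "engine idling": 0.1}
print(coarse_probs(fine))
```

Recognition at the coarse level is thus consistent by construction with the fine level; in an ontology-based DNN, the same taxonomic structure can instead inform the training objective or the architecture itself.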

– Automated captioning of auditory scenes: We are developing transformer-based architectures for the automated captioning of sounds and scenes.

– Cognitive neuroimaging of sound recognition in humans: We measure behavioral and brain responses (sub-millimeter fMRI, iEEG, EEG, MEG) in human listeners as they perform sound recognition tasks. We then evaluate how well DNN-based and other models of sound processing explain the measured behavioral and brain responses, using state-of-the-art multivariate statistical methods.
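One widely used multivariate approach for this kind of model–brain comparison is representational similarity analysis (RSA): compute pairwise dissimilarities between response patterns to each sound, separately for the model and the brain, then correlate the two dissimilarity structures. The choice of RSA here, and the toy data and distance/correlation measures, are illustrative assumptions, not necessarily the specific methods used in ARCHIE.

```python
from itertools import combinations

def euclidean(a, b):
    """Euclidean distance between two response patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rdm(responses):
    """Representational dissimilarity vector: pairwise distances
    between the response patterns to each sound (upper triangle)."""
    return [euclidean(responses[i], responses[j])
            for i, j in combinations(range(len(responses)), 2)]

def pearson(u, v):
    """Pearson correlation between two dissimilarity vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

# Toy data: one row per sound (4 sounds), columns are model features / voxels.
model_features = [[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.2]]
brain_patterns = [[2.0, 0.1], [1.8, 0.3], [0.2, 2.1], [0.3, 1.9]]

# A high RDM correlation means the model groups sounds the way the brain does.
print(pearson(rdm(model_features), rdm(brain_patterns)))
```

In practice such analyses involve many more sounds, cross-validated distance estimates, and noise-ceiling corrections, but the core comparison has this shape.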

We are very keen to collaborate with researchers and companies with similar interests. If you are interested, please do not hesitate to contact us.