AuditoRy Cognition in Humans and MachInEs (ARCHIE) is a research initiative led by Elia Formisano (Maastricht University) in collaboration with Bruno Giordano (Institut de Neurosciences de La Timone, Marseille). ARCHIE combines research in cognitive psychology and neuroscience with advanced methodologies from information science and artificial intelligence. Our aim is to develop and test neurobiologically grounded computational models of sound recognition.

At present, ARCHIE includes two funded research programs, Aud2Sem (funding: NWO Open Competition SSH, start October 2021) and SoundBrainSem (funding: ANR AAPG2021, start March 2022), as well as other related research projects.

Sounds: An Ontology of Auditory Semantics

We are developing “Sounds”, an ontology that characterizes a wide range of everyday sounds based on their semantic properties. This includes identifying their sources (“Who/What”), their sound-generating mechanisms (“How”), and the contexts in which they occur (“Where”).
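
As a rough illustration of these three dimensions, a single sound event might be annotated as in the sketch below. The class and field names are hypothetical placeholders for illustration only, not the actual Sounds schema:

```python
from dataclasses import dataclass, field

# Hypothetical annotation of one everyday sound event along the three
# semantic dimensions described above (illustrative, not the Sounds schema).

@dataclass
class SoundEvent:
    label: str                   # human-readable name of the sound
    source: str                  # "Who/What": the sound-generating object or agent
    mechanism: str               # "How": the physical sound-generating mechanism
    contexts: list = field(default_factory=list)  # "Where": typical settings

dog_bark = SoundEvent(
    label="dog bark",
    source="dog (animate, animal)",
    mechanism="vocalization",
    contexts=["domestic", "street", "park"],
)

print(dog_bark)
```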


Automated Captioning of Auditory Scenes

We are developing transformer-based architectures for automated captioning of sounds and scenes. We also work on metrics to assess and compare human- and machine-generated captions.
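
For example, one common family of such metrics scores a machine caption by its embedding similarity to human reference captions. The sketch below assumes the sentence-transformers library and an off-the-shelf model; it illustrates the general idea rather than the project's actual evaluation pipeline:

```python
from sentence_transformers import SentenceTransformer, util

# Score a machine-generated caption against human references with
# embedding-based cosine similarity (model name and captions are illustrative).
model = SentenceTransformer("all-MiniLM-L6-v2")

human_refs = [
    "A dog barks twice while cars pass by in the background.",
    "Traffic noise with a dog barking nearby.",
]
machine_caption = "A dog is barking near a busy road."

ref_emb = model.encode(human_refs, convert_to_tensor=True)
cand_emb = model.encode(machine_caption, convert_to_tensor=True)

# Compare the candidate with each human reference and keep the best match.
scores = util.cos_sim(cand_emb, ref_emb)
print(f"best similarity to human references: {scores.max().item():.3f}")
```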


Brain-inspired Deep Neural Networks

We are developing DNNs combining acoustic sound analysis with semantic information about the sound sources.
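
A minimal sketch of this idea, with a hypothetical architecture (not our exact models): an acoustic encoder maps a log-mel spectrogram to an embedding that is regressed onto a semantic embedding of the sound source, e.g., one derived from the Sounds ontology or from word vectors:

```python
import torch
import torch.nn as nn

class AcousticToSemantic(nn.Module):
    """Illustrative two-stage model: acoustic encoder -> semantic embedding."""

    def __init__(self, n_mels: int = 64, sem_dim: int = 300):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool over frequency and time
            nn.Flatten(),
        )
        self.to_semantic = nn.Linear(64, sem_dim)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, 1, n_mels, time) log-mel spectrogram
        return self.to_semantic(self.encoder(mel))

model = AcousticToSemantic()
mel = torch.randn(8, 1, 64, 200)    # batch of 8 spectrograms
target = torch.randn(8, 300)        # semantic embeddings of the sources
loss = nn.functional.mse_loss(model(mel), target)
loss.backward()
```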

We are developing time-resolved DNNs based on a multiscale convolutional RNN architecture to simulate auditory cortical processing and predict time-resolved neural signals (iEEG, MEG, EEG).
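
The sketch below illustrates the general idea, assuming a hypothetical configuration (three temporal scales, a GRU, and a linear readout per neural channel); it is not our exact architecture. Parallel 1-D convolutions with different kernel sizes provide a multiscale view of the spectrogram, and the recurrent layer integrates over time so the model emits a prediction for every time step of the neural recording:

```python
import torch
import torch.nn as nn

class MultiscaleConvRNN(nn.Module):
    """Illustrative multiscale convolutional RNN for time-resolved prediction."""

    def __init__(self, n_mels: int = 64, n_channels: int = 32, hidden: int = 128):
        super().__init__()
        # Three temporal scales; "same" padding keeps the time axis aligned.
        self.convs = nn.ModuleList([
            nn.Conv1d(n_mels, 32, kernel_size=k, padding=k // 2)
            for k in (3, 9, 27)
        ])
        self.rnn = nn.GRU(input_size=32 * 3, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_channels)  # predicted neural channels

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, time)
        feats = torch.cat([torch.relu(c(mel)) for c in self.convs], dim=1)
        out, _ = self.rnn(feats.transpose(1, 2))  # (batch, time, hidden)
        return self.readout(out)                  # (batch, time, n_channels)

model = MultiscaleConvRNN()
pred = model(torch.randn(4, 64, 500))  # 4 sounds, 500 time frames
print(pred.shape)                      # torch.Size([4, 500, 32])
```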


Cognitive Neuroimaging of Natural Sounds and Scenes

We measure behavioral responses and brain responses (sub-millimeter fMRI, iEEG, EEG, MEG) in human listeners as they perform sound recognition tasks, and then evaluate how well DNN-based and other models of sound processing explain the measured behavioral and brain responses.
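
A standard way to carry out such an evaluation is a voxel-wise encoding model. The sketch below (with simulated data, assumed for illustration rather than taken from our analyses) regresses DNN features onto brain responses and scores predictions on held-out sounds:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Simulated stand-ins: DNN activations and brain responses per sound.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 512))   # 200 sounds x 512 DNN units
Y = rng.standard_normal((200, 1000))  # 200 sounds x 1000 voxels

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Fit a ridge regression from model features to responses at every voxel.
enc = Ridge(alpha=1.0).fit(X_tr, Y_tr)
Y_hat = enc.predict(X_te)

# Per-voxel Pearson correlation between predicted and measured responses
# on held-out sounds quantifies how well the model explains the data.
r = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
print(f"median prediction accuracy across voxels: r = {np.median(r):.3f}")
```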
