AuditoRy Cognition in Humans and MachInEs (ARCHIE) is a research initiative led by Elia Formisano (Maastricht University) in collaboration with Bruno Giordano (Institut de Neurosciences de la Timone, Marseille). ARCHIE combines research in cognitive psychology and neuroscience with advanced methodologies from information science and artificial intelligence. Our aim is to develop and test neurobiologically grounded computational models of sound recognition.
At present, ARCHIE includes two funded research programs, Aud2Sem (funding: NWO Open Competition SSH, started October 2021) and SoundBrainSem (funding: ANR AAPG2021, started March 2022), as well as other related research projects.
Examples of research and ongoing work include:
– “Sounds” ontology development (in collaboration with Prof. M. Dumontier, IDS): We are developing “Sounds”, an ontology that characterizes a large number of everyday sounds and their taxonomic relations in terms of their acoustics, their sound-generating mechanisms, and the semantic properties of the corresponding sources (a minimal illustrative sketch follows the entries below).
- Article: Giordano BL, de Miranda Azevedo R, Plasencia-Calaña Y, Formisano E*, Dumontier M*. What do we mean with sound semantics, exactly? A survey of taxonomies and ontologies of everyday sounds. Front Psychol. 2022 Sep 29;13:964209. https://doi.org/10.3389/fpsyg.2022.964209
- Master’s thesis: The Influence of Semantics on Neural Models of Sound Recognition. Martin Bremm (supervision: Formisano, Dumontier)
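To make this concrete, here is a minimal sketch of how such relations could be encoded as linked-data triples, in Python with the rdflib library. The namespace, classes, and properties (hasSource, hasMechanism, typicalPitchHz) are hypothetical placeholders for illustration, not the actual “Sounds” ontology.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

# Hypothetical namespace; the real "Sounds" ontology IRI may differ.
SND = Namespace("http://example.org/sounds#")

g = Graph()
g.bind("snd", SND)

# Taxonomic relation: a dog bark is a kind of animal vocalization.
g.add((SND.DogBark, RDF.type, RDFS.Class))
g.add((SND.DogBark, RDFS.subClassOf, SND.AnimalVocalization))

# Semantic and mechanical properties of the sound source (illustrative).
g.add((SND.DogBark, SND.hasSource, SND.Dog))
g.add((SND.DogBark, SND.hasMechanism, SND.VocalFoldVibration))

# An acoustic descriptor attached as a literal annotation (made-up value).
g.add((SND.DogBark, SND.typicalPitchHz, Literal(450)))

print(g.serialize(format="turtle"))
```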
– Deep neural network (DNN) development: We are developing ontology-based DNNs that combine acoustic sound analysis with high-level information about the sound sources and learn to perform sound recognition tasks at multiple levels of abstraction (see the sketch after the entries below).
- Conference presentation: Semantically informed deep neural networks for sound recognition. Michele Esposito, Giancarlo Valente, Yeni Plasencia-Calaña, Bruno L. Giordano and Elia Formisano. (Poster presented at the International Conference on Auditory Cortex (ICAC) 2022)
- Conference proceedings: Semantically informed deep neural networks for sound recognition. Michele Esposito, Giancarlo Valente, Yeni Plasencia-Calaña, Bruno L. Giordano and Elia Formisano. Presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023), Session: “Synergy between human and machine approaches to sound/scene recognition and processing”. https://ieeexplore.ieee.org/document/10095606
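As an illustration of the general idea, rather than the architecture of the papers above, the following PyTorch sketch shows a shared acoustic encoder feeding classification heads at two abstraction levels of a sound taxonomy. All layer sizes and category counts are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class MultiLevelSoundClassifier(nn.Module):
    """Illustrative sketch: one acoustic encoder, two heads classifying
    the same sound at a fine-grained level (e.g. 'dog bark') and at a
    superordinate level (e.g. 'animal sound')."""

    def __init__(self, n_fine=200, n_coarse=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fine_head = nn.Linear(64, n_fine)      # leaf categories
        self.coarse_head = nn.Linear(64, n_coarse)  # superordinate categories

    def forward(self, mel_spectrogram):
        z = self.encoder(mel_spectrogram)
        return self.fine_head(z), self.coarse_head(z)

model = MultiLevelSoundClassifier()
x = torch.randn(8, 1, 64, 101)  # batch of log-mel spectrograms (dummy data)
fine_logits, coarse_logits = model(x)

# The ontology enters through the label structure: each fine category maps
# to one coarse category, so the joint loss enforces consistency across levels.
fine_y, coarse_y = torch.randint(0, 200, (8,)), torch.randint(0, 20, (8,))
loss = nn.functional.cross_entropy(fine_logits, fine_y) + \
       nn.functional.cross_entropy(coarse_logits, coarse_y)
```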
– Automated captioning of auditory scenes: We are developing transformer-based architectures for the automated captioning of sounds and scenes (see the sketch after the entry below).
- Conference proceedings: Wijngaard G, Formisano E, Giordano B, Dumontier M. ACES: Evaluating automated audio captioning models on the semantics of sounds. Accepted at the 31st European Signal Processing Conference (EUSIPCO 2023)
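For orientation, the sketch below outlines the typical shape of such a captioner: an autoregressive transformer decoder attends to encoded audio frames and predicts caption tokens. It is a generic PyTorch illustration with placeholder dimensions and vocabulary, not one of the models developed or evaluated in the work above.

```python
import torch
import torch.nn as nn

class AudioCaptioner(nn.Module):
    """Minimal transformer-based captioner sketch: audio frame embeddings
    serve as the memory for an autoregressive text decoder."""

    def __init__(self, n_mels=64, d_model=256, vocab_size=5000):
        super().__init__()
        self.audio_proj = nn.Linear(n_mels, d_model)   # project mel frames
        self.token_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, mel_frames, caption_tokens):
        memory = self.audio_proj(mel_frames)       # (batch, T_audio, d_model)
        tgt = self.token_emb(caption_tokens)       # (batch, T_text, d_model)
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.lm_head(out)                   # next-token logits

model = AudioCaptioner()
logits = model(torch.randn(2, 101, 64), torch.randint(0, 5000, (2, 12)))
```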
– Cognitive neuroimaging of sound recognition in humans: We measure behavioral and brain responses (sub-millimeter fMRI, iEEG, EEG, MEG) in human listeners as they perform sound recognition tasks. We then evaluate how well DNN-based and other models of sound processing explain the measured behavioral and brain responses, using state-of-the-art multivariate statistical methods (see the sketch after the entries below).
- Abstract: Intermediate acoustic-to-semantic representations link behavioural and neural responses to natural sounds. Bruno L. Giordano, Michele Esposito, Giancarlo Valente, Elia Formisano (Poster presented at ICAC 2022)
- Article: Intermediate acoustic-to-semantic representations link behavioural and neural responses to natural sounds. Bruno L. Giordano, Michele Esposito, Giancarlo Valente, Elia Formisano (2023) Nature Neuroscience, 26(4):664-672. https://www.nature.com/articles/s41593-023-01285-9
- Ongoing work: Optimal selection of natural stimuli for auditory computational neuroimaging experiments (with Maria de Araújo Vitória, Marie Plegat, Giorgio Marinato, Bruno Giordano)
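One common multivariate approach for comparing model representations with brain responses is representational similarity analysis (RSA). As a generic illustration, with random placeholder data rather than actual results, the sketch below shows the core computation: build representational dissimilarity matrices (RDMs) for brain and model responses to the same sounds, then rank-correlate them.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Placeholder data: responses to the same 60 natural sounds.
rng = np.random.default_rng(0)
brain = rng.standard_normal((60, 500))   # sounds x voxels (or sensors)
model = rng.standard_normal((60, 128))   # sounds x DNN-layer units

# Representational dissimilarity matrices (condensed upper triangles):
# one minus the correlation between responses to each pair of sounds.
rdm_brain = pdist(brain, metric="correlation")
rdm_model = pdist(model, metric="correlation")

# Rank correlation between RDMs: how well the model layer captures
# the representational geometry of the neural responses.
rho, p = spearmanr(rdm_brain, rdm_model)
print(f"RSA correlation: rho={rho:.3f}, p={p:.3g}")
```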
We are keen to collaborate with researchers and companies with similar interests. If you are interested, please do not hesitate to contact us.