{"id":11,"date":"2017-06-28T07:38:30","date_gmt":"2017-06-28T07:38:30","guid":{"rendered":"https:\/\/mbic-auditorylab.nl\/?page_id=11"},"modified":"2025-02-07T13:47:40","modified_gmt":"2025-02-07T13:47:40","slug":"archie_new","status":"publish","type":"page","link":"https:\/\/mbic-auditorylab.nl\/home\/archie_new\/","title":{"rendered":"ARCHIE"},"content":{"rendered":"<div class=\"wpb-content-wrapper\"><p>[vc_row][vc_column][vc_column_text css=&#8221;&#8221;]<strong>AuditoRy Cognition in Humans and MachInEs (ARCHIE)<\/strong> is a research initiative led by <a href=\"https:\/\/mbic-auditorylab.nl\/home\/people\/elia-formisano\/\">Elia Formisano<\/a> (Maastricht University) in collaboration with <a href=\"https:\/\/mbic-auditorylab.nl\/home\/people\/bruno-giordano\/\">Bruno Giordano<\/a> (Institut de Neurosciences de La Timone, Marseille). ARCHIE combines research in cognitive psychology and neuroscience with advanced methodologies from information science and artificial intelligence. Our aim is to develop and test neurobiologically-grounded computational models of sound recognition.<\/p>\n<p>Initially, ARCHIE included two funded research programs: Aud2Sem (funding:<a href=\"https:\/\/www.nwo.nl\/en\/projects\/40620go030\"> NWO Open Competition SSH<\/a>, start October 2021) and <a href=\"https:\/\/anr.fr\/Project-ANR-21-CE37-0027\">SoundBrainSem<\/a> (funding: ANR AAPG2021, start March 2022) and other related research projects<\/p>\n<div class=\"XeAuAHykosehisCztOTcHtVBDfRWEyJTUs display-flex pt3 align-items-flex-start \">\n<div class=\"update-components-actor__container pr4 display-flex flex-grow-1\">\n<div class=\"ivm-image-view-model update-components-actor__avatar\">\n<p>In October 2024, our collaborative project <span class=\"break-words tvm-parent-container\"><span dir=\"ltr\"><strong>Natural Auditory SCEnes in Humans and Machines (NASCE) <\/strong>received <a href=\"https:\/\/erc.europa.eu\/news-events\/news\/erc-2024-synergy-grants-results\">ERC Synergy 
funding\u00a0 8.6 M\u20ac).<\/a> \u00a0<\/span><\/span>NASCE integrates AI and multimodal imaging to uncover how the brain transforms everyday soundscapes into meaningful representations of the world <span class=\"break-words tvm-parent-container\"><span dir=\"ltr\">(<a href=\"https:\/\/www.maastrichtuniversity.nl\/news\/understanding-everyday-hearing-elia-formisano-erc-synergy-grant\">read more here).<\/a><\/span><\/span><\/p>\n<p>Launching in <strong>April 2025<\/strong>, NASCE will significantly expand research facilities in <strong>Maastricht and Marseille<\/strong>.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column width=&#8221;1\/2&#8243;][vc_hoverbox image=&#8221;625&#8243; primary_title=&#8221;&#8221; hover_title=&#8221;Sounds: An ontology of auditory semantics&#8221; el_width=&#8221;90&#8243;]We are developing \u201cSounds\u201d, an ontology that characterizes a wide range of everyday sound based on their semantic properties. This includes identifying their sources (&#8220;Who\/What&#8221;),\u00a0 their sound-generating mechanisms (&#8220;How&#8221;), and contextualizing their occurrence (&#8220;Where&#8221;).[\/vc_hoverbox][vc_column_text]<\/p>\n<p style=\"text-align: center;\"><strong>Sounds: An Ontology of Auditory Semantics<\/strong><\/p>\n<p>[\/vc_column_text][\/vc_column][vc_column width=&#8221;1\/2&#8243;][vc_hoverbox image=&#8221;636&#8243; primary_title=&#8221;&#8221; hover_title=&#8221;Automated Captioning of Auditory Scenes&#8221; el_width=&#8221;90&#8243;]We are developing transformer-based architectures for automated captioning of sounds and scenes. 
We also work on metrics to assess and compare human- and machine-generated captions.[\/vc_hoverbox][vc_column_text]<\/p>\n<p style=\"text-align: center;\"><strong>Automated Captioning of Auditory Scenes<\/strong><\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column width=&#8221;1\/2&#8243;][vc_hoverbox image=&#8221;651&#8243; primary_title=&#8221;&#8221; hover_title=&#8221;Deep Neural Networks&#8221; el_width=&#8221;90&#8243;]We are developing DNNs that combine acoustic sound analysis with semantic information about the sound sources.<\/p>\n<p>We are also developing time-resolved DNNs based on a multiscale convolutional RNN architecture to simulate auditory cortical processing and predict time-resolved neural signals (iEEG, MEG, EEG).[\/vc_hoverbox][vc_column_text]<\/p>\n<p style=\"text-align: center;\"><strong>Brain-inspired Deep Neural Networks<\/strong><\/p>\n<p>[\/vc_column_text][\/vc_column][vc_column width=&#8221;1\/2&#8243;][vc_hoverbox image=&#8221;653&#8243; primary_title=&#8221;&#8221; hover_title=&#8221;Cognitive Neuroimaging of Natural Sounds and Scenes&#8221; el_width=&#8221;90&#8243;]<\/p>\n<p style=\"text-align: center;\">We measure behavioral and brain responses (sub-millimeter fMRI, iEEG, EEG, MEG) in human listeners as they perform sound recognition tasks, and then evaluate how well DNN-based and other models of sound processing explain the measured behavioral and brain responses.<\/p>\n<p>[\/vc_hoverbox][vc_column_text]<\/p>\n<p style=\"text-align: center;\"><strong>Cognitive Neuroimaging of Natural Sounds and Scenes<\/strong><\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row]<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>[vc_row][vc_column][vc_column_text css=&#8221;&#8221;]AuditoRy Cognition in Humans and MachInEs (ARCHIE) is a research initiative led by Elia Formisano (Maastricht University) in collaboration with Bruno Giordano (Institut de Neurosciences de La Timone, Marseille). 
ARCHIE combines research in cognitive psychology and neuroscience with advanced methodologies from information science and artificial intelligence. Our aim is to develop and test neurobiologically-grounded [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"footnotes":""},"class_list":["post-11","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/pages\/11","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/comments?post=11"}],"version-history":[{"count":77,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/pages\/11\/revisions"}],"predecessor-version":[{"id":714,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/pages\/11\/revisions\/714"}],"wp:attachment":[{"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/media?parent=11"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}