https://huggingface.co/gijs/aces-roberta-base-13
Model that classifies tokens of sound captions with auditory semantic tags (Who/What/How/Where); see https://arxiv.org/abs/2403.18572
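The tagger above can be loaded with the Hugging Face `transformers` token-classification pipeline. The sketch below assumes the model id from the link; the exact tag labels it returns (e.g. Who/What/How/Where variants) depend on the model's configuration, so the output shown is illustrative only.

```python
from transformers import pipeline

# Load the ACES tagger from the Hugging Face Hub
# (model id taken from the link above; downloads weights on first run).
tagger = pipeline("token-classification", model="gijs/aces-roberta-base-13")

# Tag each token of a sound caption with an auditory semantic label.
caption = "a dog barks loudly in the garden"
for token in tagger(caption):
    # Each entry holds the token text, its predicted tag, and a confidence score.
    print(token["word"], token["entity"], round(token["score"], 3))
```

The pipeline returns one dictionary per token; an `aggregation_strategy` argument can be passed to merge sub-word pieces into whole words if needed.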
https://doi.org/10.5061/dryad.0p2ngf258
Here you can download data and code from: Giordano BL, Esposito M, Valente G, Formisano E (2023). Intermediate acoustic-to-semantic representations link behavioural and neural responses to natural sounds. Nature Neuroscience 26(4):664-672. https://www.nature.com/articles/s41593-023-01285-9
https://figshare.com/articles/dataset/Data_from_What_do_we_mean_with_sound_semantics_exactly_A_survey_of_taxonomies_and_ontologies_of_everyday_sounds/20813626
Here you can download the data from: Giordano BL, de Miranda Azevedo R, Plasencia-Calaña Y, Formisano E, Dumontier M (2022). What do we mean with sound semantics, exactly? A survey of taxonomies and ontologies of everyday sounds. Front Psychol 13:964209. https://doi.org/10.3389/fpsyg.2022.964209
http://datadryad.org/resource/doi:10.5061/dryad.np4hs
Here you can download fMRI data and stimuli from: Santoro R, Moerel M, De Martino F, Valente G, Ugurbil K, Yacoub E, Formisano E (2017). Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns. Proceedings of the National Academy of Sciences of the United States of America 114(18):4799-4804. https://doi.org/10.1073/pnas.1617622114