{"id":872,"date":"2025-10-02T14:54:39","date_gmt":"2025-10-02T12:54:39","guid":{"rendered":"https:\/\/mbic-auditorylab.nl\/home\/?p=872"},"modified":"2025-10-02T14:55:57","modified_gmt":"2025-10-02T12:55:57","slug":"new-paper-in-neurocomputing-deciphering-the-transformation-of-sounds-into-meaning-insights-from-disentangling-intermediate-representations-in-sound-to-event-dnns","status":"publish","type":"post","link":"https:\/\/mbic-auditorylab.nl\/home\/new-paper-in-neurocomputing-deciphering-the-transformation-of-sounds-into-meaning-insights-from-disentangling-intermediate-representations-in-sound-to-event-dnns\/","title":{"rendered":"New paper in Neurocomputing: Deciphering the transformation of sounds into meaning: Insights from disentangling intermediate representations in sound-to-event DNNs"},"content":{"rendered":"<p>In neuroscientific applications of deep neural networks (DNNs), interpretability of latent representations is crucial. Otherwise, we risk replacing one unknown (the brain) with another (the network).<\/p>\n<p>This new <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231225022726\"><em>Neurocomputing<\/em><\/a> article by Tim Dick, Alexia Briassouli, Enrique Hortal, and Elia Formisano shows how invertible flow models can:<br \/>\n&#8211; Disentangle DNN latent spaces into interpretable dimensions (here: actions &amp; materials in sound categorization networks).<br \/>\n&#8211; Enable systematic manipulations of these dimensions that predictably change the network\u2019s outputs.<\/p>\n<p>Although applied here to auditory networks, the method is generalizable and can extend to other domains where neuroscientists use DNNs as computational models.<\/p>\n<p>Check:\u00a0<a href=\"https:\/\/github.com\/TimHenry1995\/LatentAudio\">Data and code<\/a> and the <a href=\"https:\/\/pypi.org\/project\/gyoza\/\">custom flow model library Gyoza<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In 
neuroscientific applications of deep neural networks (DNNs), interpretability of latent representations is crucial. Otherwise, we risk replacing one unknown (the brain) with another (the network). This new Neurocomputing article by Tim Dick, Alexia Briassouli, Enrique Hortal, and Elia Formisano shows how invertible flow models can: &#8211; Disentangle DNN latent spaces into interpretable dimensions [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":874,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-872","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"acf":[],"_links":{"self":[{"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/posts\/872","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/comments?post=872"}],"version-history":[{"count":2,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/posts\/872\/revisions"}],"predecessor-version":[{"id":875,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/posts\/872\/revisions\/875"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/media\/874"}],"wp:attachment":[{"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/media?parent=872"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\/wp\/v2\/categories?post=872"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mbic-auditorylab.nl\/home\/wp-json\
/wp\/v2\/tags?post=872"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}