BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation

Brain Inspired show

Summary: Catherine, Jess, and I use ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models relate to the target systems they are built to explain. She suggests that both the model and the target system should be considered instantiations of a specific kind of phenomenon, and that explanation is the product of relating the model and the target system to that specific aspect they share. Jess suggests we shift the focus of explanation from objects - like a brain area or a deep learning model - to the shared class of phenomena those objects perform. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt's conception of scientific understanding and its relation to explanation (they're different!), and plenty more.

Catherine's website.
Jessica's blog.
Twitter: Jess: @tsonj.

Related papers:
From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence - Catherine
Forms of explanation and understanding for neuroscience and artificial intelligence - Jess

Jess is a postdoc in Chris Summerfield's lab; Chris and Sam Gershman were on a recent episode.
Understanding Scientific Understanding by Henk de Regt.

Timestamps:
0:00 - Intro
11:11 - Background and approaches
27:00 - Understanding distinct from explanation
36:00 - Explanations as programs (early explanation)
40:42 - Explaining classes of phenomena
52:05 - Constitutive (neuro) vs. etiological (AI) explanations
1:04:04 - Do nonphysical objects count for explanation?
1:10:51 - Advice for early-career philosophers/scientists