Brain Inspired

Summary: Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

Podcasts:

 BI 120 James Fitzgerald, Andrew Saxe, Weinan Sun: Optimizing Memories | File Type: audio/mpeg | Duration: 01:40:02

Support the show to get full episodes and join the Discord community.

James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that the hippocampus creates our episodic memories of individual events, full of particular details, and that a complementary process slowly consolidates those memories in the neocortex through mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but deep learning has yet to achieve. We discuss what their theory predicts about how the "correct" consolidation process depends on how much noise and variability there is in the learning environment, how their model handles this, and how it relates to our brains and behavior.

James' Janelia page.
Weinan's Janelia page.
Andrew's website.
Twitter: Andrew: @SaxeLab; Weinan: @sunw37.
Paper we discuss: Organizing memories for generalization in complementary learning systems.
Andrew's previous episode: BI 052 Andrew Saxe: Deep Learning Theory.

Timestamps
0:00 - Intro
3:57 - Guest Intros
15:04 - Organizing memories for generalization
26:48 - Teacher, student, and notebook models
30:51 - Shallow linear networks
33:17 - How to optimize generalization
47:05 - Replay as a generalization regulator
54:57 - Whole greater than sum of its parts
1:05:37 - Unpredictability
1:10:41 - Heuristics
1:13:52 - Theoretical neuroscience for AI
1:29:42 - Current personal thinking
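
For readers unfamiliar with the teacher-student-notebook framing in the timestamps, here is a minimal sketch of the flavor of that setup, not the authors' model: a linear "teacher" generates noisy examples, a "notebook" stores them verbatim, and a linear "student" trained on the stored examples generalizes best when consolidation stops before the noise gets fit. All variable names and parameter values below are my own illustrative choices.

```python
# Toy teacher-student-notebook sketch (illustrative only, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_examples, noise_sd = 50, 40, 1.0

w_teacher = rng.normal(size=n_inputs)                 # the true underlying rule
X = rng.normal(size=(n_examples, n_inputs))           # experienced events
y = X @ w_teacher + noise_sd * rng.normal(size=n_examples)  # noisy "notebook"

w_student = np.zeros(n_inputs)                        # "cortical" weights
lr = 0.01
for step in range(3001):
    grad = X.T @ (X @ w_student - y) / n_examples     # squared-error gradient
    w_student -= lr * grad                            # one consolidation step
    if step % 1000 == 0:
        fit = np.mean((X @ w_student - y) ** 2)       # fit to stored memories
        gen = np.mean((w_student - w_teacher) ** 2)   # distance from the rule
        print(f"step {step}: memory fit {fit:.3f}, generalization {gen:.3f}")
```

With noisy examples, the memory fit keeps improving while the generalization error eventually stalls or worsens, which gives a flavor of why the optimal amount of consolidation should depend on environmental noise.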

 BI 119 Henry Yin: The Crisis in Neuroscience | File Type: audio/mpeg | Duration: 01:06:36

Announcement: I'm releasing my Neuro-AI online course about the conceptual landscape of neuroscience and AI as they jointly try to explain intelligence. Next week I'll post 3 short free videos introducing the course. The videos and course will only be available 11/17 - 11/20. For access, sign up here.

Support the show to get full episodes and join the Discord community.

Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, each trying to control its output with respect to internally generated reference signals. He was inspired by control theory, but points out that most control theory applied to biology is flawed in not recognizing that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are externally supplied... by the experimenter.

Yin lab at Duke.
Twitter: @HenryYin19.
Related papers:
The Crisis in Neuroscience.
Restoring Purpose in Behavior.
Achieving natural behavior in a robot using neurally inspired hierarchical perceptual control.

Timestamps
0:00 - Intro
5:40 - Kuhnian crises
9:32 - Control theory and cybernetics
17:23 - How much of brain is control system?
20:33 - Higher order control representation
23:18 - Prediction and control theory
27:36 - The way forward
31:52 - Compatibility with mental representation
38:29 - Teleology
45:53 - The right number of subjects
51:30 - Continuous measurement
57:06 - Artificial intelligence and control theory
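
As a rough picture of the hierarchical-control idea (my own toy example, not Yin's model), the sketch below nests two control loops: the higher loop acts by setting the lower loop's reference signal, and each loop only works to cancel the error between its internally held reference and what it perceives, even under a constant external disturbance.

```python
# Toy two-level perceptual control hierarchy (my illustration, not Yin's model).

def simulate(steps=200, gain_low=0.5, gain_high=0.1):
    position = 0.0     # the controlled perception (e.g., sensed limb position)
    high_ref = 10.0    # internally generated goal at the higher level
    low_ref = 0.0      # lower loop's reference, set from above, not by an experimenter
    disturbance = 2.0  # constant external push the loops must resist
    for _ in range(steps):
        # higher loop: compares its perception to its internal reference and
        # acts by adjusting the lower loop's reference signal
        low_ref += gain_high * (high_ref - position)
        # lower loop: cancels the error between its reference and its perception
        action = gain_low * (low_ref - position)
        position += action + 0.1 * disturbance  # the world integrates action + push
    return position

print(round(simulate(), 2))  # settles at the internal goal (10.0) despite the push
```

Nothing outside the system ever tells it where to go; the goal lives inside the loops, which is the point Henry presses against standard input-output framings.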

 BI 118 Johannes Jäger: Beyond Networks | File Type: audio/mpeg | Duration: 01:36:08

Support the show to get full episodes and join the Discord community.

Johannes (Yogi) is a freelance philosopher, researcher, and educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Living Systems. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating between genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from Kevin Mitchell.

Yogi's website and blog: Untethered in the Platonic Realm.
Twitter: @yoginho.
His YouTube course: Beyond Networks: The Evolution of Living Systems.
Kevin Mitchell's previous episode: BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness.

Timestamps
0:00 - Intro
4:10 - Yogi's background
11:00 - Beyond Networks - limits of dynamical systems models
16:53 - Kevin Mitchell question
20:12 - Process metaphysics
26:13 - Agency in evolution
40:37 - Agent-environment interaction, open-endedness
45:30 - AI and agency
55:40 - Life and intelligence
59:08 - Deep learning and neuroscience
1:03:21 - Mental autonomy
1:06:10 - William Wimsatt's biopsychological thicket
1:11:23 - Limitations of mechanistic dynamic explanation
1:18:53 - Synthesis versus multi-perspectivism
1:30:31 - Specialization versus generalization

 BI 117 Anil Seth: Being You | File Type: audio/mpeg | Duration: 01:32:09

Support the show to get full episodes and join the Discord community.

Anil and I discuss a range of topics from his book, Being You: A New Science of Consciousness. Anil lays out his framework for explaining consciousness, which is embedded in what he calls the "real problem" of consciousness. You know the "hard problem," David Chalmers' term for our enduring difficulty in explaining why we have subjective awareness at all instead of being unfeeling, unexperiencing, machine-like organisms. Anil's "real problem" aims to explain, predict, and control the phenomenal properties of consciousness, and his hope is that, by doing so, the hard problem of consciousness will dissolve, much like the mystery of explaining life dissolved with lots of good science. Anil's account of perceptual consciousness, like seeing red, is that it's rooted in predicting our incoming sensory data. His account of our sense of self is that it's rooted in predicting our bodily states in order to control them. We talk about that and a lot of other topics from the book, like consciousness as "controlled hallucinations," free will, psychedelics, complexity and emergence, and the relation between life, intelligence, and consciousness. Plus, Anil answers a handful of questions from Megan Peters and Steve Fleming, both previous Brain Inspired guests.

Anil's website.
Twitter: @anilkseth.
Anil's book: Being You: A New Science of Consciousness.
Megan's previous episode: BI 073 Megan Peters: Consciousness and Metacognition.
Steve's previous episodes: BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness; BI 107 Steve Fleming: Know Thyself.

Timestamps
0:00 - Intro
6:32 - Megan Peters Q: Communicating Consciousness
15:58 - Human vs. animal consciousness
19:12 - Being You: A New Science of Consciousness
20:55 - Megan Peters Q: Will the hard problem go away?
30:55 - Steve Fleming Q: Contents of consciousness
41:01 - Megan Peters Q: Phenomenal character vs. content
43:46 - Megan Peters Q: Lempels of complexity
52:00 - Complex systems and emergence
55:53 - Psychedelics
1:06:04 - Free will
1:19:10 - Consciousness vs. life vs. intelligence
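
To give "perception as prediction of incoming sensory data" a concrete shape, here is a deliberately tiny sketch (my own illustration, not code or a model from the book): an internal estimate is revised by prediction errors until it predicts the noisy sensory stream.

```python
# Toy prediction-error loop (my illustration; not from Being You).
import numpy as np

rng = np.random.default_rng(1)
true_cause = 3.0           # hidden state of the world generating sensations
belief = 0.0               # the brain's current best guess about that cause
learning_rate = 0.1

for _ in range(100):
    sensation = true_cause + rng.normal(scale=0.5)  # noisy sensory sample
    prediction_error = sensation - belief           # mismatch with the prediction
    belief += learning_rate * prediction_error      # revise the guess toward the data

print(round(belief, 2))    # hovers near 3.0: the percept is the brain's best
                           # running prediction, not a passive readout
```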

 BI 116 Michael W. Cole: Empirical Neural Networks | File Type: audio/mpeg | Duration: 01:31:20

Support the show to get full episodes and join the Discord community.

Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains: the idea is to train or optimize the model to perform a task, then compare the model's properties with brain properties. Mike's approach is different in at least two ways. One, he builds the architecture of his models using connectivity data from fMRI recordings. Two, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign the weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), or network coding models. We walk through his approach and what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from Kanaka Rajan, Kendrick Kay, and Patryk Laurent.

The Cole Neurocognition lab.
Twitter: @TheColeLab.
Related papers:
Discovering the Computational Relevance of Brain Network Organization.
Constructing neural network models from brain data reveals representational transformation underlying adaptive behavior.
Kendrick Kay's previous episode: BI 026 Kendrick Kay: A Model By Any Other Name.
Kanaka Rajan's previous episode: BI 054 Kanaka Rajan: How Do We Switch Behaviors?

Timestamps
0:00 - Intro
4:58 - Cognitive control
7:44 - Rapid Instructed Task Learning and Flexible Hub Theory
15:53 - Patryk Laurent question: free will
26:21 - Kendrick Kay question: fMRI limitations
31:55 - Empirically-estimated neural networks (ENNs)
40:51 - ENNs vs. deep learning
45:30 - Clinical relevance of ENNs
47:32 - Kanaka Rajan question: a proposed collaboration
56:38 - Advantage of modeling multiple regions
1:05:30 - How ENNs work
1:12:48 - How ENNs might benefit artificial intelligence
1:19:04 - The need for causality
1:24:38 - Importance of luck and serendipity
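
To make the contrast with trained networks concrete, here is a minimal sketch of the general recipe described above; it is my own simplification, not Cole's code or pipeline, and the "fMRI" data are simulated rather than empirical.

```python
# Toy "empirical network" recipe (my simplification, not Cole's pipeline).
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_timepoints = 10, 500

# stand-in for resting-state fMRI time series (regions x time); real ENNs
# use empirical recordings and more careful connectivity estimators
rest = rng.normal(size=(n_regions, n_timepoints))
rest[3] += 0.8 * rest[1]              # plant a dependency between two "regions"

W = np.corrcoef(rest)                 # functional connectivity as the weights
np.fill_diagonal(W, 0.0)              # no self-connections

# propagate a task-evoked activation one step through the untrained network
activity = np.zeros(n_regions)
activity[1] = 1.0                     # stimulate region 1
predicted = np.tanh(W @ activity)
print(np.round(predicted, 2))         # region 3 responds via its estimated weight
```

The point of the design is that every weight is measured rather than learned, so the model's predictions can be traced back to properties of the recorded brain.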

 BI 115 Steve Grossberg: Conscious Mind, Resonant Brain | File Type: audio/mpeg | Duration: 01:23:41

Support the show to get full episodes and join the Discord community.

Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from György Buzsáki, Jay McClelland, and John Krakauer.

Steve's BU website.
Conscious Mind, Resonant Brain: How Each Brain Makes a Mind.
Previous Brain Inspired episode: BI 082 Steve Grossberg: Adaptive Resonance Theory.

Timestamps
0:00 - Intro
2:38 - Conscious Mind, Resonant Brain
11:49 - Theoretical method
15:54 - ART, learning, and consciousness
22:58 - Conscious vs. unconscious resonance
26:56 - György Buzsáki question
30:04 - Remaining mysteries in visual system
35:16 - John Krakauer question
39:12 - Jay McClelland question
51:34 - Any missing principles to explain human cognition?
1:00:16 - Importance of an early good career start
1:06:50 - Has modeling training caught up to experiment training?
1:17:12 - Universal development code
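
For a flavor of the resonance-plus-vigilance idea at the heart of ART, here is a deliberately stripped-down sketch, my own toy that omits nearly all of the real theory's neural dynamics: an input is adopted by the best-matching stored category only if the match clears a vigilance threshold; otherwise a new category is recruited, which is one way a network can learn new patterns without overwriting old knowledge.

```python
# Stripped-down ART-flavored categorizer (toy sketch, not Grossberg's model).
import numpy as np

vigilance = 0.6        # how strict a match must be to count as "resonance"
categories = []        # learned prototype vectors

def present(pattern):
    """Return the index of the category that resonates with the pattern."""
    if categories:
        matches = [np.sum(np.minimum(pattern, p)) / np.sum(pattern)
                   for p in categories]
        best = int(np.argmax(matches))
        if matches[best] >= vigilance:           # resonance: refine the prototype
            categories[best] = np.minimum(pattern, categories[best])
            return best
    categories.append(np.array(pattern, dtype=float))  # mismatch: new category
    return len(categories) - 1

for p in [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]:
    print(present(np.array(p, dtype=float)))     # prints 0, 0, 1
```

The third pattern fails the vigilance test against the existing prototype and gets its own category, so learning it leaves the first category intact, a toy version of the stability-plasticity balance ART was built to explain.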

 BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind | File Type: audio/mpeg | Duration: 01:38:07

Support the show to get full episodes and join the Discord community.

Mark and Mazviita discuss the philosophy and science of mind, and how to think about computation with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to the mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more.

Mark's website.
Mazviita's University of Edinburgh page.
Twitter (Mark): @msprevak.
Mazviita's previous Brain Inspired episode: BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality.
The related book we discuss: The Routledge Handbook of the Computational Mind (2018), Mark Sprevak and Matteo Colombo (Eds.).

Timestamps
0:00 - Intro
5:26 - Philosophy contributing to mind science
15:45 - Trend toward hyperspecialization
21:38 - Practice-focused philosophy of science
30:42 - Computationalism
33:05 - Philosophy of mind: identity theory, functionalism
38:18 - Computations as descriptions
41:27 - Pluralism and perspectivalism
54:18 - How much of brain function is computation?
1:02:11 - AI as computationalism
1:13:28 - Naturalizing representations
1:30:08 - Are you doing it right?

 BI 113 David Barack and John Krakauer: Two Views On Cognition | File Type: audio/mpeg | Duration: 01:30:38

Support the show to get full episodes and join the Discord community.

David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue that the recent population-based dynamical systems approach is a promising route to understanding the brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David's perspectives as a practicing neuroscientist and philosopher.

David's webpage.
John's lab.
Twitter: David: @DLBarack; John: @blamlab.
Paper: Two Views on the Cognitive Brain.
John's previous episodes:
BI 025 John Krakauer: Understanding Cognition
BI 077 David and John Krakauer: Part 1
BI 078 David and John Krakauer: Part 2

Timestamps
0:00 - Intro
3:13 - David's philosophy and neuroscience experience
20:01 - Renaissance person
24:36 - John's medical training
31:58 - Two Views on the Cognitive Brain
44:18 - Representation
49:37 - Studying populations of neurons
1:05:17 - What counts as representation
1:18:49 - Does this approach matter for AI?
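
As a rough picture of what the population-based dynamical systems view means in practice (my own simulated toy, not the paper's analyses), the sketch below generates population activity from two latent oscillators and shows that a simple PCA recovers the low-dimensional trajectory that this view treats as the object of explanation, rather than any single neuron's response.

```python
# Toy population-dynamics analysis (simulated data; not the paper's methods).
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_times = 100, 300
t = np.linspace(0, 4 * np.pi, n_times)

# population activity driven by two latent oscillators plus private noise
latents = np.stack([np.sin(t), np.cos(t)])            # (2, time) hidden factors
mixing = rng.normal(size=(n_neurons, 2))              # latents -> neurons
rates = mixing @ latents + 0.1 * rng.normal(size=(n_neurons, n_times))

# PCA (via SVD) recovers the low-dimensional trajectory the population traces
centered = rates - rates.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
print(np.round(S[:4] ** 2 / np.sum(S ** 2), 3))       # ~all variance in 2 dims
```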

 BI ViDA Panel Discussion: Deep RL and Dopamine | File Type: audio/mpeg | Duration: 00:57:25
 BI 112 Ali Mohebi and Ben Engelhard: The Many Faces of Dopamine | File Type: audio/mpeg | Duration: 01:13:56
 BI NMA 06: Advancing Neuro Deep Learning Panel | File Type: audio/mpeg | Duration: 01:20:32
 BI NMA 05: NLP and Generative Models Panel | File Type: audio/mpeg | Duration: 01:23:50
 BI NMA 04: Deep Learning Basics Panel | File Type: audio/mpeg | Duration: 00:59:21
 BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness | File Type: audio/mpeg | Duration: 01:38:04

Erik, Kevin, and I discuss... well, a lot of things. Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot). Kevin's book Innate - How the Wiring of Our Brains Shapes Who We Are describes the messy process of getting from DNA, traversing epigenetics and development, to our personalities. We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies or for consciousness, how information theory is lending insight into emergent phenomena, and the role of agency with respect to what counts as intelligence.

Kevin's website.
Erik's website.
Twitter: @WiringtheBrain (Kevin); @erikphoel (Erik).
Books:
INNATE - How the Wiring of Our Brains Shapes Who We Are
The Revelations
Papers (Erik):
Falsification and consciousness.
The emergence of informative higher scales in complex networks.
Emergence as the conversion of information: A unifying theory.

Timestamps
0:00 - Intro
3:28 - The Revelations - Erik's novel
15:15 - Innate - Kevin's book
22:56 - Cycle of progress
29:05 - Brains for movement or consciousness?
46:46 - Freud's influence
59:18 - Theories of consciousness
1:02:02 - Meaning and emergence
1:05:50 - Reduction in neuroscience
1:23:03 - Micro and macro - emergence
1:29:35 - Agency and intelligence
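
For readers curious about the information-theory-and-emergence thread, here is a minimal sketch of effective information (EI), the measure at the center of Erik's causal-emergence papers listed above. The implementation is my own reading of the published definition, and the two example systems are invented purely for illustration.

```python
# Effective information of a Markov transition matrix (my own minimal sketch).
import numpy as np

def effective_information(T):
    """EI in bits: intervene with a uniform distribution over states and
    measure the mutual information between cause and effect, which works out
    to the average KL divergence of each row of T from the mean row."""
    T = np.asarray(T, dtype=float)
    mean_effect = T.mean(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(T > 0, np.log2(T / mean_effect), 0.0)
    return float((T * logs).sum(axis=1).mean())

# a maximally noisy 4-state system has EI = 0 bits, while a deterministic
# 2-state system has EI = 1 bit: the kind of contrast behind the claim that a
# coarse-grained macro description can carry more causal information
micro = np.full((4, 4), 0.25)   # every state leads everywhere equally
macro = np.eye(2)               # states map one-to-one to successors
print(effective_information(micro), effective_information(macro))
```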

 BI NMA 03: Stochastic Processes Panel | File Type: audio/mpeg | Duration: 01:00:48

This is the third in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.

Panelists:
Yael Niv: @yael_niv.
Konrad Kording: @KordingLab. Previous BI episodes: BI 027 Ioana Marinescu & Konrad Kording: Causality in Quasi-Experiments; BI 014 Konrad Kording: Regulators, Mount Up!
Sam Gershman: @gershbrain. Previous BI episodes: BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?; BI 028 Sam Gershman: Free Energy Principle & Human Machines.
Tim Behrens: @behrenstim. Previous BI episodes: BI 035 Tim Behrens: Abstracting & Generalizing Knowledge, & Human Replay; BI 024 Tim Behrens: Cognitive Maps.

The other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, and regularization.
Fifth panel, about "doing more with fewer parameters": convnets, RNNs, attention & transformers, generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.
