Brain Inspired

Summary: Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress toward understanding intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

Podcasts:

 BI 046 David Poeppel: From Sounds to Meanings | File Type: audio/mpeg | Duration: 01:37:12

David and I talk about his work to understand how sound waves floating in the air get transformed into meaningful concepts in your mind. He studies speech processing and production, language, music, and everything in between, approaching his work with steadfast principles that help frame what it means to understand something scientifically. We discuss many of the hurdles to understanding how our brains work and to making real progress in science, plus a ton more.

Show Notes:
Visit David's lab website at NYU. He's also a director at the Max Planck Institute for Empirical Aesthetics.
Follow him on Twitter: @davidpoeppel.
For a related episode (philosophically), you might revisit my discussion with John Krakauer.
Some of the papers we discuss or mention (lots more on his website):
The cortical organization of speech processing.
The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language.
A good talk: What Language Processing in the Brain Tells Us About the Structure of the Mind.
NLP Transformer model: How do Transformers Work in NLP? A Guide to the Latest State-of-the-Art Models.
Attention Is All You Need.

 BI 045 Raia Hadsell: Robotics and Deep RL | File Type: audio/mpeg | Duration: 01:16:52

Raia and I discuss her work at DeepMind figuring out how to build robots that use deep reinforcement learning to do things like navigate cities and generalize intelligent behaviors across different tasks. We also talk about challenges specific to embodied AI (robots), how much of it takes inspiration from neuroscience, and lots more.

Show Notes:
Raia's website.
Follow her on Twitter: @RaiaHadsell.
Papers relevant to our discussion:
Learning to Navigate in Cities without a Map.
Overcoming catastrophic forgetting in neural networks (a toy sketch of its core idea follows this entry).
Progressive neural networks.
A few talks:
Deep reinforcement learning in complex environments.
Progressive Nets & Transfer.
The new neuro-AI conference she's starting with Tony Zador and Blake Richards: From Neuroscience to Artificially Intelligent Systems (NAISys).
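Since elastic weight consolidation (EWC) comes up via the catastrophic-forgetting paper above, here is a minimal sketch of its core penalty, assuming toy scalar quadratic losses in place of real network objectives; the Fisher values, learning rate, and task optima below are made-up illustrative numbers, not the paper's experiments.

    import numpy as np

    # Toy sketch of the EWC idea: while learning task B, each parameter is
    # pulled back toward its task-A value in proportion to a Fisher-information
    # weight estimating how much task A depends on that parameter.
    theta_a = np.array([1.0, -2.0])      # parameters after training on task A
    fisher = np.array([5.0, 0.1])        # importance of each parameter for task A
    theta_b_opt = np.array([-1.0, 3.0])  # what task B alone would prefer
    lam = 1.0                            # how strongly to protect task A

    theta = theta_a.copy()
    lr = 0.05
    for _ in range(500):
        grad_b = theta - theta_b_opt                     # gradient of the toy task-B loss
        grad_penalty = lam * fisher * (theta - theta_a)  # gradient of the EWC penalty
        theta -= lr * (grad_b + grad_penalty)

    # The high-Fisher parameter stays near its task-A value (1.0); the
    # low-Fisher parameter moves most of the way to task B's optimum (3.0).
    print(theta)  # approximately [0.67, 2.55]

The per-parameter fixed point, theta = (theta_B + lam * F * theta_A) / (1 + lam * F), makes the trade-off explicit: a large Fisher weight pins a parameter to the old task while unimportant parameters remain free to learn the new one.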

 BI 044 Talia Konkle: Turning Vision On Its Side | File Type: audio/mpeg | Duration: 01:15:33

Talia and I discuss her work on how our visual system is organized topographically and divides into three main categories: big inanimate things, small inanimate things, and animals. Her work is unique in that it focuses not only on the classic hierarchical processing of vision (though she does that, too), but on what kinds of things are represented along that hierarchy. She also uses deep networks to learn more about the visual system. We also talk about her keynote talk at the Cognitive Computational Neuroscience conference and plenty more.

Show Notes:
Talia's lab website.
Follow her on Twitter: @talia_konkle.
Check out the Cognitive Computational Neuroscience conference, where she'll give a keynote address.
Papers we discuss/reference:
Early work on the tripartite organization: Tripartite Organization of the Ventral Stream by Animacy and Object Size.
A more recent update, with the texforms we discuss and a comparison to the deep CNNs used to model the ventral visual stream: Mid-level visual features underlie the high-level categorical organization of the ventral stream.
The article Talia references about an elegant solution to an old problem in computer science.

 BI 043 Anna Schapiro: Learning in Hippocampus and Cortex | File Type: audio/mpeg | Duration: 01:30:30

How does knowledge in the world get into our brains and become integrated with the rest of our knowledge and memories? Anna and I talk about the complementary learning systems (CLS) theory, introduced in 1995, which posits a fast episodic hippocampal learning system and a slower statistical cortical learning system (a toy sketch of that division of labor follows this entry). We then discuss her work that advances and adds missing pieces to the CLS framework and explores how sleep and sleep cycles contribute to the process. We also discuss how her work might contribute to AI systems by using multiple types of memory buffers, a little about being a woman in science, and how it's going with her brand new lab.

Show Notes:
Anna's Penn Computational Cognitive Neuroscience Lab.
Follow Anna on Twitter: @annaschapiro.
Papers we discuss:
The original Complementary Learning Systems paper: Complementary Learning Systems Theory and Its Recent Update.
Anna's work on CLS and the hippocampus:
The hippocampus is necessary for the consolidation of a task that does not require the hippocampus for initial learning.
Complementary learning systems within the hippocampus: a neural network modelling approach to reconciling episodic memory with statistical learning.
Examples of her work on sleep:
Active and effective replay: systems consolidation reconsidered again.
Switching between internal and external modes: A multiscale learning principle.
Sleep Benefits Memory for Semantic Category Structure While Preserving Exemplar-Specific Information.
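To make the division of labor concrete, here is a toy sketch of the CLS intuition: a fast system that stores individual episodes in one shot alongside a slow system that inches toward the statistical regularity shared across episodes. Everything here (sizes, learning rate, treating the mean as the "statistical structure") is an illustrative assumption, not the theory's actual neural network models.

    import numpy as np

    # Fast episodic system vs. slow statistical system, in caricature.
    rng = np.random.default_rng(0)
    episodes = rng.normal(loc=2.0, size=(200, 8))  # experiences sharing a regularity (mean ~2)

    hippocampus = []       # fast system: one-shot, verbatim episodic storage
    cortex = np.zeros(8)   # slow system: gradual, interleaved statistical learning
    lr = 0.01              # small learning rate = slow cortical consolidation

    for ep in episodes:
        hippocampus.append(ep.copy())   # each episode stored immediately and exactly
        cortex += lr * (ep - cortex)    # each episode nudges the cortex only slightly

    print(len(hippocampus))   # 200 distinct, individually recallable episodes
    print(cortex.round(2))    # drifting toward the shared structure (~2.0)

In the theory, replay (for example during sleep) is what lets the slow system keep training on the fast system's stored episodes after the experiences themselves are over, which is part of what Anna's sleep work examines.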

 BI 042 Brad Aimone: Brains at the Funeral of Moore’s Law | File Type: audio/mpeg | Duration: 00:59:35

This is part 2 of my conversation with Brad (listen to part 1 here). We discuss how Moore's law is on its last legs, and his ideas for how neuroscience, in particular neural algorithms, may help computing continue to scale in a post-Moore's-law world. We also discuss neuromorphics in general, and more.

Show Notes:
Brad's homepage.
Follow Brad on Twitter: @jbimaknee.
The paper we discuss: Neural Algorithms and Computing Beyond Moore's Law.
Check out the Neuro Inspired Computational Elements (NICE) workshop - lots of great talks and panel discussions.

 BI 041 Brad Aimone: Neurogenesis and Spiking in Deep Nets | File Type: audio/mpeg | Duration: 01:06:33

In this first part of our discussion, Brad and I discuss the state of neuromorphics and its relation to neuroscience and artificial intelligence. He describes his work adding new neurons to deep learning networks during training, called neurogenesis deep learning, inspired by how neurogenesis in the dentate gyrus of the hippocampus helps us learn new things while keeping previous memories intact (a minimal sketch of growing a layer this way follows this entry). We also talk about his method to transform deep learning networks into spiking neural networks so they can run on neuromorphic hardware, and the neuromorphics workshop he puts on every year, the Neuro Inspired Computational Elements (NICE) workshop.

Show Notes:
Brad's homepage.
Follow Brad on Twitter: @jbimaknee.
The papers we discuss:
Computational Influence of Adult Neurogenesis on Memory Encoding.
Neurogenesis Deep Learning.
Training deep neural networks for binary communication with the Whetstone method. And here's the arXiv version.
Check out the Neuro Inspired Computational Elements (NICE) workshop - lots of great talks and panel discussions.
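As referenced above, here is a minimal sketch of one simple way to grow a hidden layer during training, in the spirit of neurogenesis deep learning: the new neuron gets random incoming weights but zero outgoing weights, so the network's input-output function is unchanged at the moment of insertion and the added capacity is only recruited by subsequent training. This zero-initialization scheme is an assumption for illustration, not necessarily the paper's exact procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 4, 6, 2
    W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))   # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))  # hidden -> output weights

    def forward(x, W1, W2):
        return W2 @ np.tanh(W1 @ x)

    def add_neuron(W1, W2, rng):
        # New row of random incoming weights, new column of zero outgoing weights.
        w_in = rng.normal(scale=0.5, size=(1, W1.shape[1]))
        w_out = np.zeros((W2.shape[0], 1))
        return np.vstack([W1, w_in]), np.hstack([W2, w_out])

    x = rng.normal(size=n_in)
    before = forward(x, W1, W2)
    W1, W2 = add_neuron(W1, W2, rng)
    after = forward(x, W1, W2)
    assert np.allclose(before, after)   # function preserved; capacity increased

The assert at the end checks the function-preservation property that makes this kind of growth safe to apply in the middle of training, echoing the idea that new dentate gyrus neurons add capacity without overwriting existing memories.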

 BI 040 Nando de Freitas: Enlightenment, Compassion, Survival | File Type: audio/mpeg | Duration: 01:02:19

Show Notes:
Nando's CIFAR page.
Follow Nando on Twitter: @NandoDF.
He's giving a keynote address at the Cognitive Computational Neuroscience Meeting 2020.
Check out his famous machine learning lectures on YouTube.
Papers we (more allude to than) discuss:
Neural Programmer-Interpreters.
Learning to learn by gradient descent by gradient descent.
Dueling Network Architectures for Deep Reinforcement Learning.
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions.
One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL.

 BI 039 Anne Churchland: Decisions, Lapses, and Fidgets | File Type: audio/mpeg | Duration: 01:19:09

Show Notes:
Check out Anne's lab website.
Follow her on Twitter: @anne_churchland.
Anne's List, the list of female systems neuroscientists to invite as speakers.
The papers we discuss:
Single-trial neural dynamics are dominated by richly varied movements.
Lapses in perceptual judgments reflect exploration.
Complexity vs Stimulus-Response Compatibility vs Stimulus-response ethological validity.
Perceptual Decision-Making: A Field in the Midst of a Transformation.

 BI 038 Máté Lengyel: Probabilistic Perception and Learning | File Type: audio/mpeg | Duration: 01:18:37

Show Notes:
Máté's Cambridge website. He's part of the Computational Learning and Memory Group there.
Here's his webpage at Central European University.
A review to introduce his subsequent work: Statistically optimal perception and learning: from behavior to neural representations.
Related recent talks:
Bayesian models of perception, cognition and learning - CCCN 2017.
Sampling: coding, dynamics, and computation in the cortex (Cosyne 2018).

 BI 037 Nathaniel Daw: Thinking the Right Thoughts | File Type: audio/mpeg | Duration: 01:29:52

Show Notes:
Nathaniel will deliver a keynote address at the upcoming CCN conference.
Check out his lab website.
Follow him on Twitter: @nathanieldaw.
The paper we discuss: Prioritized memory access explains planning and hippocampal replay.
Or see a related talk: Rational planning using prioritized experience replay.

 BI 036 Roshan Cools: Cognitive Control and Dopamine | File Type: audio/mpeg | Duration: 01:11:08

Show Notes:
Roshan will deliver a keynote address at the upcoming CCN conference.
Roshan's Motivational and Cognitive Control lab.
Follow her on Twitter: @CoolsControl.
Her TED Talk on Trusting Science.
Papers related to the research we discuss: The costs and benefits of brain dopamine for cognitive control.
Or see her variety of related works.

 BI 035 Tim Behrens: Abstracting & Generalizing Knowledge, & Human Replay | File Type: audio/mpeg | Duration: 01:11:03

Show Notes:
This is the first in a series of episodes in which I interview keynote speakers at the upcoming Cognitive Computational Neuroscience conference in September in Berlin. Thomas Naselaris summarizes the origins and vision of the CCN.
Tim's neuroscience homepage.
The papers we discuss:
Generalisation of structural knowledge in the hippocampal-entorhinal system (referred to in the podcast as "The Tolman-Eichenbaum Machine").
Human replay spontaneously reorganizes experience. (In press at Cell; below is an abstract for it from COSYNE 2018.)
Inference in replay through factorized representations.

 BI 034 Tony Zador: How DNA and Evolution Can Inform AI | File Type: audio/mpeg | Duration: 01:18:36

Show Notes:
Tony's lab site, where there are links to his auditory decision-making work and the connectome work we discuss. Here are a few talks online about that:
Corticostriatal circuits underlying auditory decisions.
Can we upload our mind to the cloud?
Follow Tony on Twitter: @TonyZador.
The paper we discuss: A Critique of Pure Learning: What Artificial Neural Networks Can Learn from Animal Brains.
Conferences we talk about:
COSYNE conference.
Neural Information and Coding workshops.
Neural Information Processing Systems (NeurIPS) conference.

 BI 033 Federico Turkheimer: Weak Versus Strong Emergence | File Type: audio/mpeg | Duration: 01:06:05

Show Notes:
Federico's website.
Federico's papers we discuss:
Conflicting emergences. Weak vs. strong emergence for the modelling of brain function.
From homeostasis to behavior: balanced activity in an exploration of embodied dynamic environmental-neural interaction.
Free Energy Principle.
Integrated Information Theory.
The Tononi paper about Integrated Information Theory and its relation to emergence: Quantifying causal emergence shows that macro can beat micro.
Default mode as large-scale oscillation: The brain's code and its canonical computational motifs. From sensory cortex to the default mode network: A multi-scale model of brain function in health and disease.

 BI 032 Rafal Bogacz: Back-Propagation in Brains | File Type: audio/mpeg | Duration: 01:15:44

Show Notes:
Visit Rafal's lab website.
Rafal's papers we discuss:
Theories of Error Back-Propagation in the Brain.
An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity (a minimal sketch of the predictive coding idea follows this entry).
A tutorial on the free-energy framework for modelling perception and learning.
Check out Episode 9 with Blake Richards about how apical dendrites could do back-prop.
The early Randall O'Reilly paper describing biologically plausible back-propagation: O'Reilly, R.C. (1996). Biologically Plausible Error-driven Learning using Local Activation Differences: The Generalized Recirculation Algorithm. Neural Computation, 8, 895-938.
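Here is the promised sketch of the predictive coding idea: weights change via a purely local, Hebbian-looking rule (prediction error times presynaptic activity) after an inference phase in which activities settle by gradient descent on a prediction-error energy. This is a stripped-down, single-layer generative variant for illustration only; the supervised architecture in the paper above, which approximates backpropagation, is more elaborate.

    import numpy as np

    rng = np.random.default_rng(0)
    n_latent, n_obs = 3, 5
    W = rng.normal(scale=0.1, size=(n_obs, n_latent))  # generative (top-down) weights

    f = np.tanh                              # activation function
    df = lambda u: 1.0 - np.tanh(u) ** 2     # its derivative

    y = rng.normal(size=n_obs)   # an observation clamped at the bottom layer
    x = np.zeros(n_latent)       # latent activity, inferred by relaxation

    lr_x, lr_w = 0.1, 0.05
    for _ in range(100):         # inference phase: settle activities on the
        eps = y - W @ f(x)       # energy E = 0.5*||y - W f(x)||^2 + 0.5*||x||^2
        x += lr_x * (df(x) * (W.T @ eps) - x)   # local error feedback minus decay

    # Learning phase: a local Hebbian-style update, prediction error times
    # presynaptic rate, with no explicit backward pass.
    eps = y - W @ f(x)
    W += lr_w * np.outer(eps, f(x))

The point of contact with back-propagation is that both updates descend the same kind of error; predictive coding computes the needed error signals with local circuitry and settling dynamics rather than a separate backward sweep.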
