Brain Inspired

Summary: Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress toward understanding intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

Podcasts:

 BI 102 Mark Humphries: What Is It Like To Be A Spike? | File Type: audio/mpeg | Duration: 01:32:20

Mark and I discuss his book, The Spike: An Epic Journey Through the Brain in 2.1 Seconds. It chronicles how a series of action potentials fires through the brain during a couple of seconds of someone's life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes, how those spikes get processed in our visual system and eventually transformed into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms to explain how neurons seem to fire so randomly), the big mysteries we currently face (like why do so many neurons do so little?), and some of the main theories to explain those mysteries (we're prediction machines!). A fun read and discussion. This is Mark's second time on the podcast - he was on episode 4 in the early days, talking in more depth about some of the work we discuss in this episode! The Humphries Lab. Twitter: @markdhumphries. Book: The Spike: An Epic Journey Through the Brain in 2.1 Seconds. Related paper: A spiral attractor network drives rhythmic locomotion. Timestamps: 0:00 - Intro 3:25 - Writing a book 15:37 - Mark's main interest 19:41 - Future explanation of brain/mind 27:00 - Stochasticity and excitation/inhibition balance 36:56 - Dendritic computation for network dynamics 39:10 - Do details matter for AI? 44:06 - Spike failure 51:12 - Dark neurons 1:07:57 - Intrinsic spontaneous activity 1:16:16 - Best scientific moment 1:23:58 - Failure 1:28:45 - Advice

 BI 101 Steve Potter: Motivating Brains In and Out of Dishes | File Type: audio/mpeg | Duration: 01:45:22

Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers and students who want to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length in his previous episode). He relentlessly tested and tweaked his teaching methods, incorporating constant feedback from the students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing Wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book. In the first half of the episode we discuss diverse neuroscience and AI topics, like brain organoids, mind-uploading, synaptic plasticity, and more. Then we discuss many of the stories and lessons from his book, which I recommend for teachers, mentors, and life-long students who want to ensure they're optimizing their own learning. Potter Lab. Twitter: @stevempotter. The Book: How to Motivate Your Students to Love Learning. The glial cell activity movie. Timestamps: 0:00 - Intro 6:38 - Brain organoids 18:48 - Glial cell plasticity 24:50 - Whole brain emulation 35:28 - Industry vs. academia 45:32 - Intro to book: How To Motivate Your Students To Love Learning 48:29 - Steve's childhood influences 57:21 - Developing one's own intrinsic motivation 1:02:30 - Real-world assignments 1:08:00 - Keys to motivation 1:11:50 - Peer pressure 1:21:16 - Autonomy 1:25:38 - Wikipedia real-world assignment 1:33:12 - Relation to running a lab

 BI 100.6 Special: Do We Have the Right Vocabulary and Concepts? | File Type: audio/mpeg | Duration: 00:50:03

We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you've enjoyed the collections as well. If you're wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired's magnificent Patreon supporters (thanks guys!!!!). The final question I sent to previous guests: Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not? Timestamps: 0:00 - Intro 5:04 - Andrew Saxe 7:04 - Thomas Naselaris 7:46 - John Krakauer 9:03 - Federico Turkheimer 11:57 - Steve Potter 13:31 - David Krakauer 17:22 - Dean Buonomano 20:28 - Konrad Kording 22:00 - Uri Hasson 23:15 - Rodrigo Quian Quiroga 24:41 - Jim DiCarlo 25:26 - Marcel van Gerven 28:02 - Mazviita Chirimuuta 29:27 - Brad Love 31:23 - Patrick Mayo 32:30 - György Buzsáki 37:07 - Pieter Roelfsema 37:26 - David Poeppel 40:22 - Paul Cisek 44:52 - Talia Konkle 47:03 - Steve Grossberg

 BI 100.4 Special: What Ideas Are Holding Us Back? | File Type: audio/mpeg | Duration: 01:04:26

In the 4th installment of our 100th episode celebration, previous guests responded to the question: What ideas, assumptions, or terms do you think are holding back neuroscience/AI, and why? As usual, the responses are varied and wonderful! Timestamps: 0:00 - Intro 6:41 - Pieter Roelfsema 7:52 - Grace Lindsay 10:23 - Marcel van Gerven 11:38 - Andrew Saxe 14:05 - Jane Wang 16:50 - Thomas Naselaris 18:14 - Steve Potter 19:18 - Kendrick Kay 22:17 - Blake Richards 27:52 - Jay McClelland 30:13 - Jim DiCarlo 31:17 - Talia Konkle 33:27 - Uri Hasson 35:37 - Wolfgang Maass 38:48 - Paul Cisek 40:41 - Patrick Mayo 41:51 - Konrad Kording 43:22 - David Poeppel 44:22 - Brad Love 46:47 - Rodrigo Quian Quiroga 47:36 - Steve Grossberg 48:47 - Mark Humphries 52:35 - John Krakauer 55:13 - György Buzsáki 59:50 - Stefan Leijnen 1:02:18 - Nathaniel Daw

 BI 100.3 Special: Can We Scale Up to AGI with Current Tech? | File Type: audio/mpeg | Duration: 01:08:43

Part 3 in our 100th episode celebration. Previous guests answered the question: Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3): Do you think the current trend of scaling compute can lead to human-level AGI? If not, what's missing? It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that there are differing opinions on what's missing. Timestamps: 0:00 - Intro 3:56 - Wolfgang Maass 5:34 - Paul Humphreys 9:16 - Chris Eliasmith 12:52 - Andrew Saxe 16:25 - Mazviita Chirimuuta 18:11 - Steve Potter 19:21 - Blake Richards 22:33 - Paul Cisek 26:24 - Brad Love 29:12 - Jay McClelland 34:20 - Megan Peters 37:00 - Dean Buonomano 39:48 - Talia Konkle 40:36 - Steve Grossberg 42:40 - Nathaniel Daw 44:02 - Marcel van Gerven 45:28 - Kanaka Rajan 48:25 - John Krakauer 51:05 - Rodrigo Quian Quiroga 53:03 - Grace Lindsay 55:13 - Konrad Kording 57:30 - Jeff Hawkins 1:02:12 - Uri Hasson 1:04:08 - Jess Hamrick 1:06:20 - Thomas Naselaris

 BI 100.2 Special: What Are the Biggest Challenges and Disagreements? | File Type: audio/mpeg | Duration: 01:25:00

In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answers is itself revealing, and highlights how many interesting problems there are to work on. Timestamps: 0:00 - Intro 7:10 - Rodrigo Quian Quiroga 8:33 - Mazviita Chirimuuta 9:15 - Chris Eliasmith 12:50 - Jim DiCarlo 13:23 - Paul Cisek 16:42 - Nathaniel Daw 17:58 - Jessica Hamrick 19:07 - Russ Poldrack 20:47 - Pieter Roelfsema 22:21 - Konrad Kording 25:16 - Matt Smith 27:55 - Rafal Bogacz 29:17 - John Krakauer 30:47 - Marcel van Gerven 31:49 - György Buzsáki 35:38 - Thomas Naselaris 36:55 - Steve Grossberg 48:32 - David Poeppel 49:24 - Patrick Mayo 50:31 - Stefan Leijnen 54:24 - David Krakauer 58:13 - Wolfgang Maass 59:13 - Uri Hasson 59:50 - Steve Potter 1:01:50 - Talia Konkle 1:04:30 - Matt Botvinick 1:06:36 - Brad Love 1:09:46 - Jon Brennan 1:19:31 - Grace Lindsay 1:22:28 - Andrew Saxe

 BI 100.1 Special: What Has Improved Your Career or Well-being? | File Type: audio/mpeg | Duration: 00:42:32

Brain Inspired turns 100 (episodes) today! To celebrate, my Patreon supporters helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I've collected all their responses into separate little episodes, one for each question. Starting with a light-hearted (but quite valuable) one, this episode has responses to the question, "In the last five years, what new belief, behavior, or habit has most improved your career or well-being?" See below for links to each previous guest. And away we go... Timestamps: 0:00 - Intro 6:13 - David Krakauer 8:50 - David Poeppel 9:32 - Jay McClelland 11:03 - Patrick Mayo 11:45 - Marcel van Gerven 12:11 - Blake Richards 12:25 - John Krakauer 14:22 - Nicole Rust 15:26 - Megan Peters 17:03 - Andrew Saxe 18:11 - Federico Turkheimer 20:03 - Rodrigo Quian Quiroga 22:03 - Thomas Naselaris 23:09 - Steve Potter 24:37 - Brad Love 27:18 - Steve Grossberg 29:04 - Talia Konkle 29:58 - Paul Cisek 32:28 - Kanaka Rajan 34:33 - Grace Lindsay 35:40 - Konrad Kording 36:30 - Mark Humphries

 BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness | File Type: audio/mpeg | Duration: 01:46:35

Hakwan, Steve, and I discuss many issues around the scientific study of consciousness. Steve and Hakwan focus on higher order theories (HOTs) of consciousness, related to metacognition. So we discuss HOTs in particular and their relation to other approaches and theories, the idea of approaching consciousness as a computational problem to be tackled with computational modeling, the cultural, social, and career aspects of choosing to study something as elusive and controversial as consciousness, two of the models they're working on now to account for various properties of conscious experience, and, of course, the prospects of consciousness in AI. For more on metacognition and awareness, check out episode 73 with Megan Peters. Hakwan's lab: Consciousness and Metacognition Lab. Steve's lab: The MetaLab. Twitter: @hakwanlau; @smfleming. Hakwan's brief Aeon article: Is consciousness a battle between your beliefs and perceptions? Related papers: An Informal Internet Survey on the Current State of Consciousness Science. Opportunities and challenges for a maturing science of consciousness. What is consciousness, and could machines have it? Understanding the higher-order approach to consciousness. Awareness as inference in a higher-order state space. (Steve's Bayesian predictive generative model) Consciousness, Metacognition, & Perceptual Reality Monitoring. (Hakwan's reality-monitoring model a la generative adversarial networks) Timestamps: 0:00 - Intro 7:25 - Steve's upcoming book 8:40 - Challenges to study consciousness 15:50 - Gurus and backscratchers 23:58 - Will the problem of consciousness disappear? 27:52 - Will an explanation feel intuitive? 29:54 - What do you want to be true? 38:35 - Lucid dreaming 40:55 - Higher order theories 50:13 - Reality monitoring model of consciousness 1:00:15 - Higher order state space model of consciousness 1:05:50 - Comparing their models 1:10:47 - Machine consciousness 1:15:30 - Nature of first order representations 1:18:20 - Consciousness prior (Yoshua Bengio) 1:20:20 - Function of consciousness 1:31:57 - Legacy 1:40:55 - Current projects

 BI 098 Brian Christian: The Alignment Problem | File Type: audio/mpeg | Duration: 01:32:38

Brian and I discuss a range of topics related to his latest book, The Alignment Problem: Machine Learning and Human Values. The alignment problem asks how we can build AI that does what we want it to do, as opposed to building AI that will compromise our own values by accomplishing tasks that may be harmful or dangerous to us. Using some of the stories Brian relates in the book, we talk about: The history of machine learning and how we got to this point; Some methods researchers are creating to understand what's being represented in neural nets and how they generate their output; Some modern proposed solutions to the alignment problem, like programming the machines to learn our preferences so they can help achieve those preferences - an idea called inverse reinforcement learning; The thorny issue of accurately knowing our own values - if we get those wrong, will machines also get them wrong? Links: Brian's website. Twitter: @brianchristian. The Alignment Problem: Machine Learning and Human Values. Related paper: Norbert Wiener from 1960: Some Moral and Technical Consequences of Automation. Timestamps: 4:22 - Increased work on AI ethics 8:59 - The Alignment Problem overview 12:36 - Stories as important for intelligence 16:50 - What is the alignment problem 17:37 - Who works on the alignment problem? 25:22 - AI ethics degree? 29:03 - Human values 31:33 - AI alignment and evolution 37:10 - Knowing our own values? 46:27 - What have we learned about ourselves? 58:51 - Interestingness 1:00:53 - Inverse RL for value alignment 1:04:50 - Current progress 1:10:08 - Developmental psychology 1:17:36 - Models as the danger 1:25:08 - How worried are the experts?

 BI 097 Omri Barak and David Sussillo: Dynamics and Structure | File Type: audio/mpeg | Duration: 01:23:57

Omri, David, and I discuss using recurrent neural network models (RNNs) to understand brains and brain function. Omri and David both use dynamical systems theory (DST) to describe how RNNs solve tasks, and to compare the dynamical structure/landscape/skeleton of RNNs with real neural population recordings. We talk about how their thoughts have evolved since their 2013 Opening the Black Box paper, which began these lines of research and thinking. Some of the other topics we discuss: The idea of computation via dynamics, which sees computation as a process of evolving neural activity in a state space; Whether DST offers a description of mental function (that is, something beyond brain function, closer to the psychological level); The difference between classical approaches to modeling brains and the machine learning approach; The concept of universality - that the variety of artificial RNNs and natural RNNs (brains) adhere to some similar dynamical structure despite differences in the computations they perform; How learning is influenced by the dynamics in an ongoing and ever-changing manner, and how learning (a process) is distinct from optimization (a final trained state). David was on episode 5, for a more introductory episode on dynamics, RNNs, and brains. Barak Lab. Twitter: @SussilloDavid. The papers we discuss or mention: Sussillo, D. & Barak, O. (2013). Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks. Computation Through Neural Population Dynamics. Implementing inductive bias for different navigation tasks through diverse RNN attractors. Dynamics of random recurrent networks with correlated low-rank structure. Quality of internal representation shapes learning performance in feedback neural networks. Feigenbaum's universality constant original paper: Feigenbaum, M. J. (1976). "Universality in complex discrete dynamics," Los Alamos Theoretical Division Annual Report 1975-1976. Talks: Universality and individuality in neural dynamics across large populations of recurrent networks. World Wide Theoretical Neuroscience Seminar: Omri Barak, January 6, 2021. Timestamps: 0:00 - Intro 5:41 - Best scientific moment 9:37 - Why do you do what you do? 13:21 - Computation via dynamics 19:12 - Evolution of thinking about RNNs and brains 26:22 - RNNs vs. minds 31:43 - Classical computational modeling vs. machine learning modeling approach 35:46 - What are models good for? 43:08 - Ecological task validity with respect to using RNNs as models 46:27 - Optimization vs. learning 49:11 - Universality 1:00:47 - Solutions dictated by tasks 1:04:51 - Multiple solutions to the same task 1:11:43 - Direct fit (Uri Hasson) 1:19:09 - Thinking about the bigger picture
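If you want a concrete feel for the fixed-point analysis behind the Opening the Black Box paper listed above, here is a minimal illustrative sketch: treat an RNN as a dynamical system and descend on its "speed," q(x) = 1/2 ||F(x) - x||^2, to locate fixed and slow points. This is not code from the episode or the paper; the network (a small random, untrained tanh RNN), the parameters, and the function names are assumptions made purely for illustration, and a real analysis would use a trained network, autodiff, and a proper optimizer before linearizing around the points found.

import numpy as np

rng = np.random.default_rng(0)
N = 50                                        # number of hidden units (arbitrary)
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))   # random recurrent weights (untrained)
b = rng.normal(0, 0.1, N)                     # small bias so the fixed point is nontrivial

def step(x):
    # One step of the autonomous RNN dynamics: x_{t+1} = tanh(W x_t + b)
    return np.tanh(W @ x + b)

def speed(x):
    # q(x) = 1/2 * ||F(x) - x||^2, zero exactly at a fixed point of the dynamics
    dx = step(x) - x
    return 0.5 * dx @ dx

def find_slow_point(x0, lr=0.1, steps=1000, eps=1e-5):
    # Gradient descent on q(x), using a central-difference numerical gradient
    # for simplicity (autodiff would be used in practice)
    x = x0.copy()
    basis = np.eye(len(x))
    for _ in range(steps):
        grad = np.array([(speed(x + eps * e) - speed(x - eps * e)) / (2 * eps)
                         for e in basis])
        x -= lr * grad
    return x

# Start from a few random states and report how "slow" the discovered points are
for _ in range(3):
    x_star = find_slow_point(rng.normal(0, 0.5, N))
    print("q(x*) =", speed(x_star))

In the actual line of work, points with very small q(x) are then linearized (via the Jacobian of F) to read off how the trained network implements its computation; the sketch above only shows the search step.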

 BI 096 Keisuke Fukuda and Josh Cosman: Forking Paths | File Type: audio/mpeg | Duration: 01:34:10

K, Josh, and I were postdocs together in Jeff Schall's and Geoff Woodman's labs. K and Josh had backgrounds in psychology and were getting their first experience with neurophysiology, recording single neuron activity in awake behaving primates. This episode is a discussion surrounding their reflections and perspectives on neuroscience and psychology, given their backgrounds and experience (we reference episode 84 with György Buzsáki and David Poeppel). We also talk about their divergent paths - K stayed in academia and runs an EEG lab studying human decision-making and memory, and Josh left academia and has worked for three different pharmaceutical and tech companies. So this episode doesn't get into gritty science questions, but is a light discussion about the state of neuroscience, psychology, and AI, and reflections on academia and industry, life in lab, and plenty more. The Fukuda Lab. Josh's website. Twitter: @KeisukeFukuda4. Timestamps: 0:00 - Intro 4:30 - K intro 5:30 - Josh intro 10:16 - Academia vs. industry 16:01 - Concern with legacy 19:57 - Best scientific moment 24:15 - Experiencing neuroscience as a psychologist 27:20 - Neuroscience as a tool 30:38 - Brain/mind divide 33:27 - Shallow vs. deep knowledge in academia and industry 36:05 - Autonomy in industry 42:20 - Is this a turning point in neuroscience? 46:54 - Deep learning revolution 49:34 - Deep nets to understand brains 54:54 - Psychology vs. neuroscience 1:06:42 - Is language sufficient? 1:11:33 - Human-level AI 1:13:53 - How will history view our era of neuroscience? 1:23:28 - What would you have done differently? 1:26:46 - Something you wish you knew

 BI 095 Chris Summerfield and Sam Gershman: Neuro for AI? | File Type: audio/mpeg | Duration: 01:25:28

It's generally agreed that machine learning and AI provide neuroscience with tools for analysis and theoretical principles to test in brains, but there is less agreement about what neuroscience can provide AI. Should computer scientists and engineers care about how brains compute, or will it just slow them down, for example? Chris, Sam, and I discuss how neuroscience might contribute to AI moving forward, considering the past and present. This discussion also leads into related topics, like the role of prediction versus understanding, AGI, explainable AI, value alignment, the fundamental conundrum that humans specify the ultimate values of the tasks AI will solve, and more. Plus, a question from previous guest Andrew Saxe. Also, check out Sam's previous appearance on the podcast. Chris's lab: Human Information Processing Lab. Sam's lab: Computational Cognitive Neuroscience Lab. Twitter: @gershbrain; @summerfieldlab. Papers we discuss, mention, or that are related: If deep learning is the answer, then what is the question? Neuroscience-Inspired Artificial Intelligence. Building Machines that Learn and Think Like People. Timestamps: 0:00 - Intro 5:00 - Good ol' days 13:50 - AI for neuro, neuro for AI 24:25 - Intellectual diversity in AI 28:40 - Role of philosophy 30:20 - Operationalization and benchmarks 36:07 - Prediction vs. understanding 42:48 - Role of humans in the loop 46:20 - Value alignment 51:08 - Andrew Saxe question 53:16 - Explainable AI 58:55 - Generalization 1:01:09 - What has AI revealed about us? 1:09:38 - Neuro for AI 1:20:30 - Concluding remarks

 BI 094 Alison Gopnik: Child-Inspired AI | File Type: audio/mpeg | Duration: 01:19:13

Alison and I discuss her work to accelerate learning, and thus improve AI, by studying how children learn, as Alan Turing suggested in his famous 1950 paper. Children learn via imitation, by building abstract causal models, and through active learning with a high exploration/exploitation ratio. We also discuss child consciousness, psychedelics, the concept of life history, the role of grandparents and elders, and lots more. Alison's website. Cognitive Development and Learning Lab. Twitter: @AlisonGopnik. Related paper: Childhood as a solution to explore-exploit tensions. The Aeon article about grandparents, children, and evolution: Vulnerable Yet Vital. Books: The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children. The Scientist in the Crib: What Early Learning Tells Us About the Mind. The Philosophical Baby: What Children's Minds Tell Us About Truth, Love, and the Meaning of Life. Take-home points: Children learn by imitation, and not just unthinking imitation. They pay attention to and evaluate the intentions of others and judge whether a person seems to be a reliable source of information. That is, they learn by sophisticated, socially constrained imitation. Children build abstract causal models of the world. This allows them to simulate potential outcomes and test their actions against those simulations, accelerating learning. Children keep their foot on the exploration pedal, actively learning by exploring a wide spectrum of actions to determine what works. As we age, our exploratory cognition decreases, and we begin to exploit more of what we've learned. Timestamps: 0:00 - Intro 4:40 - State of the field 13:30 - Importance of learning 20:12 - Turing's suggestion 22:49 - Patience for one's own ideas 28:53 - Learning via imitation 31:57 - Learning abstract causal models 41:42 - Life history 43:22 - Learning via exploration 56:19 - Explore-exploit dichotomy 58:32 - Synaptic pruning 1:00:19 - Breakthrough research in careers 1:04:31 - Role of elders 1:09:08 - Child consciousness 1:11:41 - Psychedelics as child-like brain 1:16:00 - Build consciousness into AI?

 BI 093 Dileep George: Inference in Brain Microcircuits | File Type: audio/mpeg | Duration: 01:06:31

Dileep and I discuss his theoretical account of how the thalamus and cortex work together to implement visual inference. We talked previously about his Recursive Cortical Network (RCN) approach to visual inference, which is a probabilistic graphical model that can solve hard problems like CAPTCHAs, and more recently we talked about using his RCNs with cloned units to account for cognitive maps related to the hippocampus. On this episode, we walk through how RCNs can map onto thalamo-cortical circuits so a given cortical column can signal whether it believes some concept or feature is present in the world, based on bottom-up incoming sensory evidence, top-down attention, and lateral input from related features. We also briefly compare this bio-RCN version with Randy O'Reilly's Deep Predictive Learning account of thalamo-cortical circuitry. Vicarious website - Dileep's AGI robotics company. Twitter: @dileeplearning. The papers we discuss or mention: A detailed mathematical theory of thalamic and cortical microcircuits based on inference in a generative vision model. From CAPTCHA to Commonsense: How Brain Can Teach Us About Artificial Intelligence. Probabilistic graphical models. Hierarchical temporal memory. Timestamps: 0:00 - Intro 5:18 - Levels of abstraction 7:54 - AGI vs. AHI vs. AUI 12:18 - Ideas and failures in startups 16:51 - Thalamic cortical circuitry computation 22:07 - Recursive cortical networks 23:34 - bio-RCN 27:48 - Cortical column as binary random variable 33:37 - Clonal neuron roles 39:23 - Processing cascade 41:10 - Thalamus 47:18 - Attention as explaining away 50:51 - Comparison with O'Reilly's predictive coding framework 55:39 - Subjective contour effect 1:01:20 - Necker cube

 BI 092 Russ Poldrack: Cognitive Ontologies | File Type: audio/mpeg | Duration: 01:42:12

Russ and I discuss cognitive ontologies - the "parts" of the mind and their relations - as an ongoing dilemma of how to map onto each other what we know about brains and what we know about minds. We talk about whether we have the right ontology now, how he uses both top-down and data-driven approaches to analyze and refine current ontologies, and how all this has affected his own thinking about minds. We also discuss some of the current meta-science issues and challenges in neuroscience and AI, and Russ answers guest questions from Kendrick Kay and David Poeppel. Russ’s website. Poldrack Lab. Stanford Center for Reproducible Neuroscience. Twitter: @russpoldrack. Book: The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts. The papers we discuss or mention: Atlases of cognition with large-scale human brain mapping. Mapping Mental Function to Brain Structure: How Can Cognitive Neuroimaging Succeed? From Brain Maps to Cognitive Ontologies: Informatics and the Search for Mental Structure. Uncovering the structure of self-regulation through data-driven ontology discovery. Talks: Reproducibility: NeuroHackademy: Russell Poldrack - Reproducibility in fMRI: What is the problem? Cognitive Ontology: Cognitive Ontologies, from Top to Bottom. A good series of talks about cognitive ontologies: Online Seminar Series: Problem of Cognitive Ontology. Some take-home points: Our folk psychological cognitive ontology hasn't changed much since early Greek philosophy, and especially since William James wrote about attention, consciousness, and so on. Using encoding models, we can predict brain responses pretty well based on what task a subject is performing or what "cognitive function" a subject is engaging, at least to a coarse approximation. Using a data-driven approach has potential to help determine mental structure, but important human decisions must still be made regarding how exactly to divide up the various "parts" of the mind. Timestamps: 0:00 - Introduction 5:59 - Meta-science issues 19:00 - Kendrick Kay question 23:00 - State of the field 30:06 - fMRI for understanding minds 35:13 - Computational mind 42:10 - Cognitive ontology 45:17 - Cognitive Atlas 52:05 - David Poeppel question 57:00 - Does ontology matter? 59:18 - Data-driven ontology 1:12:29 - Dynamical systems approach 1:16:25 - György Buzsáki's inside-out approach 1:22:26 - Ontology for AI 1:27:39 - Deep learning hype
