Talking Machines
Summary: Talking Machines is your window into the world of machine learning. Your hosts, Katherine Gorman and Ryan Adams, bring you clear conversations with experts in the field, insightful discussions of industry news, and useful answers to your questions. Machine learning is changing the questions we can ask of the world around us; here we explore how to ask the best questions and what to do with the answers.
- Artist: Tote Bag Productions
Podcasts:
In episode two of season three, Neil takes us through the basics of dropout, we chat about the definition of inference (it's more about context than you think!), and we hear an interview with Jennifer Chayes of Microsoft.
Talking Machines is entering its third season and going through some changes. Ryan is moving on, and in his place Neil Lawrence of Amazon is taking over as co-host. We say thank you and goodbye to Ryan with an interview about his work.
In episode seventeen of season two, we get an introduction to Min Hashing, talk with Frank Wood, the creator of ANGLICAN, about probabilistic programming and his new company, INVREA, and take a listener question about how to choose an architecture when using a neural network.
In episode sixteen of season two, we get an introduction to Restricted Boltzmann Machines, we take a listener question about tuning hyperparameters, plus we talk with Eric Lander of the Broad Institute.
In episode fifteen of season two, we talk about Hamiltonian Monte Carlo, we take a listener question about unbalanced data, plus we talk with Doug Eck of Google’s Magenta project.
In episode fourteen of season two, we talk about Perturb-and-MAP, we take a listener question about classic artificial intelligence ideas being used in modern machine learning, plus we talk with Jake Abernethy of the University of Michigan about municipal data and his work on the Flint water crisis.
In episode thirteen of season two, we talk about t-Distributed Stochastic Neighbor Embedding (t-SNE), we take a listener question about statistical physics, plus we talk with Hal Daume of the University of Maryland (who is a great follow on Twitter).
In episode twelve of season two, we talk about generative adversarial networks, we take a listener question about using machine learning to improve or create products, plus we talk with Iain Murray of the University of Edinburgh.
In episode eleven of season two, we talk about the machine learning toolkit Spark, we take a listener question about the differences between NIPS and ICML conferences, plus we talk with Sinead Williamson of The University of Texas at Austin.
In episode ten of season two, we talk about Computational Learning Theory and Probably Approximately Correct Learning, originated by Professor Leslie Valiant of SEAS at Harvard, we take a listener question about generative systems, plus we talk with Aviv Regev, Chair of the Faculty and Director of the Klarman Cell Observatory and the Cell Circuits Program at the Broad Institute.
In episode nine of season two, we talk about sparse coding and take a listener question about the next big demonstration for AI after AlphaGo. Plus we talk with Clement Farabet about MADBITS and the work he’s doing at Twitter Cortex.
Recently Professor David MacKay passed away. We’ll spend this episode talking about his extensive body of work and its impact. We’ll also talk with Philipp Hennig, a research group leader at the Max Planck Institute for Intelligent Systems, who trained in Professor MacKay’s group (with Ryan).
Episode seven of season two is a little different from our usual episodes: Ryan and Katherine just returned from a conference where they got to talk with Neil Lawrence of the University of Sheffield about some of the larger issues surrounding machine learning and society. They discuss anthropomorphic intelligence, data ownership, and the ability to empathize. The entire episode is given over to this conversation in hopes that it will spur more discussion of these important issues as the field continues to grow.
In episode six of season two, we talk about how to build software for machine learning (and what the roadblocks are), we take a listener question about how to start exploring a new dataset, plus, we talk with Rob Tibshirani of Stanford University.
In episode five of season two, Ryan walks us through variational inference, and we put some listener questions about Go and how to play it to Andy Okun, president of the American Go Association (who is in Seoul, South Korea, watching the Lee Sedol/AlphaGo games). Plus we hear from Suchi Saria of Johns Hopkins about applying machine learning to understanding health care data.