The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Summary: Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. It is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.

Podcasts:

 Reinforcement Learning for Personalization at Spotify with Tony Jebara - #609 | File Type: audio/mpeg | Duration: 2487

Today we continue our NeurIPS 2022 series joined by Tony Jebara, VP of engineering and head of machine learning at Spotify. In our conversation with Tony, we discuss his role at Spotify, how the company’s use of machine learning has evolved over the last few years, and the business value that machine learning, specifically recommendations, holds at the company. We dig into his talk on the intersection of reinforcement learning and lifetime value (LTV) at Spotify, which explores the application of offline RL for user experience personalization. We discuss the various papers presented in the talk, and how they all map toward determining and increasing a user’s LTV. The complete show notes for this episode can be found at twimlai.com/go/609.

 Will ChatGPT take my job? - #608 | File Type: audio/mpeg | Duration: 2248

More than any system before it, ChatGPT has tapped into our enduring fascination with artificial intelligence, raising in a more concrete and present way important questions and fears about what AI is capable of and how it will impact us as humans. One of the concerns most frequently voiced, whether sincerely or cloaked in jest, is how ChatGPT, or systems like it, will impact our livelihoods. In other words, “will ChatGPT put me out of a job???” In this episode of the podcast, I seek to answer this very question by conducting an interview in which ChatGPT asks all the questions. (The questions are answered by a second ChatGPT, as in my own recent interview with it, Exploring Large Language Models with ChatGPT.) In addition to the straight dialogue, I include my own commentary along the way and conclude with a discussion of the results of the experiment, that is, whether I think ChatGPT will be taking my job as your host anytime soon. Ultimately, though, I hope you’ll be the judge of that and share your thoughts on how ChatGPT did at my job via a comment below or on social media.

 Geospatial Machine Learning at AWS with Kumar Chellapilla - #607 | File Type: audio/mpeg | Duration: 2206

Today we continue our re:Invent 2022 series joined by Kumar Chellapilla, a general manager of ML and AI Services at AWS. We had the opportunity to speak with Kumar after AWS announced the recent addition of geospatial data to the SageMaker platform. In our conversation, we explore Kumar’s role as the GM for a diverse array of SageMaker services, what has changed in the geospatial data landscape over the last 10 years, and why Amazon decided now was the right time to invest in geospatial data. We discuss the challenges of accessing and working with this data and the pain points they’re trying to solve. Finally, Kumar walks us through a few customer use cases, describes how this addition will make users more effective than they currently are, and shares his thoughts on the future of this space over the next 2-5 years, including the potential intersection of geospatial data and stable diffusion/generative models. The complete show notes for this episode can be found at twimlai.com/go/607.

 Real-Time ML Workflows at Capital One with Disha Singla - #606 | File Type: audio/mpeg | Duration: 2616

Today we’re joined by Disha Singla, a senior director of machine learning engineering at Capital One. In our conversation with Disha, we explore her role as the leader of the Data Insights team at Capital One, where they’ve been tasked with creating reusable libraries, components, and workflows to make ML usable broadly across the company, as well as a platform to make it all accessible and to drive meaningful insights. We discuss the makeup of her team, the types of interactions and requests they receive from their customers (data scientists), productionized use cases from the platform, and their efforts to transition from batch to real-time deployment. Disha also shares her thoughts on the ROI of machine learning and getting buy-in from executives, how she sees machine learning evolving at the company over the next 10 years, and much more! The complete show notes for this episode can be found at twimlai.com/go/606.

 Weakly Supervised Causal Representation Learning with Johann Brehmer - #605 | File Type: audio/mpeg | Duration: 2804

Today we’re excited to kick off our coverage of the 2022 NeurIPS conference with Johann Brehmer, a research scientist at Qualcomm AI Research in Amsterdam. We begin our conversation discussing some of the broader problems that causality will help us solve, before turning our focus to Johann’s paper Weakly Supervised Causal Representation Learning, which seeks to prove that high-level causal representations are identifiable in weakly supervised settings. We also discuss a few other papers that the team at Qualcomm presented, including neural topological ordering for computation graphs, as well as some of the demos they showcased, which we’ll link to on the show notes page. The complete show notes for this episode can be found at twimlai.com/go/605.

 Stable Diffusion & Generative AI with Emad Mostaque - #604 | File Type: audio/mpeg | Duration: 2571

Today we’re excited to kick off our 2022 AWS re:Invent series with a conversation with Emad Mostaque, founder and CEO of Stability.ai. Stability.ai is a very popular name in the generative AI space at the moment, having taken the internet by storm with the release of its Stable Diffusion model just a few months ago. In our conversation with Emad, we discuss the story behind Stability’s inception, the model’s speed and scale, and the connection between Stable Diffusion and programming. We explore some of the spaces that Emad anticipates being disrupted by this technology, his thoughts on the open-source vs API debate, how they’re dealing with issues of user safety and artist attribution, and of course, what infrastructure they’re using to stand the model up. The complete show notes for this episode can be found at https://twimlai.com/go/604.

 Exploring Large Language Models with ChatGPT - #603 | File Type: audio/mpeg | Duration: 2190

Today we're joined by ChatGPT, the latest and coolest large language model developed by OpenAI. In our conversation with ChatGPT, we discuss the background and capabilities of large language models, the potential applications of these models, and some of the technical challenges and open questions in the field. We also explore the role of supervised learning in creating ChatGPT, and the use of PPO in training the model. Finally, we discuss the risks of misuse of large language models, and the best resources for learning more about these models and their applications. Join us for a fascinating conversation with ChatGPT, and learn more about the exciting world of large language models. The complete show notes for this episode can be found at https://twimlai.com/go/603.

 Accelerating Intelligence with AI-Generating Algorithms with Jeff Clune - #602 | File Type: audio/mpeg | Duration: 3401

Are AI-generating algorithms the path to artificial general intelligence (AGI)? Today we’re joined by Jeff Clune, an associate professor of computer science at the University of British Columbia and a faculty member at the Vector Institute. In our conversation with Jeff, we discuss the AI field’s broad, ambitious goal of artificial general intelligence, where we are on the path to achieving it, and his opinion on what we should be doing to get there, specifically, focusing on AI-generating algorithms. With the goal of creating open-ended algorithms that can learn forever, Jeff shares his three pillars of an AI-GA: meta-learning architectures, meta-learning algorithms, and auto-generated learning environments. Finally, we discuss the inherent safety issues with these learning algorithms, Jeff’s thoughts on how to combat them, and what the not-so-distant future holds for this area of research. The complete show notes for this episode can be found at twimlai.com/go/602.

 Programmatic Labeling and Data Scaling for Autonomous Commercial Aviation with Cedric Cocaud - #601 | File Type: audio/mpeg | Duration: 3280

Today we’re joined by Cedric Cocaud, the chief engineer of the Wayfinder Group at Acubed, the innovation center for aircraft manufacturer Airbus. In our conversation with Cedric, we explore some of the technical challenges of innovation in the aircraft space, including autonomy. Cedric’s work on Project Vahana, Acubed’s foray into air taxis, attempted to leverage work in the self-driving car industry to develop fully autonomous planes. We discuss some of the algorithms being developed for this work, the data collection process, and Cedric’s thoughts on using synthetic data for these tasks. We also discuss the challenges of labeling the data, including programmatic and automated labeling, and much more.

 Engineering Production NLP Systems at T-Mobile with Heather Nolis - #600 | File Type: audio/mpeg | Duration: 2633

Today we’re joined by Heather Nolis, a principal machine learning engineer at T-Mobile. In our conversation with Heather, we explore her machine learning journey at T-Mobile, including their initial proof-of-concept project, whose goal was putting their first real-time deep learning model into production. We discuss the use case, which aimed to build a customer intent model that would pull relevant information about a customer during conversations with customer support. This process has now become widely known as blank assist. We also discuss the decision to use supervised learning to solve this problem and the challenges they faced when developing a taxonomy. Finally, we explore the idea of using small models vs uber-large models, the hardware being used to stand up their infrastructure, and how Heather thinks about the age-old question of build vs buy.

 Sim2Real and Optimus, the Humanoid Robot with Ken Goldberg - #599 | File Type: audio/mpeg | Duration: 2831

Today we’re joined by return guest Ken Goldberg, a professor at UC Berkeley and the chief scientist at Ambi Robotics. It’s been a few years since our initial conversation with Ken, so we spent a bit of time talking through the progress that has been made in robotics in the time that has passed. We discuss Ken’s recent work, including the paper Autonomously Untangling Long Cables, which won Best Systems Paper at the RSS conference earlier this year, covering the complexity of the problem and why it is classified as a systems challenge, as well as the advancements in hardware that made solving this problem possible. We also explore Ken’s thoughts on the push towards simulation by research entities and large tech companies, and the potential for causal modeling to find its way into robotics. Finally, we discuss the recent showcase of Optimus, Tesla and Elon Musk’s “humanoid” robot, and how far we are from it being a viable piece of technology. The complete show notes for this episode can be found at twimlai.com/go/599.

 The Evolution of the NLP Landscape with Oren Etzioni - #598 | File Type: audio/mpeg | Duration: 3195

Today, friend of the show and esteemed guest host John Bohannon is back with another great interview, this time joined by Oren Etzioni, former CEO of the Allen Institute for AI, where he is currently an advisor. In our conversation with Oren, we discuss his philosophy as a researcher and how that has manifested in his pivot to institution builder. We also explore his thoughts on the current landscape of NLP, including the emergence of LLMs and the hype being built up around AI systems from folks like Elon Musk. Finally, we explore some of the research coming out of AI2, including Semantic Scholar, an AI-powered research tool analogous to arXiv, and the somewhat controversial Delphi project, a research prototype designed to model people’s moral judgments on a variety of everyday situations.

 Live from TWIMLcon! The Great MLOps Debate: End-to-End ML Platforms vs Specialized Tools - #597 | File Type: audio/mpeg | Duration: 2879

Over the last few years, it’s been established that your ML team needs at least some basic tooling in order to be effective, providing support for various aspects of the machine learning workflow, from data acquisition and management, to model development and optimization, to model deployment and monitoring. But how do you get there? Many tools available off the shelf, both commercial and open source, can help. At the extremes, these tools fall into one of two buckets: end-to-end platforms that try to provide support for many aspects of the ML lifecycle, and specialized tools that offer deep functionality in a particular domain or area. At TWIMLcon: AI Platforms 2022, our panelists debated the merits of these approaches in The Great MLOps Debate: End-to-End ML Platforms vs Specialized Tools.

 Live from TWIMLcon! You're not Facebook. Architecting MLOps for B2B Use Cases with Jacopo Tagliabue - #596 | File Type: audio/mpeg | Duration: 2982

Much of the way we talk and think about MLOps comes from the perspective of large consumer internet companies like Facebook or Google. If you work at a FAANG company, these approaches might work well for you. But what about if you work at one of the many small, B2B companies that stand to benefit through the use of machine learning? How should you be thinking about MLOps and the ML lifecycle in that case? In this live podcast interview from TWIMLcon: AI Platforms 2022, Sam Charrington explores these questions with Jacopo Tagliabue, whose perspectives and contributions on scaling down MLOps have served to make the field more accessible and relevant to a wider array of practitioners.

 Building Foundational ML Platforms with Kubernetes and Kubeflow with Ali Rodell - #595 | File Type: audio/mpeg | Duration: 2604

Today we’re joined by Ali Rodell, a senior director of machine learning engineering at Capital One. In our conversation with Ali, we explore his role as the head of model development platforms at Capital One, including how his 25+ years in software development have shaped his view on building platforms and the evolution of the platforms space over the last 10 years. We discuss the importance of a healthy open source tooling ecosystem, Capital One’s use of open source capabilities like Kubeflow and Kubernetes to build out platforms, and some of the challenges that come along with modifying and customizing these tools to work for him and his teams. Finally, we explore the range of user personas that need to be accounted for when making decisions about tooling, supporting things like Jupyter notebooks and other low-level tools, and how that can be challenging in a highly regulated environment like the financial industry. The complete show notes for this episode can be found at twimlai.com/go/595.
