Future of Life Institute Podcast

Summary: FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges. Among our objectives is to inspire discussion and the sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.


Podcasts:

 FLI Podcast: Identity, Information & the Nature of Reality with Anthony Aguirre | File Type: audio/mpeg | Duration: 01:45:20

Our perceptions of reality are based on the physics of interactions ranging from millimeters to miles in scale. But when it comes to the very small and the very massive, our intuitions often fail us. Given the extent to which modern physics challenges our understanding of the world around us, how wrong could we be about the fundamental nature of reality? And given our failure to anticipate the counterintuitive nature of the universe, how accurate are our intuitions about metaphysical and personal identity? Just how seriously should we take our everyday experiences of the world? Anthony Aguirre, cosmologist and FLI co-founder, returns for a second episode to offer his perspective on these complex questions. This conversation explores the view that reality fundamentally consists of information and examines its implications for our understanding of existence and identity. Topics discussed in this episode include: - Views on the nature of reality - Quantum mechanics and the implications of quantum uncertainty - Identity, information and description - Continuum of objectivity/subjectivity You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/31/fli-podcast-identity-information-the-nature-of-reality-with-anthony-aguirre/ Timestamps: 3:35 - General history of views on fundamental reality 9:45 - Quantum uncertainty and observation as interaction 24:43 - The universe as constituted of information 29:26 - What is information and what does the view of reality as information have to say about objects and identity 37:14 - Identity as on a continuum of objectivity and subjectivity 46:09 - What makes something more or less objective? 58:25 - Emergence in physical reality and identity 1:15:35 - Questions about the philosophy of identity in the 21st century 1:27:13 - Differing views on identity changing human desires 1:33:28 - How the reality as information perspective informs questions of identity 1:39:25 - Concluding thoughts This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 AIAP: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson | File Type: audio/mpeg | Duration: 02:03:19

In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of light. There, a similar machine recreates your exact atomic structure and composition using locally available resources. Have you just traveled, Parfit asks, or have you committed suicide? Would you step into this machine? Is the person who emerges on Mars really you? Questions like these –– those that explore the nature of personal identity and challenge our commonly held intuitions about it –– are becoming increasingly important in the face of 21st century technology. Emerging technologies empowered by artificial intelligence will increasingly give us the power to change what it means to be human. AI-enabled bioengineering will allow for human-species divergence via upgrades, and as we arrive at AGI and beyond we may see a world where it is possible to merge with AI directly, upload ourselves, copy and duplicate ourselves arbitrarily, or even manipulate and re-program our sense of identity. Are there ways we can inform and shape human understanding of identity to nudge civilization in the right direction? Topics discussed in this episode include: -Identity from epistemic, ontological, and phenomenological perspectives -Identity formation in biological evolution -Open, closed, and empty individualism -The moral relevance of views on identity -Identity in the world today and on the path to superintelligence and beyond You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/15/identity-and-the-ai-revolution-with-david-pearce-and-andres-gomez-emilsson/ Timestamps: 0:00 - Intro 6:33 - What is identity? 9:52 - Ontological aspects of identity 12:50 - Epistemological and phenomenological aspects of identity 18:21 - Biological evolution of identity 26:23 - Functionality or arbitrariness of identity / whether or not there are right or wrong answers 31:23 - Moral relevance of identity 34:20 - Religion as codifying views on identity 37:50 - Different views on identity 53:16 - The hard problem and the binding problem 56:52 - The problem of causal efficacy, and the palette problem 1:00:12 - Navigating views of identity towards truth 1:08:34 - The relationship between identity and the self model 1:10:43 - The ethical implications of different views on identity 1:21:11 - The consequences of different views on identity on preference weighting 1:26:34 - Identity and AI alignment 1:37:50 - Nationalism and AI alignment 1:42:09 - Cryonics, species divergence, immortality, uploads, and merging 1:50:28 - Future scenarios from Life 3.0 1:58:35 - The role of identity in the AI itself This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark | File Type: audio/mpeg | Duration: 01:00:58

Neither Yuval Noah Harari nor Max Tegmark need much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial intelligence, history, philosophy and anthropology — to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you'll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us. Topics discussed include: -Max and Yuval's views and intuitions about consciousness -How they ground and think about morality -Effective altruism and its cause areas of global health/poverty, animal suffering, and existential risk -The function of myths and stories in human society -How emerging science, technology, and global paradigms challenge the foundations of many of our stories -Technological risks of the 21st century You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/31/on-consciousness-morality-effective-altruism-myth-with-yuval-noah-harari-max-tegmark/ Timestamps: 0:00 Intro 3:14 Grounding morality and the need for a science of consciousness 11:45 The effective altruism community and its main cause areas 13:05 Global health 14:44 Animal suffering and factory farming 17:38 Existential risk and the ethics of the long-term future 23:07 Nuclear war as a neglected global risk 24:45 On the risks of near-term AI and of artificial general intelligence and superintelligence 28:37 On creating new stories for the challenges of the 21st century 32:33 The risks of big data and AI-enabled human hacking and monitoring 47:40 What does it mean to be human and what should we want to want? 52:29 On positive global visions for the future 59:29 Goodbyes and appreciations 01:00:20 Outro and supporting the Future of Life Institute Podcast This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team | File Type: audio/mpeg | Duration: 01:39:02

As 2019 comes to an end and the opportunities of 2020 begin to emerge, it's a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that could lead to the extinction, or the permanent and drastic curtailing of the potential, of Earth-originating intelligent life. While this is important and useful, much has been done at FLI and in the broader world to address these issues in service of the common good. It can be skillful to reflect on this progress to see how far we've come, to develop hope for the future, and to map out our path ahead. This podcast is a special end-of-year episode focused on meeting and introducing the FLI team, discussing what we've accomplished and are working on, and sharing our feelings and reasons for existential hope going into 2020 and beyond. Topics discussed include: -Introductions to the FLI team and our work -Motivations for our projects and existential risk mitigation efforts -The goals and outcomes of our work -Our favorite projects at FLI in 2019 -Optimistic directions for projects in 2020 -Reasons for existential hope going into 2020 and beyond You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/27/existential-hope-in-2020-and-beyond-with-the-fli-team/ Timestamps: 0:00 Intro 1:30 Meeting the Future of Life Institute team 18:30 Motivations for our projects and work at FLI 30:04 What we hope will result from our work at FLI 44:44 Favorite accomplishments of FLI in 2019 01:06:20 Project directions we are most excited about for 2020 01:19:43 Reasons for existential hope in 2020 and beyond 01:38:30 Outro

 AIAP: On DeepMind, AI Safety, and Recursive Reward Modeling with Jan Leike | File Type: audio/mpeg | Duration: 00:58:05

Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind's technical AGI group; each focuses on a different aspect of ensuring advanced AI systems are aligned and beneficial. Jan's journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empirical AI safety research — why empirical safety research is important and how it has led him to his work on recursive reward modeling. We also discuss research directions he's optimistic will lead to safely scalable systems, more facets of his own thinking, and other work being done at DeepMind. Topics discussed in this episode include: -Theoretical and empirical AI safety research -Jan's and DeepMind's approaches to AI safety -Jan's work and thoughts on recursive reward modeling -AI safety benchmarking at DeepMind -The potential modularity of AGI -Comments on the cultural and intellectual differences between the AI safety and mainstream AI communities -Joining the DeepMind safety team You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/16/ai-alignment-podcast-on-deepmind-ai-safety-and-recursive-reward-modeling-with-jan-leike/ Timestamps: 0:00 Intro 2:15 Jan's intellectual journey in computer science to AI safety 7:35 Transitioning from theoretical to empirical research 11:25 Jan's and DeepMind's approach to AI safety 17:23 Recursive reward modeling 29:26 Experimenting with recursive reward modeling 32:42 How recursive reward modeling serves AI safety 34:55 Pessimism about recursive reward modeling 38:35 How this research direction fits in the safety landscape 42:10 Can deep reinforcement learning get us to AGI? 42:50 How modular will AGI be? 44:25 Efforts at DeepMind for AI safety benchmarking 49:30 Differences between the AI safety and mainstream AI communities 55:15 Most exciting piece of empirical safety work in the next 5 years 56:35 Joining the DeepMind safety team

 FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert | File Type: audio/mpeg | Duration: 00:58:39

We could all be more altruistic and effective in our service of others, but what exactly is it that's stopping us? What are the biases and cognitive failures that prevent us from properly acting in service of existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at the University of Oxford's Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psychology and philosophy. This conversation explores the steps that researchers like Stefan are taking to better understand psychology in service of doing the most good we can. Topics discussed include: -The psychology of existential risk, longtermism, effective altruism, and speciesism -Stefan's study "The Psychology of Existential Risks: Moral Judgements about Human Extinction" -Various works and studies Stefan Schubert has co-authored in these spaces -How this enables us to be more altruistic You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/02/the-psychology-of-existential-risk-and-effective-altruism-with-stefan-schubert/ Timestamps: 0:00 Intro 2:31 Stefan's academic and intellectual journey 5:20 How large is this field? 7:49 Why study the psychology of X-risk and EA? 16:54 What does a better understanding of psychology here enable? 21:10 What are the cognitive limitations psychology helps to elucidate? 23:12 Stefan's study "The Psychology of Existential Risks: Moral Judgements about Human Extinction" 34:45 Messaging on existential risk 37:30 Further areas of study 43:29 Speciesism 49:18 Further studies and work by Stefan

 Not Cool Epilogue: A Climate Conversation | File Type: audio/mpeg | Duration: 00:04:39

In this brief epilogue, Ariel reflects on what she's learned during the making of Not Cool, and the actions she'll be taking going forward.

 Not Cool Ep 26: Naomi Oreskes on trusting climate science | File Type: audio/mpeg | Duration: 00:51:13

It’s the Not Cool series finale, and by now we’ve heard from climate scientists, meteorologists, physicists, psychologists, epidemiologists and ecologists. We’ve gotten expert opinions on everything from mitigation and adaptation to security, policy and finance. Today, we’re tackling one final question: why should we trust them? Ariel is joined by Naomi Oreskes, Harvard professor and author of seven books, including the newly released "Why Trust Science?" Naomi lays out her case for why we should listen to experts, how we can identify the best experts in a field, and why we should be open to the idea of more than one type of "scientific method." She also discusses industry-funded science, scientists’ misconceptions about the public, and the role of the media in spreading bad research. Topics discussed include: -Why Trust Science? -5 tenets of reliable science -How to decide which experts to trust -Why non-scientists can't debate science -Industry disinformation -How to communicate science -Fact-value distinction -Why people reject science -Shifting arguments from climate deniers -Individual vs. structural change -State- and city-level policy change

 Not Cool Ep 25: Mario Molina on climate action | File Type: audio/mpeg | Duration: 00:35:10

Most Americans believe in climate change — yet far too few are taking part in climate action. Many aren't even sure what effective climate action should look like. On Not Cool episode 25, Ariel is joined by Mario Molina, Executive Director of Protect Our Winters, a non-profit aimed at increasing climate advocacy within the outdoor sports community. In this interview, Mario looks at climate activism more broadly: he explains where advocacy has fallen short, why it's important to hold corporations responsible before individuals, and what it would look like for the US to be a global leader on climate change. He also discusses the reforms we should be implementing, the hypocrisy allegations sometimes leveled at the climate advocacy community, and the misinformation campaign undertaken by the fossil fuel industry in the '90s. Topics discussed include: -Civic engagement and climate advocacy -Recent climate policy rollbacks -Local vs. global action -Energy and transportation reform -Agricultural reform -Overcoming lack of political will -Creating cultural change -Air travel and hypocrisy allegations -Individual vs. corporate carbon footprints -Collective action -Divestment -The unique influence of the US

 Not Cool Ep 24: Ellen Quigley and Natalie Jones on defunding the fossil fuel industry | File Type: audio/mpeg | Duration: 00:54:24

Defunding the fossil fuel industry is one of the biggest factors in addressing climate change and lowering carbon emissions. But with international financing and powerful lobbyists on their side, fossil fuel companies often seem out of public reach. On Not Cool episode 24, Ariel is joined by Ellen Quigley and Natalie Jones, who explain why that’s not the case, and what you can do — without too much effort — to stand up to them. Ellen and Natalie, both researchers at the University of Cambridge’s Centre for the Study of Existential Risk (CSER), explain what government regulation should look like, how minimal interactions with our banks could lead to fewer fossil fuel investments, and why divestment isn't enough on its own. They also discuss climate justice, Universal Ownership theory, and the international climate regime. Topics discussed include: -Divestment -Universal Ownership theory -Demand side and supply side regulation -Impact investing -Nationally determined contributions -Low greenhouse gas emission development strategies -Just transition -Economic diversification For more on universal ownership: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457205

 AIAP: Machine Ethics and AI Governance with Wendell Wallach | File Type: audio/mpeg | Duration: 01:12:36

Wendell Wallach has been at the forefront of contemporary emerging technology issues for decades now. As an interdisciplinary thinker, he has engaged at the intersections of ethics, governance, AI, bioethics, robotics, and philosophy since the earliest formulations of what we now know as AI alignment were being codified. Wendell began with a broad interest in the ethics of emerging technology and has since become focused on machine ethics and AI governance. This conversation with Wendell explores his intellectual journey and participation in these fields. Topics discussed in this episode include: -Wendell’s intellectual journey in machine ethics and AI governance -The history of machine ethics and alignment considerations -How machine ethics and AI alignment serve to produce beneficial AI -Soft law and hard law for shaping AI governance -Wendell’s and broader efforts for the global governance of AI -Social and political mechanisms for mitigating the risks of AI -Wendell’s forthcoming book You can find the page and transcript here: https://futureoflife.org/2019/11/15/machine-ethics-and-ai-governance-with-wendell-wallach/ Important timestamps: 0:00 Intro 2:50 Wendell's evolution in work and thought 10:45 AI alignment and machine ethics 27:05 Wendell's focus on AI governance 34:04 How much can soft law shape hard law? 37:27 What does hard law consist of? 43:25 Contextualizing the International Congress for the Governance of AI 45:00 How AI governance efforts might fail 58:40 AGI governance 1:05:00 Wendell's forthcoming book

 Not Cool Ep 23: Brian Toon on nuclear winter: the other climate change | File Type: audio/mpeg | Duration: 01:03:02

Though climate change and global warming are often used synonymously, there’s a different kind of climate change that also deserves attention: nuclear winter. A period of extreme global cooling that would likely follow a major nuclear exchange, nuclear winter is as of now — unlike global warming — still avoidable. But as Cold War era treaties break down and new nations gain nuclear capabilities, it's essential that we understand the potential climate impacts of nuclear war. On Not Cool Episode 23, Ariel talks to Brian Toon, one of the five authors of the 1983 paper that first outlined the concept of nuclear winter. Brian discusses the global tensions that could lead to a nuclear exchange, the process by which such an exchange would drastically reduce the temperature of the planet, and the implications of this kind of drastic temperature drop for humanity. He also explains how nuclear weapons have evolved since their invention, why our nuclear arsenal doesn't need an upgrade, and why modern building materials would make nuclear winter worse. Topics discussed include: -Causes and impacts of nuclear winter -History of nuclear weapons development -History of disarmament -Current nuclear arsenals -Mutually assured destruction -Fires and climate -Greenhouse gases vs. aerosols -Black carbon and plastics -India/Pakistan tensions -US/Russia tensions -Unknowns -Global food storage and shortages For more: https://futureoflife.org/2016/10/31/nuclear-winter-robock-toon-podcast/ https://futureoflife.org/2017/04/27/climate-change-podcast-toon-trenberth/

 Not Cool Ep 22: Cullen Hendrix on climate change and armed conflict | File Type: audio/mpeg | Duration: 00:35:40

Right before civil war broke out in 2011, Syria experienced a historic five-year drought. This particular drought, which exacerbated economic and political insecurity within the country, may or may not have been caused by climate change. But as climate change increases the frequency of such extreme events, it’s almost certain to inflame pre-existing tensions in other countries — and in some cases, to trigger armed conflict. On Not Cool episode 22, Ariel is joined by Cullen Hendrix, co-author of “Climate as a risk factor for armed conflict.” Cullen, who serves as Director of the Sié Chéou-Kang Center for International Security and Diplomacy and Senior Research Advisor at the Center for Climate & Security, explains the main drivers of conflict and the impact that climate change may have on them. He also discusses the role of climate change in current conflicts like those in Syria, Yemen, and northern Nigeria; the political implications of such conflicts for Europe and other developed regions; and the chance that climate change might ultimately foster cooperation. Topics discussed include: -4 major drivers of conflict -Yemeni & Syrian civil wars -Boko Haram conflict -Arab Spring -Decline in predictability of at-risk countries -Instability in South/Central America -Climate-driven migration -International conflict -Implications for developing vs. developed countries -Impact of Syrian civil war/migrant crisis on EU -Backlash in domestic European politics -Brexit -Dealing with uncertainty -Actionable steps for governments

 Not Cool Ep 21: Libby Jewett on ocean acidification | File Type: audio/mpeg | Duration: 00:39:16

The increase of CO2 in the atmosphere is doing more than just warming the planet and threatening the lives of many terrestrial species. A large percentage of that carbon is actually reabsorbed by the oceans, causing a phenomenon known as ocean acidification — that is, our carbon emissions are literally changing the chemistry of ocean water and threatening ocean ecosystems worldwide. On Not Cool episode 21, Ariel is joined by Libby Jewett, founding Director of the Ocean Acidification Program at the National Oceanic and Atmospheric Administration (NOAA), who explains the chemistry behind ocean acidification, its impact on animals and plant life, and the strategies for helping organisms adapt to its effects. She also discusses the vulnerability of human communities that depend on marine resources, the implications for people who don't live near the ocean, and the relationship between ocean acidification and climate change. Topics discussed include: -Chemistry of ocean acidification -Impact on animals and plant life -Coral reefs -Variation in acidification between oceans -Economic repercussions -Vulnerability of resources and human communities -Global effects of ocean acidification -Adaptation and management -Mitigation -Acidification of freshwater bodies -Geoengineering

 Not Cool Ep 20: Deborah Lawrence on deforestation | File Type: audio/mpeg | Duration: 00:42:31

This summer, the world watched in near-universal horror as thousands of square miles of rainforest went up in flames. But what exactly makes forests so precious — and deforestation so costly? On the 20th episode of Not Cool, Ariel explores the many ways in which forests impact the global climate — and the profound price we pay when we destroy them. She’s joined by Deborah Lawrence, Professor of Environmental Science at the University of Virginia, whose research focuses on the ecological effects of tropical deforestation. Deborah discusses the causes of this year's Amazon rainforest fires, the varying climate impacts of different types of forests, and the relationship between deforestation, agriculture, and carbon emissions. She also explains why the Amazon is not the lungs of the planet, what makes tropical forests so good at global cooling, and how putting a price on carbon emissions could slow deforestation. Topics discussed include: -Amazon rainforest fires -Deforestation of the rainforest -Tipping points in deforestation -Climate impacts of forests: local vs. global -Evapotranspiration -Why tropical forests do the most cooling -Non-climate impacts of forests -Global rate of deforestation -Why the Amazon is not the lungs of the planet -Impacts of agriculture on forests -Using degraded land for new crops -Connection between forests and other greenhouse gases -Individual actions and policies
