Future of Life Institute Podcast

Summary: FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges. Among our objectives is to inspire discussion and an exchange of ideas. To that end, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Podcasts:

 Governing Biotechnology: From Avian Flu to Genetically-Modified Babies With Catherine Rhodes | File Type: audio/mpeg | Duration: 00:32:40

A Chinese researcher recently made international news with claims that he had edited the first human babies using CRISPR. In doing so, he violated international ethics standards, and he appears to have acted without his funders or his university knowing. But this is only the latest example of biological research triggering ethical concerns. Gain-of-function research a few years ago, which made avian flu more virulent, also sparked controversy when scientists tried to publish their work. And there’s been extensive debate globally about the ethics of human cloning. As biotechnology and other emerging technologies become more powerful, the dual-use nature of research -- that is, research that can have both beneficial and harmful outcomes -- is increasingly important to address. How can scientists and policymakers work together to ensure that regulations and governance of technological development enable researchers to do good with their work while reducing the threats? On this month’s podcast, Ariel spoke with Catherine Rhodes about these issues and more. Catherine is a senior research associate and deputy director of the Centre for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance. She has particular expertise in the international governance of biotechnology, including biosecurity and broader risk management issues. Topics discussed in this episode include:
- Gain-of-function research, the H5N1 virus (avian flu), and the risks of publishing dangerous information
- The roles of scientists, policymakers, and the public in ensuring that technology is developed safely and ethically
- The controversial Chinese researcher who claims to have used CRISPR to edit the genome of twins
- How scientists can anticipate whether the results of their research could be misused by someone else
- To what extent does risk stem from technology, and to what extent does it stem from how we govern it?

 Avoiding the Worst of Climate Change with Alexander Verbeek and John Moorhead | File Type: audio/mpeg | Duration: 01:21:20

“There are basically two choices. We're going to massively change everything we are doing on this planet, the way we work together, the actions we take, the way we run our economy, and the way we behave towards each other and towards the planet and towards everything that lives on this planet. Or we sit back and relax and we just let the whole thing crash. The choice is so easy to make, even if you don't care at all about nature or the lives of other people. Even if you just look at your own interests and look purely through an economical angle, it is just a good return on investment to take good care of this planet.” - Alexander Verbeek

On this month’s podcast, Ariel spoke with Alexander Verbeek and John Moorhead about what we can do to avoid the worst of climate change. Alexander is a Dutch diplomat and former strategic policy advisor at the Netherlands Ministry of Foreign Affairs. He created the Planetary Security Initiative, where representatives from 75 countries meet annually to discuss the relationship between climate change and security. John is President of Drawdown Switzerland, an act tank supporting Project Drawdown and other science-based climate solutions that reverse global warming. He blogs for Thomson Reuters, The Economist, and sciencebasedsolutions.com, and he advises and informs on climate solutions that benefit the economy, society, and the environment.

 AIAP: On Becoming a Moral Realist with Peter Singer | File Type: audio/mpeg | Duration: 00:51:14

Are there such things as moral facts? If so, how might we be able to access them? Peter Singer started his career as a preference utilitarian and a moral anti-realist, and over time became a hedonic utilitarian and a moral realist. How does such a transition occur, and which positions are more defensible? How might objectivism in ethics affect AI alignment? What does this all mean for the future of AI? On Becoming a Moral Realist with Peter Singer is the sixth podcast in the AI Alignment series, hosted by Lucas Perry. For those of you who are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application. In this podcast, Lucas spoke with Peter Singer. Peter is a world-renowned moral philosopher known for his work on animal ethics, utilitarianism, global poverty, and altruism. He's a leading bioethicist, the founder of The Life You Can Save, and he currently holds positions at both Princeton University and The University of Melbourne. Topics discussed in this episode include:
- Peter's transition from moral anti-realism to moral realism
- Why emotivism ultimately fails
- Parallels between mathematical/logical truth and moral truth
- Reason's role in accessing logical spaces, and its limits
- Why Peter moved from preference utilitarianism to hedonic utilitarianism
- How objectivity in ethics might affect AI alignment

 On the Future: An Interview with Martin Rees | File Type: audio/mpeg | Duration: 00:53:02

How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks? In this special podcast episode, Ariel speaks with cosmologist Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity’s existential risks and the role that technology plays in determining our collective future. Topics discussed in this episode include:
- Why Martin remains a technical optimist even as he focuses on existential risks
- The economics and ethics of climate change
- How AI and automation will make it harder for Africa and the Middle East to develop economically
- How high expectations for health care and quality of life also put society at risk
- Why growing inequality could be our most underappreciated global risk
- Martin’s view that biotechnology poses greater risk than AI
- Earth’s carrying capacity and the dangers of overpopulation
- Space travel and why Martin is skeptical of Elon Musk’s plan to colonize Mars
- The ethics of artificial meat, life extension, and cryogenics
- How intelligent life could expand into the galaxy
- Why humans might be unable to answer fundamental questions about the universe

 AI and Nuclear Weapons - Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz | File Type: audio/mpeg | Duration: 00:51:12

On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons and the Future of War. Mike is a professor of political science at the University of Pennsylvania and the author of The Diffusion of Military Power: Causes and Consequences for International Politics. Topics discussed in this episode include:
- The sophisticated military robots developed by the Soviets during the Cold War
- How technology shapes human decision-making in war
- “Automation bias” and why having a “human in the loop” is much trickier than it sounds
- The United States’ stance on automation with nuclear weapons
- Why weaker countries might have more incentive to build AI into warfare
- How the US and Russia perceive first-strike capabilities
- “Deep fakes” and other ways AI could sow instability and provoke crisis
- The multipolar nuclear world of the US, Russia, China, India, Pakistan, and North Korea
- The perceived obstacles to reducing nuclear arsenals

 AIAP: Moral Uncertainty and the Path to AI Alignment with William MacAskill | File Type: audio/mpeg | Duration: 00:56:56

How are we to make progress on AI alignment given moral uncertainty? What are the ideal ways of resolving conflicting value systems and views of morality among persons? How ought we to go about AI alignment given that we are unsure about our normative and metaethical theories? How should preferences be aggregated and persons idealized in the context of our uncertainty? Moral Uncertainty and the Path to AI Alignment with William MacAskill is the fifth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you who are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application. If you're interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space. In this podcast, Lucas spoke with William MacAskill. Will is a professor of philosophy at the University of Oxford and a co-founder of the Centre for Effective Altruism, Giving What We Can, and 80,000 Hours. Will helped to create the effective altruism movement, and his writing mainly focuses on issues of normative and decision-theoretic uncertainty, as well as general issues in ethics. Topics discussed in this episode include:
- Will’s current normative and metaethical credences
- The value of moral information and moral philosophy
- A taxonomy of the AI alignment problem
- How we ought to practice AI alignment given moral uncertainty
- Moral uncertainty in preference aggregation
- Moral uncertainty in deciding where we ought to be going as a society
- Idealizing persons and their preferences
- The most neglected portion of AI alignment

 AI: Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins | File Type: audio/mpeg | Duration: 00:44:17

Experts predict that artificial intelligence could become the most transformative innovation in history, eclipsing both the development of agriculture and the industrial revolution. And the technology is developing far faster than the average bureaucracy can keep up with. How can local, national, and international governments prepare for such dramatic changes and help steer AI research and use in a more beneficial direction? On this month’s podcast, Ariel spoke with Allan Dafoe and Jessica Cussins about how different countries are addressing the risks and benefits of AI, and why AI is such a unique and challenging technology to govern effectively. Allan is the Director of the Governance of AI Program at the Future of Humanity Institute, and his research focuses on the international politics of transformative artificial intelligence. Jessica is an AI Policy Specialist with the Future of Life Institute, and she is also a Research Fellow with the UC Berkeley Center for Long-Term Cybersecurity, where she conducts research on the security and strategy implications of AI and digital governance. Topics discussed in this episode include:
- Three lenses through which to view AI’s transformative power
- Emerging international and national AI governance strategies
- The risks and benefits of regulating artificial intelligence
- The importance of public trust in AI systems
- The dangers of an AI race
- How AI will change the nature of wealth and power

 The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce | File Type: audio/mpeg | Duration: 01:45:56

What role does metaethics play in AI alignment and safety? How might paths to AI alignment change given different metaethical views? How do issues in moral epistemology, motivation, and justification affect value alignment? What might be the metaphysical status of suffering and pleasure? What's the difference between moral realism and anti-realism, and how is each view grounded? And just what does any of this really have to do with AI? The Metaethics of Joy, Suffering, and AI Alignment is the fourth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you who are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application. If you're interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space. In this podcast, Lucas spoke with David Pearce and Brian Tomasik. David is a co-founder of the World Transhumanist Association, since rebranded as Humanity+. You might know him for his work on The Hedonistic Imperative, a book focusing on our moral obligation to work towards the abolition of suffering in all sentient life. Brian is a researcher at the Foundational Research Institute. He writes about ethics, animal welfare, and future scenarios on his website, "Essays on Reducing Suffering." Topics discussed in this episode include:
- What metaethics is and how it does (or doesn't) tie into AI alignment
- Brian and David's ethics and metaethics
- Moral realism vs. anti-realism
- Emotivism
- Moral epistemology and motivation
- Different paths to, and effects on, AI alignment given different metaethics
- The moral status of hedonic tones vs. preferences
- Whether we can make moral progress, and what this would mean
- Moving forward given moral uncertainty

 Six Experts Explain the Killer Robots Debate | File Type: audio/mpeg | Duration: 02:00:12

Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated. In this month’s podcast, Ariel spoke with experts from a variety of perspectives on the current status of LAWS (lethal autonomous weapons systems), where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre, artificial intelligence professor Toby Walsh, Article 36 founder Richard Moyes, Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty, and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro. If you don't have time to listen to the podcast in full, or if you want to skip around through the interviews, each interview starts at the timestamp listed below:
Paul Scharre: 3:40
Toby Walsh: 40:50
Richard Moyes: 53:30
Mary Wareham & Bonnie Docherty: 1:03:35
Peter Asaro: 1:32:40

 AIAP: AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy | File Type: audio/mpeg | Duration: 01:22:30

What role does cyber security play in AI alignment and safety? What is AI-completeness? What is the space of mind designs, and what does it tell us about AI safety? How does the possibility of machine qualia fit into this space? Can we leak-proof the singularity to ensure we are able to test AGI? And what is computational complexity theory, anyway? AI Safety, Possible Minds, and Simulated Worlds is the third podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you who are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application. In this podcast, Lucas spoke with Roman Yampolskiy, a tenured associate professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. Dr. Yampolskiy’s main areas of interest are AI safety, artificial intelligence, behavioral biometrics, cybersecurity, digital forensics, games, genetic algorithms, and pattern recognition. He is the author of over 100 publications, including multiple journal articles and books. Topics discussed in this episode include:
- Applications of cyber security to AI safety
- Key concepts in Roman's papers and books
- Whether AI alignment is solvable
- The control problem
- The ethics of qualia in machine intelligence, and how to detect them
- Machine ethics and its role (or lack thereof) in AI safety
- Simulated worlds and whether detecting base reality is possible
- AI safety publicity strategy

 Mission AI - Giving a Global Voice to the AI Discussion With Charlie Oliver and Randi Williams | File Type: audio/mpeg | Duration: 00:52:48

How are emerging technologies like artificial intelligence shaping our world and how we interact with one another? What do different demographics think about AI risk and a robot-filled future? And how can the average citizen contribute not only to the AI discussion, but to AI's development? On this month's podcast, Ariel spoke with Charlie Oliver and Randi Williams about how technology is reshaping our world, and how their new project, Mission AI, aims to broaden the conversation and include everyone's voice. Charlie is the founder and CEO of the digital media strategy company Served Fresh Media, and she is also the founder of Tech 2025, a platform and community for people to learn about emerging technologies and discuss their implications for society. Randi is a doctoral student in the Personal Robots Group at the MIT Media Lab. She wants to understand children's interactions with AI and to develop educational platforms that empower non-experts to build their own AI systems.

 AIAP: Astronomical Future Suffering and Superintelligence with Kaj Sotala | File Type: audio/mpeg | Duration: 01:14:40

In the classic taxonomy of risks developed by Nick Bostrom, existential risks are characterized as risks which are both terminal in severity and transgenerational in scope. If we were to keep the scope of a risk transgenerational but increase its severity beyond terminal, what would such a risk look like? What would it mean for a risk to be transgenerational in scope and hellish in severity? In this podcast, Lucas spoke with Kaj Sotala, an associate researcher at the Foundational Research Institute. He has previously worked for the Machine Intelligence Research Institute, and he has publications on AI safety, AI timeline forecasting, and consciousness research. Topics discussed in this episode include:
- The definition of, and a taxonomy of, suffering risks
- How superintelligence has special leverage for generating or mitigating suffering risks
- How different moral systems view suffering risks
- What is possible for minds in general, and how this plays into suffering risks
- The probability of suffering risks
- What we can do to mitigate suffering risks

 Nuclear Dilemmas, From North Korea to Iran with Melissa Hanham and Dave Schmerler | File Type: audio/mpeg | Duration: 00:42:26

With the U.S. pulling out of the Iran deal and canceling (and potentially un-canceling) the summit with North Korea, nuclear weapons have been front and center in the news this month. But will these disagreements lead to a world with even more nuclear weapons? And how did the recent nuclear situations with North Korea and Iran get so tense? To learn more about the geopolitical issues surrounding North Korea’s and Iran’s nuclear situations, as well as how nuclear programs in these countries are monitored, Ariel spoke with Melissa Hanham and Dave Schmerler on this month's podcast. Melissa and Dave are both nuclear weapons experts with the Center for Nonproliferation Studies at the Middlebury Institute of International Studies, where they research weapons of mass destruction with a focus on North Korea. Topics discussed in this episode include: the progression of North Korea's quest for nukes, what happened and what’s next regarding the Iran deal, how to use open-source data to monitor nuclear weapons testing, and how younger generations can tackle nuclear risk.

In light of the on-again/off-again situation regarding the North Korea Summit, Melissa sent us a quote after the podcast was recorded, saying: "Regardless of whether the summit in Singapore takes place, we all need to set expectations appropriately for disarmament. North Korea is not agreeing to give up nuclear weapons anytime soon. They are interested in a phased approach that will take more than a decade, multiple parties, new legal instruments, and new technical verification tools."

 What are the odds of nuclear war? A conversation with Seth Baum and Robert de Neufville | File Type: audio/mpeg | Duration: 00:57:55

What are the odds of a nuclear war happening this century? And how close have we been to nuclear war in the past? Few academics focus on the probability of nuclear war, but many leading voices, like former US Secretary of Defense William Perry, argue that the threat of nuclear conflict is growing. On this month's podcast, Ariel spoke with Seth Baum and Robert de Neufville from the Global Catastrophic Risk Institute (GCRI), who recently coauthored a report titled A Model for the Probability of Nuclear War. The report examines 60 historical incidents that could have escalated to nuclear war and presents a model for estimating the odds that some type of nuclear war could occur in the future.

 AIAP: Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell | File Type: audio/mpeg | Duration: 01:25:01

Inverse Reinforcement Learning and Inferring Human Preferences is the first podcast in the new AI Alignment series, hosted by Lucas Perry. This series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across a variety of areas, such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application. In this podcast, Lucas spoke with Dylan Hadfield-Menell, a fifth-year Ph.D. student at UC Berkeley. Dylan’s research focuses on the value alignment problem in artificial intelligence. He is ultimately concerned with designing algorithms that can learn about and pursue the intended goal of their users, designers, and society in general. His recent work primarily focuses on algorithms for human-robot interaction with unknown preferences and on reliability engineering for learning systems. Topics discussed in this episode include:
- Inverse reinforcement learning
- Goodhart’s Law and its relation to value alignment
- Corrigibility and obedience in AI systems
- IRL and the evolution of human values
- Ethics and moral psychology in AI alignment
- Human preference aggregation
- The future of IRL
