
Future of Life Institute Podcast

Summary: FLI catalyzes and supports research and initiatives to safeguard life and develop optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges. Among our objectives is to inspire discussion and the sharing of ideas. To that end, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI's opinions or views.

Podcasts:

 Bart Selman on the Promises and Perils of Artificial Intelligence | File Type: audio/mpeg | Duration: 01:41:03

Bart Selman, Professor of Computer Science at Cornell University, joins us to discuss a wide range of AI issues, from autonomous weapons and AI consciousness to international governance and the possibilities of superintelligence.

Topics discussed in this episode include:
- Negative and positive outcomes from AI in the short, medium, and long term
- The perils and promises of AGI and superintelligence
- AI alignment and AI existential risk
- Lethal autonomous weapons
- AI governance and racing to powerful AI systems
- AI consciousness

You can find the page for this podcast here: https://futureoflife.org/2021/05/20/bart-selman-on-the-promises-and-perils-of-artificial-intelligence/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
1:35 Futures that Bart is excited about
4:08 Positive futures in the short, medium, and long term
7:23 AGI timelines
8:11 Bart's research on "planning" through the game of Sokoban
13:10 If we don't go extinct, is the creation of AGI and superintelligence inevitable?
15:28 What's exciting about futures with AGI and superintelligence?
17:10 How long does it take for superintelligence to arise after AGI?
21:08 Would a superintelligence have something intelligent to say about income inequality?
23:24 Are there true or false answers to moral questions?
25:30 Can AGI and superintelligence assist with moral and philosophical issues?
28:07 Do you think superintelligences converge on ethics?
29:32 Are you most excited about the short or long-term benefits of AI?
34:30 Is existential risk from AI a legitimate threat?
35:22 Is the AI alignment problem legitimate?
43:29 What are futures that you fear?
46:24 Do social media algorithms represent an instance of the alignment problem?
51:46 The importance of educating the public on AI
55:00 Income inequality, cyber security, and negative futures
1:00:06 Lethal autonomous weapons
1:01:50 Negative futures in the long term
1:03:26 How have your views of AI alignment evolved?
1:06:53 Bart's plans and intentions for the Association for the Advancement of Artificial Intelligence
1:13:45 Policy recommendations for existing AIs and the AI ecosystem
1:15:35 Solving the parts of AI alignment that won't be solved by industry incentives
1:18:17 Narratives of an international race to powerful AI systems
1:20:42 How does an international race to AI affect the chances of successful AI alignment?
1:23:20 Is AI a zero-sum game?
1:28:51 Lethal autonomous weapons governance
1:31:38 Does the governance of autonomous weapons affect outcomes from AGI?
1:33:00 AI consciousness
1:39:37 Alignment is important and the benefits of AI can be great

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century | File Type: audio/mpeg | Duration: 01:26:37

Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century.

Topics discussed in this episode include:
- Intelligence and coordination
- Existential risk from AI, synthetic biology, and unknown unknowns
- AI adoption as a delegation process
- Jaan's investments and philanthropic efforts
- International coordination and incentive structures
- The short-term and long-term AI safety communities

You can find the page for this podcast here: https://futureoflife.org/2021/04/20/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
1:29 How can humanity improve?
3:10 The importance of intelligence and coordination
8:30 The bottlenecks of input and output bandwidth as well as processing speed between AIs and humans
15:20 Making the creation of AI feel dangerous and how the nuclear power industry killed itself by downplaying risks
17:15 How Jaan evaluates and thinks about existential risk
18:30 Nuclear weapons as the first existential risk we faced
20:47 The likelihood of unknown unknown existential risks
25:04 Why Jaan doesn't see nuclear war as an existential risk
27:54 Climate change
29:00 Existential risk from synthetic biology
31:29 Learning from mistakes, lacking foresight, and the importance of generational knowledge
36:23 AI adoption as a delegation process
42:52 Attractors in the design space of AI
44:24 The regulation of AI
45:31 Jaan's investments and philanthropy in AI
55:18 International coordination issues from AI adoption as a delegation process
57:29 AI today and the negative impacts of recommender algorithms
1:02:43 Collective, institutional, and interpersonal coordination
1:05:23 The benefits and risks of longevity research
1:08:29 The long-term and short-term AI safety communities and their relationship with one another
1:12:35 Jaan's current philanthropic efforts
1:16:28 Software as a philanthropic target
1:19:03 How do we move towards beneficial futures with AI?
1:22:30 An idea Jaan finds meaningful
1:23:33 Final thoughts from Jaan
1:25:27 Where to find Jaan

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures | File Type: audio/mpeg | Duration: 01:38:17

Joscha Bach, cognitive scientist and AI researcher, and Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures.

Topics discussed in this episode include:
- Understanding the universe through digital physics
- How human consciousness operates and is structured
- The path to aligned AGI and bottlenecks to beneficial futures
- Incentive structures and collective coordination

You can find the page for this podcast here: https://futureoflife.org/2021/03/31/joscha-bach-and-anthony-aguirre-on-digital-physics-and-moving-towards-beneficial-futures/

You can find FLI's three new policy-focused job postings here: futureoflife.org/job-postings/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
3:17 What is truth and knowledge?
11:39 What is subjectivity and objectivity?
14:32 What is the universe ultimately?
19:22 Is the universe a cellular automaton? Is the universe ultimately digital or analogue?
24:05 Hilbert's hotel from the point of view of computation
35:18 Seeing the world as a fractal
38:48 Describing human consciousness
51:10 Meaning, purpose, and harvesting negentropy
55:08 The path to aligned AGI
57:37 Bottlenecks to beneficial futures and existential security
1:06:53 A future with one, several, or many AGI systems? How do we maintain appropriate incentive structures?
1:19:39 Non-duality and collective coordination
1:22:53 What difficulties are there for an idealist worldview that involves computation?
1:27:20 Which features of mind and consciousness are necessarily coupled and which aren't?
1:36:40 Joscha's final thoughts on AGI

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI | File Type: audio/mpeg | Duration: 01:12:01

Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.

Topics discussed in this episode include:
- Roman's results on the unexplainability, incomprehensibility, and uncontrollability of AI
- The relationship between AI safety, control, and alignment
- Virtual worlds as a proposal for solving multi-multi alignment
- AI security

You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/

You can find FLI's three new policy-focused job postings here: https://futureoflife.org/job-postings/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
2:35 Roman's primary research interests
4:09 How theoretical proofs help AI safety research
6:23 How impossibility results constrain computer science systems
10:18 The inability to tell if arbitrary code is friendly or unfriendly
12:06 Impossibility results clarify what we can do
14:19 Roman's results on unexplainability and incomprehensibility
22:34 Focusing on comprehensibility
26:17 Roman's results on uncontrollability
28:33 Alignment as a subset of safety and control
30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment
33:40 What does it mean to solve AI safety?
34:19 What do the impossibility results really mean?
37:07 Virtual worlds and AI alignment
49:55 AI security and malevolent agents
53:00 Air gapping, boxing, and other security methods
58:43 Some examples of historical failures of AI systems and what we can learn from them
1:01:20 Clarifying impossibility results
1:06:55 Examples of systems failing and what these demonstrate about AI
1:08:20 Are oracles a valid approach to AI safety?
1:10:30 Roman's final thoughts

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Autonomous Weapons | File Type: audio/mpeg | Duration: 01:39:48

Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, an expert on weapons of mass destruction and drone swarms, join us to discuss the highest-risk and most destabilizing aspects of lethal autonomous weapons.

Topics discussed in this episode include:
- The current state of the deployment and development of lethal autonomous weapons and swarm technologies
- Drone swarms as a potential weapon of mass destruction
- The risks of escalation, unpredictability, and proliferation with regard to autonomous weapons
- The difficulty of attribution, verification, and accountability with autonomous weapons
- Autonomous weapons governance as norm setting for global AI issues

You can find the page for this podcast here: https://futureoflife.org/2021/02/25/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/

You can check out the new lethal autonomous weapons website here: https://autonomousweapons.org/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
2:23 Emilia Javorsky on lethal autonomous weapons
7:27 What is a lethal autonomous weapon?
11:33 Autonomous weapons that exist today
16:57 The concerns of collateral damage, accidental escalation, scalability, control, and error risk
26:57 The proliferation risk of autonomous weapons
32:30 To what extent are global superpowers pursuing these weapons? What is the state of industry's pursuit of the research and manufacturing of this technology?
42:13 A possible proposal for a selective ban on small anti-personnel autonomous weapons
47:20 Lethal autonomous weapons as a potential weapon of mass destruction
53:49 The unpredictability of autonomous weapons, especially when swarms are interacting with other swarms
58:09 The risk of autonomous weapons escalating conflicts
01:10:50 The risk of drone swarms proliferating
01:20:16 The risk of assassination
01:23:25 The difficulty of attribution and accountability
01:26:05 The governance of autonomous weapons being relevant to the global governance of AI
01:30:11 The importance of verification for responsibility, accountability, and regulation
01:35:50 Concerns about the beginning of an arms race and the need for regulation
01:38:46 Wrapping up
01:39:23 Outro

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 John Prendergast on Non-dual Awareness and Wisdom for the 21st Century | File Type: audio/mpeg | Duration: 01:46:16

John Prendergast, former adjunct professor of psychology at the California Institute of Integral Studies, joins Lucas Perry for a discussion about the experience and effects of ego-identification, how to shift to new levels of identity, the nature of non-dual awareness, and the potential relationship between waking up and collective human problems. This is not an FLI Podcast, but a special release where Lucas shares a direction he feels has an important relationship with AI alignment and existential risk issues.

Topics discussed in this episode include:
- The experience of egocentricity and ego-identification
- Waking up into heart awareness
- The movement towards and qualities of non-dual consciousness
- The ways in which the condition of our minds collectively affects the world
- How waking up may be relevant to the creation of AGI

You can find the page for this podcast here: https://futureoflife.org/2021/02/09/john-prendergast-on-non-dual-awareness-and-wisdom-for-the-21st-century/

Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
7:10 The modern human condition
9:29 What egocentricity and ego-identification are
15:38 Moving beyond the experience of self
17:38 The origins and structure of self
20:25 A pointing out instruction for noticing ego-identification and waking up out of it
24:34 A pointing out instruction for abiding in heart-mind or heart awareness
28:53 The qualities of and moving into heart awareness and pure awareness
33:48 An explanation of non-dual awareness
40:50 Exploring the relationship between awareness, belief, and action
46:25 Growing up and improving the egoic structure
48:29 Waking up as recognizing true nature
51:04 Exploring awareness as primitive and primary
53:56 John's dream of Sri Nisargadatta Maharaj
57:57 The use and value of conceptual thought and the mind
1:00:57 The epistemics of heart-mind and the conceptual mind as we shift levels of identity
1:17:46 A pointing out instruction for inquiring into core beliefs
1:27:28 The universal heart, qualities of awakening, and the ethical implications of such shifts
1:31:38 Wisdom, waking up, and growing up for the transgenerational issues of the 21st century
1:38:44 Waking up and its applicability to the creation of AGI
1:43:25 Where to find, follow, and reach out to John
1:45:56 Outro

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Beatrice Fihn on the Total Elimination of Nuclear Weapons | File Type: audio/mpeg | Duration: 01:17:56

Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons (ICAN) and Nobel Peace Prize recipient, joins us to discuss the current risks of nuclear war, policies that can reduce the risks of nuclear conflict, and how to move towards a nuclear-weapons-free world.

Topics discussed in this episode include:
- The current nuclear weapons geopolitical situation
- The risks and mechanics of accidental and intentional nuclear war
- Policy proposals for reducing the risks of nuclear war
- Deterrence theory
- The Treaty on the Prohibition of Nuclear Weapons
- Working towards the total elimination of nuclear weapons

You can find the page for this podcast here: https://futureoflife.org/2021/01/21/beatrice-fihn-on-the-total-elimination-of-nuclear-weapons/

Timestamps:
0:00 Intro
4:28 Overview of the current nuclear weapons situation
6:47 The 9 nuclear weapons states, and accidental and intentional nuclear war
9:27 Accidental nuclear war and human systems
12:08 The risks of nuclear war in 2021 and nuclear stability
17:49 Toxic personalities and the human component of nuclear weapons
23:23 Policy proposals for reducing the risk of nuclear war
23:55 New START Treaty
25:42 What does it mean to maintain credible deterrence?
26:45 ICAN and working on the Treaty on the Prohibition of Nuclear Weapons
28:00 Deterrence theoretic arguments for nuclear weapons
32:36 The reduction of nuclear weapons, no first use, removing ground-based missile systems, removing hair-trigger alert, removing presidential authority to use nuclear weapons
39:13 Arguments for and against nuclear risk reduction policy proposals
46:02 Moving all of the United States' nuclear weapons to bombers and nuclear submarines
48:27 Working towards and the theory of the total elimination of nuclear weapons
1:11:40 The value of the Treaty on the Prohibition of Nuclear Weapons
1:14:26 Elevating activism around nuclear weapons and messaging more skillfully
1:15:40 What the public needs to understand about nuclear weapons
1:16:35 World leaders' views of the treaty
1:17:15 How to get involved

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year | File Type: audio/mpeg | Duration: 01:00:41

Max Tegmark and members of the FLI core team come together to discuss favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021.

Topics discussed in this episode include:
- FLI's perspectives on 2020 and hopes for 2021
- What our favorite projects from 2020 were
- The biggest lessons we've learned from 2020
- What we see as crucial and needed in 2021 to ensure and make improvements towards existential safety

You can find the page for this podcast here: https://futureoflife.org/2021/01/08/max-tegmark-and-the-fli-team-on-2020-and-existential-risk-reduction-in-the-new-year/

Timestamps:
0:00 Intro
00:52 First question: What was your favorite project from 2020?
1:03 Max Tegmark on the Future of Life Award
4:15 Anthony Aguirre on AI Loyalty
9:18 David Nicholson on the Future of Life Award
12:23 Emilia Javorsky on being a co-champion for the UN Secretary-General's effort on digital cooperation
14:03 Jared Brown on developing comments on the European Union's White Paper on AI through community collaboration
16:40 Tucker Davey on editing the biography of Victor Zhdanov
19:49 Lucas Perry on the podcast and Pindex video
23:17 Second question: What lessons do you take away from 2020?
23:26 Max Tegmark on human fragility and vulnerability
25:14 Max Tegmark on learning from history
26:47 Max Tegmark on the growing threats of AI
29:45 Anthony Aguirre on the inability of present-day institutions to deal with large unexpected problems
33:00 David Nicholson on the need for self-reflection on the use and development of technology
38:05 Emilia Javorsky on the global community coming to awareness about tail risks
39:48 Jared Brown on our vulnerability to low-probability, high-impact events and the importance of adaptability and policy engagement
41:43 Tucker Davey on taking existential risks more seriously and ethics-washing
43:57 Lucas Perry on the fragility of human systems
45:40 Third question: What is needed in 2021 to make progress on existential risk mitigation?
45:50 Max Tegmark on holding Big Tech accountable, repairing geopolitics, and fighting the myth of the technological zero-sum game
49:58 Anthony Aguirre on the importance of spreading understanding of expected value reasoning and fixing the information crisis
53:41 David Nicholson on the need to reflect on our values and relationship with technology
54:35 Emilia Javorsky on the importance of returning to multilateralism and global dialogue
56:00 Jared Brown on the need for robust government engagement
57:30 Lucas Perry on the need for creating institutions for existential risk mitigation and global cooperation
1:00:10 Outro

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox | File Type: audio/mpeg | Duration: 01:54:18

The recipients of the 2020 Future of Life Award, William Foege, Michael Burkinsky, and Victor Zhdanov Jr., join us on this episode of the FLI Podcast to recount the story of smallpox eradication, William Foege's and Victor Zhdanov Sr.'s involvement in it, and their personal experience of the events.

Topics discussed in this episode include:
- William Foege's and Victor Zhdanov's efforts to eradicate smallpox
- Personal stories from Foege's and Zhdanov's lives
- The history of smallpox
- Biological issues of the 21st century

You can find the page for this podcast here: https://futureoflife.org/2020/12/11/future-of-life-award-2020-saving-200000000-lives-by-eradicating-smallpox/

You can watch the 2020 Future of Life Award ceremony here: https://www.youtube.com/watch?v=73WQvR5iIgk&feature=emb_title&ab_channel=FutureofLifeInstitute

You can learn more about the Future of Life Award here: https://futureoflife.org/future-of-life-award/

Timestamps:
0:00 Intro
3:13 Part 1: How William Foege got into smallpox efforts and his work in Eastern Nigeria
14:12 The USSR's smallpox eradication efforts and convincing the WHO to take up global smallpox eradication
15:46 William Foege's efforts in and with the WHO for smallpox eradication
18:00 Surveillance and containment as a viable strategy
18:51 Implementing surveillance and containment throughout the world after success in West Africa
23:55 Wrapping up with eradication and dealing with the remnants of smallpox
25:35 Lab escape of smallpox in Birmingham, England, and the final natural case
27:20 Part 2: Introducing Michael Burkinsky as well as Victor and Katia Zhdanov
29:45 Introducing Victor Zhdanov Sr. and Alissa Zhdanov
31:05 Michael Burkinsky's memories of Victor Zhdanov Sr.
39:26 Victor Zhdanov Jr.'s memories of Victor Zhdanov Sr.
46:15 Mushrooms with meat
47:56 Stealing the family car
49:27 Victor Zhdanov Sr.'s efforts at the WHO for smallpox eradication
58:27 Exploring Alissa's book on Victor Zhdanov Sr.'s life
1:06:09 Michael's view that Victor Zhdanov Sr. is unsung, especially in Russia
1:07:18 Part 3: William Foege on the history of smallpox and biology in the 21st century
1:07:32 The origin and history of smallpox
1:10:34 The origin and history of variolation and the vaccine
1:20:15 West African "healers" who would create smallpox outbreaks
1:22:25 The safety of the smallpox vaccine vs. modern vaccines
1:29:40 A favorite story of William Foege's
1:35:50 Larry Brilliant and people central to the eradication efforts
1:37:33 Foege's perspective on modern pandemics and human bias
1:47:56 What should we do after COVID-19 ends?
1:49:30 Bio-terrorism, existential risk, and synthetic pandemics
1:53:20 Foege's final thoughts on the importance of global health experts in politics

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress | File Type: audio/mpeg | Duration: 01:30:33

Sean Carroll, theoretical physicist at Caltech, joins us on this episode of the FLI Podcast to comb through the history of human thought, the strengths and weaknesses of various intellectual movements, and how we are to situate ourselves in the 21st century given progress thus far.

Topics discussed in this episode include:
- Important intellectual movements and their merits
- The evolution of metaphysical and epistemological views over human history
- Consciousness, free will, and philosophical blunders
- Lessons for the 21st century

You can find the page for this podcast here: https://futureoflife.org/2020/12/01/sean-carroll-on-consciousness-physicalism-and-the-history-of-intellectual-progress/

You can find the video for this podcast here: https://youtu.be/6HNjL8_fsTk

Timestamps:
0:00 Intro
2:06 The problem of beliefs and the strengths and weaknesses of religion
6:40 The Age of Enlightenment and the importance of reason
10:13 The importance of humility and the is-ought gap
17:53 The advantages of religion and mysticism
19:50 Materialism and Newtonianism
28:00 Duality, self, suffering, and philosophical blunders
36:56 Quantum physics as a paradigm shift
39:24 Physicalism, the problem of consciousness, and free will
01:01:50 What does it mean for something to be real?
01:09:40 The hard problem of consciousness
01:14:20 The many-worlds interpretation of quantum mechanics and utilitarianism
01:21:16 The importance of being charitable in conversation
1:24:55 Sean's position in the philosophy of consciousness
01:27:29 Sean's metaethical position
01:29:36 Where to find and follow Sean

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat on Academic Integrity | File Type: audio/mpeg | Duration: 01:22:21

Mohamed Abdalla, PhD student at the University of Toronto, joins us to discuss how Big Tobacco and Big Tech work to manipulate public opinion and academic institutions in order to maximize profits and avoid regulation.

Topics discussed in this episode include:
- How Big Tobacco uses its wealth to obfuscate the harm of tobacco and appear socially responsible
- The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation
- How Big Tech and Big Tobacco work to influence universities, scientists, researchers, and policy makers
- How to combat the problem of ethics-washing in Big Tech

You can find the page for this podcast here: https://futureoflife.org/2020/11/17/mohamed-abdalla-on-big-tech-ethics-washing-and-the-threat-on-academic-integrity/

The Future of Life Institute AI policy page: https://futureoflife.org/AI-policy/

Timestamps:
0:00 Intro
1:55 How Big Tech actively distorts the academic landscape and what counts as Big Tech
6:00 How Big Tobacco has shaped industry research
12:17 The four tactics of Big Tobacco and Big Tech
13:34 Big Tech and Big Tobacco working to appear socially responsible
22:15 Big Tech and Big Tobacco working to influence the decisions made by funded universities
32:25 Big Tech and Big Tobacco working to influence research questions and the plans of individual scientists
51:53 Big Tech and Big Tobacco finding skeptics and critics of them and funding them to give the impression of social responsibility
1:00:24 Big Tech and being authentically socially responsible
1:11:41 Transformative AI, social responsibility, and the race to powerful AI systems
1:16:56 Ethics-washing as systemic
1:17:30 Action items for solving ethics-washing
1:19:42 Has Mohamed received criticism for this paper?
1:20:07 Final thoughts from Mohamed

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Maria Arpa on the Power of Nonviolent Communication | File Type: audio/mpeg | Duration: 01:12:43

Maria Arpa, Executive Director of the Center for Nonviolent Communication, joins the FLI Podcast to share the ins and outs of the powerful needs-based framework of nonviolent communication.

Topics discussed in this episode include:
- What nonviolent communication (NVC) consists of
- How NVC is different from normal discourse
- How NVC is composed of observations, feelings, needs, and requests
- NVC for systemic change
- Foundational assumptions in NVC
- An NVC exercise

You can find the page for this podcast here: https://futureoflife.org/2020/11/02/maria-arpa-on-the-power-of-nonviolent-communication/

Timestamps:
0:00 Intro
2:50 What is nonviolent communication?
4:05 How is NVC different from normal discourse?
18:40 NVC's four components: observations, feelings, needs, and requests
34:50 NVC for systemic change
54:20 The foundational assumptions of NVC
58:00 An exercise in NVC

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism | File Type: audio/mpeg | Duration: 01:39:26

Stephen Batchelor, a Secular Buddhist teacher and former monk, joins the FLI Podcast to discuss the project of awakening, the facets of human nature that contribute to extinction risk, and how we might better embrace existential threats.

Topics discussed in this episode include:
- The projects of awakening and growing the wisdom with which to manage technologies
- What might be possible if we embark on the project of waking up
- Facets of human nature that contribute to existential risk
- The dangers of the problem-solving mindset
- Improving the effective altruism and existential risk communities

You can find the page for this podcast here: https://futureoflife.org/2020/10/15/stephen-batchelor-on-awakening-embracing-existential-risk-and-secular-buddhism/

Timestamps:
0:00 Intro
3:40 Albert Einstein and the quest for awakening
8:45 Non-self, emptiness, and non-duality
25:48 Stephen's conception of awakening, and making the wise more powerful vs. the powerful more wise
33:32 The importance of insight
49:45 The present moment, creativity, and suffering/pain/dukkha
58:44 Stephen's article, Embracing Extinction
1:04:48 The dangers of the problem-solving mindset
1:26:12 Improving the effective altruism and existential risk communities
1:37:30 Where to find and follow Stephen

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Kelly Wanser on Climate Change as a Possible Existential Threat | File Type: audio/mpeg | Duration: 01:45:48

Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change.

Topics discussed in this episode include:
- The risks of climate change in the short term
- Tipping points and tipping cascades
- Climate intervention via marine cloud brightening and releasing particles in the stratosphere
- The benefits and risks of climate intervention techniques
- The international politics of climate change and weather modification

You can find the page for this podcast here: https://futureoflife.org/2020/09/30/kelly-wanser-on-marine-cloud-brightening-for-mitigating-climate-change/

Video recording of this podcast here: https://youtu.be/CEUEFUkSMHU

Timestamps:
0:00 Intro
2:30 What is SilverLining's mission?
4:27 Why is climate change thought to be very risky in the next 10-30 years?
8:40 Tipping points and tipping cascades
13:25 Is climate change an existential risk?
17:39 Earth systems that help to stabilize the climate
21:23 Days where it will be unsafe to work outside
25:03 Marine cloud brightening, stratospheric sunlight reflection, and other climate interventions SilverLining is interested in
41:46 What experiments are happening to understand tropospheric and stratospheric climate interventions?
50:20 International politics of weather modification
53:52 How do efforts to reduce greenhouse gas emissions fit into the project of reflecting sunlight?
57:35 How would you respond to someone who views climate intervention by marine cloud brightening as too dangerous?
59:33 What are the main objections raised by those skeptical of climate intervention approaches?
01:13:21 The international problem of coordinating on climate change
01:24:50 Is climate change a global catastrophic or existential risk, and how does it relate to other large risks?
01:33:20 Should effective altruists spend more time on the issue of climate change and climate intervention?
01:37:48 What can listeners do to help with this issue?
01:40:00 Climate change and Mars colonization
01:44:55 Where to find and follow Kelly

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

 Andrew Critch on AI Research Considerations for Human Existential Safety | File Type: audio/mpeg | Duration: 01:51:28

In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety, to the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as the most likely source of existential risk: the possibility of externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives.

Topics discussed in this episode include:
- The mainstream computer science view of AI existential risk
- Distinguishing AI safety from AI existential safety
- The need for more precise terminology in the field of AI existential safety and alignment
- The concept of prepotent AI systems and the problem of delegation
- Which alignment problems get solved by commercial incentives and which don't
- The threat of diffusion of responsibility on AI existential safety considerations not covered by commercial incentives
- Prepotent AI risk types that lead to unsurvivability for humanity

You can find the page for this podcast here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/

Timestamps:
0:00 Intro
2:53 Why Andrew wrote ARCHES and what it's about
6:46 The perspective of the mainstream CS community on AI existential risk
13:03 ARCHES in relation to AI existential risk literature
16:05 The distinction between safety and existential safety
24:27 Existential risk is most likely to obtain through externalities
29:03 The relationship between existential safety and safety for current systems
33:17 Research areas that may not be solved by natural commercial incentives
51:40 What's an AI system and an AI technology?
53:42 Prepotent AI
59:41 Misaligned prepotent AI technology
01:05:13 Human frailty
01:07:37 The importance of delegation
01:14:11 Single-single, single-multi, multi-single, and multi-multi
01:15:26 Control, instruction, and comprehension
01:20:40 The multiplicity thesis
01:22:16 Risk types from prepotent AI that lead to human unsurvivability
01:34:06 Flow-through effects
01:41:00 Multi-stakeholder objectives
01:49:08 Final words from Andrew

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
