Concerning AI | Existential Risk From Artificial Intelligence
Summary: Is there an existential risk from Human-level (and beyond) Artificial Intelligence? If so, what can we do about it?
- Visit Website
- RSS
- Artist: Brandon Sanders & Ted Sarvata
- Copyright: http://creativecommons.org/licenses/by-sa/4.0/
Podcasts:
Fiction is fun. But we can't rely on it to help us figure out what's going to happen.
Human augmentation may be a way for humans to advance on par with non-biological beings (AIs), but do ethical guidelines make that less likely to happen?
Throughout history there have been doomsayers, yet we're still here. What makes today's doomsday scenarios different?
What would it mean to entangle with technology rather than leave the biosphere behind? Could we just send the AI to the moon?
Brandon is back, and we talk at length about how AlphaGo works and what its implications are, insofar as our feeble human minds can see them.
Is there an existential risk from Human-level (and beyond) Artificial Intelligence? If so, what can we do about it?
From now on, we seek to come from a place of empathy on this show, even when that seems not to make sense. There's nothing to win here.
Ben's frustrated with us. Let's see if we can figure out why.
This episode is the 2nd (and final) part of our conversation with Evan Prodromou, software developer, open source advocate and AI practitioner extraordinaire. Hope you enjoy it as much as we did!
Evan's awesome. Great to talk with a bona fide AI practitioner. Just getting things started.
We haven't found strong arguments on the "Don't be worried. Here's why ..." side of things. We know the arguments must exist, but we can't find them (send them to us!). So, what to do? Make some arguments up, that's what!
What the heck is deep learning?
Exponentials are powerful and very difficult to understand (because we think linearly).
Are we cosmists or terrans?
Are we missing "compassion" when thinking about our AI descendants?