Concerning AI | Existential Risk From Artificial Intelligence
Summary: Is there an existential risk from Human-level (and beyond) Artificial Intelligence? If so, what can we do about it?
- Visit Website
- RSS
- Artist: Brandon Sanders & Ted Sarvata
- Copyright: http://creativecommons.org/licenses/by-sa/4.0/
Podcasts:
Third in a series about the future of current narrow AIs.
Read After On by Rob Reid before you listen, or because you listen.
This is our 2nd episode thinking about possible paths to superintelligence, focusing on one kind of narrow AI each show. This episode is about embodiment and robots. It's possible we never really agreed about what we were talking about and need to come back to robots.
Future ideas for this series include:
- personal assistants (Siri, Alexa, etc.)
- non-player characters
- search engines (or maybe those just fall under tools)
- social networks or other big data / working on a completely different time / size scale from humans
- collective intelligence
- simulation
- whole-brain emulation
- augmentation (computer / brain interface)
- self-driving cars
See also: 0046: Paths to AGI #1: Tools
Robots learning to pick stuff up:
- https://youtu.be/iZhEcRrMA-M
- https://youtu.be/97hOaXJ5nGU
- https://youtu.be/tynDYRtRrag
Roomba mapping:
- https://youtu.be/FUbpCuBLvWM
For show notes, please see https://concerning.ai/2017/08/29/0048-ai-xprize-and-thrival-festival-special-mini-episode/
How might we get from today's narrow AIs to AGI? This episode's focus is tools.
Is all AI-involved science fiction the same?
We talked about the Nexus Trilogy of novels as a way to further our thinking about the wizard hat idea Tim Urban wrote about in his article about Elon Musk's Neuralink.
Are we living our lives as if AI were an existential threat?
Listener Feedback this episode
Tim Urban's article at Wait But Why: Elon Musk's Neuralink and the Brain’s Magical Future
Mostly a listener feedback episode. Lots of great stuff here!
We need better language to talk about these difficult technical topics. See https://concerning.ai/2017/03/31/0039-we-need-more-sparrow-fables/ for notes.
See https://concerning.ai/2017/03/17/0038-we-dont-want-to-die/
Is there an existential risk from Human-level (and beyond) Artificial Intelligence? If so, what can we do about it?
Main topic of this show: Unexpected Consequences of Self Driving Cars by Rodney Brooks