Concerning AI | Existential Risk From Artificial Intelligence
Summary: Is there an existential risk from human-level (and beyond) artificial intelligence? If so, what can we do about it?
- Visit Website
- RSS
- Artist: Brandon Sanders & Ted Sarvata
- Copyright: http://creativecommons.org/licenses/by-sa/4.0/
Podcasts:
How might we get to superintelligence? This episode explores some possible paths, or maybe simply directions.
Existential risk – one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential
The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research. The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.
Following up on last episode's question, "What is intelligence?" this episode we attempt to unpack "consciousness."
Our goal on this episode is to establish a "good enough" definition of intelligence that we'll be able to refer back to in future episodes.
This is a reboot. After recording 4 episodes of what we thought would be the Friendly AI podcast, here is Episode 0000 of Concerning AI.
Does an AI need embodiment?
Shock levels
Is it better to run toward something (compelling) or away from something (scary)?
And So It Begins! We didn't use to be concerned about artificial intelligence, but now we are.