25. Rewriting Asimov's 3 Laws of Robotics | Prof Joanna Bryson




FringeFM - Exploring the Edges of Human Understanding | Future Tech | Longevity | Singularity | AI | Cryptocurrencies & Blockchain | Space Technology | Venture Capital | Startups

Summary: Joanna Bryson ([@j2bryson](https://twitter.com/j2bryson?lang=en)) is an Associate Professor in the Department of Computing at the [University of Bath](http://www.cs.bath.ac.uk/~jjb/web/jb.html). She works on artificial intelligence, ethics, and collaborative cognition.

In 2010 Bryson published her most controversial work, "Robots Should Be Slaves", and helped the EPSRC define its Principles of Robotics. She has also consulted the Red Cross on autonomous weapons and is a member of an All-Party Parliamentary Group on Artificial Intelligence.

Joanna is focused on "Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems". In 2017 she won an Outstanding Achievement award from Cognition X. She regularly appears in national media, talking about human-robot relationships and the ethics of AI.

EPSRC's Principles of Robotics:

* Robots should not be designed as weapons, except for national security reasons
* Robots should be designed and operated to comply with existing law, including privacy
* Robots are products: as with other products, they should be designed to be safe and secure
* Robots are manufactured artifacts: the illusion of emotions and intent should not be used to exploit vulnerable users
* It should be possible to find out who is responsible for any robot

[You can listen right here on iTunes](https://fringe.fm/itunestextlink)

In our wide-ranging conversation, we cover many things, including:

* Why robots and AI should not resemble people
* How Joanna helped the British replace Asimov's laws of robotics
* How people confuse consciousness and intelligence, and the problems this creates
* Why Joanna is skeptical we'll achieve AI superintelligence
* How conflicting interests create filter bubbles, disinformation, and an overly aggressive Facebook
* Why robots cannot be held liable or punished for their actions
* How people should think about the ethics of robot design
* The ethical dilemmas raised by AI and robots in society
* How psychology, neuroscience, ethics, and AI are merging
* The problems with controlling and governing AI usage
* How bad incentives create bad artificial intelligences

—

Make a Tax-Deductible Donation to Support FringeFM

FringeFM is supported by the generosity of its readers and listeners. If you find our work valuable, please consider supporting us on [Patreon](http://fringe.fm/donate), via [PayPal](https://fringe.fm/paypal), or with [DonorBox](https://fringe.fm/donorbox) powered by Stripe.