Rude Bot Rises

Flash Forward show

Summary: Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions, because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990s.

In 1990, a guy named Pavel Curtis founded something called LambdaMOO. Curtis was working at Xerox PARC, which we actually talked about last week in our episode about paper. Now, LambdaMOO is an online community; it’s also called a MUD, which stands for multi-user dungeon. It’s basically a text-based multiplayer role-playing game. The interface is entirely text, and when you log in to LambdaMOO you use commands to move around and talk to the other players. The whole thing is set in a mansion, full of various rooms where you can encounter other players. People hang out in the living room, where they often hear a pet cockatoo programmed to repeat phrases. They can walk into the front yard, go into the kitchen, the garage, the library, and even a museum of generic objects. But the main point of LambdaMOO, the way most people used it, was to chat with other players. You can actually still access LambdaMOO today, if you want to poke around.

In the 1990s, LambdaMOO gained a pretty sizeable fan base. At one point there were nearly 10,000 users, and at any given time there were usually about 300 people connected to the system and walking around. In 1993 the admins even started a ballot system, where users could propose and vote on new policies. There are a ton of really interesting things to say about LambdaMOO, and if any of this sounds interesting to you, I highly recommend checking out the articles and books that have been written about it. But for now, let’s get back to Charles and his chatbot.

Alongside all the players in LambdaMOO, Charles and his team created a chatbot called cobot. It was really simple, and it was really dumb. But the users wanted it to be smart; they wanted to talk to it. So Charles and his team had to come up with a quick and easy way to make cobot appear smarter than it actually was. They showed the bot a bunch of texts (they started, weirdly, with the Unabomber manifesto) and trained it to simply pick a few words out of what you said to it, search for those words in the things it had read, and spit the matching sentences back at you (there’s a rough sketch of that trick below).

The resulting conversations between users and cobot are… very weird. You can read a few of them in this paper.

And I wanted to start this episode about conscious AI with this story for a particular reason. And that’s because cobot is not a conscious AI; it’s a very, very dumb robot. But what Charles and his team noticed was that even though cobot wasn’t even close to a convincing conscious AI, people wanted to interact with it as if it was.
They spent hours and hours debating and talking to cobot. And they would even change their own behavior to help the bot play along.

We do this kind of thing all the time. When we talk to a five-year-old, we change the way we speak to help them participate in the conversation. We construct these complex internal lives for our pets that they almost certainly don’t have.
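If you’re curious what cobot’s trick might look like in practice, here’s a very rough sketch in Python. To be clear, this is not cobot’s actual code (that ran inside LambdaMOO in the 1990s); the tiny stand-in corpus, the function names, and everything else here are just made up for illustration. The idea is the same, though: grab the words in what someone says, find the sentence in the training text that overlaps the most, and spit it back.

```python
# Toy illustration of a retrieval-style chatbot reply.
# NOT cobot's real implementation; just a minimal sketch of the idea.

import random
import re

# Stand-in corpus. Cobot reportedly started with texts like the
# Unabomber manifesto; these sentences are just placeholders.
CORPUS = (
    "Technology shapes society more than people usually admit. "
    "People rarely choose the form of their society on purpose. "
    "A system built for machines will not satisfy human needs."
)

def split_sentences(text):
    """Very rough sentence splitter."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokenize(text):
    """Lowercase word list for crude overlap matching."""
    return re.findall(r"[a-z']+", text.lower())

SENTENCES = split_sentences(CORPUS)

def reply(user_message):
    """Return the corpus sentence sharing the most words with the input."""
    user_words = set(tokenize(user_message))
    best = max(SENTENCES, key=lambda s: len(user_words & set(tokenize(s))))
    # If nothing overlaps at all, just pick something at random.
    if not user_words & set(tokenize(best)):
        best = random.choice(SENTENCES)
    return best

if __name__ == "__main__":
    print(reply("Do you think people choose their own society?"))
```

Even something this crude produces replies that feel vaguely on-topic, which is part of why people were so willing to treat cobot as a conversation partner.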