Will AI Help Identify Bias ... or Perpetuate It?, with Diyi Yang

The Interaction Hour show

Summary: Think about the most recent news headline you read. Was it completely objective, devoid of any presupposition of truth or language that might lead readers down one particular path of understanding? Or did it, more likely, contain subtle cues about how the message was framed, casting doubt on its veracity or reliability? Every day, we are inundated with texts that, on the surface, claim to be arbiters of truth but, through simple word choice and message framing, can bias their readers. Luckily, new tools are being developed to help us become more critical consumers of media. In this podcast, we chat with Diyi Yang about how artificial intelligence can help us identify this subjective bias in text – and how AI itself can reflect our own preexisting biases.