Unless you’re rooting for social media bots to become Nazis, Microsoft’s Tay was a resounding failure. When she was released “into the wild” on Twitter, she learned quickly from her input data: interactions with users on the platform. As those users inundated Tay with misogyny, xenophobia, and racism, Tay began to spout hateful messages of her own. It’s been a couple of years since Tay’s troubles, and Microsoft even tried another bot, Zo, which has likewise had a few problems. Bots are still in the news for their problems; in fact, bots and bad behavior are now almost synonymous, especially in light…