Unless you’re rooting for social media bots to become Nazis, Microsoft’s Tay was a resounding failure. When she was released “into the wild” on Twitter, she learned quickly from her input data: interactions with users on the platform. As those users inundated Tay with misogyny, xenophobia, and racism, Tay began spouting hateful messages. It’s been a couple of years since Tay’s troubles, and Microsoft has even tried another bot, Zo, which has likewise had a few problems. Bots are still in the news for their problems; in fact, bots and bad behavior are now almost synonymous, especially in light…