Conversational artificial intelligence (AI) systems like ChatGPT frequently generate falsehoods, yet ChatGPT maintains that “I cannot lie” because “I don’t have personal beliefs, intentions, or consciousness.”1 ChatGPT’s assertion echoes the anti-anthropomorphic views of experts like Shanahan (2023), who warns, “It is a serious mistake to unreflectingly apply to AI systems the same intuitions that we deploy in our dealings with each other” (p. 1). Shanahan understandably discourages “imputing to [AI systems] capacities they lack,” since anthropomorphizing AI risks vulnerable users developing false senses of attachment (Deshpande et al., 2023) and traditional interpersonal relationships being degraded by increased human-computer interaction (Babushkina…