Conversational artificial intelligence (AI) systems like ChatGPT frequently generate falsehoods, yet ChatGPT maintains that “I cannot lie” since “I don’t have personal beliefs, intentions, or consciousness.”1 ChatGPT’s assertion echoes the anti-anthropomorphic views of experts like Shanahan (2023), who warns, “It is a serious mistake to unreflectingly apply to AI systems the same intuitions that we deploy in our dealings with each other” (p. 1). Shanahan understandably discourages “imputing to [AI systems] capacities they lack,” given that anthropomorphizing AI risks vulnerable users developing false senses of attachment (Deshpande et al., 2023) and traditional interpersonal relationships being degraded by increased human-computer interaction (Babushkina…