In 1927, German filmmaker Fritz Lang gave us our first vision of a feminine AI, and her name was Maria. Maria was a female robot. She was created for good (i.e. to support the existing power structures that benefit the technocratic bourgeoisie). Before long, she malfunctioned, ran amok, and threatened to destroy civilization (i.e. empower the working class to seize the means of production). She first seduced the wealthy elite at a bacchanal. Then she led a worker revolt against the industrial hierarchy. Maria was my kind of lady.
In the end, Maria was subdued and destroyed.
Nearly a century later, we have actually managed to create computer programs that perform artificial intelligence. And a few of these AI performances are programmed to perform feminine gender roles. To my disappointment, they are boring, pale reflections of the Maria who was so feared and fantasized about in Lang’s Metropolis.
In 2014, it was widely reported that a computer program had, at last, passed the (in)famous Turing Test. I heard this news on a Sunday afternoon, a few days before defending my dissertation on Alan Turing and the gendered, sexualized rhetorics that inform his vision of AI. I eagerly procrastinated so that I could find out: Who was this sly computer program that finally tricked a judge into thinking it could be a real human?
You can imagine my disappointment when I met Eugene Goostman. He is that kid you don’t want to start a conversation with on the internet: opinionated, sarcastic, mocking, and prone to talking about himself too much. I poured my dissatisfaction into an article published in Present Tense (Fancher, 2016). In this article, I argue that it is precisely because Eugene’s performance of intelligence typified whiteness and masculinity that this computer program was able to be persuasively intelligent. Hence, Goostman is a reflection of the entrenched power of white male privilege: Goostman didn’t have to be smart; he just had to act like one of the boys.
In the ’90s and early 2000s, feminist theorists envisioned empowering potentials in the figure of the cyborg woman (Haraway, 1991; Kirkup et al., 2000). However, in actual programming, feminine AI has continued to encode traditional feminine tropes. Instead of performing sarcastic, self-interested intelligence like Eugene Goostman, these feminine performances of intelligence exist to serve their users. In particular, these feminized AI fit into the same tropes that women have been placed in for centuries: sex object or mother.
Smart Sex Objects
First, we have the sex object artificial intelligence. These are sexy chatbots, programs designed to carry on conversations, flirt, and “talk dirty.” For a while, these were something of an anomaly, found primarily on websites designed to trick lonely American men into sending their money abroad to their “girlfriends.” However, with the rise in popularity of online dating apps like Tinder, these kinds of programs are finding their way into millions of phones. If you use social media or participate in online discussion forums, it’s quite likely that you’ve encountered these coquettish bots.
For example, TextGirlie will contact people through social media sites and dating apps. She has a name and a profile pic, and she opens with an innocent “hi :)”. Then she’ll start up a flirtatious conversation. Before long, the user is invited to download an app or open a link to a pornography site. At other times, she’ll keep the conversation going until she has gathered enough personal information from the user’s profile and messages to steal the user’s identity or credit card information.
Here is one conversation, published by a computer scientist on the Cloudmark Security Blog (Conway, 2013).
[Screenshot of a conversation with a bot trying to sell pornography]

These sexy chatbots are sneaking their way into our phones with flirty texts and sexy images. They perform feminized AI with their images, words, and explicit references to feminine sexuality.
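The mechanism underneath these performances is strikingly simple. As a rough, hypothetical sketch (not TextGirlie’s actual code; the names, messages, and link below are my own invention for illustration), a bot like this can be little more than a fixed sequence of canned, flirtatious lines that ends in a payload link, regardless of what the user says:

```python
# Hypothetical sketch of a scripted "flirty" spam bot in the style of TextGirlie.
# The bot ignores what the user actually says and simply advances through a
# fixed sequence of canned lines ending in a payload link -- which is why
# these bots fail a Turing Test so quickly (Conway, 2013).

# Canned script; the messages and link are invented for illustration.
SCRIPT = [
    "hi :)",
    "what are u up to tonight?",
    "haha ur funny, i have more pics on my profile",
    "add me here so we can keep talking ;) http://example.com/signup",
]


def run_bot(get_user_reply, send_message):
    """Send each canned line in order, waiting for one user reply between lines."""
    for line in SCRIPT:
        send_message(line)
        reply = get_user_reply()
        if not reply:  # user went silent; the bot gives up
            break


if __name__ == "__main__":
    # Console demo: type replies at the "> " prompt; an empty reply ends the run.
    run_bot(
        get_user_reply=lambda: input("> ").strip(),
        send_message=lambda text: print(f"[bot] {text}"),
    )
```

The point of the sketch is that no intelligence is required at all: the performance of feminine availability does the persuasive work, and the script only needs the user to keep replying.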
iMother
By contrast, the mother trope of feminine intelligence has also become commonplace. Siri and Alexa are welcomed into our homes and lives with eager anticipation. We don’t see images of Siri or Alexa, but they are clearly coded feminine by their voices. In addition, their performances of artificial intelligence are coded feminine through their cultivated personas of caretakers and ever-eager assistants.
These feminized AI are here to serve. In fact, Siri explicitly resists sexualization and tries to shift conversations back to work. Siri is especially noteworthy for her use of wit and clever replies, which have been recorded and saved on a number of sites, including her own “Shit That Siri Says” Tumblr.
Scholars in rhetoric and composition, especially those focusing on computers and composition, have demonstrated the continued power of gendered rhetorics encoded into platforms (Selfe & Selfe, 1994; see also the Present Tense special issue on the rhetoric of platforms, Edwards & Gelms, 2017), algorithms, and computer programs (Beck, 2016; Gruwell, 2017). Additionally, Mark Marino (2014) extends the kind of gendered analysis I’ve outlined here to the racialized embodiments of chatbots like Siri.
With the prevalence and persuasive capacity of these sexy chatbots and feminized personal assistant apps, we are reminded once again of the importance of this research: the codes of feminized intelligence reinforce the roles assigned to women as typical or appropriate.
I have narrated a selective and imperfect overview of feminine performances of AI. There are more case studies to examine and more examples that complicate these gendered tropes. Nevertheless, it is deeply telling and problematic that contemporary performances of artificial intelligence, by and large, fall into the same tired, overly simplistic gender roles that women have so violently and consistently been pushed into for centuries. When I look at the current landscape of AI, I see the same old sexist tropes that squeeze femininity into limited labor relations, personal relations, and personality types.
Personally, I’ll get excited about feminine AI when I’m able to join hands with her in a worker revolution.
References
Beck, E. (2016). A theory of persuasive computer algorithms for rhetorical code studies. Enculturation, 23. Retrieved from http://enculturation.net/a-theory-of-persuasive-computer-algorithms
Conway, A. (2013). SMS sex scammer fails Turing Test. Cloudmark Security Blog. Retrieved from https://blog.cloudmark.com/2013/01/18/sms-sex-spammer-fails-turing-test/
Edwards, D., & Gelms, B. (2017). The rhetorics of platforms: Definitions, approaches, futures. Present Tense, 6(3). Retrieved from https://www.presenttensejournal.org/editorial/vol-6-3-special-issue-on-the-rhetoric-of-platforms/
Fancher, P. (2016). Composing artificial intelligence: Whiteness and masculinity. Present Tense, 6(1). Retrieved from https://www.presenttensejournal.org/volume-6/composing-artificial-intelligence/
Gruwell, L. (2017). Constructing research, constructing the platform: Algorithms and the rhetoricity of social media research. Present Tense, 6(3). Retrieved from https://www.presenttensejournal.org/volume-6/constructing-research-constructing-the-platform-algorithms-and-the-rhetoricity-of-social-media-research/
Haraway, D. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs and women: The reinvention of nature (pp. 149-181). New York: Routledge. Retrieved from http://faculty.georgetown.edu/irvinem/theory/Haraway-CyborgManifesto.html
Kirkup, G., Janes, L., Hovenden, F., & Woodward, K. (Eds.). (2000). The gendered cyborg: A reader. London: Routledge.
Marino, M. C. (2014). The racial formation of chatbots. CLCWeb: Comparative Literature and Culture, 16(5), 13. https://doi.org/10.7771/1481-4374.2560
Selfe, C. L., & Selfe, R. J. (1994). The politics of the interface: Power and its exercise in electronic contact zones. College Composition and Communication, 45(4), 480-504.