As kids, my siblings and I would often play Super Mario Bros. on the NES, blasting through the first Mushroom Kingdom level to get to the “basement” and, eventually, to “water world” and to Bowser’s lair beyond. We’d also spend hours in my grandparents’ basement, where they had original Ms. Pac-Man and Crazy Kong arcade games.
Those memories stand out as my introduction to digital media and rhetoric. Though we don’t often get the chance to sit down around the same console anymore, my siblings and I still game together when we can; we’ve just graduated to World of Warcraft.
Professionally, this interest in digital media has translated into a curiosity about how humans and machines work with and against each other to create action. Asking these kinds of questions has taken me to many places, including working with small nonprofits to examine their use of social media. Now, I wonder things like: What does success on social media mean? And, perhaps more importantly, how do we sometimes define online success in ways that clash with or even contradict the mission and purposes of our community partners?
More recently, I’ve followed how the conversations in our field have shifted from social media, to platforms, to the algorithms that govern those platforms. And the questions both academics and the public are asking of algorithms are big.
I think of Safiya Noble’s recent Algorithms of Oppression, where she talks about the ways that algorithms have been built, and have sometimes learned, to behave in discriminatory ways. I think of Tarleton Gillespie, who tells us to beware of algorithms that claim to know “the mind of the public” in objective, factual ways when, in reality, many algorithms are modeled using datasets that are proxies for what is actually being measured.
In a day and age when edge providers like Facebook and Twitter are called upon to do something about the hate speech and fake news spreading like wildfire on their platforms, these companies are looking for scalable algorithmic solutions that will let them minimize their human costs and maximize capital. But, currently, Facebook’s algorithms can identify hate speech only a measly 38% of the time, leaving the rest of the work up to the 7,500 content moderators the company now employs (Koebler & Cox, 2018).
These issues leave a lot of unanswered questions—questions that I hope to explore in part during my time as a DRC fellow. If you’d like to get in touch or collaborate, you can reach me at glotfeam@miamioh.edu or at @amglotfelter on Twitter.