Two Directions in AI
In the inaugural issue of AI & Society, published in 1987, Ajit Narayanan identified two directions that propelled the discipline of artificial intelligence. The first was “Implement and be damned,” whereby programs are produced to replicate tasks performed by humans with relevant expertise (p. 60). Motivated by efficiency, these programs might only tangentially be identified as AI, Narayanan noted, because, rather than adhering to certain computing principles, they might simply be written in a particular programming language associated with AI. (See, for example, Lisp.) The second direction was “We’re working on it,” which he associated with “grandiose claims” about the future of AI systems that “‘could control a nuclear power station’” or “‘shield us from incoming missiles’.” But both directions, according to Narayanan, shared the same dangers: an economic imperative that would further displace care for humans in favor of profit and a misplaced belief in the power of computation to solve problems more accurately than humans, perhaps even perfectly. To combat these dangers, he pointed to the importance of accountability to the general public; for, “as long as AI is removed from the domain of ordinary people, AI will remain unaccountable for whatever products it produces” (p. 61).
AI Now
In the three decades since Narayanan made his argument, much has changed, with ordinary people now dialed into the everyday relevance of AI, as well as its potential for transformative societal effects. In addition to the near-constant heralding of the practical benefits of AI on college campuses, for the aging, in music streaming, and in transportation, AI has also been celebrated for its potential in creative endeavors, as in IBM’s Watson advertisements featuring Bob Dylan and Stephen King. (A much-needed parody of Dylan’s ad is available here.) And although such celebration may be premature, the success of Google’s AlphaGo points to the very real possibility of strategic, quotidian invention on the part of AI.
Yet the “Implement and be damned” and “We’re working on it” directions in AI research have certainly not disappeared. Although deep learning, which uses neural networks to sort and learn from large data sets, was recently reported to be able to predict the death of hospital patients within the next 3 to 12 months for better end-of-life care, injury caused by AI—perhaps most infamously that of the pedestrian struck and killed by an Uber self-driving car on March 18, 2018—brutally emphasizes that much of AI is still in its nascency. And the collaboration between IBM’s Thomas J. Watson Research Center and Soul Machines, as just one example, highlights the ways in which the development of things like human-like avatars (e.g., Rachel and Cora) is often motivated by “bettering customer interaction,” which, though seemingly innocuous, nevertheless replicates those dangers of which Narayanan warned. In fact, at a recent event with representatives from a mega tech company, I was struck by the sense of urgency with which people spoke about the need to meet expediently and efficiently the demands of their corporate customers by “cutting down on the subjectivity” deemed still too present in decision making today. Liberating data through AI and other computational methods was constructed as what often seemed to be a singular pathway by which to serve what one speaker identified as “the greater good.” (And yes, I couldn’t help but think of this.)
AI & Accountability
With the greater presence of AI in everyday life, the possibility for accountability in AI development and implementation has also increased, with a number of institutes having formed to ensure that AI serves the betterment of humanity:
- Machine Intelligence Research Institute (MIRI) — “We do foundational mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.”
- AI Now Institute — “A research institute examining the social implications of artificial intelligence.”
- Future of Life Institute — “Mission: To catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and changes.”
Although the existence of such institutes is undoubtedly important, the degree to which they represent approaches to AI research more generally is questionable. As Karamjit Gill recently observed:
It is perhaps not surprising that such a questioning of the super-intelligent machine attracts rather a tired response from even well-meaning researchers, which goes along the lines: Give me a break, don’t bother me with social responsibility sermons! I am only a researcher interested in the creative and disruptive innovation. I build tools and systems, it is up to society whether to use them for good or evil. Technology is neutral, it is humans who are not. (p. 137)
Of course, this instrumental approach to technology is hardly new, with Norbert Wiener having warned in the aftermath of World War II that he held out “very slight hope” that the future could be other than it had been in a world where cybernetics had fostered the worst of humankind (pp. 28-29). Yet the potential of AI, in the words of IBM’s Watson, to “outthink” solutions to problems that have come before and lie ahead also presents never-before-realized possibilities by which to escape those limitations that left Wiener so hopeless.
A Rhetorical Education
Of particular interest, at least for my purposes here, is the rhetorical education of AI. As part of a tradition that existed well before computation was ever imagined, let alone executed, the computational spirit of AI is by no means antithetical to rhetoric. After all, others have pointed out how rhetoric itself is procedural in ways not dissimilar to the algorithm (Beck, 2016; Brock, 2014; Brown, 2014; Vee, 2017). But proficiency in natural language continues to be at least one of AI’s Achilles’ heels, in spite of ongoing research in, for example, the development of “rhetorical robots,” with Will Knight summarizing the problem in this way: “Yet despite . . . impressive advances, one fundamental capability remains elusive: language. Systems like Siri and IBM’s Watson can follow simple spoken or typed commands and answer basic questions, but they can’t hold a conversation and have no real understanding of the words they use. If AI is to be truly transformative, this must change.”
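That proceduralism can be made concrete. The Python sketch below is not drawn from any of the cited works; it is only a toy illustration, with invented premises, of Brock’s (2014) observation that the enthymeme behaves algorithmically: a conclusion is computed from a stated premise plus an unstated premise the audience, not the speaker, is assumed to supply.

```python
# A toy sketch (not from any cited work) of the enthymeme as procedure:
# the conclusion is computed from a stated premise plus an unstated
# premise that the audience, not the speaker, is assumed to supply.

def enthymeme(stated: str, unstated: str, conclusion: str) -> str:
    """Voice the argument as actually spoken: the unstated premise
    does rhetorical work but never appears in the output."""
    assert unstated, "the suppressed premise must still exist to compute with"
    return f"{stated}; therefore, {conclusion}"

print(enthymeme(
    stated="Socrates is human",          # stated minor premise
    unstated="all humans are mortal",    # suppressed major premise
    conclusion="Socrates is mortal",
))
# -> Socrates is human; therefore, Socrates is mortal
```

The point of the sketch is simply that what rhetoric leaves unsaid is still computed with, which is precisely the kind of implicit knowledge that natural language systems struggle to acquire.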
However, natural language, like any symbol system, is rhetorical and therefore an ethical (or, in some cases, an unethical) endeavor as well (see Aristotle’s Rhetoric). For instance, in Steven Katz’s analysis of a seemingly innocuous memo concerning the most efficient and expedient means by which to modify vans for optimal handling of their loads, language that “is too technical, too logical” obfuscates the horror that these vans were being used in Germany in 1942 to gas to death Jews, as well as other enemies of the Nazis (p. 257). It is therefore not merely syntactical validity or the appropriate use of linguistic cues that AI must acquire but also the ethical disposition that would allow for a modern adaptation of Quintilian’s “good man speaking well”; that is, “good AI computing well.”
However, “good AI computing well” is hardly assured given results that have thus far included AI-based systems that learn and propagate sexism and spew racism and advocate genocide. But perhaps more insidious are those systems that are black boxed, meaning that the methods by which results are generated are rendered opaque. Take, for example, Julia Dressel and Hany Farid’s study of algorithmic systems used throughout judicial proceedings to predict recidivism. Although touted as ensuring increased accuracy and fairness, their analysis of a commonly used commercial system called COMPAS evidenced that its machine learning analytics were no more accurate in predicting recidivism than humans with no criminal justice expertise. Because of COMPAS’s proprietary black boxing, how exactly the system comes to its results is unclear; yet the predictions it generates are often treated as if they were somehow more reliable and fairer than what would otherwise be possible. However, unlike COMPAS, or even more traditional software applications, where a conscious decision is made to close or open up human-readable code, deep learning is almost inherently black boxed because of its complexity.
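To see why, consider a minimal sketch. The tiny network below is hypothetical (the data, shape, and training loop are invented for illustration and describe none of the systems named above), but it shows the core problem: even with every trained parameter printed out, the “reasons” behind any single prediction are smeared across a wall of numbers rather than stated in human-readable rules.

```python
# Hypothetical illustration: even with every parameter of a small
# neural network in hand, the "explanation" for a prediction is just
# numbers. Production systems have millions of such parameters.
import numpy as np

rng = np.random.default_rng(0)

# Invented data: 200 cases, 5 features each, binary outcome.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# One hidden layer of 16 units: 5*16 + 16 + 16*1 + 1 = 113 parameters.
W1, b1 = rng.normal(size=(5, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)) * 0.1, np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                     # hidden activations
    return 1 / (1 + np.exp(-(h @ W2 + b2))), h   # sigmoid output

# A few hundred steps of plain gradient descent on cross-entropy loss.
for _ in range(500):
    p, h = forward(X)
    grad = (p - y[:, None]) / len(X)    # dLoss/dlogit for sigmoid + CE
    gh = (grad @ W2.T) * (1 - h ** 2)   # backprop through tanh
    W2 -= 0.5 * (h.T @ grad)
    b2 -= 0.5 * grad.sum(0)
    W1 -= 0.5 * (X.T @ gh)
    b1 -= 0.5 * gh.sum(0)

p, _ = forward(X)
print("training accuracy:", ((p[:, 0] > 0.5) == y).mean())

# The full "explanation" of any single prediction:
print(W1, b1, W2, b2)  # 113 numbers with no individual human meaning
```

Scale those 113 parameters up to the millions found in deployed systems, and the opacity is no longer a choice anyone made, as it is with proprietary code; it is a property of the method itself.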
The Future (and Now)
Resisting the reification of the mythology of technological progress necessitates understanding AI as an endeavor that will not somehow move the world beyond rhetoric but rather extend it, an argument that Helen Burgess, Tim Menzies, and I have similarly made in the context of big data. As the promises, as well as the warnings, generated in discussions of AI illustrate, the age of the intelligent machine is rhetorically muddy, to say the least. For example, while Elon Musk warns in the documentary Do You Trust This Computer? of “killer robots” and a superintelligence that might become “an immortal dictator from which we could never escape,” Grady Booch argues:
. . . Our fears, our fears of being subjugated by some unfeeling artificial intelligence who is indifferent to our humanity: I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where driven by a refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining.
What we see in such disparate prognostications is an AI future steeped in rhetoric at both the level of computational method and the level of human experience. Because AI will ultimately have to make decisions, explain its decision-making processes beyond mere input/output, and be held accountable for those decisions, it must compute through a rhetorical and ethical understanding of the world, one that moves beyond the dangers of “Implement and be damned” and “We’re working on it.” Techno-sociologist Zeynep Tufekci further explains, “We are asking questions to computation that have no single right answers, that are subjective and open ended and value laden . . . Bringing math and computation to messy value-laden human affairs does not bring objectivity.” As rhetoricians, we can see that the nature of the computational world is and will continue to be rhetorical. The key will be ensuring that those who develop AI, those affected by AI, and AI itself see this too.
References
Andrist, Sean, Spannen, Erin, & Mutlu, Bilge. (2013). Rhetorical robots: Making robots more effective speakers using linguistic cues of expertise. HRI ’13: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction. Retrieved from http://pages.cs.wisc.edu/~bilge/pubs/2013/HRI13-Andrist.pdf
Aristotle. (1984). Rhetoric. In Jonathan Barnes (Ed.), The complete works of Aristotle, Volume 2 (pp. 2152-2269). Princeton, NJ: Princeton University Press.
Beck, Estee. (2016). A theory of persuasive computer algorithms for rhetorical code studies. Enculturation. Retrieved from http://enculturation.net/a-theory-of-persuasive-computer-algorithms
Booch, Grady. (2016). Don’t fear superintelligent AI. TED@IBM. Retrieved from https://www.ted.com/talks/grady_booch_don_t_fear_superintelligence#t-177933
Brock, Kevin. (2014). Enthymeme as rhetorical algorithm. Present Tense: A Journal of Rhetoric in Society, 4(1). Retrieved from https://www.presenttensejournal.org/volume-4/enthymeme-as-rhetorical-algorithm/
Brown, James J., Jr. (2014). The machine that therefore I am. Philosophy and Rhetoric, 47(4), 494-514.
Conn, Ariel. (2017, November 30). When should machines make decisions? Future of Life Institute. Retrieved from https://futureoflife.org/2017/11/30/human-control-principle/
Dressel, Julia, & Farid, Hany. (2018, January 17). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1). Retrieved from http://advances.sciencemag.org/content/4/1/eaao5580
Emerging Technology from the arXiv. (2017, November 15). AI can be made legally accountable for its decisions. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/609495/ai-can-be-made-legally-accountable-for-its-decisions
Gardner, Lee. (2018, April 8). How A.I. is infiltrating every corner of the campus. Chronicle of Higher Education. Retrieved from https://www.chronicle.com/article/How-AI-Is-Infiltrating-Every/243022
Gill, Karamjit S. (2016). Artificial super intelligence: Beyond rhetoric. AI & Society, 31, 137-143.
Gunning, David. (n.d.). Explainable artificial intelligence (XAI). DARPA. Retrieved from https://www.darpa.mil/program/explainable-artificial-intelligence
Hof, Robert D. (2013). Deep learning. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/513696/deep-learning/
Hsu, Jeremy. (2018, January 16). Stanford’s AI predicts death for better end-of-life care. IEEE Spectrum. Retrieved from https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/stanfords-ai-predicts-death-for-better-end-of-life-care
Hunt, Elle. (2016, March 24). Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter
Katz, Steven. (1992). The ethic of expediency: Classical rhetoric, technology, and the Holocaust. College English, 54(3), 255-275.
Kile, Frederick. (2013). Artificial intelligence and society: A furtive transformation. AI & Society, 28(1), 107-115.
Knight, Will. (2016, August 9). AI’s language problem. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/602094/ais-language-problem/
Knight, Will. (2017, April 11). The dark secret at the heart of AI. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
Maher, Jennifer. (2015). Software evangelism and the rhetoric of morality: Coding justice in a digital democracy. New York, NY: Routledge.
Maher, Jennifer. (2016). Artificial rhetorical agents and the computing of phronesis. Computational Culture. Retrieved from http://computationalculture.net/artificial-rhetorical-agents-and-the-computing-of-phronesis/
Maher, Jennifer, Burgess, Helen, & Menzies, Tim. (In press). Good computing with big data. In John Jones & Lavinia Hirsu (Eds.), Rhetorical machines: From rhetorical code to computational ethics. Tuscaloosa, AL: University of Alabama Press.
Metz, Cade. (2016, March 16). In two moves AlphaGo and Lee Sedol redefined the future. Wired. Retrieved from https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/
Moyer, Christopher. (2016, March 28). How Google’s AlphaGo beat a Go world champion. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2016/03/the-invisible-opponent/475611/
Narayanan, Ajit. (1987). AI and accountability. AI & Society, 1(1), 60-62.
Park, Dong Huk et al. (2018). Multimodal explanations: Justifying decisions and pointing to evidence. Retrieved from https://arxiv.org/pdf/1802.08129.pdf
Rieland, Randy. (2017, March 27). How will artificial intelligence help the aging? Smithsonian Magazine. Retrieved from https://www.smithsonianmag.com/innovation/how-will-artificial-intelligence-help-aging-180962682/
Russell, Jon. (2018, April 20). Musiio uses AI to help the music industry curate tracks more efficiently. TechCrunch. Retrieved from https://techcrunch.com/2018/04/20/musiio/
Simonite, Tom. (2017, August 21). Machines taught by photos learn a sexist view of women. Wired. Retrieved from https://www.wired.com/story/machines-taught-by-photos-learn-a-sexist-view-of-women/
Tufekci, Zeynep. (2016). Machine intelligence makes human morals more important. TEDSummit. Retrieved from https://www.ted.com/talk/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important
Valencia, Sebastian. (2017, February 28). The Lisp approach to AI. Medium. Retrieved from https://medium.com/ai-society/the-lisp-approach-to-ai-part-1-a48c7385a913
Vee, Annette. (2017). Coding literacy: How computer programming is changing writing. Cambridge, MA: MIT Press.
Wakabayashi, Daisuke. (2018, March 19). Self-driving Uber car kills pedestrian in Arizona, where robots roam. New York Times. Retrieved from https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html
Wiener, Norbert. (1948). Cybernetics: Or control and communication in the animal and the machine (2nd ed.). Cambridge, MA: MIT Press.
Woyke, Elizabeth. (2017, April 13). A self-driving bus that can speak sign language. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/604116/a-self-driving-bus-that-can-speak-sign-language/