action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /home/drcprod/public_html/wp-includes/functions.php on line 6114In the inaugural issue of AI & Society<\/i>, published in 1987, Ajit Narayanan identified<\/a> two directions that propelled the discipline of artificial intelligence. The first was \u201cImplement and be damned\u201d whereby programs are produced to replicate tasks performed by humans with relevant expertise (p. 60). Motivated by efficiency, these programs might only tangentially be identified as AI, Narayanan noted, because, rather than adhering to certain computing principles, they might simply be written in a particular programming language associated with AI. (See, for example, Lisp<\/a>.) The second direction was \u201cWe\u2019re working on it,\u201d which he associated with \u201cgrandiose claims\u201d about the future of AI systems that \u201c\u2018could control a nuclear power station\u2019\u201d or \u201c\u2018shield us from incoming missiles\u2019.\u201d But both directions in AI shared the same dangers, according to Narayanan: an economic imperative that would further displace the care of humans for that of profit and a misplaced belief in the power of computation to solve problems more accurately than humans, perhaps even perfectly. To combat these dangers, he pointed to the importance of accountability to the general public; for, \u201cas long as AI is removed from the domain of ordinary people, AI will remain unaccountable for whatever products it produces\u201d (p. 61).<\/p>\n In the three decades since Narayanan made his argument, much has changed, with ordinary people being dialed into the everyday relevance of AI, as well as its potential for transformative societal effects<\/a>. In addition to the near constant heralding of the practical benefits of AI on college campuses<\/a>, to the aging<\/a>, in music streaming<\/a>, and with transportation<\/a>, AI has also been celebrated for its potential in creative endeavors in IBM\u2019s Watson advertisements that have featured Bob Dylan and Stephen King<\/a>. (Much-needed parody of Dylan\u2019s ad is available here.) And although such celebration may be premature, the success of Google\u2019s AlphaGo<\/a> points to the very real possibility of strategic, quotidian invention<\/a> on the part of AI.<\/p>\n Yet, the \u201cImplement and be damned\u201d and \u201cWe\u2019re working on it\u201d directions in AI research have certainly not disappeared. Although deep learning that uses neural networks to sort and learn from large data sets was recently reported to be able to predict the death<\/a> of hospital patients within the next 3-12 months for better end-of-life care, injury caused by AI\u2014perhaps most infamously that of pedestrian struck and killed<\/a> by an Uber self-driving car on March 18, 2017\u2014brutally emphasizes that much of AI is still in its nascency. And the collaboration between IBM\u2019s Thomas J. Watson Research Center<\/span><\/a> and Soul Machines<\/a>, as just one example, highlights the ways in which the development of things like human-like avatars (e.g., Rachel<\/a> and Cora) are often motivated by \u201cbettering customer interaction,\u201d which, though seemingly innocuous, nevertheless replicates those dangers of which Narayanan warned. 
In fact, at a recent event with representatives from a mega tech company, I was struck by the sense of urgency with which people spoke about the need to meet expediently and efficiently the demands of their corporate customers by "cutting down on the subjectivity" that was deemed still too present in decision making today. Liberating data through AI and other computational methods was constructed as what often seemed to be the singular pathway by which to serve what one speaker identified as "the greater good." (And yes, I couldn't help but think of this.)

## AI & Accountability

With the greater presence of AI in everyday life, the possibility for accountability in AI development and implementation has also increased, with a number of institutes having formed to ensure that AI serves the betterment of humanity.

Although the existence of such institutes is undoubtedly important, to what degree they represent approaches to AI research, generally, is questionable. As Karamjit Gill recently observed:

> It is perhaps not surprising that such a questioning of the super-intelligent machine attracts rather a tired response from even well-meaning researchers, which goes along the lines: Give me a break, don't bother me with social responsibility sermons! I am only a researcher interested in the creative and disruptive innovation. I build tools and systems, it is up to society whether to use them for good or evil. Technology is neutral, it is humans who are not. (p. 137)

Of course, this instrumental approach to technology is hardly new, with Norbert Wiener having warned in the aftermath of World War II that he held out "very slight hope" that the future could be other than it had been in a world where cybernetics had fostered the worst of humankind (pp. 28-29). Yet, the potential of AI, in the word of IBM's Watson, to "outthink" solutions to problems that have come before and lie ahead also presents never-before-realized possibilities by which to escape those limitations that left Wiener so hopeless.

## A Rhetorical Education

Of particular interest, at least for my purposes here, is the rhetorical education of AI. As part of a tradition that existed well before computation was ever imagined, let alone executed, the computational spirit of AI is by no means antithetical to rhetoric. After all, others have pointed out how rhetoric itself is procedural in ways not dissimilar to the algorithm (Beck, 2016; Brock, 2014; Brown, 2014; Vee, 2017). But proficiency in natural language continues to be at least one of AI's Achilles heels, in spite of ongoing research in, for example, the development of "rhetorical robots," with Will Knight summarizing the problem in this way: "Yet despite . . . impressive advances, one fundamental capability remains elusive: language. Systems like Siri and IBM's Watson can follow simple spoken or typed commands and answer basic questions, but they can't hold a conversation and have no real understanding of the words they use. If AI is to be truly transformative, this must change."
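That procedural kinship is easy to make concrete. Below is a toy sketch, my own illustration rather than anything proposed by the authors cited above, that treats the figure Brock (2014) names the "rhetorical algorithm," the enthymeme, an argument whose missing premise the audience supplies, as a small procedure:

```python
# A toy sketch (my own illustration, not an implementation from any cited
# work): an enthymeme rendered as a procedure. The major premise is left
# unstated by the "speaker" and must be supplied by the audience, which is
# what makes the argument rhetorical rather than purely formal.

def enthymeme(stated_premise, claim, audience_beliefs):
    """Complete the argument only if the audience grants the missing premise."""
    missing_premise = (stated_premise, claim)  # "stated_premise implies claim"
    if missing_premise in audience_beliefs:
        return f"{stated_premise}; therefore, {claim}."
    return f"{stated_premise}; the audience withholds assent to {claim}."

# This audience already believes that being human implies being mortal.
beliefs = {("Socrates is human", "Socrates is mortal")}
print(enthymeme("Socrates is human", "Socrates is mortal", beliefs))
# -> Socrates is human; therefore, Socrates is mortal.
```

Crude as it is, the sketch shows that persuasion has a computable shape; the hard part, knowing what a given audience actually believes and values, is precisely where AI still falls short.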
However, natural language, like any symbol system, is rhetorical and therefore an ethical (or, in some cases, an unethical) endeavor as well (see Aristotle's *Rhetoric*). For instance, in Steven Katz's analysis of a seemingly innocuous memo concerning the most efficient and expedient means by which to modify vans for optimal handling of their loads, language that "is *too* technical, *too* logical" obfuscates the horror that these vans were being used in Germany in 1942 to gas to death Jews, as well as other enemies of the Nazis (p. 257). It is therefore not merely syntactical validity or appropriate use of linguistic cues that AI must acquire but also the ethical disposition that would allow for a modern adaptation of Quintilian's "good man speaking well"; that is, "good AI computing well."

However, "good AI computing well" is hardly assured given results that have thus far included AI-based systems that learn and propagate sexism and spew racism and advocate genocide. But perhaps more insidious are those systems that are black boxed, meaning the methods by which results are generated are rendered opaque. Take, for example, Julia Dressel and Hany Farid's study of algorithmic systems used throughout judicial proceedings to predict recidivism. Although touted as ensuring increased accuracy and fairness, their analysis of a commonly used commercial system called COMPAS evidenced that its machine learning analytics were no more accurate in predicting recidivism than humans with no criminal justice expertise. Because of COMPAS' proprietary black boxing, how exactly the system comes to its results is unclear; yet, the predictions generated are often treated as if they are somehow more reliable and fairer than what would otherwise be possible. However, unlike in the case of COMPAS or even with more traditional software applications where a conscious decision is made to close or open up human-readable code, deep learning is almost inherently black boxed because of its complexity.
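Dressel and Farid's point can be restaged in miniature: they found that a linear classifier given only a defendant's age and number of prior convictions predicted recidivism about as accurately as COMPAS's far more elaborate, proprietary scoring. A minimal sketch of such a transparent baseline, with a hypothetical file name and column names standing in for their actual data, might look like this:

```python
# A sketch in the spirit of Dressel and Farid (2018): a fully transparent,
# two-feature classifier evaluated on the same outcome a proprietary risk
# score is judged against. The file name and column names are hypothetical
# placeholders, not the authors' actual dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("recidivism.csv")            # hypothetical dataset
X = df[["age", "prior_convictions"]]          # two human-readable features
y = df["reoffended_within_two_years"]         # observed outcome (0 or 1)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("two-feature model accuracy:",
      accuracy_score(y_test, clf.predict(X_test)))

# Unlike a black-boxed system, the model's entire "reasoning" is inspectable:
print("coefficients:", dict(zip(X.columns, clf.coef_[0])))
```

The point is not that such a model is fair, only that its simplicity makes its logic, and therefore its rhetoric, available for public scrutiny in a way that COMPAS's proprietary predictions and deep learning's tangled weights are not.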
## The Future (and Now)

To resist reifying the mythology of technological progress necessitates understanding AI as an endeavor that will not somehow move the world beyond rhetoric but rather extend it, an argument that Helen Burgess, Tim Menzies, and I have similarly made in the context of big data. As the promises, as well as the warnings, generated in discussions of AI illustrate, the age of the intelligent machine is rhetorically muddy to say the least. For example, while Elon Musk warns in the documentary *Do You Trust This Computer?* of "killer robots" and a superintelligence that might become "an immortal dictator from which we could never escape," Grady Booch argues:

> . . . Our fears, our fears of being subjugated by some unfeeling artificial intelligence who is indifferent to our humanity: I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where driven by a refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining.

What we see in such disparate prognostications is an AI future steeped in rhetoric at both the level of computational method and human experience. Because AI will ultimately have to make decisions, explain its decision-making processes beyond that of input/output, and be held accountable for those decisions, it must compute through a rhetorical and ethical understanding of the world, one that moves beyond the dangers of "Implement and be damned" and "We're working on it." Techno-sociologist Zeynep Tufekci further explains, "We are asking questions to computation that have no single right answers, that are subjective and open ended and value laden . . . Bringing math and computation to messy value-laden human affairs does not bring objectivity." As rhetoricians, we can see that the nature of the computational world is and will continue to be rhetorical. The key will be ensuring that those who develop AI, those affected by AI, and AI itself see this too.
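What "explaining beyond input/output" might look like remains an open research question (see Gunning, n.d., and Park et al., 2018, below), but the simplest case can be sketched: a transparent scorer that reports not just a decision but each factor's contribution to it. Everything in this sketch, the features, weights, and threshold, is hypothetical:

```python
# A minimal, hypothetical sketch of one form of "explanation beyond
# input/output": a linear scoring model whose per-feature contributions
# can be reported alongside its decision. Deep networks generally do not
# decompose this cleanly, which is one reason they are called black boxes.
import numpy as np

feature_names = ["age", "prior_convictions"]   # hypothetical features
weights = np.array([-0.03, 0.25])              # hypothetical learned weights
bias = -0.5

def decide_and_explain(x):
    contributions = weights * x                # each feature's share of the score
    score = contributions.sum() + bias
    decision = "high risk" if score > 0 else "low risk"
    print(f"decision: {decision} (score={score:.2f})")
    for name, c in zip(feature_names, contributions):
        print(f"  {name}: {c:+.2f}")

decide_and_explain(np.array([23.0, 5.0]))
```

Producing such an account is, of course, only the barest beginning of the rhetorical and ethical computing argued for here; an explanation, too, is an act of persuasion addressed to an audience.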
## References

Andrist, Sean, Spannen, Erin, & Mutlu, Bilge. (2013). Rhetorical robots: Making robots more effective speakers using linguistic cues of expertise. *HRI '13: Proceedings of the 8th ACM/IEEE international conference on human-robot interaction*. Retrieved from http://pages.cs.wisc.edu/~bilge/pubs/2013/HRI13-Andrist.pdf

Aristotle. (1984). Rhetoric. In Jonathan Barnes (Ed.), *The complete works of Aristotle, Volume 2* (pp. 2152-2269). Princeton, NJ: Princeton University Press.

Beck, Estee. (2016). A theory of persuasive computer algorithms for rhetorical code studies. *Enculturation*. Retrieved from http://enculturation.net/a-theory-of-persuasive-computer-algorithms

Booch, Grady. (2016). Don't fear superintelligent AI. *TED@IBM*. Retrieved from https://www.ted.com/talks/grady_booch_don_t_fear_superintelligence#t-177933

Brock, Kevin. (2014). Enthymeme as rhetorical algorithm. *Present Tense: A Journal of Rhetoric in Society, 4*(1). Retrieved from https://www.presenttensejournal.org/volume-4/enthymeme-as-rhetorical-algorithm/

Brown, James J., Jr. (2014). The machine that therefore I am. *Philosophy and Rhetoric, 47*(4), 494-514.

Conn, Ariel. (2017, November 30). When should machines make decisions? *Future of Life Institute*. Retrieved from https://futureoflife.org/2017/11/30/human-control-principle/

Dressel, Julia, & Farid, Hany. (2018, January 17). The accuracy, fairness, and limits of predicting recidivism. *Science Advances, 4*(1). Retrieved from http://advances.sciencemag.org/content/4/1/eaao5580

Emerging Technology from the arXiv. (2017, November 15). AI can be made legally accountable for its decisions. *MIT Technology Review*. Retrieved from https://www.technologyreview.com/s/609495/ai-can-be-made-legally-accountable-for-its-decisions

Gardner, Lee. (2018, April 8). How A.I. is infiltrating every corner of the campus. *Chronicle of Higher Education*. Retrieved from https://www.chronicle.com/article/How-AI-Is-Infiltrating-Every/243022

Gill, Karamjit S. (2016). Artificial super intelligence: Beyond rhetoric. *AI & Society, 31*, 137-143.

Gunning, David. (n.d.). Explainable artificial intelligence (XAI). *DARPA*. Retrieved from https://www.darpa.mil/program/explainable-artificial-intelligence

Hof, Robert D. (2013). Deep learning. *MIT Technology Review*. Retrieved from https://www.technologyreview.com/s/513696/deep-learning/

Hsu, Jeremy. (2018, January 16). Stanford's AI predicts death for better end-of-life care. *IEEE Spectrum*. Retrieved from https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/stanfords-ai-predicts-death-for-better-end-of-life-care

Hunt, Elle. (2016, March 24). Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter. *The Guardian*. Retrieved from https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter

Katz, Steven. (1992). The ethic of expediency: Classical rhetoric, technology, and the Holocaust. *College English, 54*(3), 255-275.

Kile, Frederick. (2013). Artificial intelligence and society: A furtive transformation. *AI & Society, 28*(1), 107-115.

Knight, Will. (2016, August 9). AI's language problem. *MIT Technology Review*. Retrieved from https://www.technologyreview.com/s/602094/ais-language-problem/

Knight, Will. (2017, April 11). The dark secret at the heart of AI. *MIT Technology Review*. Retrieved from https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

Maher, Jennifer. (2015). *Software evangelism and the rhetoric of morality: Coding justice in a digital democracy*. New York, NY: Routledge.

Maher, Jennifer. (2016). Artificial rhetorical agents and the computing of phronesis. *Computational Culture*. Retrieved from http://computationalculture.net/artificial-rhetorical-agents-and-the-computing-of-phronesis/

Maher, Jennifer, Burgess, Helen, & Menzies, Tim. (In press). Good computing with big data. In John Jones & Lavinia Hirsu (Eds.), *Rhetorical machines: From rhetorical code to computational ethics*. Tuscaloosa, AL: University of Alabama Press.

Metz, Cade. (2016, March 16). In two moves AlphaGo and Lee Sedol redefined the future. *Wired*. Retrieved from https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/

Moyer, Christopher. (2016, March 28). How Google's AlphaGo beat a Go world champion. *The Atlantic*. Retrieved from https://www.theatlantic.com/technology/archive/2016/03/the-invisible-opponent/475611/

Narayanan, Ajit. (1987). AI and accountability. *AI & Society, 1*(1), 60-62.

Park, Dong Huk, et al. (2018). Multimodal explanations: Justifying decisions and pointing to evidence. Retrieved from https://arxiv.org/pdf/1802.08129.pdf

Rieland, Randy. (2017, March 27). How will artificial intelligence help the aging? *Smithsonian Magazine*. Retrieved from https://www.smithsonianmag.com/innovation/how-will-artificial-intelligence-help-aging-180962682/

Russell, Jon. (2018, April 20). Musiio uses AI to help the music industry curate tracks more efficiently. *TechCrunch*. Retrieved from https://techcrunch.com/2018/04/20/musiio/

Simonite, Tom. (2017, August 21). Machines taught by photos learn a sexist view of women. *Wired*. Retrieved from https://www.wired.com/story/machines-taught-by-photos-learn-a-sexist-view-of-women/

Tufekci, Zeynep. (2016). Machine intelligence makes human morals more important. *TEDSummit*. Retrieved from https://www.ted.com/talk/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important
Valencia, Sebastian. (2017, February 28). The lisp approach to AI. *Medium*. Retrieved from https://medium.com/ai-society/the-lisp-approach-to-ai-part-1-a48c7385a913

Vee, Annette. (2017). *Coding literacy: How computer programming is changing writing*. Cambridge, MA: MIT Press.

Wakabayashi, Daisuke. (2018, March 19). Self-driving Uber car kills pedestrian in Arizona, where robots roam. *New York Times*. Retrieved from https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html

Woyke, Elizabeth. (2017, April 13). A self-driving bus that can speak sign language. *MIT Technology Review*. Retrieved from https://www.technologyreview.com/s/604116/a-self-driving-bus-that-can-speak-sign-language/