Much of our day-to-day life is influenced by the presence of “the hacker” in subtle but powerful ways. The networks of today are what they are because of the hacks of yesterday and the fear of the hacks of tomorrow. Patches are constantly rolled out, user populations constantly trained, new protocols constantly developed. Yet even with all this prevention the hacker still proliferates; the news offers a never-ending parade of big-name companies falling victim. Meanwhile, the hacker lives a second life in the popular imagination. Heroic and/or villainous hackers populate the media in everything from massive film franchises like The Matrix to children’s media like the PBS cartoon Cyberchase. Real-life hackers like Kevin Mitnick, Julian Assange, and Phineas Fisher have become folk heroes, or villains, depending on who you ask. We live in a world awash in hacking, hacks, and hackers. Yet, as scholars like Parikka & Sampson (2009) and Bellinger (2016) show, researchers (both humanists and the more explicitly technical) tend to ignore “not-workings” like the hacker in their academic work.
In Hacking in the Humanities, Dr. Aaron Mauro goes against the grain. Mauro centers the hacker, describing how both the archetype of the hacker and real-life hacker praxis can inform humanist practices. Such an approach is critical, Mauro tells us, because we live in a world of cybercriminals, authoritarian governments, and trolls out to undermine and attack knowledge workers. By understanding the hacker, and cybersecurity more broadly, Mauro argues, knowledge workers can better equip themselves with the tools they need not only to protect themselves in a dangerous digital world, but also to become more capable actors who can effect real, positive change.
In this discussion, Dr. Mauro and host Christoffer Turpin talk about his book and a range of related topics: the role of rhetoric in social engineering attacks, the need for cybersecurity in digital humanist projects, the value and limits of the hacker imaginary or archetype, and the need for interdisciplinary communication that crosses the traditional STEM/humanities divide if we want to solve the most pressing crises of cybersecurity.
Listen Here
People
Aaron Mauro is Associate Professor of Digital Media at Brock University in the Centre for Digital Humanities. He teaches in both Interactive Arts and Science (IASC) and the PhD in Interdisciplinary Humanities (HUMA) programs on topics relating to digital culture, natural language processing, and app development.
Christoffer Turpin is a first generation ABD PhD candidate studying rhetoric and cybersecurity at the Ohio State University. He’s a hacker turned academic. His dissertation leverages rhetorical theory to solve some of the most pressing problems in cybersecurity, while also demonstrating what the field of rhetoric can learn from hacker praxis. He teaches on topics relating to digital platforms, digital media, cybersecurity, software development, and rhetoric and composition.
Transcript
Christoffer Turpin: [00:00:01] In the cult classic 90s B-movie Hackers, Dade Murphy, the movie’s hero, is arrested and shoved into the back of a police car. Murphy screams out that now famous, if a bit cheesy, line: “Hack the planet!” Three decades later, the archetype at the heart of the movie, the hacker, continues to occupy a place in the public consciousness. In a sense, Murphy’s statement was a prediction that came true. The concept of hacking has touched nearly every aspect of our lives, from how we do business, to how we conduct war, to how we perform activism. Even Martha Stewart’s in on the game, offering her five best egg hacks on her website in 2023. The planet is well and truly hacked. Hello, my name is Chris Turpin. I’m a first-generation PhD candidate at the Ohio State University studying rhetoric and cybersecurity. Today, I’m happy to present an interview with Dr. Aaron Mauro. In this interview, we discuss hackers and cybersecurity, their role in the digital humanist project, and how they can inform our pedagogical practices. We hope you enjoy this interview.
So I guess we’ll just start with the basic stuff. Who are you, where do you work, and what are you working on now?
Aaron Mauro: [00:01:15] I am Dr. Aaron Mauro. I’m an assistant professor of digital media and author of Hacking in the Humanities, which was published by Bloomsbury in 2022. I work in the Department of Digital Humanities at Brock University, which is located in the beautiful Niagara region of Ontario, Canada, and I teach in our Interactive Arts and Science program on topics related to digital culture, user interface and user experience design, app and game development, as well as project management. And what am I working on currently? That’s a tricky question, but, just two things: I’m working on my second book with Bloomsbury, which is currently titled Defend Forward: The Rhetoric of Cyber Attacks. It explores phishing and social engineering attacks through the logic of the lure. I’m thinking about how phishing attacks are really a kind of critical rhetorical moment, one that occurs by the billions every single day, and I’m looking at how user interface and user experience feature in luring users into compromising their own systems.
C.T.: [00:02:34] I was wondering if we could go back for our listeners, since this is for the Sweetland Digital Rhetoric Collaborative, and talk a bit more about social engineering, because that might be a term that our listeners are not necessarily familiar with, and it’s, I think, a really good gateway from rhetoric into this world of hacking. I could define social engineering, but I think it’d be more interesting to hear you define that term for those who might be unfamiliar with it.
A.M.: [00:02:57] Right. Social engineering is a subdomain within cybersecurity that has to do with the social element of an attack, sometimes the very first touch point, and it’s well understood and supported by social psychology principles. These have to do with manipulating users’ tendencies to be helpful, to be uncertain, to be demure in the face of authority. All of those tendencies that we have can be preyed upon by attackers, often to gain that first point of access to a system. So social engineering preys on our habits as people, and it is very often the first point of contact in an attack.
C.T.: [00:03:57] There’s a Gehl and Lawson quote; they wrote a book in 2022 that I think is just called Social Engineering, though it has, you know, a colon subtitle. They quote someone, I forget exactly who, I’d have to look it up, but they gloss social engineering as short for, effectively, bullshitting your users to get them to do things that they normally wouldn’t do. That’s a major, core aspect of my dissertation. I’m coming out of the rhetoric department at Ohio State, and social engineering is sort of my natural in. I don’t think people understand the amount of rhetoric that goes into your standard cybersecurity incident. It’s not so much that hackers attack computers with code; they attack people with rhetoric, and they attack people, effectively, with bullshit.
A.M.: [00:04:44] That’s right. I think you’ve got your finger right on it there. It’s quite often the case that well-managed systems are difficult to penetrate, especially for, as you describe it in your questions, the run-of-the-mill script kiddie, the low-level cybercriminal. But that kind of access is often highly non-technical. It has to do with impersonation and persistence, repeated communications. In large measure these attempts are not successful, but because the messages can be automated and sent en masse, there is a small chance of success. And it highlights, I think, the asymmetry in any cybersecurity situation: attackers need only be right once to get a foothold, whereas defenders, or just citizens, need to manage their online presence consistently, in an ongoing way, every single day. So there’s a fundamental asymmetry in that relationship that gets exposed by the sheer scale of social engineering attacks.
C.T.: [00:06:10] I think it’s good to note, too, that phishing emails, these sorts of mass emails, are a pretty common form of social engineering. But then you get into the more rhetorically in-depth kinds, your spear phishing, where people might do a lot of research on targets. I recently read about a successful phishing attack against the CEO of a British energy company. What the attackers did was get access to recordings of the CEO of that company speaking at various public events, and they ran those through an AI to build a program where, effectively, you type in whatever you want and it generates voice that sounds like the boss. Then they used that to trick another member of the organization into wiring money. So you have those mass social engineering attacks, but you also have the more precise ones. What I think surprises people when I talk about it is that the DNC email leak from 2016 got started with a phishing email sent to John Podesta. These phishing attacks are low-level, but they can have high-level results and consequences that emerge from them.
So what got you interested in this sort of work? Like what got you into the hacker? What got you into sort of the darker side of digital rhetorics and things like that?
A.M.: [00:07:30] You know, it started innocently enough. I was teaching in a university context, teaching app development and really just applied humanities work in digital media, where my students would build web applications and we would think about their consequences. But I found myself teaching security principles every single day through the course of building apps in Python or React. I would be teaching students about sanitizing inputs, how to properly set up databases and configure S3 buckets, typosquatting, these different issues that face developers in a very practical, workaday type of practice. And I realized that if I was going to teach this kind of real-world approach to software development in a humanities context, I needed to teach good security practices every single day. So it became a feature of my work at the undergraduate level, and it quickly moved into a broader understanding of the cultural consequences of these security practices: our lives are increasingly lived online in a 24/7 kind of way, and that kind of a life, the way that we express ourselves and store our activities as participants in culture, requires us to secure ourselves as digital entities online. So it really spiraled into a broader approach, but it began with a very practical, applied set of practices that I think many developers somewhat take for granted as just what they do. And I saw that there was a bigger cultural question there for us to explore.
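Mauro’s point about sanitizing inputs is easy to see in miniature. The sketch below is not from the book; it is a minimal Python example using the standard library’s sqlite3 module, with an invented users table, showing why a parameterized query is the workaday habit he describes, while a string-built query invites injection.

```python
# A minimal sketch of the "sanitize your inputs" lesson: never interpolate
# user input into a query string; pass it as a bound parameter so the
# database driver treats it as data, not code.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: a username like "x' OR '1'='1" rewrites the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized: the driver escapes the value, so injection fails.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users (username, email) VALUES ('alice', 'alice@example.com')")
    payload = "nobody' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # leaks every row
    print(find_user_safe(conn, payload))    # returns nothing
```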
C.T.: [00:09:54] You do a lot of very technically heavy teaching and research, it seems like. What’s your background? Did you come from a humanities background and then move into the more technical stuff, or did you come from a more technical background and move into the more humanities stuff?
A.M.: [00:10:07] Yeah, I have a PhD in English literature from Queen’s University in Kingston, Ontario. I was studying contemporary American literature in a very traditional program, even a program that was suspicious of new media approaches. So I began looking at communication theory, which is pretty common in Canada, my approach beginning from Harold Innis and Marshall McLuhan, and thinking about technology in a really broad way. From my PhD I moved into digital humanities, and so thinking about digital humanities as really an expression of approaching digital culture from a humanities perspective, but also as this generations-long digitization effort that we’re engaged in, we have to be thinking about a lot of these questions. I think our friends in galleries, libraries, archives, and museums are very well acquainted with data preservation, research data management, and those types of principles. But in academic contexts, we’ve often regarded questions of security as something that’s reserved for IT. And I think it’s increasingly the case that our projects and the things that we produce as digital humanists are increasingly exposed to the web and all the risks that that entails. If we’re building applications as a kind of research output in the humanities, then we had better take steps to secure our projects and make sure that our piece of that mass project of digitizing the human cultural legacy is preserved, not only from a data preservation side of things but also from a security side of things.
C.T.: [00:12:32] I think that’s one of the major thrusts of your book: we in the digital humanities are producing a lot of born-digital scholarship, and we use a lot of digital tools and digital infrastructure to do the scholarship that we do. That’s very vulnerable to attackers, to trolls, to those sorts of people, and we don’t really pay enough attention to the security of these sorts of things. But at the same time, I feel like people think of security as, to some degree, a technical thing. What would you suggest that, say, a researcher doing work on digital rhetoric, who may not know much about cybersecurity or how to make their projects more secure, do to begin that pathway toward improved cybersecurity in their research and even their teaching?
A.M.: [00:13:18] Yeah, these are large questions. I think that research security is something that is increasingly in the conversation. On a national level, we’ve certainly reoriented our approach to certain international partnerships in the research space, and so the broader sphere of research security is starting to get some of that attention. And as individuals, we are housed within institutions quite often. So if you are within a research university, that university likely has some kind of security practice or training going on. I find that at our current stage it’s quite limited, and it may depend on your discipline in large measure; folks in engineering or the biosciences may have a familiarity with it. But it is increasingly just a feature of our online experience that we have to secure this. And so as individuals, though I resist the metaphor of hygiene, it’s often termed cyber hygiene: good practices that can secure your work. There’s a whole range of things that go into that, from multi-factor authentication to the way that we run our machines, access communications, and choose the communications channels we use in our research. While that’s not a single answer for researchers, the bigger question, within that broader frame of cyber hygiene, is what it looks like when we’re working in large teams. Increasingly in digital humanities we’re not working as sole researchers who are responsible as individuals for our research materials, whether data or other outputs. So the real trick for us is how we operate as teams in a secure way. We have people like Bruce Schneier, a very famous commentator and author in the security space, who reminds us that security is a team sport. The challenge, I think, is one of research culture: designing a research culture that is security first, where everybody, from an undergraduate to the primary investigator, is maintaining protocols and procedures that are appropriate for that research project. So we have to start thinking about risk assessment, knowing which assets and things of value exist within a project, and ensuring that everyone along the chain, including collaborators who may be unnamed in open source code repositories and other collaborators within it, is participating and ensuring that the entire system is secure. That is a tall order, and it’s one that has something to do with project management, but also something to do with updating our research culture.
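The risk assessment Mauro describes, knowing which assets exist in a project and what they are worth, can be prototyped very simply. The sketch below is a toy illustration rather than a method drawn from the book; the assets and the 1-to-5 scoring are invented placeholders for a hypothetical digital humanities project.

```python
# A toy risk-assessment exercise: list a project's assets, score likelihood
# and impact, and sort by the product so a team can see what to protect first.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    likelihood: int  # 1 (rare) to 5 (expected)
    impact: int      # 1 (nuisance) to 5 (project-ending)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

assets = [
    Asset("Public-facing web app for the digital edition", 4, 3),
    Asset("Unreleased oral-history interview recordings", 2, 5),
    Asset("Shared admin credentials for the project server", 3, 5),
    Asset("Undergraduate RA laptops used for transcription", 3, 2),
]

for a in sorted(assets, key=lambda a: a.risk, reverse=True):
    print(f"{a.risk:>2}  {a.name}")
```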
C.T.: [00:17:02] So I guess we’ll move on to some more theoretical questions. One question that I had for you, and one that I focus on in my research: when you talk to people, you’re likely going to get two different definitions of the hacker, right, and of what hacking is. I was just wondering, how do you define hacking, and why do you think there’s this sort of disagreement over what the term hacking means?
A.M.: [00:17:25] Yeah. You know, hacking is somewhat tongue in cheek, right? I do think that there is a deadly seriousness to being a hacker, and I hope that you put that in italics, but there is also something fun about it, and there is, I think, a broader sensibility to it. So within those three kinds of approaches: there’s something fun in watching the 90s movies, Hackers, Lawnmower Man, and all of that early expression of hacker culture that we saw displayed visually, at least. There’s something fun about that, and kind of joyous, in being a rebel. But it’s also meant to be a little bit silly and a little bit tongue in cheek. When we scream “hack the planet,” it’s meant, but it’s also not. So there’s a little bit of fun. I think that hacking is also just a way of looking at the world. There are several cognate terms that might be attached to it, like tinkering, playing, building. It tends to represent people who see a technical system and want to play with it to see what it can do. There’s a healthy amount of curiosity in people who like to hack and play with things, and that curiosity sometimes gets them into trouble. But it’s also a desire to have a fulsome understanding of the digital environments we find ourselves in. And that is, I think, a very healthy approach to what are increasingly fairly limited platforms and access-controlled systems that give us some kind of window onto the online world. I’m thinking of social media as one of those systems; they really are highly controlled and limited, and that’s deeply frustrating to people who like to tinker. Then there’s also the quality of the hacker who is directly engaged in political action, whether it has to do with exploring systems that may be committing some kind of harm, with collecting and preserving information before it’s deleted, or with simply breaking the law. And I think that while the hacktivist mantle is certainly not a license to break the law and to harm others, it’s an active political position that is necessary in a democracy that is fully engaged. There’s a quality of being a digital citizen that requires us to have the ability to protest, to take action against authorities and those in power. If we lose the ability to be literate in those spaces, then we simply lose the ability to speak back, to act in ways that are not condoned. So it’s certainly a gray area, and I describe a kind of gray hat approach. I would in no way condone all hacktivist activities, but the literacy required to engage in meaningful political dissent is something that I value in that hacker ethos, and it’s something that we shouldn’t give up.
C.T.: [00:21:56] I completely agree with that. The skills to be a politically aware, capable actor, a capable hacker, are also the skills that you need to be defensive in today’s world, to be able to protect yourself and protect your organizations, your institutions, and things like that. You can sort of see your literature background when you talk about this in your book, because you talk a lot about the hacker imaginary that emerges in cyberpunk fiction and things like that. I was wondering if you could talk a bit more about that. What is the value that we as digital humanists can get from looking at these archetypical hacker characters from fiction?
A.M.: [00:22:34] Yeah, I think that these are highly romantic, idealized figures. The hacker hero is a fantasy, but it’s a useful one. It is also not an identity without issue: when it originated, it was a boys’ club. It was highly heteronormative, very masculinist and frustrated. I think those attitudes certainly need to be updated, and I think that digital humanities and humanists can do good work in expanding that identity to account for anyone who’s thinking in that space. I turn to Ruha Benjamin’s Race After Technology, where she describes the need for socially just imaginaries, and those socially just imaginaries fit within that fictional hacker hero: the person who is so possessed of digital culture that they can act, in that asymmetrical way, against the systems of oppression that are manifested in the digital realm. I think that is wonderfully romantic, but there are also realistic examples. I often think about Phineas Fisher, who advocated for hacking back and who built, really, a recipe for stealing from banks and giving to charity workers in South America. Someone like that represents not only a non-heteronormative identity but also someone who is directly breaking the law. And I’m glad that there are rogues in the world; we would be impoverished without them.
C.T.: [00:00:26] I think I completely agree with you on that too. But I wonder, does the hacker imaginary have any sort of double-edgedness to it? Because think about the QAnon discourse that goes on about hackers attacking election machines. And often when the government comes around trying to pass some new law about privacy or data privacy or what have you, they’ll leverage the same hacker imaginary to give themselves justification to do that. So I was wondering if you thought maybe the hacker imaginary also has that sort of double-edgedness to it. Can it be used, not to be too reductive, as much for evil as it can for good?
A.M.: [00:01:04] Yeah. I think that there are charlatans who will take the hacker identity and use it in any way they see fit. A good example would be the 2016 election, in which Trump evoked the, quote, 400-pound hacker working in his bedroom. That’s clearly a straw man being evoked to provide cover for a broken logic, a broken point of argument. And so I think that that is, like you said, bullshit. These names are being evoked by people with very limited technical understanding to cover for their own failures in literacy. So I think that in large measure it has something to do with, or maybe is exposing, a need for greater education: some of the conspiracy-minded individuals could really benefit from a more serious education in technical areas, whether it has to do with the science of vaccination, the security protocols that govern digital voting, or really any kind of understanding, from contrails to chemical cleanup, which is something facing the people of Ohio currently.
A.M.: [00:03:01] These are questions that are, I think, necessarily independent of that hacker imaginary that you describe, but charlatans will use it because it is an active entity, an active force. They’ll attempt to evacuate it of any kind of urgency, because it does pose a threat to those who are simply incapable of fully grasping how these systems work. So education is the key. And I think there’s a pedagogical practice to the hacker imaginary, and I like the phrase that you coined, a pedagogical practice captured in a common phrase: you don’t learn to hack, you hack to learn. By playing and tinkering with these systems, it is fundamentally about learning. And anybody who evokes the hacker as a mere rhetorical conceit is simply exposing their ignorance.
C.T.: [00:04:23] I completely agree with you. There’s an article, sort of an old article, by Reid Skibell called The Hacker Myth, and in it Skibell argues that the way people understand the hacker is basically a mythological construct, like a trickster figure, that is completely divorced from the realities of cybercrime. I agree with Skibell on that. But the thing I don’t think Skibell gets, and that you’re touching on, is that even though the myth is divorced from the realities of cybercrime, the myth still does stuff in the world. It’s a myth, but it’s still real, so to speak. My background is in hacktivism, and when we were doing our hacktivism, we knew that the media, and our targets, and people like that didn’t really have a strong technical understanding, so we could leverage that lack of technical understanding to manipulate the media and manipulate the way our group and our activities were seen. So I guess my question to you is, does the hacker sort of emerge from this place of technological illiteracy? Because if we look at these cyberpunk hackers, the worlds in which they exist, generally speaking, are not realistic worlds, right? That’s not how computer systems actually work. In Neuromancer, that’s not how computer systems work. In your William Gibsons, that’s not how computer systems work. So I wonder what you think about that. Does the hacker, to some degree, as an entity, as an imaginary, require people having this fantastical understanding of technology?
A.M.: [00:05:58] Yeah, I certainly think that you’re right about that. I could simply answer yes. In the book I describe it through the language of polymorphism, and I think this is a useful term that bridges between this fictionalization of the hacker, the romantic hacker hero, and a technical understanding of how code works. Within the object-oriented programming paradigm, there’s a concept called polymorphism, this many-shaped idea of how code can work. Really simply put, in object-oriented programming it’s possible to build a class that is able to serve as a blueprint for other functions, for other capabilities and subroutines. And if you write good polymorphic code, which authors of malware tend to do, the code can operate in unexpected ways. It can somewhat adapt to different inputs, which makes it very difficult for people attempting to defend against this malware to test and probe a piece of software that may shift, that may change depending on different inputs, different contexts, different variables that it understands. So polymorphic software is adaptable, it’s aware of context, and it can do unexpected things. And I think that if we’re thinking about a polymorphic identity for this hacker hero, it certainly can be the stereotype of the frustrated hacker who is the hero, who wins the girl and the money and his freedom.
A.M.: [00:08:28] But it can also be the hacker who is working a 9-to-5 job and is put into a position where they’re witnessing gross injustice and operates as a whistleblower. Or it might be someone who is working as a spy. It might be somebody who is simply in it for the lulz and just wants to break things. All of those things are certainly valid as an approach. I’m evoking here Gabriella Coleman’s Hacker, Hoaxer, Whistleblower, Spy, which does a great job of following Anonymous and offering almost an ethnography of real-world hackers. In her understanding of Anonymous there is a certain kind of variability that is part of the identity; being anonymous allows for that fluidity, and that fluidity, as an identity, as an active political position, is highly charged and valuable. We can see that that identity and fluidity have value because we’re now seeing social media companies starting to charge for verified identities, the real-name policies, all of these things that serve surveillance capitalism and the surveillance state around real identities. That is certainly running counter to that fluidity and to, as I term it, a kind of polymorphic hacker culture.
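For readers who have not met polymorphism as a programming concept, the following minimal Python sketch, an illustration of the general idea rather than anything drawn from Mauro’s book, shows one base class acting as a blueprint and calling code that behaves differently depending on which “shape” of object it receives.

```python
# A minimal illustration of object-oriented polymorphism: one base
# "blueprint" class, several shapes of behavior, and calling code that
# works without knowing which variant it was handed.
class Payload:
    def run(self, context: str) -> str:
        raise NotImplementedError

class NoisyPayload(Payload):
    def run(self, context: str) -> str:
        return f"announce presence in {context}"

class QuietPayload(Payload):
    def run(self, context: str) -> str:
        return f"lie low and observe {context}"

def execute(payload: Payload, context: str) -> str:
    # The caller is indifferent to the concrete class: the same call
    # produces different behavior depending on the object's "shape".
    return payload.run(context)

for p in (NoisyPayload(), QuietPayload()):
    print(execute(p, "a sandboxed test environment"))
```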
C.T.: [00:10:34] I love that idea of polymorphic hacker culture. I think you can really see it play out. You have these hero hacker characters, but at the same time you have companies like Facebook, whose address is One Hacker Way. Facebook is about the least imaginable outside, rogue, social-justice sort of force, but they’re trying to embrace this hacker ethic too, or at least that’s what they want the public to think they’re embracing. So you see how polymorphic it can be; it can be many things in many contexts. So, you talk about teaching a lot of different technical subjects. What do these sorts of things look like when you talk about cybersecurity and the hacker? What sort of assignments do you give your students? What sort of exercises and readings do you do? If I, or anybody else, wanted to develop a module about hacking and cybersecurity for a class that might not usually cover that, what would you recommend they do?
A.M.: [00:11:30] It’s a huge question. Certainly in the humanities context, I think it begins with story. That’s part of the appeal, for me, in returning to cyberpunk as a literary genre, because it helps students frame this within that broader imaginary. I think video games are certainly a big part of that as well; games like Cyberpunk are informing this generation, and it’s really the dominant media form for them. In terms of direct assignments, so much of it, for me, has to do with describing a story. That might take the form of a risk assessment: in developing an application, I would ask my students to understand their assets, to define them, whether they’re technical assets or data assets, or whether people are assets too, and to think about the ways they would need to protect them and the kinds of risks they might face. Risk assessment has something to do with measuring relative value and relative risk, and these kinds of assignments fit well within the humanities as a type of analysis, because they really force students to use their analytical, creative capacities to write something that has not happened yet, to forecast into the future, to work in a speculative way.
A.M.: [00:13:35] So those are things that I ask my students to do in a project-management context. I ask them to think about the kinds of infrastructure they’re accessing, whether it’s open source software, doing code review, and understanding code repositories and the risks that exist within them. So it does have something to do with this workaday process toward some other goal; quite often, I find in my teaching at least, it’s something that is attendant to existing pedagogical goals. For example, I often teach students game development, and we’re often using software packages from around the web or building in networking options, and those kinds of things are inherently risky. So we need to be able to understand and assess that as part of our planning within a larger project-based course. I think it’s a feature of the educational experience, but perhaps not always the focus. And I do teach, in my intro class, hacking within this cultural frame: we talk about movies and comic books and video games as a way of animating some of these conversations.
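One way to turn Mauro’s point about open source dependencies and code review into a concrete exercise is to check a project’s pinned packages against a public vulnerability database. The sketch below is a rough illustration, not an assignment from the interview; it assumes a simple package==version requirements file and the osv.dev query API, and in practice a maintained tool such as pip-audit would do this more thoroughly.

```python
# A rough sketch of a dependency check: read pinned requirements and ask a
# public vulnerability database (osv.dev, assumed here) whether known
# advisories exist for each package and version.
import json
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"

def known_vulns(package: str, version: str) -> list[str]:
    body = json.dumps({
        "version": version,
        "package": {"name": package, "ecosystem": "PyPI"},
    }).encode()
    req = urllib.request.Request(OSV_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    return [v["id"] for v in data.get("vulns", [])]

if __name__ == "__main__":
    # Assumes a requirements.txt with lines like "requests==2.19.1".
    for line in open("requirements.txt"):
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        ids = known_vulns(name, version)
        status = ", ".join(ids) if ids else "no known advisories"
        print(f"{name} {version}: {status}")
```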
C.T.: [00:15:21] I like that. You start with the story, you start with games, you get them into the imaginary, and then you move out from there into the more technical world of things.
A.M.: [00:15:30] Exactly.
C.T.: [00:15:31] I’ve found a little bit of pushback. So in my dissertation I’m trying to develop a better program for the average user to get training to resist social engineering and common hacking attempts. The way I’ve designed this is built on the work of a guy named Michael Cranch; I’m not sure if you’re familiar with him. He’s a researcher with an offense-first approach to cybersecurity education, where he argues that teaching students the offense, how hackers do what they do, is generally going to be more effective than defense-first, because it teaches them to be adversarial thinkers, to look at the way systems break down and be proactive about that. When I started floating this idea around, I got a lot of pushback from people in my department who were like, oh, you’re over here teaching your students to be cybercriminals. And I’m trying to tell them, no, I’m teaching them how hackers work so that they can better resist hackers and also make better decisions about privacy and things like that. A lot of our listenership at the DRC probably isn’t super familiar with this idea of ethical hacking. So I was just wondering, how do you talk to people who might conflate the idea of hacking with cybercrime? How do you divorce those two ideas in their head?
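A small example of the offense-first, adversarial-thinking exercise Turpin describes: showing learners how trivially an attacker can generate lookalike sender domains, the typosquats mentioned earlier in the interview, so they know what to watch for. The Python sketch below is purely illustrative; the substitution table and the example domain are my assumptions, not material from Cranch or Mauro.

```python
# A classroom-scale sketch of "teach the offense to sharpen the defense":
# generate lookalike variants of a trusted domain, the kind a phishing
# sender address might use, so learners practice spotting them.
HOMOGLYPHS = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4", "s": "5"}

def lookalikes(domain: str) -> set[str]:
    name, _, tld = domain.partition(".")
    variants = set()
    # Single-character homoglyph swaps ("paypal" -> "paypa1").
    for i, ch in enumerate(name):
        if ch in HOMOGLYPHS:
            variants.add(name[:i] + HOMOGLYPHS[ch] + name[i + 1:] + "." + tld)
    # Single-character omissions, a common typosquat.
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)
    # Doubled characters ("brocku" -> "broccku").
    for i, ch in enumerate(name):
        variants.add(name[:i] + ch + name[i:] + "." + tld)
    variants.discard(domain)
    return variants

if __name__ == "__main__":
    for v in sorted(lookalikes("brocku.ca")):
        print(v)
```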
A.M.: [00:16:52] Yeah, it’s a great question. I think it has something to do with understanding the risks that we legitimately face, and a difference that we might not appreciate with analog crime, let’s call it. We watch kids play cops and robbers, as children, learning the dynamic between property law and the role of the police. We might watch a PG-13 film about a bank heist and see that it’s fun, take some kind of enjoyment in it, or even a murder mystery, right, something your grandmother might watch. So we have encounters with criminality as a kind of entertainment in a daily way. But I think the relative newness of cybercrime is unsettling for some. So I think it helps to situate it within a broader understanding of society and the norms and rules that we follow within it. There’s always a potential for criminality, and some of the time that criminality is justified. We celebrate some of these crimes in our history; we can think about the civil rights movement, for instance.
A.M.: [00:18:58] These kinds of crimes are historically situated, and the judgment may not always fall in favor of the hacktivist, nor should it, but they are part of our behavior within a broader social compact, and they are certainly possible. So I think this is something of the growing pains of the first few decades of the 21st century. It’s a feature of international relations and, increasingly, of warfighting, and we’ll see it as an increasing part of what it is to grow up online. Young people may find reason to hack their version of Grand Theft Auto V so that they can kick players from the multiplayer screen, or whatever is happening these days, and that is certainly illegal. But are these moments of transgression signaling broader trends or changing opinions within society? So the hacktivist mantle is not a license to commit crime, but it’s certainly an approach that we want to value and think about as part of what it is to exist and come of age in digital spaces.
C.T.: [00:20:52] I believe that’s about all we have time for. I want to thank Dr. Aaron Mauro for doing this interview with me. Again, I’m Chris Turpin. If you want to follow up with either Dr. Mauro or myself, you can find our email addresses at the Digital Rhetoric Collaborative website. Thank you again for listening.
Works Cited
Bellinger, M. (2016). The rhetoric of error in digital media. Computational Culture, (5).
Mauro, A. (2022). Hacking in the Humanities: Cybersecurity, Speculative Fiction, and Navigating a Digital Future. Bloomsbury Publishing.
Parikka, J., & Sampson, T. D. (2009). The spam book: On viruses, porn and other anomalies from the dark side of digital culture. Hampton Press.