In my teens I dropped out of high school and got involved in the hacktivism scene. I did a lot of stuff during this time, but the thing that probably says the most about me is an art project called “Anonymous vs. Art.” In this project I used some half-borrowed, half-handwritten software to temporarily knock the websites of a few of the world’s largest – and most problematic – art galleries offline (the Tate, the Met, etc.). The idea of the project was that the medium is the message, but in hyperdrive; the internet can be a container of art and expression, but it can also be the medium of expression itself. I’ve grown up a lot since my teens and early twenties, but my experiences as a pissed-off, punky hacker kid from the hills fundamentally shaped how I understand the internet, digital rhetoric, and society’s intertwined relationship with computer technology.
(A staging of Anonymous vs Art in 2012)
Ultimately, I became a little jaded with the hacktivist scene. Nothing was really changing, and the line between what I viewed as legitimate activism and criminal activity started to blur. After a few dead-end jobs I decided maybe college was right for me. So, I enrolled in the local community college intending to be a computer science major. In a project for an English class I discovered Donna Haraway’s “Cyborg Manifesto,” which led me to science, technology, and society (STS) studies. In the end, I became more interested in the big questions about how computers influence society than in the finer points of corporate software production. I switched to an English major to give myself the time and framework to explore these ideas. My thinking is hugely influenced by scholars like N. Katherine Hayles, Bruno Latour, Wiebe Bijker, and Karen Barad. One thing led to another, and I went from a small community college in the hills of my native Appalachia to Columbus, Ohio, where I’m pursuing a PhD in Rhetoric, Composition, and Literacy Studies under the watchful eye of Brutus Buckeye.
Right now I’m most interested in the rhetoric and pedagogy of cybersecurity. Despite the popular image of the hacker, the principal attack vector in the vast majority of cybercrime is social engineering, an array of low-tech rhetorical and psychological strategies designed to trick and manipulate victims (Junger et al., 2017). Equally, the very notion of security, cyber or otherwise, is largely rhetorically constructed, enforced, and maintained (Klimburg-Witjes & Wentland, 2021). Notions of the hacker and of cybersecurity are rhetorical tokens manipulated by stakeholders with various agendas, ranging from hacktivists, to cybercriminals, to everyday users, to governments, to cybersecurity professionals, and so on. Don’t be tricked by the buzz; it’s all (mostly) just rhetoric. The problem is that the rhetoric and pedagogy of cybersecurity, like other human factors in cybersecurity, have been largely ignored (Nobles, 2018; Jeong et al., 2019; Rahman et al., 2021). Because of this, our current cybersecurity practices are simply not working (Weiss et al., 2022). Just check the news: it seems like every week a company with almost unlimited resources gets hacked. My work hopes to address this lack of attention paid to the rhetoric and pedagogy of cybersecurity.
My dissertation looks at – among other things – how security education, training, and awareness (SETA) programs fail and what can be done to improve them. SETA programs are educational programs designed to teach critical cybersecurity concepts and skills to average, everyday users (the population most targeted by cybercriminals). Like most approaches to cybersecurity, SETA programs don’t actually improve cybersecurity outcomes and may actively harm them (Menges et al., 2021). There are a few reasons for this, but they boil down to the facts that users find these programs boring, that the programs foster a reactive rather than proactive user population, and that they treat users as a problem to be solved rather than as the capable, sophisticated actors they can (and must) be. My approach to SETA pedagogy hopes to produce agile, knowledgeable cybersecurity practitioners by embracing materialist pedagogical practices and what Kranch (2019) describes as an offense-first approach to cybersecurity education.
As a DRC fellow I hope to use this platform to draw attention to the need for greater cybersecurity knowledge among digital humanists and the need for more humanistic knowledge among cybersecurity professionals. For way too long these two disciplines have been effectively siloed off from each other. Bringing them together will pay dividends.
You can contact me at turpin.48@buckeyemail.osu.edu.
Check out my interactive CV here.
Works Cited
- Jeong, J., Mihelcic, J., Oliver, G., & Rudolph, C. (2019, December). Towards an improved understanding of human factors in cybersecurity. In 2019 IEEE 5th International Conference on Collaboration and Internet Computing (CIC) (pp. 338-345). IEEE.
- Junger, M., Montoya, L., & Overink, F. J. (2017). Priming and warnings are not effective to prevent social engineering attacks. Computers in Human Behavior, 66, 75-87.
- Klimburg-Witjes, N., & Wentland, A. (2021). Hacking humans? Social Engineering and the construction of the “deficient user” in cybersecurity discourses. Science, Technology, & Human Values, 46(6), 1316-1339.
- Kranch, M. (2019, June). Why You Should Start with the Offense: How to Best Teach Cybersecurity’s Core Concepts. In Colloquium for Information Systems Security Education (Vol. 23, No. 1, pp. 1-12).
- Menges, U., Hielscher, J., Buckmann, A., Kluge, A., Sasse, M. A., & Verret, I. (2021, October). Why IT Security Needs Therapy. In European Symposium on Research in Computer Security (pp. 335-356). Springer, Cham.
- Nobles, C. (2018). Botching human factors in cybersecurity in business organizations. HOLISTICA–Journal of Business and Public Administration, 9(3), 71-88.
- Rahman, T., Rohan, R., Pal, D., & Kanthamanon, P. (2021, June). Human factors in cybersecurity: A scoping review. In The 12th International Conference on Advances in Information Technology (pp. 1-11).