We were ahead of the curve. In the summer of 2022, four months before ChatGPT burst onto the scene, we sent out our usual July message to tutors at UConn’s Writing Center, reminding them about August orientation dates and that everyone would need to bring a draft essay based on a pair of readings. Here’s part of that message:
Read the two articles linked below, both about writing and artificial intelligence, then ask, How should writing centers anticipate and respond to the AI futures discussed in these articles? You can take any number of different angles—practical, philosophical, ethical, cultural, educational, playful, personal, etc.—but you must engage with the substance of the articles and work in at least one quote from them.
“The Computers are Getting Better at Writing” from The New Yorker
“AI is Mastering Writing. Should We Trust What It Says?” from the NY Times Magazine
We suggest that you mess around a little with the OpenAI/GPT-3 natural language generator (use the “Text Completion” and “Playground” features). It’s free, but you have to create an account. There are tutorials to get you started. You could even go meta on this assignment and paste the first paragraph of your draft into the playground to see the next sentence or paragraph AI generates for you.
Write the full draft of an essay that runs 600-900 words and bring a hard copy to our orientation. This is a draft, not a final version, so feel free to take risks and don’t worry if you haven’t fully figured out your analysis or argument. You’ll share this with others during a practice tutorial—we might even make them part of some larger project on AI and writing.
Everyone wrote the essay because, well, they had to. Everyone included at least one quote from the readings because we hire pretty dutiful rule-followers. And everyone raised interesting questions because we also hire critical and creative thinkers.
But only a few actually opened GPT-3 and messed around with it.
Six months later, in early spring 2023, talk of ChatGPT was ubiquitous (I like Jason Crider’s phrasing that “AI assistants now ’haunt’ the space of writing”). I put out a call to our staff, asking if anyone would like to get involved in some research on AI and writing and/or writing centers. I had already been meeting weekly with Alex, then an undergraduate computer science major (though not a writing center tutor), to plan some research on student attitudes toward generative AI. Now, I wanted to think more specifically about writing center implications and applications—and who better to do that with than writing center tutors?
Of the 25 undergrad tutors on our staff, only one responded to my call. And Noah was motivated mostly by the prospect of getting involved in undergraduate research; it could have been on pretty much any topic.
Noah, Alex, and I got to work, meeting weekly, and by the end of that year, we had administered a survey and piloted some ways of using LLMs creatively and responsibly in the writing center. We described our findings at the 2023 NEWCA Conference and in an August 2023 Another Word blog post.
That short, practical piece was among the earliest to appear on AI/writing center connections, so it circulated pretty widely. Again, our center was ahead of the curve.
But were we, really?
As a daily observer of our operations, I hardly ever see our tutors using AI, even in the limited ways that Noah models in our Another Word blog post, which all our staff have read and which we have discussed in a staff meeting. Occasionally, I ask tutors directly if they’ve used AI lately in their coaching, and most say that their tutorials just didn’t seem to call for it, so no, not really.
This is what I think is going on: we hire confident and accomplished student writers who naturally default to what helped them succeed so well in school. And at least so far, that hasn’t been AI.
This habit of defaulting to our lived experience aligns with what Kara Poe Alexander, Becca Cassady, and Michael-John DePalma found in their recent study of how tutors take up multimodality in tutoring: they “draw primarily from their own multimodal writing experiences when tutoring writers on multimodal texts” (87). And even then, they draw on those experiences unevenly.
The same applies to AI. Even when tutors have read about it. Even after we’ve done some staff training. Even when a director and a fellow tutor outline practical methods. Even as we imply through publications that our center is on the leading edge of engaging with AI.
So defaulting to personal experience is in play, and so is its cousin, entrenchment (see Anson, p. 77, in Threshold Concepts in Writing Studies). There’s a fine line between engaging in productive writing rituals (for me: going to a coffee shop, committing to two hours of keyboard time, later asking an ally reader for some feedback, still later printing a draft to copyedit) and getting entrenched in habits at the expense of trying new things. I’m complicit in that too: I avoided AI until a colleague forced me to experiment with it while we collaborated on a grant project focused on nudging neurodiverse graduate students to try new applications, and I still hardly ever use AI tools in my own writing process.
We see entrenchment of habits among veteran tutors (as any writing center director who tries to introduce new methods will attest!). But we see it even among new tutors who are disinclined to experiment with generative AI because it was never a part of the longstanding writing and “student-ing” success that motivated and qualified them to be a writing tutor in the first place.
Successful students can be among the most resistant to changing their constructs of writing. Both 20-year-old research from Sommers and Saltz about the Harvard Study of Writing and recent findings from the Teaching for Transfer (TFT) project have documented that those who navigated high school and early college writing tasks with good grades and teacher praise are often less willing to authentically re-evaluate the very writing beliefs and practices that have served them as academic writers so well, so long.
Our tutors are wise enough to know that they shouldn’t be too entrenched in any one practice or process; but in the middle of a stressful session, more often than not, they go with what they know best.
Our staff is diverse in many ways—by major, gender, race, language, etc.—but they are also akin to those successful school writers we see described in the Harvard and TFT studies. That includes, for example, process. They’re all in on brainstorming, drafting, talking through options, and revising. That kind of iterating, promoted by teachers since grade school as a vital part of the writing process, feels familiar, natural, and authentic. But prompting ChatGPT and repeatedly refining results? That kind of iterating feels unfamiliar and, well, artificial.
Maybe we can do a better job of grafting AI to their entrenched notions of process—not to mention general rhetorical considerations like audience and purpose, or core writing center values like conversation. When composing our blog post, Alex, Noah, and I were careful to center those kinds of traditional writing center values and practices as we recommended ways to extend them with AI. But reading about that wasn’t enough to shift behavior on our staff.
Perhaps we need to simply wait until AI-users age in, bringing their experience with them. In other words, live with the lag. Case in point from our center: we’re only now getting around to enhancing multimodality with respect to material and tactile dimensions of writing and tutoring, even though scholarship on that has been with us for a while, and we’ve encouraged tutors to experiment with such methods for years. This year, a small group of first-year tutors, as their final project for our practicum course, proposed that we assemble an art cart with different kinds of paper, colored pens, post-its, scissors, fidget toys, and the like, all ready for writers and tutors to employ. They made that happen, and staff uptake has been stronger than uptake for AI tools. While scholarship on materiality and neurodiversity has been with us for a while, I think it has gained actual traction with tutors now because more are bringing their own and their friends’ lived experiences with neurodiversity to the job. (By the way, no one opted to take up a final project on AI, even though I encouraged it!)
I can live with lag, because, well, I think we probably have to.
Still, we may be able to shorten it by adapting our tutor recruitment methods. When I was first exploring AI in the writing center, my undergraduate collaborator Alex, an AI enthusiast but not a tutor and, more significantly, someone not invested in his identity as a writer, was pivotal to the project. He would probably never consider applying for a job in the writing center. How do we balance drawing in students like Alex with all the other important diversity priorities in our recruitment?
It’s good to be ahead of the curve—or even to seem like you’re ahead of the curve. But in practice, we’re usually catching up.