Abstract
Innovation and technological adoption are continuous processes, which makes them difficult to periodize. At the same time, acquiring new tools and literacies inspires in the adopters a reflection, however brief, on their preparedness for the acquisition. Adopters may face the new technologies with confidence, excitement, curiosity, trepidation, or all of the above. The emotions often result from a sense of how equipped adopters feel to receive the innovation. Yet the speed of innovation, and the social and professional need to keep up, might obstruct self-analysis that would ideally help define and sharpen the relevant skills and knowledge. This talking picture book documents how the Hendrix College Writing Center staff reflects collectively on the transition that the arrival of generative artificial intelligence has ignited. As of the summer of 2024, our writing center has not yet implemented solid AI-related policies and procedures, working instead on research. By responding to four questions about encounters with AI with a still image and an accompanying recorded oral narration, four student consultants and the center’s director make material memories of the current moment, which the rapid technological development has rendered elusive and even distant. The idea is to create a nostalgia for the present to intensify our recollections of the experiences and abilities that would enable us to interact and grow with AI when it becomes part of our regular operations.

Keywords: technological adoption, the speed of technological change, assistive technologies, reflection, still photograph and the imaginary, voice recording and the real, preparedness

This work—a collection of still images and voice recordings—examines a part of the process by which a writing center adopts a new technology: a reflection on the staff’s readiness. The Hendrix College Writing Center serves a small, private liberal arts institution with around 1,200 undergraduate students.
With that in mind, we are designing procedures (for individual appointments, workshops, course collaborations, and so on) to tackle the AI-related needs of students and faculty. We have not formally implemented any of those procedures, under the belief that we still need to learn more. Whether we will know when we have reached a critical mass of knowledge for the implementation to happen remains an open question (although we are certain the learning process will not stop). What we do know is how much self-reflection the recent prominence of text-generating AI has ignited in our center. Contemplation must eventually give way to actionable conclusions for the current moment, even if they come with an expiration date. That fact does not mean we cannot extend the contemplation a bit longer for the purposes of investigating our Center and our campus at what will certainly be an inflection point. This piece attempts to stage two artificialities, to give us more room to think and to match the condition of its subject. The first artificiality concerns something that technological development never deliberately affords most citizens: a pause to consider who citizens are (a sense of their place in their lives and in their communities), and how ready they feel, before adopting a new technology. Everett M. Rogers’s (1962) technology adoption life cycle indicates that citizens incorporate technical advancements at different times, classifying them into five groups: “innovators,” “early adopters,” “early majority,” “late majority,” and “laggards” (p. 161). Given the particularity of the experiences and circumstances around every citizen, Rogers warns that models to track the timeline of technology diffusion across populations are “conceptual,” a useful tool for understanding the impact of a continuous phenomenon and for identifying trends.
Something that becomes clear from following the spread of innovations is that innovators rarely spend time speaking to consumers about the effects and implications of their work before that work is widely available. Educational, legal, and governmental institutions struggle to anticipate technologically driven change. Instead, they react to every development. The lag happens because, for Preeta Bansal (quoted in Wadhwa, 2014), codified behaviors require social consensus, while technological innovation does not. The speed of the “technological vitalism” (p. 45) of which Paul Virilio (1986) speaks runs right past the much more difficult optimization of agreement. Our project is similar to Rogers’s in that it also exists on a conceptual plane: it conceives of a reflective stoppage in technological adoption as a situated, almost nostalgically defined period. This talking picture book imagines what it would be like to expand the reflection before a community (in this case, the writing center) creates protocols to mark the perhaps irreversible presence of artificial intelligence in its practice. Like Rogers’s device, making visual and aural mementos of the current moment is a way to contain, however abstractly, an ungraspable and ongoing process. Yet we differ from Rogers in one respect: “Each adopter of an innovation in a social system could be described, but this would be a tedious task” (p. 159). As believers in the counterhistorical value of the anecdote, however, we propose describing this small group of adopters in some detail, so that a fuller picture of AI’s spread comes into view—one harder to categorize into one of the five groups above. We distinguish between that pause and the preliminary groundwork for institutional change because, so far, the preparation we have undertaken has relied on current, forward-looking research. The past, the a priori of our technological and disciplinary knowledge, always informs the envisioning of our future.
Still, our center has not defined that past in concrete terms. We have not named what we possess that would let us inhabit a practice alongside AI. Defining our past would, in turn, clarify our present, a perpetually in-flux moment that never stands still long enough for us to assimilate it comprehensively. An analogous detailing of the conditions that shape the adoption of new tools at the writing center appears in research on the selection of assistive technologies for writers. Nankee et al. (2009), for example, break down the factors involved in writing: visual perception, neuromuscular abilities, motor skills, cognitive skills, and social-emotional behaviors (p. 4). While the authors composed this list to select assistive technologies for students with disabilities, reading the factors makes it clear that anyone who intends to write, or even to assist in writing, needs to consider them. The same can be said of the writing process itself. In a discussion about assistive technologies in writing centers, DePaul University blogger Maggie C (2015) cites a study by Raskind and Higgins (2014) showing that text-to-speech software enhanced proofreading for students with learning disabilities. In their analysis, Maggie C observes that the issues “that all writers struggle with (proofreading, catching errors, etc.) [aren’t] unique because the people in this study had learning disabilities” (para. 3). Indeed, this kind of capabilities analysis can apply to writing center staffers as well. Even if right now we do not treat AI as an assistive technology, framing its adoption in terms of what prepares and allows us to incorporate it reveals areas of interest that will influence our eventual policies. So we propose taking stock not just of our capacities but of our collective mood before letting AI take up residence in our writing center.
The piece represents how we have identified the signals of change, or how we have developed a notion, however tenuous, that a (perhaps paradigmatic) shift is coming. We are conscious that the past and present we will try to articulate are largely fictional—the second artificiality this work hopes to render. Artificial intelligence, and its applications to writing, have been with us for some time now. While students, faculty, and staff at Hendrix College work, together and apart, to respond to its challenges and seize its opportunities, AI has made its way into our practice. To some extent or another, often inadvertently, we have adopted AI, further complicating our identification of a pre-AI moment. That fiction, however, remains useful because it will allow us to recognize (and perhaps even invent) qualities upon which we may rely to work with AI. Generative speculation represents a significant part of the exercise, as we list skills that both intuitively and counterintuitively empower us to face AI. It will also give us a reference point, a purposefully constructed memory of a period that we might need to revisit moving forward. It will provide a starting place for an approach to understanding the transition. Call it a preemptive act of writing center archaeology. We are building evidence for future excavations. To create a reflective pause, generate a fictional past, and capture a mood during transition, we turn to a multimodal approach combining photographs with voice narration. The process began with four questions. The authors shared still photos that reminded them of their encounters with AI. Then, they recorded spoken descriptions of the photos, explaining their relevance to the questions and the memories they elicit. At times, a question prompted only the recorded reflection. In those cases, the door to our old writing center supplies the background image.
The result is organized by the questions but also allows the audience to view and hear it in any order, as if browsing through a family album. The choices of modalities follow the ideas of theorists Vilém Flusser and Friedrich Kittler. For Flusser (2004), photography “has interrupted the stream of history. Photographs are dams placed in the way of the stream of history, jamming historical happenings” (p. 128). It’s this “jamming” that makes still images an appropriate medium for this project, which temporarily and imaginatively arrests time to acquire an advantageous perspective on our history. On a personal level, we might be familiar with the connection between still images and remembrance. The essay is, in part, a picture book of our days before adding AI to our mission statement. The photographs literalize the piece’s title. As for the voice recordings, we recall how Kittler (1999), in his psychoanalytic analysis of media, associated the gramophone and its capacity to mechanically store and reproduce sounds with the Lacanian Real, or the part of the world that exists beyond human signification (p. 37). For Kittler, when we record someone’s voice, we capture words, but also the uninflected, unintentional, unstructured noises that reveal something true about the speaker. Our tone, tics, and silences (those sounds free of signifiers) express the authenticity of our responses to AI and our ideas of how it will alter our writing assistance. Kittler, incidentally, would have something else to say about photography that elaborates on Flusser’s thoughts. As a mechanically constructed image of the world, the photograph belongs to the Imaginary—it creates a double of the world onto which viewers can project their ideals. In short, the affordances of still photographs and voice recordings allow us to weave our imagined past and pair it with the real hopes, mysteries, and anxieties involved in our incorporation of AI. Our goal is to evoke our world before that revolution.
Before moving on to the picture book, here are a few words from the Hendrix College Writing Center staff members who participated in this project:

In the writing center, I begin my sessions away from the page. I start a conversation sparked by questions like What do you want to say? What’s blocking you from that right now? What gets you fired up about this piece? I sprinkle in camaraderie and a touch of humor: Oh yeah, that class is ridiculously hard, or Yeah, one time someone came in here twenty minutes before their paper was due! The specifics vary, but the point is to create a space at the intersection of talking, thinking, and human connection. That’s where writing begins. It doesn’t spring magically into existence out of the end of a pen. I’m critical of that sort of “natural” approach to human writing: the idea that writing should “flow.” There’s nothing natural about the act of writing. It’s agonizing. It’s counterintuitive. So, I tend to start with conversation. I ask the writers who visit me to say what they’re trying to communicate. I let them think aloud until something greater than the separate pieces of our conversation emerges. Only then do we shape those thoughts into written form. I suppose I should mention my skepticism about AI. I’m not convinced AI can or will allow something greater to emerge. I’m reminded of Verlyn Klinkenborg’s (2012) description of cliché as “the debris of someone else’s thinking” (p. 45). Might that be an apt description of AI as well? To me, a writing center’s strength lies in its ability to create human connections. Before implementing AI in the writing center, we should ask ourselves how it supports that strength.

My general approach to writing assistance is to analyze works for structural issues (how ideas flow, whether concepts set up earlier reach satisfactory resolutions, etc.) first and foremost and to center any aid around my findings.
To me, AI has the downside of cheapening this process by reducing the structure of an essay to a template of what it could be, diminishing the potential impact a work could hold. In addition, AI isn’t very good at following along with these threads of ideas when fed a paper, so it doesn’t do me much good to ask ChatGPT or some such tool about a paper I’m meant to look over.

I approach my duties as a writing consultant as if I am helping a friend with their homework without doing it for them. I see myself as the bridge that connects their contemplation of the assignment to their final project. This approach consists of patrons talking to me as if I am a friend, while I listen without judgment. They simply describe what they think the rubric means or, if they’ve already begun writing, what thought they are struggling to put on paper. From there, we work to make the thought clearer and the assignment criteria more reachable. I have seen firsthand how AI is a tool that can make the rubric digestible. It is a tool that can also help with spelling and grammar. This can be helpful because patrons are then able to enter the appointment already understanding the assignment, thus having questions and drafts ready. At the same time, however, AI can interfere, as it makes it easier for someone to lapse in their work ethic, comprehension, creativity, and originality. When those lines are crossed, so is academic integrity.

During my time as a writing consultant, I was a student majoring in psychology and minoring in biology. I think that my background in science afforded me a unique approach to writing assistance and writing in general, which contributes to my reservations about using AI in spaces of writing assistance. AI, by nature, does not allow that uniqueness or human variability, which can sometimes make all the difference in writing and helping others to write.
In my experience, there are times when person-to-person conversations and connections create a sounding board that facilitates breakthroughs in a peer’s writing far more than any technical edits do. Maybe it is arrogant, but even as AI continues to develop and earn its place as a supplement to writing assistance, I do not think it will ever replicate the peer-to-peer experience. As long as we respect AI’s limitations and honor the value of traditional writing assistance, I believe the two can work together to empower individuals in their writing journeys.

If I invoke some clichés about mixed emotions at the arrival of generative AI, it is because they feel true. They also feel appropriate because I believe writing and writing assistance are about mixed emotions. I believe that, to find ways to express thoughts, writers and their readers need to embrace being a bit unsettled. I try to cultivate comfort with uncertainty as a necessary mindset for successful, truly exploratory writing. After advocating for such a double consciousness for years, I feel generative AI is the biggest challenge so far in practicing what I preach. Looking at the pictures we put together for this piece, I find great serenity—a reminder of how we reacted when we first realized how quickly a full-fledged essay could appear on an app’s screen.
Published in The Peer Review, April 2025.