Journal of Writing Research
4 articles

February 2026
-
Abstract
Since the launch of ChatGPT, the use of and debate around generative AI have grown rapidly. Professionals whose work depends on writing have expressed concern about the potential impact of such tools on their roles. But are these concerns justified? Can ChatGPT truly take on the responsibilities of a professional writer? This study investigates that question by comparing the performance of ChatGPT with that of professional editors tasked with optimizing business communication. We conducted two studies, using both qualitative and quantitative methods. In the first, three experienced editors were asked to rewrite four business letters. Their editing processes were recorded using the Microsoft Snipping Tool, and immediately afterward, we conducted retrospective interviews using stimulated recall. These interviews were transcribed and analyzed. Insights from the observations and interviews informed the design of the prompt instructions used in the second study. In the second study, we asked ChatGPT to revise the same four letters using three different prompt types. The Simple prompt instructed the model to “make this text reader-focused.” The B1 prompt referred explicitly to the CEFR B1 language level, requiring ChatGPT to tailor the text for intermediate readers. Finally, the Process prompt simulated the editing steps observed in the professional editors’ workflows. To evaluate outcomes, we conducted both a qualitative comparison of the revised texts and a quantitative readability analysis using LiNT, a validated tool developed for Dutch texts. Our results show that the human editors substantially improved the readability of the original letters, reducing the use of unfamiliar words, shortening complex sentences, and increasing personal engagement through pronoun use. Among the AI outputs, ChatGPT B1 achieved results most comparable to the editors’, both in readability and accuracy. In contrast, ChatGPT Simple fell short in terms of clarity and introduced errors through faulty inferences. Surprisingly, ChatGPT Process also underperformed compared with ChatGPT B1 and the human editors. Only the editors’ and ChatGPT B1 versions were free of errors. In the discussion, we reflect on how generative AI is reshaping the concept of writing within organizations, the skills required to produce effective written communication, and the impact on writing pedagogy. Rather than replacing human editors, we argue that generative AI can play a valuable role as a collaborative tool in the organizational writing process.
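A minimal sketch of how the study’s three prompt conditions could be issued programmatically, assuming the OpenAI Python client as a stand-in for the ChatGPT interface. The Simple and B1 wordings follow the abstract; the Process steps and the model name are illustrative placeholders, since the editors’ actual workflow is not detailed here.

# Three prompt conditions from the study, issued to a chat model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPTS = {
    "simple": "Make this text reader-focused.",
    "b1": ("Rewrite this text at CEFR language level B1, "
           "so that intermediate readers can understand it."),
    "process": ("Revise this text in steps: replace unfamiliar words, "
                "shorten complex sentences, and address the reader "
                "directly with personal pronouns."),  # illustrative steps
}

def revise(letter: str, condition: str) -> str:
    """Return the model's revision of a business letter under one prompt condition."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": PROMPTS[condition]},
            {"role": "user", "content": letter},
        ],
    )
    return response.choices[0].message.content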
February 2016
-
Abstract
This article proposes novel methods for computational rhetorical analysis to examine the use of citations in a corpus of academic texts. Guided by rhetorical genre theory, our analysis converts texts into graph-theoretic representations in an attempt to isolate and amplify the predicted patterns of recurring moves associated with stable genres of academic writing. We find that our computational method shows promise for reliably detecting and classifying citation moves, producing results similar to those achieved by qualitative researchers coding by hand, as done by Karatsolis (this issue). Further, pairwise comparisons between advisor and advisee texts point to valuable applications of automated computational analysis as formative feedback in mentoring situations.
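A minimal sketch, using invented move labels, of what a graph-theoretic treatment of citation moves might look like: each text becomes a directed graph of move-to-move transitions, and an advisor and advisee text are compared by edge overlap. The paper’s actual formalization is not reproduced here.

import networkx as nx

def moves_to_graph(moves):
    """Build a directed graph whose edges record transitions between rhetorical moves."""
    g = nx.DiGraph()
    for a, b in zip(moves, moves[1:]):
        g.add_edge(a, b)
    return g

def edge_jaccard(g1, g2):
    """Crude similarity measure: Jaccard overlap of the two graphs' edge sets."""
    e1, e2 = set(g1.edges()), set(g2.edges())
    return len(e1 & e2) / len(e1 | e2) if (e1 | e2) else 1.0

# Invented move sequences for an advisor text and an advisee text.
advisor = moves_to_graph(["claim", "citation", "evaluation", "claim", "citation"])
advisee = moves_to_graph(["claim", "citation", "claim", "summary"])
print(f"edge overlap: {edge_jaccard(advisor, advisee):.2f}")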
August 2010
-
Abstract
A particular application of corpus analysis, automated essay scoring (AES) can reveal much about students’ writing skills. In this article we present research undertaken at Educational Testing Service (ETS) as part of its ongoing commitment to developing effective AES systems. AES systems have certain advantages. They can: (a) produce scores similar to those assigned by trained human raters, (b) provide a single consistent metric for scoring, and (c) automate linguistic analyses. However, to understand student writing, we may need to look beyond the final essay in various ways, to consider both the process and the product. By broadening our definition of corpora to capture the dynamics of written composition, it may become possible to identify profiles of writing behavior.
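A toy illustration of the kind of surface features an AES system might combine into a score. This is not ETS’s scoring engine; the features and weights are arbitrary stand-ins for the far richer, trained linguistic analyses such systems automate.

import re

def features(essay: str) -> dict:
    """Extract a few crude surface features from an essay."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "n_words": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
    }

def score(essay: str) -> float:
    """Combine features with arbitrary weights; real systems are trained on human-rated essays."""
    f = features(essay)
    return 0.01 * f["n_words"] + 0.1 * f["avg_sentence_len"] + 0.5 * f["avg_word_len"]

print(round(score("Writing is hard. Automated scoring is harder still."), 2))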
-
Abstract
Based on explorations of the Michigan Corpus of Upper-level Student Papers (MICUSP), the present paper provides an introduction to the central techniques in corpus analysis, including the creation and examination of word lists, keyword lists, concordances, and cluster lists. It also presents a MICUSP-based case study of the demonstrative pronoun “this” and the distribution and use of its attended and unattended forms in different disciplinary subsets of the corpus. The paper aims to demonstrate how corpus linguistics and corpus methods can contribute to writing research and provide fruitful insights into student academic writing.
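A minimal sketch of two of the techniques named above, run on a three-sentence toy corpus: a frequency word list and a KWIC concordance for “this”, with a deliberately crude attended/unattended guess. Real coding of attended versus unattended forms would rely on part-of-speech tagging and manual checking, not a small cue list.

import re
from collections import Counter

corpus = ("This paper examines student writing. This is often overlooked. "
          "We return to this point below; this suggests a pattern.")

tokens = re.findall(r"[a-z']+|[.;,]", corpus.lower())

# Word list: raw frequencies of the most common word forms.
print(Counter(t for t in tokens if t[0].isalpha()).most_common(5))

# KWIC concordance lines for "this", labeled by a crude heuristic:
# "unattended" if the next token is a cue verb or punctuation, else "attended".
UNATTENDED_CUES = {"is", "was", "suggests", "means", ".", ";", ","}
for i, tok in enumerate(tokens):
    if tok == "this":
        nxt = tokens[i + 1] if i + 1 < len(tokens) else "."
        label = "unattended" if nxt in UNATTENDED_CUES else "attended"
        left = " ".join(tokens[max(0, i - 3):i])
        right = " ".join(tokens[i + 1:i + 4])
        print(f"{left:>24} | this | {right:<24} [{label}]")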