Can ChatGPT do the same? ChatGPT and professional editors compared
Abstract
Since the launch of ChatGPT, the use of, and debate around, generative AI have grown rapidly. Professionals whose work depends on writing have expressed concern about the potential impact of such tools on their roles. But are these concerns justified? Can ChatGPT truly take on the responsibilities of a professional writer? This study investigates that question by comparing the performance of ChatGPT with that of professional editors tasked with optimizing business communication. We conducted two studies, using both qualitative and quantitative methods. In the first, three experienced editors were asked to rewrite four business letters. Their editing processes were recorded using the Microsoft Snipping Tool, and immediately afterward, we conducted retrospective interviews using stimulated recall. These interviews were transcribed and analyzed. Insights from the observations and interviews informed the design of the prompt instructions used in the second study. In the second study, we asked ChatGPT to revise the same four letters using three different prompt types. The Simple prompt instructed the model to “make this text reader-focused.” The B1 prompt referred explicitly to the CEFR B1 language level, requiring ChatGPT to tailor the text for intermediate readers. Finally, the Process prompt simulated the editing steps observed in the professional editors’ workflows. To evaluate outcomes, we conducted both a qualitative comparison of the revised texts and a quantitative readability analysis using LiNT, a validated tool developed for Dutch texts. Our results show that the human editors substantially improved the readability of the original letters, reducing the use of unfamiliar words, shortening complex sentences, and increasing personal engagement through pronoun use. Among the AI outputs, ChatGPT B1 achieved results most comparable to the editors, both in readability and accuracy.
In contrast, ChatGPT Simple fell short in terms of clarity and introduced errors through faulty inferences. Surprisingly, ChatGPT Process also underperformed compared to ChatGPT B1 and the human editors. Only the editors' and ChatGPT B1 versions were free of errors. In the discussion, we reflect on how generative AI is reshaping the concept of writing within organizations, the skills required to produce effective written communication, and the impact on writing pedagogy. Rather than replacing human editors, we argue that generative AI can play a valuable role as a collaborative tool in the organizational writing process.
- Journal
- Journal of Writing Research
- Published
- 2026-02-17
- DOI
- 10.17239/jowr-2026.17.03.02
- Open Access
- Diamond OA (PDF)
Citation Context
Cited by in this index (0): no articles in this index cite this work.
Cites in this index (0): no references match articles in this index.
Related Articles
- Prompt: A Journal of Academic Writing Assignments, Jan 2026. Justin Cook.
- Business and Professional Communication Quarterly, Dec 2025. Daneshwar Sharma; Shiva Kakkar; Ashima Agrawal.
- College Composition and Communication, Sep 2025. Kristi Girdharry.
- The Peer Review, Sep 2025. Ana Raquel Fialho Ferreira Campos; João Tiago Gaspar Cozechen; Elaine Pereira Lustosa; Marcos Angel De Carvalho Eing; Leonardo Schimiloski.
- The Peer Review, Apr 2025. Alexandra Krasova; Mahmoud Othman.