Abstract

Large language models continue to evolve far faster than policies at colleges and universities; writing instruction and peer tutoring, in consequence, will have to change faster still. Over six months of testing by the researchers, ChatGPT produced prose of increasing clarity, sharper analysis, and more varied (if often formulaic) stylistic choices. At the same time, all AIs tested struggled with copyrighted materials, sometimes refusing to employ them or quoting sources while claiming not to have done so. The authors offer preliminary suggestions for those who staff and direct writing centers, specifically methods for adapting to generative AI rather than flatly opposing it. We draw on student responses to a campus survey administered in 2023 and 2024, as well as a partnership between AI and sixteen first-year students. Such adaptation may prove particularly useful for those helping writers otherwise marginalized by socioeconomic background, neurodiversity, or personal identity. Finally, we advocate getting ahead of any administrative efforts to dictate terms for AI use that could lead to reduced status, or outright elimination, of human tutors.

Journal
The Peer Review
Published
2025-04
Open Access
Gold OA (PDF available)
Subjects
Generative AI, LLMs, pedagogy, prompt-engineering, praxis, drafts, working conditions, neoliberalism, employment
