Using the Developmental Path of Cause to Bridge the Gap between AWE Scores and Writing Teachers’ Evaluations
Abstract
Supported by artificial intelligence (AI), the most advanced Automatic Writing Evaluation (AWE) systems have gained increasing attention for their ability to provide immediate scoring and formative feedback, yet teachers have been hesitant to implement them in their classes because correlations between the grades they assign and the AWE scores have generally been low. This raises the question of where improvements in evaluation may need to be made and what approaches are available to carry out this improvement. This mixed-method study involved 59 cause-and-effect essays collected from English language learners enrolled in six different sections of a college-level academic writing course and utilized theory proposed by Slater and Mohan (2010) regarding the developmental path of cause. The study compared the scores of raters who used this developmental path with the AWE scores produced by Criterion, an AWE tool developed by Educational Testing Service (ETS), and with the grades reported by teachers. Findings suggested that if Criterion is to be used successfully in the classroom, writing teachers need to take a meaning-based approach to their assessment, which would allow them and their students to understand more fully how language constructs cause and effect. Using the developmental path of cause as an analytical framework for assessment may then help teachers assign grades that are more in sync with AWE scores, which in turn can help students gain more trust in the scores they receive from both their teachers and Criterion.
- Journal: Writing and Pedagogy
- Published: 2015-07-04
- DOI: 10.1558/wap.v7i2-3.26376
- Open Access: Closed