Writing and Pedagogy
6 articles
December 2025
-
Trusting Each Other, Trusting Machines: Undergraduate Students’ Perceptions of Copresence Afforded by Writing Technologies, Networked Platforms, and Generative AI in Their Academic Writing Practices ↗
Abstract
This article examines how students use and perceive digital writing tools, including chat platforms and generative AI, within academic writing environments. It describes a qualitative study of 15 undergraduate students in guided focus group discussions. In a grounded theory analysis of the focus group transcripts, the researchers explored undergraduates’ sense of copresence—their perception of support both from human interaction with peers and instructors and from AI technologies during their writing processes. Findings reveal that students’ trust in both peer feedback and AI assistance plays a crucial role in their writing, shaping their decisions about which tools to use and how they integrate human and AI feedback in the development and revision of their writing. The study sheds light on students’ nuanced understanding of the affordances and limitations of multimodal chat platforms and generative AI technologies. We conclude by highlighting the need for pedagogical practices that support students’ choice of tools when collaborating in digital spaces. We suggest future research directions that will enable us to better understand how copresence and trust influence students’ writing in these contexts.
May 2016
-
Academic literacy and student diversity: The case for inclusive practice Ursula Wingate (2015) and Genre-based automated writing evaluation for L2 research writing: From design to evaluation and enhancement Elena Cotos (2014) ↗
Abstract
Academic literacy and student diversity: The case for inclusive practice Ursula Wingate (2015) ISBN-13: 978-1783093472. Pp. 208. Genre-based automated writing evaluation for L2 research writing: From design to evaluation and enhancement Elena Cotos (2014) ISBN-13: 978-1137333360. Pp. 302.
July 2015
-
Abstract
This article aims to engage specialists in writing pedagogy, assessment, genre study, and educational technologies in a constructive dialog and joint exploration of automated writing analysis as a potent instantiation of computer-enhanced assessment for learning. It recounts the values of writing pedagogy and, from this perspective, examines legitimate concerns with automated writing analysis. Emphasis is placed on the need to substantiate the construct-driven debate with systematic empirical evidence that would corroborate or refute interpretations, uses, and consequences of automated scoring and feedback tools intended for specific contexts. Such evidence can be obtained by adopting a validity argument framework. To demonstrate an application of this framework, the article presents a novel genre-based approach to automated analysis configured to support research writing and provides examples of validity evidence for using it with novice scholarly writers.
-
Using the Developmental Path of Cause to Bridge the Gap between AWE Scores and Writing Teachers’ Evaluations ↗
Abstract
Supported by artificial intelligence (AI), the most advanced Automatic Writing Evaluation (AWE) systems have gained increasing attention for their ability to provide immediate scoring and formative feedback, yet teachers have been hesitant to adopt them in their classes because correlations between the grades they assign and the AWE scores have generally been low. This raises the question of where improvements in evaluation may need to be made, and what approaches are available to carry out this improvement. This mixed-method study involved 59 cause-and-effect essays collected from English language learners enrolled in six different sections of a college-level academic writing course and utilized theory proposed by Slater and Mohan (2010) regarding the developmental path of cause. The study compared the results of raters who used this developmental path with the accuracy of AWE scores produced by Criterion, an AWE tool developed by Educational Testing Service (ETS), and with the grades reported by teachers. Findings suggested that if Criterion is to be used successfully in the classroom, writing teachers need to take a meaning-based approach to their assessment, which would allow them and their students to understand more fully how language constructs cause and effect. Using the developmental path of cause as an analytical framework for assessment may then help teachers assign grades that are more in sync with AWE scores, which in turn can help students gain more trust in the scores they receive from both their teachers and Criterion.
September 2014
-
Abstract
This study examined the impact of different forms of feedback on the writing of a group of 82 adolescent students in secondary English classes. During a 6-week intervention, students were randomly assigned to one of three feedback groups: peer feedback on pen-and-paper drafts, teacher feedback delivered electronically through a course management system, and automated feedback generated through computer-based writing evaluation software. Pre- and post-measures of student writing quality, length, and correctness were analyzed, and survey data explored student perceptions of their experiences. Findings indicate that all students, regardless of which form of feedback they received, wrote longer essays and scored higher on holistic ratings at posttest than they did at pretest. Neither language status nor group assignment differentially affected performance on length or holistic quality. However, differences between feedback groups emerged on the proximal measure that examined mastery of particular aspects of the genre being taught. Both peer feedback and teacher feedback delivered electronically had a statistically significant impact on student performance in the genre of open-ended response. The article concludes with a discussion of the implications of these findings for future research and instruction in the secondary context.
January 2010
-
Abstract
The teaching and learning of writing was examined in ten diverse K-12 schools in which all of the students in one or more classrooms had individual access to laptop computers. Substantial positive changes were observed in each stage of the writing process, including better access to information sources for planning and pre-writing; easier drafting of papers, especially for students with physical or cognitive disabilities that made handwriting laborious; more access to feedback, both from teachers, who could read printed papers much more quickly than handwritten ones, and, in some schools, from automated writing evaluation programs; more frequent and extensive revision; and greater opportunities to publish final papers or otherwise disseminate them to real audiences.