Abstract
Peer review has been viewed as a promising solution for improving students' writing, which remains a great challenge for educators. However, one core problem with peer review of writing is that potentially useful feedback from peers is not always presented in ways that lead to revision. Our prior investigations found that whether students implement feedback is significantly correlated with two feedback features: localization information and concrete solutions. But hand-coding reviews for these feedback features is time-intensive for researchers and instructors. We apply data mining and Natural Language Processing techniques to automatically code reviews for these feedback features. Our results show that it is feasible to add intelligent support to peer review systems that automatically assesses students' reviewing performance with respect to problem localization and solution. We also show that similar research conclusions about helpfulness perceptions of feedback across students and different expert types can be drawn from automatically coded data and from hand-coded data.
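The abstract describes automatically coding review comments for two features: problem localization and concrete solutions. As a minimal illustrative sketch only (the paper's actual approach uses models trained on hand-coded corpora, not these hand-picked cue patterns), a rule-based detector might flag surface cues like location words or quoted draft text for localization, and suggestion verbs for solutions:

```python
import re

# Hypothetical cue patterns for illustration; not the paper's trained features.
LOCALIZATION_CUES = re.compile(
    r"\b(page|paragraph|section|sentence|line|introduction|conclusion)\b"
    r'|"[^"]+"',  # quoting text from the draft also localizes the comment
    re.IGNORECASE,
)
SOLUTION_CUES = re.compile(
    r"\b(should|could|consider|suggest|try|add|remove|replace|rewrite)\b",
    re.IGNORECASE,
)

def code_review_comment(comment: str) -> dict:
    """Flag whether a peer-review comment localizes the problem
    and offers a concrete solution."""
    return {
        "localized": bool(LOCALIZATION_CUES.search(comment)),
        "has_solution": bool(SOLUTION_CUES.search(comment)),
    }
```

For example, "The second paragraph is unclear; consider splitting it." would be coded as both localized and solution-bearing, while "Nice work overall!" would be coded as neither.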
- Journal: Journal of Writing Research
- Published: 2012-11-01
- DOI: 10.17239/jowr-2012.04.02.3
- Open Access: Diamond (PDF available)