M. M. Patchan
3 articles
Abstract
Peer assessment is a technique with many possible benefits for instruction across the curriculum. However, the value obtained from receiving peer feedback may depend critically on the relative abilities of the author and the reviewer. We develop a new model of such relative-ability effects on peer assessment, based on the well-supported Flower and Hayes model of revision processes. To test this model across the stages of peer assessment (initial text quality, review content, revision amount, and revision quality), 189 undergraduate students in a large introductory course were randomly assigned to consistently receive feedback from higher-ability or lower-ability peers. Overall, there were few main effects of author ability or reviewer ability. Instead, as predicted, there were many interactions between the two factors, suggesting the new model is useful for understanding ability factors in peer assessment. Often lower-ability writers benefitted more from receiving feedback from lower-ability reviewers, while higher-ability writers benefitted equally from feedback from lower-ability and higher-ability reviewers. This result leads to the practical recommendation of grouping students by ability during peer assessment, contrary to student beliefs that only feedback from high-ability peers is worthwhile.
Writing in natural sciences: Understanding the effects of different types of reviewers on the writing
Abstract
In undergraduate natural science courses, two types of evaluators are commonly used to assess student writing: graduate-student teaching assistants (TAs) or peers. The current study examines how well these approaches to evaluation support student writing. The differences between the two evaluators are likely to affect multiple aspects of the writing process: first-draft quality, the amount and types of feedback provided, the amount and types of revisions, and final-draft quality. Therefore, we examined how these aspects of the writing process were affected when undergraduate students wrote papers to be evaluated by a group of peers versus their TA. Several interesting results were found. First, the quality of the students' first drafts was higher when they were writing for their peers than when writing for their TA. In terms of feedback, students provided longer comments and focused more on the prose than the TAs did. Finally, more revisions were made when students received feedback from their peers, especially prose revisions. Despite all of the benefits seen with peers as evaluators, there was only a moderate difference in final-draft quality. This result indicates that while peer review is helpful, research is still needed on how to enhance its benefits.
A validation study of students’ end comments: Comparing comments by students, a writing instructor, and a content instructor
Abstract
To include more writing assignments in large classrooms, some instructors have been utilizing peer review. However, many instructors are hesitant to use peer review because they are uncertain whether students are capable of providing reliable and valid ratings and comments. Previous research has shown that students are in fact capable of rating their peers' papers reliably and with the same accuracy as instructors. On the other hand, relatively little research has focused on the quality of students' comments. This study provides a first in-depth analysis of students' comments in comparison with a writing instructor's and a content instructor's comments. Over 1400 comment segments, provided by undergraduates, a writing instructor, and a content instructor, were coded for the presence of 29 different feedback features. Overall, our results support the use of peer review: students' comments seem to be fairly similar to instructors' comments. Based on the main differences between students and the two types of instructors, we draw implications for training students and instructors on providing feedback. Specifically, students should be trained to focus on content issues, while content instructors should be encouraged to provide more solutions and explanations.