Eliciting formative assessment in peer review

Ilya M. Goldin, Carnegie Mellon University; Kevin D. Ashley

Abstract

Computer-supported peer review systems can support reviewers and authors in many ways, including through the use of different kinds of reviewing criteria. It has become an increasingly important empirical question whether reviewers are sensitive to different criteria and whether some kinds of criteria are more effective than others. In this work, we compared the differential effects of two types of rating prompts, each focused on a different set of criteria for evaluating writing: prompts that focus on domain-relevant aspects of writing composition versus prompts that focus on issues directly pertaining to the assigned problem and to the substantive issues under analysis. We found evidence that reviewers are sensitive to the differences between the two types of prompts; that reviewers distinguish among problem-specific issues but not among domain-writing ones; that both types of ratings correlate with instructor scores; and that problem-specific ratings are more likely to be helpful and informative to peer authors in that they are less redundant.

Journal
Journal of Writing Research
Published
2012-11-01
DOI
10.17239/jowr-2012.04.02.5
Open Access
Diamond OA (PDF available)
