JILL A. HATCH
3 articles
Abstract
This study addressed the question, “How consistently do students perform on holistically scored writing assignments?” Instructors from 13 introductory writing classes at two colleges were asked to provide essay sets written by their students in response to the three to five most important writing assignments in their classes. In all, 796 essays were collected from 241 students. The study drew on a pool of 15 experienced judges to evaluate the essays. Each essay set was scored holistically and independently by 6 of the judges who either ranked or graded the essays in the set. All papers written by a particular student were scored by the same judges. Pairwise correlations of the scores assigned to each essay set were computed for each judge and then averaged across judges. The average of these correlations was 0.16, indicating very low consistency of holistically scored student performance from essay to essay. This result suggests that drawing conclusions from one or even a few writing samples is problematic.
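As a hypothetical illustration of the analysis described above, the following sketch computes pairwise cross-essay correlations for each judge and averages them. All scores and judge names are invented for illustration (the actual study used 15 judges, 13 classes, and 3 to 5 essays per student); only the shape of the computation follows the abstract.

```python
# Sketch of the consistency analysis: for each judge, correlate scores
# across pairs of essay assignments, then average across judges.
from itertools import combinations
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation of two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Invented data: judge -> {essay assignment -> scores for 5 students}.
judge_scores = {
    "judge_1": {"essay_1": [3, 1, 4, 2, 3],
                "essay_2": [2, 3, 1, 4, 2],
                "essay_3": [4, 2, 3, 1, 3]},
    "judge_2": {"essay_1": [2, 2, 4, 1, 3],
                "essay_2": [3, 2, 2, 4, 1],
                "essay_3": [4, 1, 3, 2, 2]},
}

per_judge = []
for essays in judge_scores.values():
    # All pairwise essay-to-essay correlations for this judge.
    rs = [pearson(essays[a], essays[b])
          for a, b in combinations(sorted(essays), 2)]
    per_judge.append(mean(rs))

print(round(mean(per_judge), 2))  # average cross-essay consistency
```

A value near zero, like the 0.16 reported in the study, would indicate that a student's holistic score on one essay says little about their score on the next.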
Abstract
In many literacy studies, it is important to establish the reliability of independent observers' judgments. Reliability is most commonly measured either by the percentage of agreement between the observers' judgments or by the correlation between them. This article argues that the percentage of agreement measure is more difficult to interpret than correlation measures because (a) the effects of chance agreement are not accounted for automatically by the percentage of agreement measure, and (b) rates of chance agreement are strongly influenced by the variability of the data, by “ceiling” and “floor” effects, and by the scoring of near agreement as perfect agreement. For these reasons, the authors recommend that the field of literacy research adopt correlation as the standard method for estimating the reliability of observers' judgments.
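The contrast the abstract draws can be seen in a small worked example. The ratings below are invented and deliberately clustered at the top of a 1–4 scale (a “ceiling” effect): the two raters match on most items simply because nearly every score is a 4, so percentage of agreement looks high while the correlation between their judgments stays low.

```python
# Illustration: under a ceiling effect, chance agreement inflates
# percentage of agreement, while correlation is not inflated.
from statistics import mean, pstdev

def percent_agreement(a, b):
    """Percentage of items on which two raters gave the same score."""
    return 100 * sum(x == y for x, y in zip(a, b)) / len(a)

def pearson(x, y):
    """Pearson correlation of two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((p - mx) * (q - my) for p, q in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Invented ratings on a 1-4 scale, clustered at 4 (ceiling effect).
rater_a = [4, 4, 4, 3, 4, 4, 4, 4, 3, 4]
rater_b = [4, 4, 4, 4, 4, 3, 4, 4, 4, 4]

print(percent_agreement(rater_a, rater_b))       # → 70.0
print(round(pearson(rater_a, rater_b), 2))       # → -0.17
```

Seventy percent agreement sounds respectable, yet the correlation is near zero: the raters' departures from the modal score are unrelated, which is exactly the interpretive trap the abstract warns about.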
Abstract
Although social psychologists have studied how people form impressions of others through viewing them, listening to them speak, or reading written descriptions of them, researchers have not looked extensively at the ways in which readers form impressions of writers' personalities while reading their texts. This article reports on a series of studies in which different groups of readers were asked to respond to essays written by high school students applying for college admission. Our findings suggest that independent readers' impressions of writers' personalities overlap far more than would be expected by chance, that readers' impressions of writers' personalities can have practical consequences for writers, and that texts can be revised so as to influence, in predicted ways, the types of personality traits that readers are likely to infer.