Gary A. Troia
Abstract
The primary purpose of this study is to investigate the degree to which register knowledge, register-specific motivation, and diverse linguistic features are predictive of human judgment of writing quality in three registers—narrative, informative, and opinion. The secondary purpose is to compare the evaluation metrics of register-partitioned automated writing evaluation models in three conditions: (1) register-related factors alone, (2) linguistic features alone, and (3) the combination of these two. A total of 1006 essays ( n = 327, 342, and 337 for informative, narrative, and opinion, respectively) written by 92 fourth- and fifth-graders were examined. A series of hierarchical linear regression analyses controlling for the effects of demographics were conducted to select the most useful features to capture text quality, scored by humans, in the three registers. These features were in turn entered into automated writing evaluation predictive models with tuning of the parameters in a tenfold cross-validation procedure. The average validity coefficients (i.e., quadratic-weighed kappa, Pearson correlation r, standardized mean score difference, score deviation analysis) were computed. The results demonstrate that (1) diverse feature sets are utilized to predict quality in the three registers, and (2) the combination of register-related factors and linguistic features increases the accuracy and validity of all human and automated scoring models, especially for the registers of informative and opinion writing. The findings from this study suggest that students’ register knowledge and register-specific motivation add additional predictive information when evaluating writing quality across registers beyond that afforded by linguistic features of the paper itself, whether using human scoring or automated evaluation. 
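To make the validity coefficients named above concrete, the sketch below shows how three of them (quadratic-weighted kappa, Pearson r, and a standardized mean score difference) could be computed for a pair of human and automated score vectors. This is a minimal illustration using scikit-learn and NumPy with hypothetical ratings, not the study's data or its exact computational procedure.

```python
# Hedged sketch: computing agreement/validity metrics between human and
# automated essay scores. The score arrays below are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

human = np.array([3, 4, 2, 5, 3, 4, 2, 3])  # hypothetical human ratings (1-5 scale)
model = np.array([3, 4, 3, 5, 2, 4, 2, 3])  # hypothetical automated scores

# Quadratic-weighted kappa: penalizes disagreements by squared distance,
# so off-by-one score differences count less than larger ones.
qwk = cohen_kappa_score(human, model, weights="quadratic")

# Pearson correlation between the two score vectors.
r = np.corrcoef(human, model)[0, 1]

# Standardized mean score difference: mean difference over the pooled SD.
pooled_sd = np.sqrt((human.std(ddof=1) ** 2 + model.std(ddof=1) ** 2) / 2)
smd = (model.mean() - human.mean()) / pooled_sd

print(f"QWK={qwk:.3f}, r={r:.3f}, SMD={smd:.3f}")
```

In operational automated-scoring work, these metrics are typically averaged across cross-validation folds, as the abstract describes for its tenfold procedure.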
These findings have practical implications for educational practitioners and scholars: they can help strengthen attention to register-specific writing skills and to the cognitive and motivational forces that are essential components of effective writing instruction and assessment.
-
Abstract
This study examined multiple measures of written expression as predictors of narrative writing performance for 362 students in grades 4 through 6. Each student wrote a fictional narrative in response to a title prompt that was evaluated using a levels of language framework targeting productivity, accuracy, and complexity at the word, sentence, and discourse levels. Grade-related differences were found for all of the word-level and most of the discourse-level variables examined, but for only one sentence-level variable (punctuation accuracy). The discourse-level variables of text productivity, narrativity, and process use, the sentence-level variables of grammatical correctness and punctuation accuracy, and the word-level variables of spelling/capitalization accuracy, lexical productivity, and handwriting style were significant predictors of narrative quality. Most of the same variables that predicted story quality also differentiated good and poor narrative writers, except punctuation accuracy and narrativity, and variables associated with word and sentence complexity also helped distinguish narrative writing ability. The findings imply that a combination of indices from across all levels of language production is most useful for differentiating writers and their writing. The authors suggest researchers and educators consider levels of language measures such as those used in this study in their evaluations of writing performance, as a number of them are fairly easy to calculate and are not plagued by the subjective judgments endemic to most writing quality rubrics.