Assessing Writing
January 2025
-
A meta-analysis of relationships between syntactic features and writing performance and how the relationships vary by student characteristics and measurement features
Abstract
Students’ proficiency in constructing sentences impacts the writing process and writing products. Linguistic demands in writing differ in terms of both student characteristics and measurement features. To identify the syntactic demands associated with these features, we conducted a meta-analysis examining the relationships between syntactic features (complexity and accuracy) and writing performance (quality, productivity, and fluency), and the moderating effects of both student characteristics and measurement features. A total of 109 studies (871 effect sizes; 24,628 participants) met the inclusion criteria. Results showed weak relationships for both syntactic accuracy (r = .25) and syntactic complexity (r = .16). Writers’ characteristics, including grade level and language proficiency, and measurement features, including writing genre, writing outcome, whether the writing task was text-based, and the type of syntactic complexity measure, were significant moderators for certain syntactic features. The findings highlight the importance of writer and measurement factors when considering the relationships between linguistic features in writing and writing performance. Implications are discussed regarding the selection of syntactic features in assessing language use in writing, gaps in the literature, and significance for writing instruction and assessment.
• Aimed to depict the relationships between syntactic features and writing performance.
• Found weak relationships between syntactic features and writing outcomes.
• Relationships vary as a function of student characteristics and measurement features.
• Noun phrase complexity might be more valid than some traditional syntactic complexity measures.
• Findings have important implications for writing assessments.
October 2024
-
Effects of a genre and topic knowledge activation device on a standardized writing test performance
Abstract
The aim of this article was twofold: first, to introduce a design for a writing test intended for application in large-scale assessments of writing, and second, to experimentally examine the effects of employing a device for activating prior knowledge of topic and genre as a means of controlling construct-irrelevant variance and enhancing validity. An authentic, situated writing task was devised, offering students a communicative purpose and a defined audience. Two devices were utilized for the cognitive activation of topic and genre knowledge: an infographic and a genre model. The participants in this study were 162 fifth-grade students from Santiago de Chile, with 78 students assigned to the experimental condition (with activation device) and 84 students assigned to the control condition (without activation device). The results demonstrate that the odds of presenting good writing ability are higher for students who were part of the experimental group, even when controlling for text transcription ability, considered a predictor of writing. These findings hold implications for the development of large-scale tests of writing guided by principles of educational and social justice.
• Genre and topic knowledge are forms of prior knowledge relevant to writing.
• Higher odds for better writing in students exposed to prior knowledge activation.
• Results support use of prior knowledge activation in standardized assessment.
April 2024
-
Abstract
The ability to produce fluent and coherent written text impacts learning and attainment. Valid and reliable assessments of writing are needed to monitor progression, develop goals for writing, and identify struggling writers. In order to inform practice and research, a systematic review was conducted to investigate which writing productivity measures captured writing development and identified struggling writers in elementary school. Sixty-seven empirical studies were identified for inclusion, appraised, and their data extracted under the themes of writing genre, duration of writing task, use of priming of topic knowledge prior to the writing assessment, use of planning time, writing modality, gender, age of participants, and learning difficulties. Total Number of Words and Correct Word Sequences were the most common means of measuring productivity. Productivity varied significantly between genres and durations of writing tasks and was higher in girls than boys. Students with learning difficulties scored significantly lower in writing productivity when compared to typically developing peers. Insufficient research was available to draw conclusions regarding the effects of priming of topic knowledge, planning, and modality on writing productivity. Study limitations, links to the assessment of writing, and recommended further research are discussed.
-
Visualizing formative feedback in statistics writing: An exploratory study of student motivation using DocuScope Write & Audit
Abstract
Recently, formative feedback in writing instruction has been supported by technologies generally referred to as Automated Writing Evaluation tools. However, such tools are limited in their capacity to explore specific disciplinary genres, and they have shown mixed results in student writing improvement. We explore how technology-enhanced writing interventions can positively affect student attitudes toward and beliefs about writing, both reinforcing content knowledge and increasing student motivation. Using a student-facing text-visualization tool called Write & Audit, we hosted revision workshops for students (n = 30) in an introductory-level statistics course at a large North American university. The tool is designed to be flexible: instructors of various courses can create expectations and predefine topics that are genre-specific. In this way, students are offered non-evaluative formative feedback which redirects them to field-specific strategies. To gauge the usefulness of Write & Audit, we used a previously validated survey instrument designed to measure the construct model of student motivation (Ling et al., 2021). Our results show significant increases in student self-efficacy and beliefs about the importance of content in successful writing. We contextualize these findings with data from three student think-aloud interviews, which demonstrate metacognitive awareness while using the tool. Ultimately, this exploratory study is non-experimental, but it contributes a novel approach to automated formative feedback and confirms the promising potential of Write & Audit.
-
Abstract
Research into the contribution of multimodality to language learning is gaining momentum. While most studies pave the way for new understandings of language teaching and learning, there is an increasing demand for comprehensive assessment practices, particularly within higher education contexts. A few studies have emphasized the importance of reflecting on and establishing criteria for the assessment of multimodal literacy. This is necessary to understand students’ contributions in detail and to provide them with effective support in developing their multimodal skills. This study discusses the assessment of multimodal writing in English for Specific Purposes (ESP) contexts. It presents the design of an analytical tool for assessing multimodal texts and provides an example of its application. This tool covers assessment categories such as language use, content expression, interpersonal meaning, multimodality, and creativity and originality. As an example, we focus on the multimodal writing of a video game narrative, a genre that requires the integration of multiple modes of communication to convey meaning more effectively. Finally, this study offers pedagogical insights into the assessment of multimodal literacy in ESP.
July 2023
-
Abstract
Learning how to write occluded genres is an elusive task (Swales, 1996) – even more so in the case of students writing in a second or additional language. To achieve discourse competence in the use of one of these genres, in this case the ‘statement of purpose’ typical of post-graduate programme admission forms, it is first necessary to fully understand its features at both the macrotextual and microlinguistic levels (Gillaerts, 2003; Bhatia, 2004). This qualitative study focuses on the writing of learners of Spanish as an additional language to analyse whether feedback provided by peers impacts the quality of the statements of purpose they write. Through a dual discourse analysis of their written work and in-class interactions during peer-feedback sessions, our study finds that, when properly trained and using tailored assessment tools, students can use peer-assessment profitably to improve the quality of their statements of purpose, as well as to acquire appropriate metalanguage to guide others. Our results thus reconfirm the beneficial effects of helping students to achieve feedback literacy.
April 2023
-
Genre pedagogy: A writing pedagogy to help L2 writing instructors enact their classroom writing assessment literacy and feedback literacy
Abstract
As part of a larger case study, this single exploratory case study aims to explore the potential of genre-based pedagogy (GBP) to allow L2 writing instructors to enact their writing assessment literacy and feedback literacy. The findings demonstrate that GBP enabled the participating instructor of a genre-based EAP writing course to carry out effective writing classroom assessment practices and thus enact their writing assessment literacy and feedback literacy. GBP allowed effective writing classroom assessment practices such as diagnostic assessment and learner involvement in assessment. More specifically, genre exploration tasks led to diagnostic assessment and helped the instructor coordinate effective classroom discussions to elicit evidence of the students’ knowledge of the target genre that they would study. Second, students’ production of texts in target genres not only allowed the instructor to collect evidence of the students’ specific genre knowledge, but it also afforded learner involvement through self-reflection. The instructor could also efficiently interpret this evidence and provide formative feedback through pre-established genre-specific assessment criteria.
January 2022
-
Appropriateness as an aspect of lexical richness: What do quantitative measures tell us about children's writing?
Abstract
Quantitative measures of vocabulary use have added much to our understanding of first and second language writing development. This paper argues for measures of register appropriateness as a useful addition to these tools. Developing an idea proposed by Durrant and Brenchley (2019), it explores what such measures can tell us about vocabulary development in the L1 writing of school children in England and critically examines how results should be interpreted. It shows that significant patterns of discipline- and genre-specific vocabulary development can be identified for measures related to four distinct registers, though the strongest patterns are found for vocabulary associated with fiction and academic writing. Follow-up analyses showed that changes across year groups were primarily driven, not by the nature of individual words, but by the overall quantitative distribution of register-specific vocabulary, suggesting that the traditional distinction between measures of lexical diversity and lexical sophistication may not be helpful for understanding development in this context. Closer analysis of academic vocabulary showed development of distinct vocabularies in Science and English writing in response to sharply differing communicative needs in those disciplines, suggesting that development in children’s academic vocabulary should not be seen as a single coherent process.
October 2018
-
Abstract
The British Academic Written English (BAWE) corpus (www.coventry.ac.uk/BAWE) comprises almost 3,000 pieces of university student writing distributed across four domains (Arts & Humanities, Life Sciences, Social Sciences, Physical Sciences) and four levels of study (from first-year undergraduate to taught Master's level). The texts had all been submitted as part of regular university coursework and had been awarded top grades, indicating that they had met disciplinary requirements in terms of level and task. The corpus was compiled to enable identification of the linguistic and generic features associated with successful university student writing. Our detailed analyses of the corpus led to the identification of thirteen genre families, and support the premises that university students write in a wider variety of genres than is commonly recognised, and that student writing differs across genres, disciplines and levels of university study. This review introduces the BAWE corpus and the associated genre family classification, then explains how they can be accessed and used for teaching and research purposes, how they have been used to deepen our understanding of academic writing in English, and where