Assessing Writing
Appropriateness as an aspect of lexical richness: What do quantitative measures tell us about children's writing?
Abstract
Quantitative measures of vocabulary use have added much to our understanding of first and second language writing development. This paper argues for measures of register appropriateness as a useful addition to these tools. Developing an idea proposed by Durrant and Brenchley (2019), it explores what such measures can tell us about vocabulary development in the L1 writing of school children in England and critically examines how results should be interpreted. It shows that significant patterns of discipline- and genre-specific vocabulary development can be identified for measures related to four distinct registers, though the strongest patterns are found for vocabulary associated with fiction and academic writing. Follow-up analyses showed that changes across year groups were primarily driven, not by the nature of individual words, but by the overall quantitative distribution of register-specific vocabulary, suggesting that the traditional distinction between measures of lexical diversity and lexical sophistication may not be helpful for understanding development in this context. Closer analysis of academic vocabulary showed development of distinct vocabularies in Science and English writing in response to sharply differing communicative needs in those disciplines, suggesting that development in children’s academic vocabulary should not be seen as a single coherent process.
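The paper's actual instruments are not reproduced here; as a minimal sketch of the general idea behind a register-appropriateness measure, the snippet below scores a text by the proportion of its tokens drawn from a register-specific vocabulary list. The word lists and names are hypothetical illustrations, not the study's materials; real studies derive such lists from reference corpora.

```python
import re

# Hypothetical register-specific word lists (illustrative only). In practice
# these would be derived from reference corpora of academic prose and fiction.
ACADEMIC_WORDS = {"analyse", "data", "evidence", "factor", "method", "process"}
FICTION_WORDS = {"suddenly", "whispered", "dark", "ran", "door", "smiled"}

def register_proportion(text, word_list):
    """Proportion of a text's tokens drawn from a register-specific list:
    one simple way to quantify register-appropriate vocabulary use."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in word_list for t in tokens) / len(tokens)

essay = "The data provide evidence that this method affects the process."
print(f"academic: {register_proportion(essay, ACADEMIC_WORDS):.2f}")  # 0.40
print(f"fiction:  {register_proportion(essay, FICTION_WORDS):.2f}")   # 0.00
```

A measure of this kind rewards vocabulary that matches the target register, rather than vocabulary that is merely varied (diversity) or rare (sophistication), which is the distinction the abstract questions.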
Going online: The effect of mode of delivery on performances and perceptions on an English L2 writing test suite
Abstract
In response to changing stakeholder needs, large-scale language test providers have increasingly considered the feasibility of delivering paper-based examinations online. Evidence is required, however, to determine whether online delivery of writing tests changes writing performance, as reflected in differential test scores across delivery modes, and whether test-takers hold favourable perceptions of online delivery. The current study aimed to determine the effect of delivery mode on the two writing tasks (reading-into-writing and extended writing) within the Trinity College London Integrated Skills in English (ISE) test suite across three proficiency levels (CEFR B1-C1). In total, 283 test-takers (107 at ISE I/B1, 109 at ISE II/B2, and 67 at ISE III/C1) completed both writing tasks in paper-based and online modes. Test-takers also completed a questionnaire to gauge perceptions of the impact, usability and fairness of the delivery modes. Many-facet Rasch measurement (MFRM) analysis of scores revealed that delivery mode had no discernible effect, apart from the reading-into-writing task at ISE I, where the paper-based mode was slightly easier. Test-takers generally held more positive perceptions of the online delivery mode, although technical problems were reported. Findings are discussed with reference to the need for further research into interactions between delivery mode, task and level.
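For readers unfamiliar with MFRM: the model extends the Rasch model so that a rating depends jointly on candidate ability, task difficulty, rater severity, and rating-scale thresholds. The sketch below, a rough illustration rather than the study's analysis, computes category probabilities under one common rating-scale formulation; all parameter values are made up for the example.

```python
import math

def mfrm_category_probs(theta, task_difficulty, rater_severity, thresholds):
    """Category probabilities under a many-facet Rasch (rating scale) model.

    log(P_k / P_{k-1}) = theta - task_difficulty - rater_severity - tau_k,
    where tau_k is the k-th category threshold. Returns probabilities for
    score categories 0..len(thresholds).
    """
    # Cumulative log-odds numerators for each category.
    logits = [0.0]
    for tau in thresholds:
        logits.append(logits[-1] + theta - task_difficulty - rater_severity - tau)
    total = sum(math.exp(l) for l in logits)
    return [math.exp(l) / total for l in logits]

# Illustrative values only: a mid-ability candidate, a slightly hard task,
# a slightly severe rater, and thresholds for a 0-4 rating scale.
probs = mfrm_category_probs(theta=0.5, task_difficulty=0.2,
                            rater_severity=0.3, thresholds=[-1.5, -0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])
```

Because rater severity and task difficulty enter the model as separate facets, score comparisons across delivery modes in an analysis like the one reported can be adjusted for those influences rather than confounded by them.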