Assessing Writing

43 articles

April 2026

  1. Developing students’ feedback literacy in disciplinary academic writing through generative artificial intelligence
    doi:10.1016/j.asw.2026.101030

January 2026

  1. Generative artificial intelligence for automated essay scoring: Exploring teacher agency through an ecological perspective
    Abstract

    Generative artificial intelligence (AI) is increasingly used in writing assessment, particularly for automated essay scoring (AES) and for generating formative feedback within automated writing evaluation (AWE). While AI-driven AES enhances efficiency and consistency, concerns regarding accuracy, bias, and ethical implications raise critical questions about its role in assessment. This paper examines the impact of generative AI on teacher agency through an ecological perspective, which considers agency as shaped by personal, institutional, and sociocultural factors. The analysis highlights the need for teachers to critically mediate AI-generated scores and feedback to align them with pedagogical goals, ensuring AI functions as an assistive tool rather than a determinant of assessment outcomes. Although AI can streamline assessment, over-reliance risks diminishing teachers’ evaluative expertise and reinforcing biases embedded in AI systems. Ethical concerns, including transparency, data privacy, and fairness, further complicate its adoption. To address these challenges, this paper proposes a framework for responsible AI integration that prioritizes bias mitigation, data security, and teacher-driven decision-making. The discussion concludes with pedagogical implications and directions for future research on AI-assisted writing assessment.

    • Teachers can actively mediate AI-generated scores to maintain agency.
    • Dependence on AES may weaken teachers’ evaluative skills.
    • Bias, data privacy, and AI opacity can undermine teachers’ decision-making.
    • AI literacy and hybrid assessment models can promote teacher autonomy.
    • A framework for protecting teacher agency in generative AI–based AWE is presented.

    doi:10.1016/j.asw.2025.100990
  2. How reliable and valid is peer evaluation in adolescents’ L2 argumentative writing?
    Abstract

    Peer evaluation is widely recognized for its educational benefits; however, its reliability and validity, particularly among adolescent second-language (L2) writers at the early stages of English language and literacy development, remain insufficiently explored. This explanatory sequential mixed-methods study investigated the reliability and validity of peer evaluation in English argumentative writing among 35 Grade 10 and 37 Grade 12 students from a public high school in Beijing, China. Twelve of the participating students (six at each grade) were interviewed about the validity, reliability, and value of peer evaluation. The findings indicated that peer evaluations demonstrated high levels of reliability and validity, with peer-assessed writing scores closely aligning with inter-teacher assessments. Notably, variations were observed among Grade 10 students, particularly in the evaluation of lower-order writing skills, such as grammar and vocabulary, which exhibited reduced validity. These results underscore the potential of peer evaluation in assessing higher-order content-level writing across varying levels of L2 English writing proficiency. The study also highlights areas where adolescent L2 writers may require additional support to enhance the effectiveness of peer evaluation practices in English argumentative writing. Implications for improving English argumentative writing instruction and refining peer evaluation strategies in high school L2 English classrooms are discussed.

    • Peer evaluation shows high reliability, similar to inter-teacher rating.
    • Peer evaluation works well for higher-order skills in L2 argumentative writing.
    • 10th graders struggled with evaluating lower-order skills like grammar.
    • 12th graders evaluated lower- and higher-order skills with greater validity than 10th graders.

    doi:10.1016/j.asw.2025.100992

April 2025

  1. Towards a better understanding of integrated writing performance: The influence of literacy strategy use and independent language skills
    Abstract

    This study explores the influence mechanism of literacy strategy use and independent language skills (e.g., reading and writing) on integrated writing (IW) performance. A total of 322 Secondary Four students from four schools in Hong Kong completed single-text reading, multiple-text reading, independent writing, and IW tasks, along with questionnaires investigating their reading strategy use and IW strategy use. Path analyses revealed that multiple-text reading and independent writing had comparable significant impacts on IW, mediating the influence of single-text comprehension. In addition, reading strategy use impacted IW indirectly through independent literacy skills and IW strategy use, while IW strategies exerted a direct influence on IW. Our findings underscore the critical role of language skills in mediating the influence of reading strategies on IW performance among young first language (L1) learners. The implications for research and practice are discussed, emphasizing the complexity of the IW construct and the need for balanced language skills and strategy instruction to enhance IW task performance.

    • A novel exploration of concurrent effects of strategies and independent skills on IW.
    • Multiple-text reading and independent writing directly influence IW performance.
    • Independent skills mediate the impact of reading strategies on IW performance.
    • Reading strategy use indirectly affects IW through independent skills and IW strategy use.
    • Balanced language skills and strategy instruction are crucial for IW performance.

    doi:10.1016/j.asw.2025.100922
  2. Designing a rating scale for an integrated reading-writing test: A needs-oriented approach
    Abstract

    To meet current trends in higher education, EAP programmes are held accountable for preparing students for higher education and assessing their readiness for it. Thus, multimodal tasks including integrated writing (IW) assessments have seen a resurgence because they arguably closely mirror academic writing. However, test practicality constraints and variability in the use and format of these assessments mean rating scales often fall short in substantiating the central claims of IW assessment. We developed an integrated reading-writing scale, taking into account reading-writing requirements and empirical research on IW tests, designed to assess readiness for first-year humanities and social science courses. We approached test development as part of the ongoing validation efforts, detailing the considerations involved in the scale development process. We argue that alignment with academic writing requirements should guide the development of IW tests, thereby acknowledging and comprehending nuances of academic writing. The paper demonstrates how considerations and decisions in scale design form part of the validation process from the start, a reminder that assessment is not just a quantitative exercise but a multifaceted process.

    • The design of a rating scale for first-year undergraduate academic writing is detailed.
    • Emphasis is placed on the role of reading in integrated writing scales.
    • Academic argumentation, rather than solely source-use mechanics, is considered.
    • Implications for construct operationalisation in academic evaluations are offered.

    doi:10.1016/j.asw.2025.100918
  3. Does student assessment literacy matter between motivational constructs and engagement in L2 writing? A survey of Chinese EFL undergraduates
    doi:10.1016/j.asw.2025.100916

October 2024

  1. Effects of writing feedback literacies on feedback engagement and writing performance: A cross-linguistic perspective
    doi:10.1016/j.asw.2024.100889

July 2024

  1. Effects of peer feedback in English writing classes on EFL students’ writing feedback literacy
    doi:10.1016/j.asw.2024.100874
  2. Corrigendum to “Assessing metacognition-based student feedback literacy for academic writing” [Assessing Writing 59 (2024) 100811]
    doi:10.1016/j.asw.2024.100869

April 2024

  1. Assessing video game narratives: Implications for the assessment of multimodal literacy in ESP
    Abstract

    Research into the contribution of multimodality to language learning is gaining momentum. While most studies pave the way for new understandings of language teaching and learning, there is an increasing demand for comprehensive assessment practices, particularly within higher education contexts. A few studies have emphasized the importance of reflecting on and establishing criteria for the assessment of multimodal literacy. This is necessary to understand students’ contributions in detail and to provide them with effective support in developing their multimodal skills. This study discusses the assessment of multimodal writing in English for Specific Purposes (ESP) contexts. It presents the design of an analytical tool for assessing multimodal texts and provides an example of its application. This tool covers assessment categories such as language use, content expression, interpersonal meaning, multimodality, and creativity and originality. As an example, we focus on the multimodal writing of a video game narrative, a genre that requires the integration of multiple modes of communication to convey meaning more effectively. Finally, this study offers pedagogical insights into the assessment of multimodal literacy in ESP.

    doi:10.1016/j.asw.2024.100809
  2. Writing assessment and feedback literacy: Where do we stand and where can we go?
    doi:10.1016/j.asw.2024.100829

January 2024

  1. Unlocking writing success: Building assessment literacy for students and teachers through effective interventions
    doi:10.1016/j.asw.2023.100804
  2. Assessing metacognition-based student feedback literacy for academic writing
    doi:10.1016/j.asw.2024.100811

October 2023

  1. Profiling support in literacy development: Use of natural language processing to identify learning needs in higher education
    doi:10.1016/j.asw.2023.100787
  2. Feedback literacy in writing research and teaching: Advancing L2 WCF research agendas
    Abstract

    Research on corrective feedback (CF) has developed from its original focus on identifying which type of CF is most effective for developing L2 language learners’ grammatical accuracy to focusing on how learners use CF. Underpinning this is the assumption that learners know what to do with CF when they receive it. The concept of “feedback literacy” challenges this assumption. Carless and Boud (2018) define feedback literacy as “the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies” (p. 1316). Our intention in this paper is to reflect on the manner in which theoretical and empirical work on feedback literacy can contribute to advancing L2 written corrective feedback (WCF) research agendas. Central in our proposal is the partially under-researched aspect of experience in terms of L2 writers’ educational background, particularly experience with L1 and L2 writing. We further argue that how learners were taught L1 writing and how the L1 educational culture/society values writing can impact how learners approach L2 writing tasks and accompanying feedback. Implications of this inclusive view of the learner for future research and pedagogy are discussed.

    doi:10.1016/j.asw.2023.100786
  3. Understanding EFL students’ feedback literacy development in academic writing: A longitudinal case study
    doi:10.1016/j.asw.2023.100770

July 2023

  1. Peer-feedback of an occluded genre in the Spanish language classroom: A case study
    Abstract

    Learning how to write occluded genres is an elusive task (Swales, 1996) – even more so in the case of students writing in a second or additional language. To achieve discourse competence in the use of one of these genres, in this case the ‘statement of purpose’ typical of post-graduate programme admission forms, it is first necessary to fully understand its features at both the macrotextual and microlinguistic levels (Gillaerts, 2003; Bhatia, 2004). This qualitative study focuses on the writing of learners of Spanish as an additional language to analyse whether feedback provided by peers impacts the quality of the statements of purpose they write. Through a dual discourse analysis of their written work and in-class interactions during peer-feedback sessions, our study finds that, when properly trained and using tailored assessment tools, students can use peer-assessment profitably to improve the quality of their statements of purpose, as well as to acquire appropriate metalanguage to guide others. Our results thus reconfirm the beneficial effects of helping students to achieve feedback literacy.

    doi:10.1016/j.asw.2023.100756
  2. Developing feedback literacy through dialogue-supported performances of multi-draft writing in a postgraduate class
    doi:10.1016/j.asw.2023.100759
  3. The development of teacher feedback literacy in situ: EFL writing teachers’ endeavor to human-computer-AWE integral feedback innovation
    doi:10.1016/j.asw.2023.100739
  4. Beyond literacy and competency – The effects of raters’ perceived uncertainty on assessment of writing
    Abstract

    This study investigated how common raters’ experiences of uncertainty are before, during, and after the rating of writing performances in high-stakes testing, what these feelings of uncertainty involve, and what reasons might underlie them. We also examined if uncertainty was related to raters’ rating experience or to the quality of their ratings. The data were gathered from the writing raters (n = 23) in the Finnish National Certificates of Proficiency, a standardized Finnish high-stakes language examination. The data comprise 12,118 ratings as well as raters’ survey responses and notes during rating sessions. The responses were analyzed by using thematic content analysis and the ratings by descriptive statistics and Many-Facets Rasch analyses. The results show that uncertainty is variable and individual, and that even highly experienced raters can feel unsure about (some of) their ratings. However, uncertainty was not related to rating quality (consistency or severity/leniency). Nor did uncertainty diminish with growing experience. Uncertainty during actual ratings was typically associated with the characteristics of the rated performances but also with other, more general, rater-related or situational factors. Other reasons for uncertainty external to the rating session were also identified, such as those related to the raters themselves. An analysis of the double-rated performances shows that although similar performance-related reasons seemed to cause uncertainty for different raters, their uncertainty was largely associated with different test-takers’ performances. While uncertainty can be seen as a natural part of holistic ratings in high-stakes tests, the study shows that even if uncertainty is not associated with the quality of ratings, we should constantly seek ways to address uncertainty in language testing, for example by developing rating scales and rater training. This may make raters’ work easier and less burdensome.

    doi:10.1016/j.asw.2023.100768
  5. Are self-compassionate writers more feedback literate? Exploring undergraduates’ perceptions of feedback constructiveness
    doi:10.1016/j.asw.2023.100761
  6. Developing EFL teachers’ feedback literacy for research and publication purposes through intra- and inter-disciplinary collaborations: A multiple-case study
    doi:10.1016/j.asw.2023.100751
  7. The mediating role of curriculum configuration on teacher’s L2 writing assessment literacy and practices in embedded French writing
    doi:10.1016/j.asw.2023.100742
  8. The development and validation of a scale on L2 writing teacher feedback literacy
    doi:10.1016/j.asw.2023.100743

April 2023

  1. Towards fostering Saudi EFL learners' collaborative engagement and feedback literacy in writing
    doi:10.1016/j.asw.2023.100721
  2. Developing teacher feedback literacy through self-study: Exploring written commentary in a critical language writing curriculum
    doi:10.1016/j.asw.2023.100709
  3. Exploring multilingual students’ feedback literacy in an asynchronous online writing course
    doi:10.1016/j.asw.2023.100718
  4. Pedagogical values of translingual practices in improving student feedback literacy in academic writing
    doi:10.1016/j.asw.2023.100715
  5. Chinese EFL Teachers’ Writing Assessment Feedback Literacy: A Scale Development and Validation Study
    doi:10.1016/j.asw.2023.100726
  6. Genre pedagogy: A writing pedagogy to help L2 writing instructors enact their classroom writing assessment literacy and feedback literacy
    Abstract

    As part of a larger case study, this single exploratory case study aims to explore the potential of genre-based pedagogy (GBP) to allow L2 writing instructors to enact their writing assessment literacy and feedback literacy. The findings demonstrate that GBP enabled the participating instructor of a genre-based EAP writing course to carry out effective classroom writing assessment practices and thus enact their L2 writing assessment literacy and feedback literacy. GBP allowed effective classroom writing assessment practices such as diagnostic assessment and learner involvement in assessment. More specifically, genre exploration tasks led to diagnostic assessment and helped the instructor coordinate effective classroom discussions to elicit evidence of the students’ knowledge of the target genre that they would study. Second, students’ production of texts in target genres not only allowed the instructor to collect evidence of the students’ specific genre knowledge, but it also afforded learner involvement through self-reflection. The instructor could also efficiently interpret this evidence and provide formative feedback through pre-established genre-specific assessment criteria.

    doi:10.1016/j.asw.2023.100717

January 2023

  1. Exploring the development of student feedback literacy in the second language writing classroom
    doi:10.1016/j.asw.2023.100697

October 2022

  1. Integrated writing and its correlates: A meta-analysis
    Abstract

    Integrated tasks are increasing in popularity, either replacing or complementing writing-only independent tasks in writing assessments. This shift has generated considerable research interest in investigating the underlying construct and features of integrated writing (IW) performances. However, due to the complexity of the IW construct, there are conflicting findings about whether, and the extent to which, various language skills and IW text features correlate with IW scores. To understand the construct of IW, we conducted a meta-analysis to synthesize correlation coefficients between scores of IW performances and (1) other language skills and (2) text quality features of IW. We also examined factors that may moderate the correlation of IW scores with these two groups of correlates. Consequently, (1) reading and writing skills showed stronger correlations with IW scores than listening did; and (2) text length had the strongest correlation, followed by source integration, organization, and syntactic complexity, with lexical complexity showing the smallest correlation. Several IW task features affected the magnitude of correlations. The results supported the view that IW is an independent, albeit related, construct from other language skills and that IW task features may affect the construct of IW.

    doi:10.1016/j.asw.2022.100662

July 2022

  1. Assessing L2 student writing feedback literacy: A scale development and validation study
    doi:10.1016/j.asw.2022.100643

April 2021

  1. Improving student feedback literacy in academic writing: An evidence-based framework
    doi:10.1016/j.asw.2021.100525

April 2019

  1. “I should summarize this whole paragraph”: Shared processes of reading and writing in iterative integrated assessment tasks
    doi:10.1016/j.asw.2019.03.003

April 2018

  1. Show me your true colours: Scaffolding formative academic literacy assessment through an online learning platform
    doi:10.1016/j.asw.2018.03.005

April 2016

  1. Writing assessment literacy: Surveying second language teachers’ knowledge, beliefs, and practices
    doi:10.1016/j.asw.2016.03.001

October 2012

  1. Literacy instruction: From assignment to assessment
    doi:10.1016/j.asw.2012.07.001
  2. A history of New York state literacy test assessment: Historicizing calls to localism in writing assessment
    doi:10.1016/j.asw.2012.05.001

January 2009

  1. Credibly assessing reading and writing abilities for both elementary student and program assessment
    doi:10.1016/j.asw.2008.12.001

January 2005

  1. Uneasy writing: The defining moments of high-stakes literacy testing
    doi:10.1016/j.asw.2005.08.002

January 2004

  1. Integrating reading and writing in a competency test for non-native speakers of English
    doi:10.1016/j.asw.2004.01.002

January 1996

  1. Performance assessment and the literacy unit of the new standards project
    doi:10.1016/s1075-2935(96)90003-3