Assessing Writing


January 2026

  1. Generative artificial intelligence for automated essay scoring: Exploring teacher agency through an ecological perspective
    Abstract

    Generative artificial intelligence (AI) is increasingly used in writing assessment, particularly for automated essay scoring (AES) and for generating formative feedback within automated writing evaluation (AWE). While AI-driven AES enhances efficiency and consistency, concerns regarding accuracy, bias, and ethical implications raise critical questions about its role in assessment. This paper examines the impact of generative AI on teacher agency through an ecological perspective, which considers agency as shaped by personal, institutional, and sociocultural factors. The analysis highlights the need for teachers to critically mediate AI-generated scores and feedback to align them with pedagogical goals, ensuring AI functions as an assistive tool rather than a determinant of assessment outcomes. Although AI can streamline assessment, over-reliance risks diminishing teachers’ evaluative expertise and reinforcing biases embedded in AI systems. Ethical concerns, including transparency, data privacy, and fairness, further complicate its adoption. To address these challenges, this paper proposes a framework for responsible AI integration that prioritizes bias mitigation, data security, and teacher-driven decision-making. The discussion concludes with pedagogical implications and directions for future research on AI-assisted writing assessment.

    • Teachers can actively mediate AI-generated scores to maintain agency.
    • Dependence on AES may weaken teachers’ evaluative skills.
    • Bias, data privacy, and AI opacity can undermine teachers’ decision-making.
    • AI literacy and hybrid assessment models can promote teacher autonomy.
    • A framework for protecting teacher agency in generative AI–based AWE is presented.

    doi:10.1016/j.asw.2025.100990
  2. Unveiling the antecedents of feedback-seeking behavior in L2 writing: The impact of future L2 writing selves and emotions
    Abstract

    While existing research on second or foreign language (L2) feedback has predominantly focused on the effectiveness of various feedback practices and their impacts on writing performance, limited attention has been devoted to learners’ proactive role in seeking feedback, and how this important yet underexplored construct correlates with conative and affective variables remains insufficiently examined. To help fill that void, we explored the concept of feedback-seeking behavior and its antecedents in L2 writing by examining its correlations with future L2 writing selves and emotions, particularly unpacking the mediating effect of emotions in the emotion-driven chain of “motivation→emotion→increased or decreased behavior” among 225 undergraduate English majors. Structural equation modeling revealed that ideal and ought-to L2 writing selves directly and significantly influenced emotions, and that emotions significantly impacted the two dimensions of feedback-seeking behavior. More importantly, the ideal L2 writing self indirectly influenced feedback monitoring and feedback inquiry through the mediation of writing enjoyment. Nevertheless, writing boredom exercised no significant mediating effect between future L2 selves and feedback-seeking behavior. These findings reinforce the learner-centered perspective that positions students as proactive agents and provide notable implications for L2 writing instruction, advancing our understanding of teacher feedback.

    • Learners with heightened L2 selves deployed more feedback-seeking strategies.
    • Experiencing L2 enjoyment fostered distinct feedback-seeking behaviors.
    • No variations in L2 boredom existed in the link between L2 selves and behavior.
    • More high-quality research evaluating L2 learners as proactive agents is needed.

    doi:10.1016/j.asw.2025.101009
  3. How reliable and valid is peer evaluation in adolescents’ L2 argumentative writing?
    Abstract

    Peer evaluation is widely recognized for its educational benefits; however, its reliability and validity, particularly among adolescent second-language (L2) writers at the early stages of English language and literacy development, remain insufficiently explored. This explanatory sequential mixed-methods study investigated the reliability and validity of peer evaluation in English argumentative writing among 35 Grade 10 and 37 Grade 12 students from a public high school in Beijing, China. Twelve of the participating students (six at each grade) were interviewed about the validity, reliability, and value of peer evaluation. The findings indicated that peer evaluations demonstrated high levels of reliability and validity, with peer-assessed writing scores closely aligning with inter-teacher assessments. Notably, variations were observed among Grade 10 students, particularly in the evaluation of lower-order writing skills, such as grammar and vocabulary, which exhibited reduced validity. These results underscore the potential of peer evaluation in assessing higher-order content-level writing across varying levels of L2 English writing proficiency. The study also highlights areas where adolescent L2 writers may require additional support to enhance the effectiveness of peer evaluation practices in English argumentative writing. Implications for improving English argumentative writing instruction and refining peer evaluation strategies in high school L2 English classrooms are discussed.

    • Peer evaluation shows high reliability, similar to inter-teacher rating.
    • Peer evaluation works well for higher-order skills in L2 argumentative writing.
    • 10th graders struggled with evaluating lower-order skills like grammar.
    • 12th graders evaluate lower- and higher-order skills with greater validity than 10th graders.

    doi:10.1016/j.asw.2025.100992
  4. Assessing the effects of explicit coherence instruction on EFL students’ integrated writing performance
    Abstract

    As a key attribute of effective writing, coherence remains challenging to teach in language classrooms, with traditional writing instruction frequently overlooking coherence in favor of discrete, rule-based features. This mixed-methods study investigates the effectiveness of explicit coherence instruction on English-as-a-Foreign-Language (EFL) students’ performance on integrated writing tasks. The study employed a controlled experimental design with 64 upper-intermediate-level undergraduate students at a Chinese university, drawing on Hasan’s Cohesive Harmony theory as the theoretical framework. Half of the participants (n = 32) in the experimental group received explicit instruction on coherence with a focus on cohesive chains and cohesive devices in integrated writing, while the control group (n = 32) received standard paraphrasing instruction. Quantitative analysis revealed that the experimental group showed significant improvements in coherence scores and multiple cohesive chain measures. Qualitative discourse analysis of six students’ writing samples from the experimental group demonstrated varying levels of improvement in writing coherence, with high-performing students showing better use of identity chains and pronoun references. The findings revealed that explicit instruction on coherence significantly improved students’ performance in creating coherent integrated writing, particularly through the development of cohesive chains and appropriate use of cohesive devices. This study underscores the pedagogical value of teaching coherence to enhance writing quality and provides concrete strategies for developing more effective teaching approaches for integrated writing tasks in EFL contexts.

    • The study examined 64 Chinese EFL students using a mixed-methods experimental design.
    • Cohesive Harmony theory served as the framework for assessing writing coherence.
    • Explicit instruction significantly improved coherence in integrated writing tasks.
    • High-performing students demonstrated superior identity chain development.

    doi:10.1016/j.asw.2026.101019

April 2025

  1. Validation of the individual and collective self-efficacy scale for teaching writing in post-secondary faculty
    Abstract

    Faculty actions in the classroom are known to impact student writing self-efficacy and academic achievement. The purpose of this paper was to validate Locke and Johnston’s Individual and Collective Self-Efficacy for Teaching Writing Scales, a tool originally validated with high school teachers, in a new population of post-secondary faculty. Exploratory and confirmatory factor analysis methods were used in two studies with independent samples of multidisciplinary faculty (N = 281) for the exploratory factor analysis (Study 1) and nursing discipline-specific faculty (N = 187) for the confirmatory factor analysis (Study 2). Three factors were identified in the questionnaire, maintaining the essence of the theoretical structure proposed by Locke and Johnston: Factor 1 was named Context and Process Competencies, Factor 2 Textural Competencies, and Factor 3 Motivational Competencies. This factor structure was confirmed with acceptable goodness of fit in the confirmatory factor analysis (Study 2). Learning to be a teacher of writing is a developmental process, and this measurement tool provides validation evidence that speaks to its usefulness in understanding that process.

    • Instructional practices are known to impact student achievement levels.
    • Faculty individual self-efficacy for teaching writing comprises three factors.
    • Faculty undergo a slow enculturation into teaching writing.
    • This scale can be used to assess the impact of teacher agency on student outcomes.

    doi:10.1016/j.asw.2025.100923

January 2025

  1. A meta-analysis of relationships between syntactic features and writing performance and how the relationships vary by student characteristics and measurement features
    Abstract

    Students’ proficiency in constructing sentences impacts the writing process and writing products. Linguistic demands in writing differ in terms of both student characteristics and measurement features. To identify various syntactic demands considering these features, we conducted a meta-analysis examining the relationships between syntactic features (complexity and accuracy) and writing performance (quality, productivity, and fluency) and the moderating effects of both student characteristics and measurement features. A total of 109 studies (871 effect sizes; 24,628 participants) met the inclusion criteria. Results showed weak relationships for syntactic accuracy (r = .25) and complexity (r = .16). Writer characteristics (grade level and language proficiency) and measurement features (writing genre, writing outcome, whether the writing task was text-based, and type of syntactic complexity measure) were significant moderators for certain syntactic features. The findings highlight the importance of writer and measurement factors when considering the relationships between linguistic features in writing and writing performance. Implications are discussed regarding the selection of syntactic features in assessing language use in writing, gaps in the literature, and significance for writing instruction and assessment.

    • Aimed to depict the relationships between syntactic features and writing performance.
    • Found weak relationships between syntactic features and writing outcomes.
    • Relationships vary as a function of student characteristics and measurement features.
    • Noun phrase complexity might be more valid than some traditional syntactic complexity measures.
    • Findings have important implications for writing assessments.

    doi:10.1016/j.asw.2024.100909

October 2024

  1. Effects of a genre and topic knowledge activation device on a standardized writing test performance
    Abstract

    The aim of this article was twofold: first, to introduce a design for a writing test intended for application in large-scale assessments of writing, and second, to experimentally examine the effects of employing a device for activating prior knowledge of topic and genre as a means of controlling construct-irrelevant variance and enhancing validity. An authentic, situated writing task was devised, offering students a communicative purpose and a defined audience. Two devices were utilized for the cognitive activation of topic and genre knowledge: an infographic and a genre model. The participants in this study were 162 fifth-grade students from Santiago de Chile, with 78 students assigned to the experimental condition (with activation device) and 84 students assigned to the control condition (without activation device). The results demonstrate that the odds of presenting good writing ability are higher for students who were part of the experimental group, even when controlling for text transcription ability, considered a predictor of writing. These findings hold implications for the development of large-scale tests of writing guided by principles of educational and social justice.

    • Genre and topic knowledge are forms of prior knowledge relevant to writing.
    • Higher odds for better writing in students exposed to prior knowledge activation.
    • Results support use of prior knowledge activation in standardized assessment.

    doi:10.1016/j.asw.2024.100898

July 2024

  1. Effects of peer feedback in English writing classes on EFL students’ writing feedback literacy
    doi:10.1016/j.asw.2024.100874
  2. A teacher’s inquiry into diagnostic assessment in an EAP writing course
    doi:10.1016/j.asw.2024.100848
  3. Beyond accuracy gains: Investigating the impact of individual and collaborative feedback processing on L2 writing development
    Abstract

    Despite the burgeoning research on exploring learner engagement with feedback, how second language (L2) learners’ engagement with feedback in different processing conditions influences their subsequent writing development is under-explored. This study examines the effects of individual and collaborative processing (languaging) of teacher feedback on Chinese lower-secondary school EFL learners’ writing development. Eighty-one students aged 13–14 with A1-A2 levels of English proficiency (according to the Common European Framework of Reference) from two classes and two experienced English teachers participated in the study. Students were provided with comprehensive teacher feedback and were asked to process feedback provided on three writing tasks through either individual written or collaborative oral languaging over six weeks. Pre-, post-, and delayed post-tests were administered. Students’ writing development was analysed using complexity, accuracy, and fluency measures, as well as content and organisation writing scores. Findings showed that the two conditions did not influence students’ writing complexity and fluency differently, while only the collaborative oral languaging condition contributed to students’ sustainable accuracy gains. Results based on the analytic writing scores suggested that students in the two conditions significantly improved content and organisation scores over time. Pedagogical and research implications regarding implementing the two feedback processing conditions are discussed.

    doi:10.1016/j.asw.2024.100876

April 2024

  1. Visualizing formative feedback in statistics writing: An exploratory study of student motivation using DocuScope Write & Audit
    Abstract

    Recently, formative feedback in writing instruction has been supported by technologies generally referred to as Automated Writing Evaluation tools. However, such tools are limited in their capacity to explore specific disciplinary genres, and they have shown mixed results in student writing improvement. We explore how technology-enhanced writing interventions can positively affect student attitudes toward and beliefs about writing, both reinforcing content knowledge and increasing student motivation. Using a student-facing text-visualization tool called Write & Audit, we hosted revision workshops for students (n = 30) in an introductory-level statistics course at a large North American University. The tool is designed to be flexible: instructors of various courses can create expectations and predefine topics that are genre-specific. In this way, students are offered non-evaluative formative feedback which redirects them to field-specific strategies. To gauge the usefulness of Write & Audit, we used a previously validated survey instrument designed to measure the construct model of student motivation (Ling et al. 2021). Our results show significant increases in student self-efficacy and beliefs about the importance of content in successful writing. We contextualize these findings with data from three student think-aloud interviews, which demonstrate metacognitive awareness while using the tool. Ultimately, this exploratory study is non-experimental, but it contributes a novel approach to automated formative feedback and confirms the promising potential of Write & Audit.

    doi:10.1016/j.asw.2024.100830
  2. Is the variation in syntactic complexity features observed in argumentative essays produced by B1 level EFL learners in Finland and Pakistan attributable exclusively to their L1?
    Abstract

    This study explored the syntactic complexity features of English learners at the B1 Common European Framework of Reference (CEFR) (CoE, 2001) level from both Pakistan and Finland. The learners in question were taught English as a Foreign Language (EFL) using different pedagogical methods. The study took into account various factors including the learners' proficiency level, age, and grade, as well as variations in their native language. To assess the impact of the learners' native language and pedagogical methods on syntactic complexity features, twelfth-grade EFL students from upper-secondary schools in both nations were given identical instructions and time limits to complete an English academic essay on the same topic. The study utilized the L2 Syntactic Complexity Analyzer (L2SCA) to extract fourteen syntactic complexity features, and Mann-Whitney U tests were used to analyze the differences in the syntactic complexity features between the two groups. The study revealed significant differences between Finnish and Pakistani EFL learners attributable to variations in their native language and to the effects of pedagogical methods on syntactic complexity features. The implications of this study extend to language testing and assessment, the CEFR framework, and pedagogy in both Finland and Pakistan.

    doi:10.1016/j.asw.2024.100839
  3. Assessing video game narratives: Implications for the assessment of multimodal literacy in ESP
    Abstract

    Research into the contribution of multimodality to language learning is gaining momentum. While most studies pave the way for new understandings of language teaching and learning, there is an increasing demand for comprehensive assessment practices, particularly within higher education contexts. A few studies have emphasized the importance of reflecting on and establishing criteria for the assessment of multimodal literacy. This is necessary to understand students’ contributions in detail and to provide them with effective support in developing their multimodal skills. This study discusses the assessment of multimodal writing in English for Specific Purposes (ESP) contexts. It presents the design of an analytical tool for assessing multimodal texts and provides an example of its application. This tool covers assessment categories such as language use, content expression, interpersonal meaning, multimodality, and creativity and originality. As an example, we focus on the multimodal writing of a video game narrative, a genre that requires the integration of multiple modes of communication to convey meaning more effectively. Finally, this study offers pedagogical insights into the assessment of multimodal literacy in ESP.

    doi:10.1016/j.asw.2024.100809
  4. Characteristics of students’ task representation and its association with argumentative integrated writing performance
    Abstract

    Task representation denotes students’ interpretation of what a learning or assessment task requires them to do. An argumentative integrated writing task, which involves using reading materials as claims or evidence when composing an essay, makes the role of task representation more critical than in other tasks, as writers may be unsure whether their task is to synthesize the reading materials they comprehend or to express their own views. With the aim of exploring the characteristics of task representation and its association with integrated writing, this study invited 474 Secondary Four students from Hong Kong to participate in a think-aloud writing protocol followed by a stimulated recall interview (36 participants) and to complete an integrated writing task and a questionnaire (438 participants). Three factors of task representation were identified: source use, rhetorical purpose, and text format. Significant positive correlations were found between the three factors and integrated writing performance. Theoretical and pedagogical implications are discussed.

    doi:10.1016/j.asw.2024.100845

January 2024

  1. A mixed Rasch model analysis of multiple profiles in L2 writing
    Abstract

    The present study used the Mixed Rasch Model (MRM) to identify multiple profiles in L2 students’ writing with regard to several linguistic features, including content, organization, grammar, vocabulary, and mechanics. To this end, a pool of 500 essays written by English as a foreign language (EFL) students were rated by four experienced EFL teachers using the Empirically-derived Descriptor-based Diagnostic (EDD) checklist. The ratings were subjected to MRM analysis. Two distinct profiles of L2 writers emerged from the sample analyzed including: (a) Sentence-Oriented and (b) Paragraph-Oriented L2 Writers. Sentence-Oriented L2 Writers tend to focus more on linguistic features, such as grammar, vocabulary, and mechanics, at the sentence level and try to utilize these subskills to generate a written text. However, Paragraph-Oriented Writers are inclined to move beyond the boundaries of a sentence and attend to the structure of a whole paragraph using higher-order features such as content and organization subskills. The two profiles were further examined to capture their unique features. Finally, the theoretical and pedagogical implications of the identification of L2 writing profiles and suggestions for further research are discussed.

    doi:10.1016/j.asw.2023.100803

October 2023

  1. Assessing Korean writing ability through a scenario-based assessment approach
    doi:10.1016/j.asw.2023.100766
  2. Feedback literacy in writing research and teaching: Advancing L2 WCF research agendas
    Abstract

    Research on corrective feedback (CF) has developed from its original focus on identifying which type of CF is most effective for developing L2 language learners’ grammatical accuracy to focusing on how learners use CF. Underpinning this is the assumption that learners know what to do with CF when they receive it. The concept of “feedback literacy” challenges this assumption. Carless and Boud (2018) define feedback literacy as “the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies” (p. 1316). Our intention in this paper is to reflect on how theoretical and empirical work on feedback literacy can contribute to advancing L2 written corrective feedback (WCF) research agendas. Central to our proposal is the under-researched aspect of L2 writers’ educational background, particularly their experience with L1 and L2 writing. We further argue that how learners were taught L1 writing, and how the L1 educational culture and society value writing, can impact how learners approach L2 writing tasks and accompanying feedback. Implications of this inclusive view of the learner for future research and pedagogy are discussed.

    doi:10.1016/j.asw.2023.100786
  3. Insights from lexical and syntactic analyses of a French for academic purposes assessment
    Abstract

    With the objective of improving writing assessment in language instruction, we examine the lexical and syntactic features in two corpora of high- and low-scoring French texts from the Test du Certificat de Compétence en Langue Seconde (Second Language Certification Test; TCCLS) at the University of Ottawa (uOttawa). We first situate the test in its local context, demonstrating how our research objectives are born from specific needs to improve student outcomes. We then describe our creation of two corpora of high- and low-performing test takers, followed by lexical bundle (LB) analyses (Phase 1) and further linguistic complexity analyses with a French-language tool (Phase 2). Results indicate that high-level writers used more LBs and borrowed more text from the prompt than low-level writers. In addition, specific elements of linguistic complexity were identified, suggesting high-level writers produced texts that were lexically richer and more syntactically advanced. We discuss the importance of these findings in improving our writing instruction, as well as the challenges of adapting tools and approaches traditionally associated with English to French.

    doi:10.1016/j.asw.2023.100789

July 2023

  1. Collaborating with ChatGPT in argumentative writing classrooms
    doi:10.1016/j.asw.2023.100752
  2. Shifting perceptions of socially just writing assessment: Labor-based contract grading and multilingual writing instruction
    doi:10.1016/j.asw.2023.100731

April 2023

  1. The design and cognitive validity verification of reading-to-write tasks in L2 Chinese writing assessment
    Abstract

    Reading-to-write (RTW) tasks have been commonly employed in second language (L2) English academic writing pedagogy, and many studies have investigated the validity and reliability of RTW tasks in L2 English writing assessment. Meanwhile, few studies have examined the cognitive validity of RTW tasks, and the design and validation of such tasks in L2 Chinese academic writing assessment remain underexplored. This study develops a Chinese RTW task following a set of design criteria and procedures and evaluates its cognitive validity as an instrument of L2 Chinese academic writing assessment. The RTW task was administered to 15 undergraduate and 15 postgraduate L2 Chinese learners in an eye-tracking laboratory. Analyses of the task features and the eye-tracking and stimulated recall interview data suggested that the RTW task largely aligned with the characteristics of authentic tasks in real L2 Chinese academic writing contexts and elicited a representative range of cognitive processes in existing models of RTW cognitive processes. Many of these processes manifested in different ways between the two groups of participants at different L2 Chinese proficiency levels. Our findings have useful implications for understanding the cognitive validity of the RTW task in L2 Chinese writing assessment.

    doi:10.1016/j.asw.2023.100699
  2. Exploring multilingual students’ feedback literacy in an asynchronous online writing course
    doi:10.1016/j.asw.2023.100718
  3. Pedagogical values of translingual practices in improving student feedback literacy in academic writing
    doi:10.1016/j.asw.2023.100715
  4. Genre pedagogy: A writing pedagogy to help L2 writing instructors enact their classroom writing assessment literacy and feedback literacy
    Abstract

    As part of a larger case study, this single exploratory case study aims to explore the potential of genre-based pedagogy (GBP) to allow L2 writing instructors to enact their writing assessment literacy and feedback literacy. The findings demonstrate that GBP enabled the participating instructor of a genre-based EAP writing course to carry out effective classroom writing assessment practices and thus enact their writing assessment literacy and feedback literacy. GBP allowed effective classroom writing assessment practices such as diagnostic assessment and learner involvement in assessment. First, genre exploration tasks led to diagnostic assessment and helped the instructor coordinate effective classroom discussions to elicit evidence of the students’ knowledge of the target genre that they would study. Second, students’ production of texts in target genres not only allowed the instructor to collect evidence of the students’ specific genre knowledge, but also afforded learner involvement through self-reflection. The instructor could also efficiently interpret this evidence and provide formative feedback through pre-established genre-specific assessment criteria.

    doi:10.1016/j.asw.2023.100717

January 2023

  1. Exploring the development of student feedback literacy in the second language writing classroom
    doi:10.1016/j.asw.2023.100697

October 2022

  1. Integrated writing and its correlates: A meta-analysis
    Abstract

    Integrated tasks are increasing in popularity, either replacing or complementing writing-only independent tasks in writing assessments. This shift has generated much research interest in the underlying construct and features of integrated writing (IW) performances. However, due to the complexity of the IW construct, there are conflicting findings about whether, and the extent to which, various language skills and IW text features correlate with IW scores. To understand the construct of IW, we conducted a meta-analysis to synthesize correlation coefficients between scores of IW performances and (1) other language skills and (2) text quality features of IW. We also examined factors that may moderate the correlation of IW scores with these two groups of correlates. Reading and writing skills showed stronger correlations with IW scores than listening did, and among text features, text length had the strongest correlation, followed by source integration, organization, and syntactic complexity, with lexical complexity showing the smallest. Several IW task features affected the magnitude of correlations. The results supported the view that IW is an independent, albeit related, construct from other language skills, and that IW task features may affect the construct of IW.

    doi:10.1016/j.asw.2022.100662

July 2022

  1. Implementing continuous assessment in an academic English writing course: An exploratory study
    doi:10.1016/j.asw.2022.100629

January 2022

  1. Composing strategies employed by high-and low-performing Iranian EFL students in essay writing classes
    doi:10.1016/j.asw.2021.100601

October 2021

  1. Diagnosing writing ability using China’s Standards of English Language Ability: Application of cognitive diagnosis models
    doi:10.1016/j.asw.2021.100565
  2. Assessing EFL students’ writing development as they are exposed to the integrated use of drama-based pedagogy and SFL-based teaching
    doi:10.1016/j.asw.2021.100569
  3. L2 learners’ agentic engagement in an assessment as learning-focused writing classroom
    doi:10.1016/j.asw.2021.100571
  4. Repurposing plagiarism detection services for responsible pedagogical application and (In)Formative assessment of source attribution practices
    doi:10.1016/j.asw.2021.100563

July 2021

  1. Examining lexical features and academic vocabulary use in adolescent L2 students’ text-based analytical essays
    Abstract

    Having a rich and complex vocabulary is a crucial component that contributes to the quality of writing for academic purposes. However, use of academic vocabulary can be challenging for adolescent L2 writers who are developing their academic language proficiency. Thus, understanding the lexical needs of adolescent L2 students in composing academic essays is pivotal in supporting this population in their endeavor to become proficient academic writers. This study investigates the lexical features of adolescent L2 students’ text-based analytical essays and analyzes the extent to which lexical density, lexical diversity, and lexical sophistication predict the quality of their writing. The computational tools Coh-Metrix and VocabProfiler were used to obtain quantitative measures of lexical density, diversity, and sophistication. The results of the study indicate that the essays (n = 70), on average, have (1) low lexical density, (2) more repetition of words, indicating less diversity compared to grade-level estimates, and (3) a higher percentage of basic words and a lower percentage of academic words. Forty-four percent of the Academic Word List (AWL) words in the essays come from the source text and prompt. The results of multiple hierarchical regression indicate that the use of academic vocabulary is a predictor of writing quality. The study has important pedagogical implications for classroom practice at secondary school.

    doi:10.1016/j.asw.2021.100540
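
    The lexical measures named in this abstract can be sketched in simplified form. The snippet below is only an illustration of what lexical density (content words over total tokens) and a basic diversity index (type-token ratio) compute; the function-word list and tokenizer here are hypothetical stand-ins, not the actual Coh-Metrix or VocabProfiler implementations, which rely on POS tagging and curated reference word lists.

    ```python
    # Simplified lexical measures (illustrative only; Coh-Metrix and
    # VocabProfiler use POS tagging and reference word lists instead).
    import re

    # Hypothetical stand-in for a function-word inventory.
    FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "or", "is",
                      "are", "was", "were", "it", "that", "this", "for", "on"}

    def tokenize(text):
        """Lowercase the text and split it into word tokens."""
        return re.findall(r"[a-z']+", text.lower())

    def lexical_density(tokens):
        """Proportion of tokens that are content (non-function) words."""
        content = [t for t in tokens if t not in FUNCTION_WORDS]
        return len(content) / len(tokens)

    def type_token_ratio(tokens):
        """Basic lexical diversity: unique word types / total tokens."""
        return len(set(tokens)) / len(tokens)

    essay = "The author argues that the evidence supports the claim strongly."
    toks = tokenize(essay)
    print(round(lexical_density(toks), 2))   # → 0.6
    print(round(type_token_ratio(toks), 2))  # → 0.8
    ```

    Note that the raw type-token ratio is sensitive to text length, which is why published studies typically use length-corrected indices (e.g., MTLD or vocd-D) when comparing essays of different lengths.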

July 2020

  1. Co-constructed rubrics and assessment for learning: The impact on middle school students’ attitudes and writing skills
    doi:10.1016/j.asw.2020.100468

January 2020

  1. Linking TOEFL iBT® writing rubrics to CEFR levels: Cut scores and validity evidence from a standard setting study
    Abstract

    English writing is a key competence for success in higher education. However, research on the assessment of English as a foreign language writing skills in European upper secondary education (i.e., beyond year 9) remains scarce. The Common European Framework of Reference (CEFR) describes language proficiency on a scale of six ascending levels (A1–C2). For writing skills at the end of secondary education in Europe, the common standard is the Vantage level (B2). In this study, experts from Germany and Switzerland linked upper secondary students’ writing profiles, elicited in a constructed-response test (integrated and independent essays from the TOEFL iBT®), to CEFR levels. Standard-setting methodology (a modified examinee paper selection/performance profile approach) was used to establish the linkages. The study reports the methodology and procedure of the standard-setting process and discusses the procedural and internal validity of the resulting cut scores. It also applies the cut scores to a large sample of upper secondary students in Germany and Switzerland to gain evidence for external and consequential validity.

    doi:10.1016/j.asw.2019.100420
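
    Once a standard-setting panel has agreed on cut scores, applying them to a sample is a simple classification step: each examinee's score is placed in the band between adjacent cut points. The sketch below illustrates that step only; the score scale and threshold values are hypothetical and are not the cut scores established in the study.

    ```python
    # Mapping writing scores to CEFR levels via cut scores (illustrative).
    # The thresholds below are hypothetical, NOT the study's cut scores.
    from bisect import bisect_right

    CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
    # Hypothetical cut scores on a 0-30 writing scale: a score at or
    # above CUTS[i] places the examinee at CEFR_LEVELS[i + 1].
    CUTS = [8, 13, 18, 24, 28]

    def cefr_level(score):
        """Classify a score into a CEFR band using the cut scores."""
        return CEFR_LEVELS[bisect_right(CUTS, score)]

    for s in (5, 13, 20, 29):
        print(s, cefr_level(s))  # → 5 A1 / 13 B1 / 20 B2 / 29 C2
    ```

    Because `bisect_right` returns the number of cut points at or below the score, a score equal to a cut point is placed in the higher band, matching the usual "at or above the cut" convention in standard setting.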

October 2018

  1. Contract grading in the technical writing classroom: Blending community-based assessment and self-assessment
    doi:10.1016/j.asw.2018.06.002

July 2018

  1. From assessing to teaching writing: What teachers prioritize
    doi:10.1016/j.asw.2018.03.003

October 2017

  1. Student and instructor perceptions of writing tasks and performance on TOEFL iBT versus university writing courses
    doi:10.1016/j.asw.2017.09.004
  2. Assessing C2 writing ability on the Certificate of English Language Proficiency: Rater and examinee age effects
    doi:10.1016/j.asw.2017.08.004

April 2017

  1. To make a long story short: A rubric for assessing graduate students’ academic and popular science writing skills
    doi:10.1016/j.asw.2016.12.004
  2. Improvement of writing skills during college: A multi-year cross-sectional and longitudinal study of undergraduate writing performance
    doi:10.1016/j.asw.2016.11.001

July 2016

  1. Searching for differences and discovering similarities: Why international and resident second-language learners’ grammatical errors cannot serve as a proxy for placement into writing courses
    doi:10.1016/j.asw.2016.05.001

October 2015

  1. Developing rubrics to assess the reading-into-writing skills: A case study
    doi:10.1016/j.asw.2015.07.004

April 2015

  1. Predicting EFL writing ability from levels of mental representation measured by Coh-Metrix: A structural equation modeling study
    doi:10.1016/j.asw.2015.03.001

January 2014

  1. Building students’ evaluative and productive expertise in the writing classroom
    doi:10.1016/j.asw.2013.11.004

April 2013

  1. How different are they? A comparison of Generation 1.5 and international L2 learners’ writing ability
    doi:10.1016/j.asw.2013.01.003
  2. Two portfolio systems: EFL students’ perceptions of writing ability, text improvement, and feedback
    doi:10.1016/j.asw.2012.10.003

April 2012

  1. Challenges in assessing the development of writing ability: Theories, constructs and methods
    doi:10.1016/j.asw.2012.02.001

January 2010

  1. Investigating learners’ use and understanding of peer and teacher feedback on writing: A comparative study in a Chinese English writing classroom
    doi:10.1016/j.asw.2010.01.002

January 2008

  1. Harming not helping: The impact of a Canadian standardized writing assessment on curriculum and pedagogy
    doi:10.1016/j.asw.2008.10.004