Written Communication

12 articles

February 2026

  1. Reading Medium and Communicative Purpose in Writing: Effects on Pausing Behaviour and Text Quality, Controlling for Reading Comprehension and Executive Functions
    Abstract

    This study investigated how reading medium (print vs. digital) and communicative purpose (informative vs. persuasive) shape writing processes and outcomes in integrative academic tasks. Eighty-one university students read three source texts in print or digitally and, after random assignment, produced either an informative or persuasive synthesis within a 2×2 between-subjects design. Keystroke logging recorded pausing across three writing stages, indexing planning, translation, and revision. Text quality was scored with holistic rubrics capturing discourse features and integration of sources. Reading medium significantly influenced pausing: students who read in print paused longer during writing, yet medium had no effect on overall text quality. Task purpose mattered: persuasive tasks yielded higher-quality formal writing, whereas scores reflecting level of source integration did not differ. No interaction between reading medium and task purpose emerged. When controlling for reading comprehension, working memory, and planning ability, the main effects of medium and task purpose remained, but stage-specific pausing effects were no longer significant. Findings highlight distinct roles for reading medium and task purpose in shaping writing behavior and performance. The results support cautious causal interpretations and suggest that incorporating digital reading and varying task types may enhance academic writing in higher education, informing curriculum design and assessment.

    doi:10.1177/07410883251409662

October 2020

  1. Are Two Voices Better Than One? Comparing Aspects of Text Quality and Authorial Voice in Paired and Independent L2 Writing
    Abstract

    Research has shown that collaboratively produced texts are better in quality compared with individually written texts. However, no study has considered the role of collaboration in authorial voice, which is an essential element in current writing curricula. This study analyzes the effects of collaborative task performance on the quality of L2 learners’ argumentative texts and on their authorial voice strength. A total of 306 upper-intermediate L2 learners were selected and divided into independent (N = 130) and paired (N = 176) groups. Each learner/pair was asked to write one argumentative text. The quality of the texts was determined by a quantitative analysis that included three measures of complexity, accuracy, and fluency (CAF). Participants’ authorial voice strength was assessed by two raters using an analytic voice rubric. Comparison of means revealed that pairs outperformed independent writers in all CAF measures. However, the results for the role of collaboration in authorial voice were mixed: While pairs were more successful than independent writers in manifesting their ideational voice, independent writers outperformed pairs with regard to affective and presence voice dimensions and holistic voice scores. The article concludes that, despite its positive implications for L2 writing, collaborative writing may pose challenges for learners’ authorial stance taking.

    doi:10.1177/0741088320939542

April 2019

  1. Multidimensional Levels of Language Writing Measures in Grades Four to Six
    Abstract

    This study examined multiple measures of written expression as predictors of narrative writing performance for 362 students in grades 4 through 6. Each student wrote a fictional narrative in response to a title prompt that was evaluated using a levels of language framework targeting productivity, accuracy, and complexity at the word, sentence, and discourse levels. Grade-related differences were found for all of the word-level and most of the discourse-level variables examined, but for only one sentence-level variable (punctuation accuracy). The discourse-level variables of text productivity, narrativity, and process use, the sentence-level variables of grammatical correctness and punctuation accuracy, and the word-level variables of spelling/capitalization accuracy, lexical productivity, and handwriting style were significant predictors of narrative quality. Most of the same variables that predicted story quality differentiated good and poor narrative writers, except punctuation accuracy and narrativity, and variables associated with word and sentence complexity also helped distinguish narrative writing ability. The findings imply that a combination of indices from across all levels of language production is most useful for differentiating writers and their writing. The authors suggest researchers and educators consider levels of language measures such as those used in this study in their evaluations of writing performance, as a number of them are fairly easy to calculate and are not plagued by the subjective judgments endemic to most writing quality rubrics.

    doi:10.1177/0741088318819473

April 2014

  1. What Is Successful Writing? An Investigation Into the Multiple Ways Writers Can Write Successful Essays
    Abstract

    This study identifies multiple profiles of successful essays via a cluster analysis approach using linguistic features reported by a variety of natural language processing tools. The findings from the study indicate that there are four profiles of successful writers for the samples analyzed. These four profiles are linguistically distinct from one another and demonstrate that expert human raters examine a number of different linguistic features in a variety of combinations when assessing writing proficiency and assigning high scores to independent essays (regardless of the scoring rubric considered). The writing styles in the four clusters can be described as action and depiction style, academic style, accessible style, and lexical style. The study provides empirical evidence that successful writing cannot be defined simply through a single set of predefined features, but that, rather, successful writing has multiple profiles. While these profiles may overlap, each profile is distinct.

    doi:10.1177/0741088314526354

April 2013

  1. Can Writing Attitudes and Learning Behavior Overcome Gender Difference in Writing? Evidence From NAEP
    Abstract

    Based on eighth-grade writing assessment data from the 1998 (N = 20,586) and 2007 (N = 139,900) National Assessment of Educational Progress (NAEP), this study examines the relationships among students’ writing attitudes, learning-related behaviors, and gender in relation to writing performance. Overall, the effects of attitudes were slightly larger than the effects of learning behaviors on writing performance, and gender differences were more prominent in attitudes than learning behaviors related to writing. Perhaps the most surprising finding from the 2007 NAEP data was that females with the most negative attitudes toward writing outperformed males with the most positive attitudes (i.e., writing scores based on two measures of attitudes: females, 157 and 161; males, 151 and 149). Overall, a similar pattern was observed with learning behaviors and gender differences in writing scores. Furthermore, medium effect sizes of gender difference in writing scores (females scoring substantially higher than males) were present even though the students reported being at the same level in terms of writing attitudes and learning behaviors. The present study demonstrates that gender disparity in students’ writing performance is persistent and strong; it cannot be explained by gender differences in attitudes or behavior alone or in attitudes and behavior combined.

    doi:10.1177/0741088313480313

January 2013

  1. Scaling Writing Ability
    Abstract

    This analysis of 83 scoring rubrics and grade definitions from writing programs at U.S. public research universities captures the current state of the struggle to define and measure specific writing traits, and it enables an induction of the underlying theoretical construct of “academic writing” present at these writing programs. Findings suggest that writing specialists have managed to permeate U.S. first-year writing assessment with certain progressive assumptions about writing and writing instruction, but they also indicate critical areas for revision, given such documents’ critical gatekeeping role at postsecondary institutions. The study also raises a broader question about the difficulties of rhetorically constructing “writing ability” in a way that is consistent with the contextualist paradigm dominant in contemporary writing studies.

    doi:10.1177/0741088312466992

January 2010

  1. Linguistic Features of Writing Quality
    Abstract

    In this study, a corpus of expert-graded essays, based on a standardized scoring rubric, is computationally evaluated so as to distinguish the differences between those essays that were rated as high and those rated as low. The automated tool, Coh-Metrix, is used to examine the degree to which high- and low-proficiency essays can be predicted by linguistic indices of cohesion (i.e., coreference and connectives), syntactic complexity (e.g., number of words before the main verb, sentence structure overlap), the diversity of words used by the writer, and characteristics of words (e.g., frequency, concreteness, imageability). The three most predictive indices of essay quality in this study were syntactic complexity (as measured by number of words before the main verb), lexical diversity (as measured by the Measure of Textual Lexical Diversity), and word frequency (as measured by Celex, logarithm for all words). Of the 26 validated indices of cohesion from Coh-Metrix, none showed differences between high- and low-proficiency essays, and no indices of cohesion correlated with essay ratings. These results indicate that the textual features that characterize good student writing are not aligned with those features that facilitate reading comprehension. Rather, essays judged to be of higher quality were more likely to contain linguistic features associated with text difficulty and sophisticated language.

    doi:10.1177/0741088309351547

October 2006

  1. Writing Into the 21st Century
    Abstract

    This study charts the terrain of research on writing during the 6-year period from 1999 to 2004, asking “What are current trends and foci in research on writing?” In examining a cross-section of writing research, the authors focus on four issues: (a) What are the general problems being investigated by contemporary writing researchers? Which of the various problems dominate recent writing research, and which are not as prominent? (b) What population age groups are prominent in recent writing research? (c) What is the relationship between population age groups and problems under investigation? and (d) What methodologies are being used in research on writing? Based on a body of refereed journal articles (n = 1,502) reporting studies about writing and composition instruction that were located using three databases, the authors characterize various lines of inquiry currently undertaken. Social context and writing practices, bi- or multi-lingualism and writing, and writing instruction are the most actively studied problems during this period, whereas writing and technologies, writing assessment and evaluation, and relationships among literacy modalities are the least studied problems. Undergraduate, adult, and other postsecondary populations are the most prominently studied population age group, whereas preschool-aged children and middle and high school students are least studied. Research on instruction within the preschool through 12th grade (P-12) age group is prominent, whereas research on genre, assessment, and bi- or multilingualism is scarce within this population. The majority of articles employ interpretive methods. This indicator of current writing research should be useful to researchers, policymakers, and funding agencies, as well as to writing teachers and teacher educators.

    doi:10.1177/0741088306291619

January 2005

  1. Creating the Subject of Portfolios
    Abstract

    This article presents research from a qualitative study of the way that reflective writing is solicited, taught, composed, and assessed within a state-mandated portfolio curriculum. The research situates reflective texts generated by participating students within the larger goals and bureaucratic processes of the school system. The study finds that reflective letters are a genre within the state curriculum that regulates the substance and tone of students’ reflections. At the classroom level, the genre provides a mode that students adopt with the assurance that their reflections will meet state evaluators’ expectations. At the bureaucratic level, the genre helps to continually validate the state’s portfolio curriculum through its strong encouragement of stylized narratives of progress. The study demonstrates the importance of understanding how large-scale assessments shape pedagogy and students’ writing.

    doi:10.1177/0741088304271831

October 1998

  1. “The Clay that Makes the Pot”—
    Abstract

    This is a piece about language and how we evaluate the work of young writers as they learn to express themselves in writing. The authors' focus is on current reforms in writing assessment, including the brief life of the California Learning Assessment System (CLAS) writing portfolios, and how they rarely address the vibrant role of language—the work and play of words—in students' writing. Through audiotaped interviews with two elementary and two middle school students and their teachers, as well as the written artifacts in the students' portfolios, the authors analyzed the patterns of the students' writing and the comments of teachers and peers on their work. In this article, language in writing is metaphorically compared to “the clay that makes the pot,” emphasizing that young writers want to startle, want to engage readers with refreshing and surprising language—but few are provided the guidance for how to do it. The authors' central point is that writing revolves around criticism, but if the assessment stays on the surface and encourages word substitution over content revision, then the criticism may not be helpful in pushing the generative aspect of writing: the work of language.

    doi:10.1177/0741088398015004001
  2. Cognitive Differences in Proficient and Nonproficient Essay Scorers
    Abstract

    This article examines the behavioral differences of essay scorers who demonstrate different levels of proficiency for a psychometric scoring task. The authors compare three proficiency groups to identify differences in (a) essay features they consider, (b) their understanding of the scoring rubric, and (c) their decision-making procedures. Results indicate scorers with different levels of proficiency do not focus on different essay features when making evaluative decisions, but their understandings of the scoring criteria may vary. Proficient scorers are more likely to focus on general features of an essay when making evaluative decisions and to adopt values espoused by the scoring rubric than are less proficient scorers. Also, proficient scorers make evaluations by reading the entire essay and then reviewing its content, whereas less proficient scorers may interrupt the reading process to monitor how well the essay satisfies the scoring criteria. Finally, the authors discuss implications for scorer selection and training.

    doi:10.1177/0741088398015004002

January 1985

  1. Some Effects of Varying the Structure of a Topic on College Students' Writing
    Abstract

    Incoming freshmen are typically required to write essays which are then holistically rated to determine composition course placement. These placement essays vary not only in topic, but also in the way the topic is structured. Two topic structures are most commonly used: Open (students draw on their own knowledge) and Response (students read a given text and respond to it). It has been established that students perform differently depending on topic structure itself. To investigate this effect, one topic was used but presented as (1) an Open topic structure, (2) a Response topic structure with one reading passage, and (3) a Response topic structure with three reading passages. The essays, written by college freshmen, were holistically rated for quality and analyzed for fluency, total error, and error ratios. The results indicated that the structure of the topic made a difference in quality, fluency, and total error, but not in any error ratio. These results suggest that, for placement testing, one should first decide which types of students one wishes to identify, because each topic structure distinguishes low-, average-, and high-ability students differently.

    doi:10.1177/0741088385002001005