All Journals
140 articles
February 2026
-
Reading Medium and Communicative Purpose in Writing: Effects on Pausing Behaviour and Text Quality, Controlling for Reading Comprehension and Executive Functions ↗
Abstract
This study investigated how reading medium (print vs. digital) and communicative purpose (informative vs. persuasive) shape writing processes and outcomes in integrative academic tasks. Eighty-one university students read three source texts in print or digitally and, after random assignment, produced either an informative or persuasive synthesis within a 2×2 between-subjects design. Keystroke logging recorded pausing across three writing stages, indexing planning, translation, and revision. Text quality was scored with holistic rubrics capturing discourse features and integration of sources. Reading medium significantly influenced pausing: students who read in print paused longer during writing, yet medium had no effect on overall text quality. Task purpose mattered: persuasive tasks yielded higher-quality formal writing, whereas scores reflecting level of source integration did not differ. No interaction between reading medium and task purpose emerged. When controlling for reading comprehension, working memory, and planning ability, the main effects of medium and task purpose remained, but period-specific pausing effects were no longer significant. Findings highlight distinct roles for reading medium and task purpose in shaping writing behavior and performance. The results support cautious causal interpretations and suggest that incorporating digital reading and varying task types may enhance academic writing in higher education, informing curriculum design and assessment.
December 2024
October 2024
-
Abstract
This article explores the impact of labor-based grading contracts on student attitudes and perceptions within multilingual First-Year Composition (FYC) sections at an R1 university. Qualitative and quantitative data were analyzed to examine correlations between labor-based grading contracts and shifts in student attitudes toward writing and overall learning experiences. Findings revealed that some students found labor-based grading contracts motivating, leading to improved attitudes toward writing, while others found themselves demotivated or stressed by the absence of traditional grades. The concept of fairness emerged as a key concern, challenging the assumption that labor-based grading contracts universally benefit students. This article underscores the need for nuanced implementation of labor-based grading contracts and encourages a student-centered approach to foster equitable and antiracist writing assessment practices. It acknowledges the potential benefits of labor-based contract grading, but also its associated challenges, and calls for a critical examination of grading contracts within local contexts to ensure they genuinely advance opportunities for underrepresented students.
September 2024
May 2024
-
College Composition Graduate Instructors’ Development of Conceptual and Practical Tools for Responding to Student Writing ↗
Abstract
Recent scholarship has demonstrated the need for criticality toward writing assessments that privilege standard language ideologies and correctness-based approaches. However, teachers continue to experience discrepancies between their intentions and actions, struggling to address both content and form in facilitative, constructive commentary. This study uses the activity theory framework of pedagogical tools, composed of conceptual and practical tools, to analyze through interviews and commented-on papers how two college composition graduate instructors responded to student writing. This study finds that while one teacher held and enacted consistent and congruent pedagogical tools grounded in sociocultural theories of writing development, the other experienced entrenched conflict between competing beliefs about evaluative and process-oriented purposes for teaching writing. These contrastive experiences illustrate how instructors’ development of pedagogical tools is mediated by interactions between their epistemological orientations and language ideologies, reinforcing the need to surface tacit beliefs about Standardized English and academic writing. This study concludes with recommendations for productive intervention in novice composition teachers’ development of response practices.
2024
September 2023
-
Abstract
Cultural rhetorics—as orientation, methodology, and practice—has made meaningful contributions to writing pedagogy (Brooks-Gillies et al.; Cedillo and Bratta; Baker-Bell; Cedillo et al.; Cobos et al.; Condon and Young; Powell). Despite these contributions, classroom teachers and writing program administrators can struggle to conceptualize assessment beyond bureaucratic practice and their role in assessment beyond standing in loco for the institution. To more fully realize the potential of cultural rhetorics in our classrooms and programs, the field needs assessment models that seek to uncover the counterstories of writing and meaning-making. Our work, at the intersections of queer rhetorics and writing assessment, provides a theoretical framework called Queer Validity Inquiry (QVI) that disrupts stock stories of success—a success that is always available to some at the expense of others. Through four diffractive lenses—failure, affectivity, identity, and materiality—QVI prompts us to determine what questions about student writers and their writing intrigue us, why we care about them, and whose interests are being served by those questions.
December 2022
-
Interchanges: A Kairotic Moment for CLA? Response to Anne Ruggles Gere et al.’s “Communal Justicing: Writing Assessment, Disciplinary Infrastructure, and the Case for Critical Language Awareness” ↗
October 2022
-
Abstract
Writing assessment and social justice rely largely on success-trajectory narratives, which sideline productive failure as a means of resisting normative futurity-based modes of education and policy. This essay offers an alternative perspective on failure in writing assessment and social justice by illustrating how relying on rhetoric as a hope and means for positive change can undermine aims of social justice and a critical education. By examining the queer (non)possibilities for assessment and acceptance without dependence on constant improvement and success, instructors may find more inclusive ways of thinking about the value of rhetoric's role in a generative acceptance of difference.
May 2022
-
Abstract
This essay focuses on writing assessment. Specifically, the author explores the raced construction embedded in writing assessment, including the rubrics commonly used in first-year composition courses. The author posits that rubrics used to assess what Asao Inoue termed Habits of White Language cannot effectively assess, and may be detrimental to assessing, speakers from different linguistic backgrounds, specifically African Americans. The importance of Black Language (BL), rhetoric, and argumentation styles to rhetorical studies and American discourse must not only be recognized but also explored and taught as a style of argumentation. The author implements an Afrocentric rubric grounded in the principles of African American Rhetoric as a means of both expanding the rhetorical triangle and providing ethical assessment of BL in writing.
February 2021
-
Communal Justicing: Writing Assessment, Disciplinary Infrastructure, and the Case for Critical Language Awareness ↗
Abstract
Critical language awareness offers one approach to communal justicing, an iterative and collective process that can address inequities in the disciplinary infrastructure of Writing Studies. We demonstrate justicing in the field’s pasts, policies, and publications; offer a model of communal revision; and invite readers to become agents of communal justicing.
October 2020
-
Abstract
This article examines how faculty at one college respond to student writing, how students interpret that feedback, and how, through collective self-evaluation and community-building workshops, some faculty paved a path toward more productive response. The first part of the findings resonates with what scholars in the 1980s discovered: that teachers’ feedback strategies often operate at cross-purposes with students’ motivations and understandings. Asking why, after forty years of scholarship, such counterproductive strategies still prevail, the study suggests burdensome workloads, lack of training, rigid applications of rubrics and genres, and isolation from peers are to blame. It then profiles three teachers who, despite these obstacles, provide deep-reaching feedback. Although their pedagogies and backgrounds differ, they share common bonds, teaching authentically from who they are, an approach that is open to all teachers once they feel freed to adopt it.
-
Are Two Voices Better Than One? Comparing Aspects of Text Quality and Authorial Voice in Paired and Independent L2 Writing ↗
Abstract
Research has shown that collaboratively produced texts are higher in quality than individually written texts. However, no study has considered the role of collaboration in authorial voice, which is an essential element in current writing curricula. This study analyzes the effects of collaborative task performance on the quality of L2 learners’ argumentative texts and on their authorial voice strength. A total of 306 upper-intermediate L2 learners were selected and divided into independent (N = 130) and paired (N = 176) groups. Each learner/pair was asked to write one argumentative text. The quality of the texts was determined by a quantitative analysis that included three measures of complexity, accuracy, and fluency (CAF). Participants’ authorial voice strength was assessed by two raters using an analytic voice rubric. Comparison of means revealed that pairs outperformed independent writers in all CAF measures. However, the results for the role of collaboration in authorial voice were mixed: While pairs were more successful than independent writers in manifesting their ideational voice, independent writers outperformed pairs with regard to affective and presence voice dimensions and holistic voice scores. The article concludes that, despite its positive implications for L2 writing, collaborative writing may pose challenges for learners’ authorial stance taking.
2020
October 2019
-
Reciprocity and Power Dynamics: Community Members Grading Students by Jessica Shumake & Rachael Wendler Shah ↗
Abstract
This article explores the dynamic practice of inviting community members to grade college students on their work in community-engaged partnerships. The authors articulate theories of writing assessment with theories of reciprocity to argue that community-based student evaluations can be a valid and ethical form of assessment, and discuss a case study in which local youth…
-
Abstract
Classroom writing assessment practices can interrogate white supremacy through the way readers judge student writing. Furthermore, writing assessments designed and engaged in as ecologies offer social justice projects that can explore judgment as a racialized discourse. The author demonstrates one application of an antiracist writing assessment ecology through a practice called “problem posing the nature of judgment and language” and discusses the problem posing of two ecological places in the class.
April 2019
-
Abstract
This study examined multiple measures of written expression as predictors of narrative writing performance for 362 students in grades 4 through 6. Each student wrote a fictional narrative in response to a title prompt that was evaluated using a levels of language framework targeting productivity, accuracy, and complexity at the word, sentence, and discourse levels. Grade-related differences were found for all of the word-level and most of the discourse-level variables examined, but for only one sentence-level variable (punctuation accuracy). The discourse-level variables of text productivity, narrativity, and process use, the sentence-level variables of grammatical correctness and punctuation accuracy, and the word-level variables of spelling/capitalization accuracy, lexical productivity, and handwriting style were significant predictors of narrative quality. Most of the same variables that predicted story quality differentiated good and poor narrative writers, except punctuation accuracy and narrativity, and variables associated with word and sentence complexity also helped distinguish narrative writing ability. The findings imply that a combination of indices from across all levels of language production are most useful for differentiating writers and their writing. The authors suggest researchers and educators consider levels of language measures such as those used in this study in their evaluations of writing performance, as a number of them are fairly easy to calculate and are not plagued by subjective judgments endemic to most writing quality rubrics.
March 2019
-
Abstract
Despite national efforts to accelerate students through precollegiate writing course sequences to transfer-level composition, questions persist regarding appropriate placement and the support needed for students to succeed. An analytical text-based writing assessment was administered to students across four levels of composition courses at a California community college. Differences in student writing scores between course levels and the relationship between writing score, course level, and high school GPA were examined. Key findings include (1) significant differences in average scores between the first precollegiate course and other courses in the sequence and (2) weak relationships between course level and high school GPA, and between assessment scores and high school GPA.
January 2019
-
“Reciprocity and Power Dynamics: Community Members Grading Students” by Jessica Shumake & Rachael Wendler Shah ↗
Abstract
This article explores the dynamic practice of inviting community members to grade college students on their work in community-engaged partnerships. The authors articulate theories of writing assessment with theories of reciprocity to argue that community-based student evaluations can be a valid and ethical form of assessment, and discuss a case study in which local youth…
-
Testing the Test: Expanding the Dialogue on Technical Writing Assessment in the Academy and Workplace ↗
Abstract
The small amount of work on workplace writing assessment has focused almost entirely on student readiness for professional writing or included case studies of employer expectations for new writers. While these studies provide insight into current pedagogies for technical writing and writing instruction in general, the main conclusion to be drawn from them is the unsatisfyingly small number of recent graduates who display workplace readiness. In this article, we explore writing assessment research in both the academy and the workplace and attempt to identify ways in which the academy’s assessment practices lead, lag behind, or simply differ from writing assessment in the workplace. This comparison serves to identify not only where the academy might improve pedagogy in its curriculum for technical communication in order to best prepare students for workplace writing but also where the workplace might learn from the academy to improve its own hiring and training procedures for technical writers. In this case study, we used Neff’s approach to grounded theory to categorize rater feedback according to a ranking system and then used statistical analysis to compare writer performance. We found that the direct test method yields the most predictive results when raters combine tacit knowledge with a clearly defined rubric. We hope that the methods used in this study can be replicated in future studies to yield further results when exploring workplace genres and what they might teach us about our own pedagogical practice.
December 2017
-
Review: Antiracist Writing Assessment Ecologies: Teaching and Assessing Writing for a Socially Just Future, by Asao Inoue ↗
-
Collaborative Ecologies of Emergent Assessment: Challenges and Benefits Linked to a Writing-Based Institutional Partnership ↗
Abstract
This essay reports on a writing-based formative assessment of a university-wide initiative to enhance students’ global learning. Our mixed (and unanticipated) results show the need for enhanced expertise in writing assessment as well as for sustained partnerships among diverse institutional stakeholders so that public programming—from events linked to classroom-level learning to broader cross-unit mandates like accreditation—can yield more rigorous, responsive, and mixed-method assessments.
May 2017
-
Elaborated Specificity versus Emphatic Generality: A Corpus-Based Comparison of Higher- and Lower-Scoring Advanced Placement Exams in English ↗
Abstract
Text-driven, quantitative methods provide new ways to analyze student writing, by uncovering recurring grammatical features and related stylistic effects that remain tacit to students and those who read and evaluate student writing. To date, however, these methods are rarely used in research on students transitioning into US postsecondary writing, and especially rare are studies of student writing that is already scored according to high-stakes writing expectations. This study offers a corpus-based, comparative analysis of higher- and lower-scoring Advanced Placement (AP) exams in English, revealing statistically significant syntactic patterns that distinguish higher-scoring exams according to “informational production” and lower-scoring essays according to “involved” or “interactional” production (Biber, 1988). These differences contribute to what we label emphatic generality in the lower-scoring essays, in which writers tend to foreground human actors, including themselves. In contrast, patterns in higher-scoring essays achieve what we call elaborated specificity, by focusing on and explicating specific, often abstract, concepts. These findings help uncover what is rewarded (or not) in high-stakes writing assessments and show that some students struggle with register awareness. A related implication, then, is the importance of teaching register awareness to students at the late secondary and early university level—students who are still relative novices, but are being invited to compose informationally dense prose. Such register considerations, and specific features revealed in this study, provide ways to help demystify privileged writing forms for students, particularly students for whom academic writing may seem distant from their own communicative practices and ambitions.
April 2017
-
Abstract
This article examines the teaching of a multimodal pedagogy in an online technical communication classroom. Based on the results of an e-portfolio assessment, the authors argue that multimodality can be taught successfully in the online environment if the instructor carefully plans and scaffolds each assignment. Specifically, they argue for an increased emphasis within the technical communication classroom on teaching the e-portfolio as a genre that not only exemplifies students’ multimodal literacies but also establishes their identities as technical communicators in the 21st century. This article provides a model for teaching multimodal composition in the online technical communication classroom and calls for more scholarship on teaching the e-portfolio in the digital environment.
-
Abstract
This article reports the background, methods, and results of a 7-year project (2007–2013) that assessed the writing of undergraduate business majors at a business college. It describes specific issues with writing assessment and how this study attempted to overcome them, largely through a situated assessment approach. The authors provide the results of more than 3,700 assessments of nearly 2,000 documents during the course of the study, reporting on scores overall and for each rubric criterion and comparing the scores of English and business assessors. They also investigate how two curricular interventions were evaluated through this assessment project. Although overall, the writing of these business majors was assessed as good, results showed noteworthy differences between the scores of English and business assessors and a noteworthy impact for one of the curricular interventions, an effort to improve the material conditions of writing instruction. The authors conclude by discussing some next steps and implications of this project.
2017
November 2016
-
Abstract
Stephanie West-Puckett argues for open badging as an alternative born-digital assessment paradigm that can, when attentive to critical validity inquiry, promote full participation and more equitable outcomes for students of color and lower-income students. Her case study of digital badging in first-year composition demonstrates how students and teachers can negotiate “good writing,” interrupting bias through the co-creation of digital badges that demystify disciplinary knowledge and serve as portable assessment objects that build social capital across contexts.
-
Guest Editors’ Introduction: Toward Writing Assessment as Social Justice: An Idea Whose Time Has Come ↗
Abstract
This special issue takes up a singular question: What would it mean to incorporate social justice into our writing assessments? This issue aims to foreground the perspectives of contributors whose voices have too often gone unheard in writing assessment scholarship: non-tenure-track faculty, HBCU WPAs, researchers interested in global rhetorics, queer faculty, and faculty of color. There is no doubt that the first step toward projects of social justice writing assessment is to listen to those who have not been heard, to make more social the project of socially just writing assessment. The guest editors argue that there is much to be learned by making the writing assessment “scene,” as Chris Gallagher would say, more inclusive.
-
Expanding the Dialogue on Writing Assessment at HBCUs: Foundational Assessment Concepts and Legacies of Historically Black Colleges and Universities ↗
Abstract
Race and class are deeply embedded in the way the field and teachers think about linguistic and written performance. Yet, addressing and understanding racial and linguistic prejudice remains important to the fairness of one’s pedagogies, assessment practices, and curricular development. The author argues that social justice approaches to assessment require instructors and program administrators to rethink assessment concepts such as reliability and validity with an eye toward the ways disadvantage is embedded in the very construct task responses and assessment materials used to define quality writing. Because historically Black colleges and universities (HBCUs) present a unique blend of culturally relevant teaching and traditional (i.e., White) definitions of quality writing, they provide a unique site for inquiry into questions of writing assessment and social justice. Specifically, in engaging with the push-pull legacy toward language use and race that is found at HBCUs, the author indicates ways we might enable teachers, administrators, and students to resist monolingual, racialized consequences embedded in their views of writing assessment and rethink the foundational measurement concepts of reliability, validity, and fairness.
-
Abstract
Most writing assessment at the college level is geared toward “homegrown” or “traditional” students: the ones who start their first year of college education at the same institution from which they later graduate. Assessment at Alexander’s institution was mostly effective for those same students but was less successful for some transfer students, as shown in assessment data. Instead of trying to force those students to learn the “norm” standards, the author, as WPA, began conversations with faculty at the community colleges where these students begin their college careers to determine how to honor the many different writing knowledges that these students bring to the classroom. Looked at through a lens of queer theory, this is the path to “queering” writing assessment.
-
Who We Are(n’t) Assessing: Racializing Language and Writing Assessment in Writing Program Administration ↗
Abstract
Decisions about writing assessment are rooted in racial and linguistic identity; the consequences for many writing assessment decisions are often reflective of the judgments made about who does and does not deserve opportunities for success, opportunities historically denied to students of color and linguistically diverse writers. Put simply, assessment creates or denies opportunity structures. Because writing assessment is also racially and linguistically affected by the identities of those performing assessment, the role of writing program administrator (WPA) becomes a social justice role that challenges racial and linguistic biases and interrogates institutional structures, so that all students have the same opportunities for success.
September 2016
-
Abstract
Books reviewed: Assessing and Improving Student Writing in College: A Guide for Institutions, General Education, Departments, and Classrooms
August 2016
-
Abstract
This article identifies five categories of resources that preservice teachers drew on as they considered student writing and planned their own approaches to assessing and teaching writing. Identifying these resources helps us better understand how beginning writing teachers think about student writing—and better understand mismatches that commonly occur between what teacher educators teach and what new teachers actually do. Our study builds on literature that considers how writing teachers are prepared, extends research about how preservice teachers use what they learn, and adds layers of detail to literature about the resources that beginning teachers draw upon to aid and support them in their work. The pedagogical and research projects described in this study stem from a communities-of-practice framework. Our methods surfaced preservice teachers’ claims about writing and the resources they drew upon to support those claims. Drawing upon our rhetorical view of writing, we worked inductively to identify these claims and resources, using grounded analysis of transcripts from preservice teachers’ VoiceThread conversations to develop a taxonomy of 15 resources grouped into 5 categories: understanding of students and student writing; knowledge of context; colleagues; roles; and writing. This research has implications for educators and researchers working in teacher preparation. Scaffolded instruction is essential to help beginning teachers use particular resources—and to employ resources in ways connected with rhetorical conceptual frameworks. To that end, the taxonomy of resources can be used as a tool for individual and programmatic assessment, as well as to facilitate scaffolded instruction.
February 2016
-
Abstract
This article shares our experience designing and deploying writing assessment in English Composition I: Achieving Expertise, the first-ever first-year writing Massive Open Online Course (MOOC). We argue that writing assessment can be effectively adapted to the MOOC environment and that doing so reaffirms the importance of mixed-methods approaches to writing assessment and drives writing assessment toward a more individualized, learner-driven, and learner-autonomous paradigm.
January 2016
-
Abstract
Decades of research on rater training and scoring practices demonstrate that raters' preferences for writing quality are malleable; for instance, it is customary to "calibrate" raters' scoring decisions through documents like scoring protocols and rubrics. This essay argues that while rubrics from contemporary large-scale writing assessments (and the local assessments they inspire) maintain retrograde assumptions about language variation, relatively small adjustments to these rubrics could help raters and candidates establish what Joseph Williams once called "the ordinary kind of contract" that readers and writers routinely observe anywhere outside of testing contexts.
2016
December 2015
-
Abstract
This essay provides a comparative analysis of a large number of texts devoted to writing assessment, analyses that help answer questions about writing assessment volumes and that provide a picture of writing assessment scholarship over a twenty-five-year period.
-
Feature: Learning in Practice: Increasing the Number of Hybrid Course Offerings in Community Colleges ↗
-
Abstract
The Inquiry column is about the scholarship of teaching and learning.
September 2015
May 2015
April 2015
-
Abstract
This article examines a central pedagogical dilemma within queer studies: with an increase in public attention to LGBT concerns (and an investment in the categories that comprise the LGBT rubric), how might we prioritize the complexities of queerness within a social context that tends to privilege discrete designations for identity?
March 2015
June 2014
-
The Legal and the Local: Using Disparate Impact Analysis to Understand the Consequences of Writing Assessment ↗
Abstract
In this article, we investigate disparate impact analysis as a validation tool for understanding the local effects of writing assessment on diverse groups of students. Using a case study data set from a university that we call Brick City University, we explain how Brick City’s writing program undertook a self-study of its placement exam using the disparate impact process followed by the Office for Civil Rights of the US Department of Education. This three-step process includes analyzing placement rates through (1) a threshold statistical analysis, (2) a contextualized inquiry to determine whether the placement exam meets an important educational objective, and (3) a consideration of less discriminatory assessment alternatives. By employing such a process, Brick City re-conceptualized the role of placement testing and basic writing at the university in a way that was less discriminatory for Brick City’s diverse student population.
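Step (1) of the process above is, at bottom, a comparison of group placement rates against a statistical threshold. A minimal sketch, assuming hypothetical counts and the "four-fifths rule" commonly used in disparate impact screening (the abstract does not specify which threshold Brick City's self-study applied):

```python
# Hedged sketch of a threshold statistical analysis (step 1 of the
# disparate impact process). The counts below are hypothetical, not
# data from the Brick City University study; the 0.8 cutoff is the
# "four-fifths rule," assumed here for illustration.

# (students placed into standard first-year composition, total test takers)
groups = {
    "Group A": (360, 400),
    "Group B": (210, 300),
}

# favorable-placement rate for each group
rates = {name: placed / total for name, (placed, total) in groups.items()}
top_rate = max(rates.values())

for name, rate in rates.items():
    ratio = rate / top_rate
    status = "potential disparate impact" if ratio < 0.8 else "within threshold"
    print(f"{name}: rate={rate:.2f}, ratio={ratio:.2f} ({status})")
```

With these illustrative numbers, Group B's favorable-placement ratio falls below 0.8, which would trigger step (2), the contextualized inquiry into the exam's educational objective.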
April 2014
-
What Is Successful Writing? An Investigation Into the Multiple Ways Writers Can Write Successful Essays ↗
Abstract
This study identifies multiple profiles of successful essays via a cluster analysis approach using linguistic features reported by a variety of natural language processing tools. The findings from the study indicate that there are four profiles of successful writers for the samples analyzed. These four profiles are linguistically distinct from one another and demonstrate that expert human raters examine a number of different linguistic features in a variety of combinations when assessing writing proficiency and assigning high scores to independent essays (regardless of the scoring rubric considered). The writing styles in the four clusters can be described as action and depiction style, academic style, accessible style, and lexical style. The study provides empirical evidence that successful writing cannot be defined simply through a single set of predefined features, but that, rather, successful writing has multiple profiles. While these profiles may overlap, each profile is distinct.
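The clustering approach described here can be illustrated with a small sketch: a plain k-means grouping of essay-level feature vectors. The feature names, values, and two-cluster setup are illustrative assumptions only; the study itself drew on a much broader battery of NLP indices and identified four profiles.

```python
# Hedged sketch: clustering essays by linguistic features, in the spirit
# of the profile analysis above. All feature values are invented toy data.
import math
import random

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns one cluster label per point, plus the centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = [min(range(k), key=lambda j: dist(p, centers[j])) for p in points]
        # recompute each center as the mean of its members
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels, centers

# Toy essay-level features: (lexical diversity, noun density, mean word length)
essays = [
    (0.62, 0.31, 4.2), (0.60, 0.33, 4.1),  # lower-density, "accessible"-like
    (0.81, 0.45, 5.6), (0.79, 0.47, 5.8),  # denser, "academic"-like
]
labels, centers = kmeans(essays, k=2)
print(labels)
```

The point of the sketch is the study's core claim: essays can earn high scores while occupying distinct regions of the feature space, so no single feature profile defines successful writing.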
March 2014
February 2014
-
Abstract
How do teachers define failure when learning to write? We don’t ask the question often enough. In this article, I attempt to offer a definition and critique of the nature and production of failure in writing classrooms and programs. I argue that the production of failure in writing assessments can create more purposeful consequences, particularly for those historically most likely to suffer “failures” in writing classrooms: students of color, multilingual students, and working-class students. Drawing upon survey and grade data from California State University, Fresno, I examine two kinds of failure produced in writing classrooms, quality-failure and labor-failure. I argue that quality-failure (associated with judging the quality of drafts) is the least useful kind of failure for writing classrooms, while labor-failure (associated with work and effort) offers better consequences for student-writers and can help articulate a more robust writing construct by including noncognitive dimensions of writing. I conclude by proposing “productive failure” as a future possibility for writing classrooms.
-
Abstract
Editor Ellen Cushman introduces Mya Poe as the guest editor of this special issue on diversity and international writing assessment and previews the content of the issue.
-
Abstract
Diversity in writing assessment research means paying attention to the consequences of writing assessment for all students’ learning and writing. This special issue of Research in the Teaching of English brings together researchers from various national contexts who share such a perspective to explore the meanings and roles of writing assessment today.