All Journals
140 articles
January 2001
-
Abstract
Notes that writing assessment has become an important specialty within composition studies, with links to such “suspicious partners” as educational research, statistics, and politics, and with profound effects on public policy and educational funding. Discusses the modern era of writing assessment, which began in the fall of 1971, and its implications. Considers assessment as a site of conflict.
November 2000
-
Abstract
Explores how writing instructors at “City University” grappled with crises of standardization in evaluation of students’ portfolios. Details the two most severe experiences in multiple breakdowns in the project of standardization: crises of textual representation and crises of evaluative subjectivity. Examines conflicting interpretations (psychometric and hermeneutic) of City University’s crises.
May 1999
-
Abstract
Describes computer-software programs that “read” and score college-placement essays. Argues they may impress administrators, but they also (1) marginalize students by disregarding what they have to say; (2) disregard decades of research on the writing process; and (3) ignore faculty’s professional expertise. Argues assessment practices should be guided by theoretical soundness and sensitivity to issues affecting real people.
March 1999
-
Abstract
Shares freshman-composition students’ stories about portfolio assessment (interviewing students at length three times during the semester) to examine how students understand portfolios, how portfolios work, and why they sometimes do not. Suggests concerns relevant to implementing department-wide competency portfolios. Argues that community colleges may be better situated than large universities to reap the benefits of portfolios.
-
Abstract
Investigates English-as-a-Second-Language (ESL) students’ native literacy-learning experiences via written learning autobiographies of 26 students from at least eight different countries. Discusses writing instruction in students’ native languages, the writing assessments students found most satisfying in those languages, and differences between writing in their native languages and in English. Draws five conclusions for ESL instruction.
February 1999
-
Abstract
Preview only: Looking Back as We Look Forward: Historicizing Writing Assessment.
October 1998
-
Abstract
This piece is about language and how we evaluate the work of young writers as they learn to express themselves in writing. The authors focus on current reforms in writing assessment, including the brief life of the California Learning Assessment System (CLAS) writing portfolios, and on how rarely those reforms address the vibrant role of language (the work and play of words) in students' writing. Through audiotaped interviews with two elementary and two middle school students and their teachers, as well as the written artifacts in the students' portfolios, the authors analyzed patterns in the students' writing and the comments of teachers and peers on their work. In this article, language in writing is compared metaphorically to “the clay that makes the pot,” emphasizing that young writers want to startle, want to engage readers with refreshing and surprising language, but few are given guidance in how to do it. The authors' central point is that writing revolves around criticism; if assessment stays on the surface and encourages word substitution over content revision, the criticism may not help push the generative aspect of writing: the work of language.
-
Abstract
This article examines the behavioral differences of essay scorers who demonstrate different levels of proficiency at a psychometric scoring task. The authors compare three proficiency groups to identify differences in (a) the essay features they consider, (b) their understanding of the scoring rubric, and (c) their decision-making procedures. Results indicate that scorers with different levels of proficiency do not focus on different essay features when making evaluative decisions, but their understandings of the scoring criteria may vary. Proficient scorers are more likely than less proficient scorers to focus on general features of an essay when making evaluative decisions and to adopt the values espoused by the scoring rubric. Also, proficient scorers make evaluations by reading the entire essay and then reviewing its content, whereas less proficient scorers may interrupt the reading process to monitor how well the essay satisfies the scoring criteria. Finally, the authors discuss implications for scorer selection and training.
September 1998
-
Abstract
Describes how a weekly focused journal-writing assessment (in which students note any use of language they find interesting, puzzling, amusing, or annoying, as well as their response to it) enhances composition students’ awareness of how and where language is used. Describes several advantages of such journal writing.
December 1997
-
Abstract
Explores issues, problems, and procedures in large English departments that use portfolio assessment and in which part-time and full-time instructors must collaborate. Offers recommendations concerning the relationship of part-time and full-time teachers in such programs.
October 1997
-
Abstract
Asks if there is a place for portfolio assessment in the literature classroom. Finds that portfolios help students use writing to engage literary texts in multiple and productive ways, and offer opportunities to examine effects of the reading process over the course of the writing pieces. Argues for a particular kind of portfolio focusing on a single literary work.
-
Abstract
Observes how nine members of the Pine View High School English Department interpreted and implemented Kentucky’s state requirement for portfolio assessment of secondary school students. Suggests that the faculty saw the assessment as a test of their competence and felt great pressure to produce good portfolios but little incentive to explore ways portfolios might be used in the classroom.
February 1997
-
The Relative Contributions of Research-Based Composition Activities to Writing Improvement in the Lower and Middle Grades
Abstract
In a benchmark meta-analysis of experimental research findings from 1962 to 1982, Hillocks (1986) reported the varying effects of general modes of instruction and specific instructional activities (foci) on the quality of student writing. The main purpose of the present study was to explore the relative effectiveness of those modes and foci using a non-experimental methodology and a new group of 16 teachers and 275 students in grades 1, 3–6, and 8. Teachers who had attended a summer writing institute reported weekly on 17 instructional variables, derived primarily from the meta-analysis, during a ten-week treatment period at the beginning of the next school year. A pre- and post-treatment large-scale writing assessment was used, with a prompt that allowed latitude in student choice of topic and extra time for prewriting and/or revision. Large gains in quality and quantity were found in the lower grades (1, 3, and 4) and smaller gains in the middle grades (5, 6, and 8). The demographic variables of SES, primary language, residence, and gender had small and/or insignificant relationships to gains. Teacher-determined combinations of instructional variables and their relationship to gains in quality were investigated through factor analysis while controlling for pretreatment individual differences. Only one combination of activities was associated with large gains, and it was interpretable as the environmental mode of instruction. This combination included inquiry, prewriting, writing about literature, and the use of evaluative scales.
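The analytic design sketched in this abstract (gain scores from a pre/post assessment, then a factor analysis of reported instructional variables) can be illustrated in a few lines. The sketch below is a minimal illustration, not the study's actual analysis: the data are invented, scikit-learn's FactorAnalysis stands in for whatever factoring procedure the authors used, and the control for pretreatment individual differences is omitted.

```python
# Minimal sketch of the analysis design: compute pre/post writing-quality
# gains, then factor-analyze reports on 17 instructional variables to find
# combinations of activities. All data are invented; the study's control
# for pretreatment individual differences is not reproduced here.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_students, n_variables = 275, 17

pre = rng.normal(3.0, 0.8, n_students)        # pretreatment quality ratings
post = pre + rng.normal(0.5, 0.6, n_students)  # posttreatment ratings
gains = post - pre                             # simple gain scores

# Hypothetical teacher reports: emphasis on 17 instructional variables,
# averaged over the ten-week treatment period.
reports = rng.random((n_students, n_variables))

fa = FactorAnalysis(n_components=4, random_state=0)
loadings = fa.fit(reports).components_         # (4 factors, 17 variables)

# Inspect which instructional variables load together on each factor.
for i, factor in enumerate(loadings):
    top = np.argsort(-np.abs(factor))[:4]
    print(f"factor {i}: variables {top.tolist()}")
```

In the study itself, only one such grouping (inquiry, prewriting, writing about literature, and evaluative scales) was associated with large gains; the sketch shows only the mechanics of finding groupings of that kind.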
December 1996
May 1996
-
Abstract
Preview only: Reviews: (Re)Articulating Writing Assessment for Teaching and Learning.
-
Abstract
Preview only: Review: What We Really Value: Beyond Rubrics in Teaching and Assessing Writing.
October 1995
-
Abstract
Preview only: Review: Uncovering Possibilities for a Constructivist Paradigm for Writing Assessment.
-
Abstract
Preview only: Writing Assessment: A Position Statement.
October 1994
-
Abstract
The recent addition of a writing performance assessment to the Graduate Management Admission Test (GMAT) means that many students now enter business school with a writing assessment score and perhaps even a heightened awareness that writing matters in some way to the successful completion of an MBA degree. This situation presents teachers of business and managerial writing with a new opportunity and pressure to provide students with writing tools that are directly relevant to their business studies and professional careers. The Analysis of Argument Measure and the Persuasive Adaptiveness Measure introduced here are assessment tools that may be used to explain holistic assessment scores (which students receive on the GMAT writing component) and may assist students in understanding and evaluating their writing, both in school and in the workplace. Designed to evaluate managerial documents that are persuasive and directorial in nature, these measures were developed through a series of pilots and used to assess a selected sample of managerial memorandums that were also scored holistically. Correlating the holistic and analytic scores revealed a positive association, and interrater reliability achieved good agreement beyond chance. These results suggest that the measures may be reliably employed to assess characteristics valued in managerial writing. Examples of how these analytic measures may be employed for teaching and research are also described.
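The two statistical checks reported in this abstract (a positive correlation between holistic and analytic scores, and interrater agreement beyond chance) can be illustrated briefly. The following is a minimal sketch with invented scores; Cohen's kappa is an assumption, since the abstract does not name the chance-corrected agreement statistic the authors used.

```python
# Minimal sketch of the two checks described in the abstract:
# (1) correlate holistic and analytic scores, (2) measure interrater
# agreement beyond chance. Scores are invented; Cohen's kappa is
# assumed as the chance-corrected statistic.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical holistic (1-6) and analytic (summed) scores for ten memos.
holistic = [2, 3, 4, 4, 5, 3, 6, 2, 5, 4]
analytic = [5, 7, 9, 8, 11, 6, 13, 4, 12, 9]

r, p = pearsonr(holistic, analytic)
print(f"holistic-analytic correlation: r = {r:.2f} (p = {p:.3f})")

# Hypothetical ratings of the same ten memos by two raters.
rater_a = [2, 3, 4, 4, 5, 3, 6, 2, 5, 4]
rater_b = [2, 3, 4, 5, 5, 3, 6, 2, 4, 4]
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"interrater agreement (Cohen's kappa): {kappa:.2f}")
```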
May 1994
-
Abstract
Preview only: Adventuring into Writing Assessment.
January 1994
-
Abstract
This article describes the design and evaluation of a formal writing assessment program within a technical writing course. Our purpose in this base-line study was to evaluate student writing at the conclusion of the course. In implementing this evaluation, we addressed fundamental issues of sound assessment: reliability and validity. Our program may encourage others seeking to assess educational outcomes in technical writing courses.
May 1992
-
Abstract
Preview only: A Selected Bibliography on Postsecondary Writing Assessment, 1979-1991.
November 1990
May 1990
-
Abstract
I recently attended a conference previously unknown to me and to most college English faculty: the Assessment Forum of the American Association for Higher Education (AAHE). (I was there to give a paper on the measurement of writing ability and on the evaluation of writing programs.) The experience of that conference ought to have been routine; after all, I have directed a variety of large-scale writing programs, I have been speaking and publishing on writing assessment for over fifteen years, and I have spent many years as chair of an English department and as a writing program administrator. But the experience of hearing papers and discussions at that conference was not at all routine; it was both troubling and enlightening, as well as quite new in unexpected ways. My first reaction to the sessions on writing measurement at AAHE was that I had entered a new world. The papers not only made different assumptions about writing than I, as a writing teacher, writer, and researcher, normally make, but came out of a wholly different scholarly community of discourse, one that calls itself the assessment movement. The references were entirely unfamiliar, the procedures were different, and the approach to the subject struck me as insensitive to what writing is all about. But all of these differences seemed to center on the way people spoke (and hence thought) about measurement: I was in a foreign country, the language was different, and that difference changed everything. I had entered a new discourse community in a field in which I was a well-published specialist, and none of my knowledge or experience seemed to matter. And yet the discourse was about measuring writing ability and evaluating writing programs, that is, about what has (however accidentally) become my specialty. I felt disoriented. When I returned home from AAHE I found a flier from Jossey-Bass, the publisher of my 1985 book, Teaching and Assessing Writing. I don't expect the book to appear on every flier the marketing division puts out, but this little…
September 1987
-
Abstract
Preview only: Review: What Can We Know, What Must We Do, What May We Hope: Writing Assessment.
May 1987
October 1986
-
Abstract
Preview only: A Procedure for Writing Content-Fair Essay Examination Topics for Large-Scale Writing Assessments.
January 1985
-
Abstract
Incoming freshmen are typically required to write essays that are then holistically rated to determine composition course placement. These placement essays vary not only in topic but also in the way the topic is structured. Two topic structures are most commonly used: Open (students draw on their own knowledge) and Response (students read a given text and respond to it). It has been established that students perform differently on different topics, but less attention has been paid to the effect of topic structure itself. To investigate this effect, one topic was used but presented as (1) an Open topic structure, (2) a Response topic structure with one reading passage, and (3) a Response topic structure with three reading passages. The essays, written by college freshmen, were holistically rated for quality and analyzed for fluency, total error, and error ratios. The results indicated that the structure of the topic made a difference in quality, fluency, and total error, but not in any error ratio. These results suggest that, for placement testing, one should first decide which types of students one wishes to identify, because each topic structure distinguishes low-, average-, and high-ability students differently.
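The fluency and error measures named in this abstract are straightforward to compute. Below is a minimal sketch under assumed definitions (fluency as total word count, error ratio as errors per word); the abstract does not define these measures, so the formulas are illustrative only.

```python
# Minimal sketch of the essay measures named in the abstract, under
# assumed definitions: fluency = total words, error ratio = errors per
# word. The abstract does not define these, so the formulas are guesses.
def essay_measures(text: str, total_errors: int) -> dict:
    words = text.split()
    fluency = len(words)                 # assumed: fluency = word count
    error_ratio = total_errors / fluency if fluency else 0.0
    return {"fluency": fluency,
            "total_error": total_errors,
            "error_ratio": error_ratio}

sample = "The topic structure shaped how much students wrote and how well."
print(essay_measures(sample, total_errors=2))
```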