All Journals

140 articles

January 2001

  1. The Opening of the Modern Era of Writing Assessment: A Narrative
    Abstract

    Notes that writing assessment has become an important specialty within composition studies, with links to such “suspicious partners” as educational research, statistics, and politics, and with profound effects on public policy and educational funding. Discusses the modern era of writing assessment beginning in the fall of 1971 and its implications. Considers assessment as a site of conflict.

    doi:10.58680/ce20011209

November 2000

  1. Pulling Your Hair Out: Crises of Standardization in Communal Writing Assessment
    Abstract

    Explores how writing instructors at “City University” grappled with crises of standardization in evaluation of students’ portfolios. Details the two most severe experiences in multiple breakdowns in the project of standardization: crises of textual representation and crises of evaluative subjectivity. Examines conflicting interpretations (psychometric and hermeneutic) of City University’s crises.

    doi:10.58680/rte20001717

May 1999

  1. Writing into Silence: Losing Voice with Writing Assessment Technology
    Abstract

    Describes computer-software programs that “read” and score college-placement essays. Argues they may impress administrators, but they also (1) marginalize students by disregarding what they have to say; (2) disregard decades of research on the writing process; and (3) ignore faculty’s professional expertise. Argues assessment practices should be guided by theoretical soundness and sensitivity to issues affecting real people.

    doi:10.58680/tetyc19991841

March 1999

  1. Views from the Underside: Proficiency Portfolios in First-Year Composition
    Abstract

    Shares freshman-composition students’ stories about portfolio assessment (interviewing students at length three times during the semester), to examine ways students understand portfolios, how portfolios work, and why sometimes they do not. Suggests concerns relevant to implementing department-wide competency portfolios. Argues that community colleges may be better situated than large universities to reap the benefits of portfolios.

    doi:10.58680/tetyc19991826
  2. The Need to Understand ESL Students’ Native Language Writing Experiences
    Abstract

    Investigates English-as-a-Second-Language (ESL) students’ native literacy-learning experiences, via written learning autobiographies of 26 students from at least eight different countries. Discusses writing instruction in students’ native languages; most satisfying writing assessment in their native languages; and differences between writing in their native language and English. Draws five conclusions for ESL instruction.

    doi:10.58680/tetyc19991830

February 1999

  1. Looking Back as We Look Forward: Historicizing Writing Assessment
    doi:10.58680/ccc19991341

October 1998

  1. “The Clay that Makes the Pot”—
    Abstract

    This is a piece about language and how we evaluate the work of young writers as they learn to express themselves in writing. The authors focus on current reforms in writing assessment, including the brief life of the California Learning Assessment System (CLAS) writing portfolios, and on how such reforms rarely address the vibrant role of language (the work and play of words) in students' writing. Through audiotaped interviews with two elementary and two middle school students and their teachers, as well as the written artifacts in the students' portfolios, the authors analyzed the patterns of the students' writing and the comments of teachers and peers on their work. Language in writing is metaphorically compared to “the clay that makes the pot,” emphasizing that young writers want to startle, want to engage readers with refreshing and surprising language, but few are given guidance on how to do it. The authors' central point is that writing revolves around criticism, but if assessment stays on the surface and encourages word substitution over content revision, then the criticism may not help push the generative aspect of writing: the work of language.

    doi:10.1177/0741088398015004001
  2. Cognitive Differences in Proficient and Nonproficient Essay Scorers
    Abstract

    This article examines the behavioral differences of essay scorers who demonstrate different levels of proficiency at a psychometric scoring task. The authors compare three proficiency groups to identify differences in (a) the essay features they consider, (b) their understanding of the scoring rubric, and (c) their decision-making procedures. Results indicate that scorers with different levels of proficiency do not focus on different essay features when making evaluative decisions, but their understandings of the scoring criteria may vary. Proficient scorers are more likely than less proficient scorers to focus on general features of an essay when making evaluative decisions and to adopt values espoused by the scoring rubric. Also, proficient scorers make evaluations by reading the entire essay and then reviewing its content, whereas less proficient scorers may interrupt the reading process to monitor how well the essay satisfies the scoring criteria. Finally, the authors discuss implications for scorer selection and training.

    doi:10.1177/0741088398015004002

September 1998

  1. Instructional Note · Keeping Language Journals in English Composition
    Abstract

    Describes how a weekly focused journal-writing assignment (in which students note any use of language they find interesting, puzzling, amusing, or annoying, as well as their response to it) enhances composition students’ awareness of how and where language is used. Offers several advantages of such journal writing.

    doi:10.58680/tetyc19981805

December 1997

  1. Part-Timers, Full-Timers, and Portfolio Assessment
    Abstract

    Explores issues, problems, and procedures in large English departments that use portfolio assessment and where part-timers and full-timers must collaborate in the process. Offers recommendations concerning the relationship of part-time and full-time teachers in such programs.

    doi:10.58680/tetyc19973836

October 1997

  1. Portfolios in Literature Courses: A Case Study
    Abstract

    Asks if there is a place for portfolio assessment in the literature classroom. Finds that portfolios help students use writing to engage literary texts in multiple and productive ways, and offer opportunities to examine effects of the reading process over the course of the writing pieces. Argues for a particular kind of portfolio focusing on a single literary work.

    doi:10.58680/tetyc19973828
  2. Tests Worth Taking? Using Portfolios for Accountability in Kentucky
    Abstract

    Observes how nine members of the Pine View High School English Department interpreted and implemented Kentucky’s state requirement for portfolio assessment of secondary school students. Suggests that the faculty saw the assessment as a test of their competence and felt great pressure to produce good portfolios but little incentive to explore ways portfolios might be used in the classroom.

    doi:10.58680/rte19973885

February 1997

  1. The Relative Contributions of Research-Based Composition Activities to Writing Improvement in the Lower and Middle Grades
    Abstract

    In a benchmark meta-analysis of experimental research findings from 1962 to 1982, Hillocks (1986) reported the varying effects of general modes of instruction and specific instructional activities (foci) on the quality of student writing. The main purpose of the present study was to explore the relative effectiveness of those modes and foci using a non-experimental methodology and a new group of 16 teachers and 275 students in grades 1, 3–6, and 8. Teachers who had attended a summer writing institute reported on 17 different instructional variables, primarily derived from the meta-analysis, during each week of a ten-week treatment period at the beginning of the next school year. A pre- and post-treatment large-scale writing assessment was used, with a prompt that allowed latitude in students' choice of topic and extra time for prewriting and/or revision. Large gains in quality and quantity were found in the lower grades (1, 3, and 4) and smaller gains in the middle grades (5, 6, and 8). The demographic variables of SES, primary language, residence, and gender were found to have small and/or insignificant relationships to gains. Teacher-determined combinations of instructional variables and their relationship to gains in quality were investigated through factor analysis while controlling for pretreatment individual differences. Only one combination of activities was associated with large gains, and it was interpretable as the environmental mode of instruction. This combination included inquiry, prewriting, writing about literature, and the use of evaluative scales.

    doi:10.58680/rte19973874

December 1996

  1. Toward a New Theory of Writing Assessment
    doi:10.58680/ccc19968674

May 1996

  1. Reviews: (Re)Articulating Writing Assessment for Teaching and Learning
    doi:10.58680/tetyc20044583
  2. Review: What We Really Value: Beyond Rubrics in Teaching and Assessing Writing
    doi:10.58680/tetyc20044582

October 1995

  1. Writing Assessment: A Position Statement
    doi:10.2307/358714
  2. Uncovering Possibilities for a Constructivist Paradigm for Writing Assessment
    doi:10.2307/358717
  3. Review: Uncovering Possibilities for a Constructivist Paradigm for Writing Assessment
    doi:10.58680/ccc19958738
  4. Writing Assessment: A Position Statement
    doi:10.58680/ccc19958736

October 1994

  1. Analytic Measures for Evaluating Managerial Writing
    Abstract

    The recent addition of a writing performance assessment to the Graduate Management Admission Test (GMAT) means that many students now enter business school with a writing assessment score and perhaps even a heightened awareness that writing matters in some way to the successful completion of an MBA degree. This situation presents teachers of business and managerial writing with a new opportunity and pressure to provide students with writing tools that are directly relevant to their business studies and professional careers. The Analysis of Argument Measure and the Persuasive Adaptiveness Measure introduced here are assessment tools that may be used to explain holistic assessment scores (which students receive on the GMAT writing component) and may assist students in understanding and evaluating their writing, both in school and in the workplace. Designed to evaluate managerial documents that are persuasive and directorial in nature, these measures were developed through a series of pilots and used to assess a selected sample of managerial memorandums that were also scored holistically. Correlating the holistic and analytic scores revealed a positive association, and interrater reliability achieved good agreement beyond chance. These results suggest that the measures may be reliably employed to assess characteristics valued in managerial writing. Examples of how these analytic measures may be employed for teaching and research are also described.

    doi:10.1177/1050651994008004002

May 1994

  1. Adventuring into Writing Assessment
    doi:10.58680/ccc19948789

January 1994

  1. The Assessment of Technical Writing: A Case Study
    Abstract

    This article describes the design and evaluation of a formal writing assessment program within a technical writing course. Our purpose in this base-line study was to evaluate student writing at the conclusion of the course. In implementing this evaluation, we addressed fundamental issues of sound assessment: reliability and validity. Our program may encourage others seeking to assess educational outcomes in technical writing courses.

    doi:10.2190/53lm-vwv5-jftv-b7h7

May 1992

  1. A Selected Bibliography on Postsecondary Writing Assessment, 1979-1991
    doi:10.58680/ccc19928887

November 1990

  1. Computers and writing assessment: A preliminary view
    doi:10.1016/s8755-4615(05)80004-0

May 1990

  1. Language and Reality in Writing Assessment
    doi:10.58680/ccc19908972
  2. Language and Reality in Writing Assessment
    Abstract

    I recently attended a conference previously unknown to me and to most college English faculty: the Assessment Forum of the American Association for Higher Education (AAHE). (I was there to give a paper on the measurement of writing ability and on the evaluation of writing programs.) The experience of that conference ought to have been routine; after all, I have directed a variety of large-scale writing programs and have been speaking and publishing on writing assessment for over fifteen years; I have also spent many years as chair of an English department and as a writing program administrator. But the experience of hearing papers and discussions at that conference was not at all routine; it was both troubling and enlightening, as well as quite new in unexpected ways. My first reaction to the sessions on writing measurement at AAHE was that I had entered a new world. The papers not only made different assumptions about writing than I, as a writing teacher, writer, and researcher, normally make, but came out of a wholly different scholarly community of discourse, one that calls itself the assessment movement. The references were entirely unfamiliar, the procedures were different, and the approach to the subject struck me as insensitive to what writing is all about. But all of these differences seemed to center on the way people spoke (and hence thought) about measurement: I was in a foreign country, the language was different, and that difference changed everything. I had entered a new discourse community in a field in which I was a well-published specialist, and none of my knowledge or experience seemed to matter. And yet the discourse was about measuring writing ability and evaluating writing programs, that is, about what has (however accidentally) become my specialty. I felt disoriented. When I returned home from AAHE I found a flier from Jossey-Bass, the publisher of my 1985 book, Teaching and Assessing Writing. I don't expect the book to appear on every flier the marketing division puts out, but this little

    doi:10.2307/358159

September 1987

  1. What Can We Know, What Must We Do, What May We Hope: Writing Assessment
    doi:10.2307/378057
  2. Review: What Can We Know, What Must We Do, What May We Hope: Writing Assessment
    doi:10.58680/ce198711471

May 1987

  1. Writing Assessment: Issues and Strategies
    doi:10.2307/357723

October 1986

  1. A Procedure for Writing Content-Fair Essay Examination Topics for Large-Scale Writing Assessments
    doi:10.58680/ccc198611232

January 1985

  1. Some Effects of Varying the Structure of a Topic on College Students' Writing
    Abstract

    Incoming freshmen are typically required to write essays that are then holistically rated to determine composition course placement. These placement essays vary not only in topic but also in the way the topic is structured. Two topic structures are most commonly used: Open (students draw on their own knowledge) and Response (students read a given text and respond to it). It has been established that students perform differently depending on the topic structure itself. To investigate this effect, one topic was used but presented as (1) an Open topic structure, (2) a Response topic structure with one reading passage, and (3) a Response topic structure with three reading passages. The essays, written by college freshmen, were holistically rated for quality and analyzed for fluency, total error, and error ratios. The results indicated that the structure of the topic made a difference in quality, fluency, and total error, but not in any error ratio. These results suggest that, for placement testing, one should first decide which types of students one wishes to identify, because each topic structure distinguishes low-, average-, and high-ability students differently.

    doi:10.1177/0741088385002001005

December 1984

  1. Designing Topics for Writing Assessment: Problems of Meaning
    doi:10.58680/ccc198414858

May 1983

  1. Teachers’ Writing Assessments Across the High School Curriculum
    doi:10.58680/rte198315712

February 1983

  1. Data Correction to: “A Comparison of Direct and Indirect Writing Assessment Methods” by Richard J. Stiggins (p. 101, Volume 16, No. 2, May 1982)
    doi:10.58680/rte198315723

December 1982

  1. A Procedure for Writing Assessment and Holistic Scoring
    doi:10.2307/357964

May 1982

  1. A Comparison of Direct and Indirect Writing Assessment Methods
    doi:10.58680/rte198215742

December 1979

  1. Evaluating Student Writing
    doi:10.58680/ccc197916208

January 1976

  1. Round Two of the National Writing Assessment--Interpreting the Apparent Decline of Writing Ability: A Review
    doi:10.58680/rte197620048

Undated

  1. Outcomes Assessment and Basic Writing: What, Why, and How?