All Journals

140 articles

February 2014

  1. Forum: Writing Assessment in Global Context
    Abstract

    Paradigms of writing instruction and of writing assessment are interconnected, and they are, or should be, affected by the sociocultural context in which they are embedded. In the case of writing assessment, the predominant context is the assessment of the writing proficiency of second- or third-language writers of English. Since the Second World War, English has taken hold as the language of business and politics, and much of that interaction occurs between and among multiple groups who share only English as a common language. English is also the dominant language of intellectual exchange, and English language tests are a critical component of decision-making about the movement of people from less-developed countries to countries where they can gain greater educational opportunity. English tests have great value. Everywhere in the world, English proficiency is one of the essential keys to unlock the door of educational opportunity and all that it promises for an individual’s future. The assessment of writing is, then, socially and politically significant not only within a country’s internal struggles for opportunity for all through quality education, but also between nations.

    doi:10.58680/rte201424582
  2. A Framework for Using Consequential Validity Evidence in Evaluating Large-Scale Writing Assessments: A Canadian Study
    Abstract

    The increasing diversity of students in contemporary classrooms and the concomitant increase in large-scale testing programs highlight the importance of developing writing assessment programs that are sensitive to the challenges of assessing diverse populations. To this end, this paper provides a framework for conducting consequential validity research on large-scale writing assessment programs. It illustrates this validity model through a series of instrumental case studies drawing on the research literature conducted on writing assessment programs in Canada. We derived the cases from a systematic review of the literature published between January 2000 and December 2012 that directly examined the consequences of large-scale writing assessment on writing instruction in Canadian schools. We also conducted a systematic review of the publicly available documentation published on Canadian provincial and territorial government websites that discussed the purposes and uses of their large-scale writing assessment programs. We argue that this model of constructing consequential validity research provides researchers, test developers, and test users with a clearer, more systematic approach to examining the effects of assessment on diverse populations of students. We also argue that this model will enable the development of stronger, more integrated validity arguments.

    doi:10.58680/rte201424579
  3. Review Essay: All Writing Assessment Is Local
    Abstract

    Writing Assessment in the 21st Century: Essays in Honor of Edward M. White, Norbert Elliot and Les Perelman, eds.
    Race and Writing Assessment, Asao B. Inoue and Mya Poe, eds.
    Writing Assessment and the Revolution in Digital Texts and Technologies, Michael R. Neal
    Digital Writing: Assessment and Evaluation, Heidi A. McKee and Danielle Nicole DeVoss, eds.

    doi:10.58680/ccc201424573

January 2014

  1. Framing Sustainability
    Abstract

    Corporate social responsibility is a topic that is increasingly incorporated into business school curricula. This article describes a study of undergraduate business majors who wrote about an environmental topic in response to an Analytical Writing Assessment question in the Graduate Management Admission Test™. Of 187 students, only 76 mentioned natural resources in their responses. The study examines this smaller corpus for stance, framing, and argument. The results indicate that the majority of those 76 students supported sustainable practices but were less adept at presenting their perspectives, invoking a personal frame over a professional one. The authors suggest ways to help students develop stronger skills in writing about corporate social responsibility.

    doi:10.1177/1050651913502488

June 2013

  1. Local Assessment: Using Genre Analysis to Validate Directed Self-Placement
    Abstract

    Grounded in the principle that writing assessment should be locally developed and controlled, this article describes a study that contextualizes and validates the decisions that students make in the modified Directed Self-Placement (DSP) process used at the University of Michigan. The authors present results of a detailed text analysis of students’ DSP essays, showing key differences between the writing of students who self-selected into a mainstream first-year writing course and that of students who self-selected into a preparatory course. Using both rhetorical move analysis and corpus-based text analysis, the examination provides information that can, in addition to validating student decisions, equip students with a rhetorically reflexive awareness of genre and offer an alternative to externally imposed writing assessment.

    doi:10.58680/ccc201323661

April 2013

  1. Can Writing Attitudes and Learning Behavior Overcome Gender Difference in Writing? Evidence From NAEP
    Abstract

    Based on eighth-grade writing assessment data from the 1998 (N = 20,586) and 2007 (N = 139,900) National Assessment of Educational Progress (NAEP), this study examines the relationships among students’ writing attitudes, learning-related behaviors, and gender in relation to writing performance. Overall, the effects of attitudes were slightly larger than the effects of learning behaviors on writing performance, and gender differences were more prominent in attitudes than in learning behaviors related to writing. Perhaps the most surprising finding from the 2007 NAEP data was that females with the most negative attitudes toward writing outperformed males with the most positive attitudes (i.e., writing scores based on two measures of attitudes: females, 157 and 161; males, 151 and 149). Overall, a similar pattern was observed with learning behaviors and gender differences in writing scores. Furthermore, medium effect sizes of gender difference in writing scores (females scoring substantially higher than males) were present even when students reported the same level of writing attitudes and learning behaviors. The present study demonstrates that gender disparity in students’ writing performance is persistent and strong; it cannot be explained by gender differences in attitudes or behavior alone, or in attitudes and behavior combined.

    doi:10.1177/0741088313480313

March 2013

  1. Valuing the Resources of Infrastructure: Beyond From-Scratch and Off-the-Shelf Technology Options for Electronic Portfolio Assessment in First-Year Writing
    doi:10.1016/j.compcom.2012.12.001

January 2013

  1. Scaling Writing Ability
    Abstract

    This analysis of 83 scoring rubrics and grade definitions from writing programs at U.S. public research universities captures the current state of the struggle to define and measure specific writing traits, and it enables an induction of the underlying theoretical construct of “academic writing” present at these writing programs. Findings suggest that writing specialists have managed to permeate U.S. first-year writing assessment with certain progressive assumptions about writing and writing instruction, but they also indicate critical areas for revision, given such documents’ critical gatekeeping role at postsecondary institutions. The study also raises a broader question about the difficulties of rhetorically constructing “writing ability” in a way that is consistent with the contextualist paradigm dominant in contemporary writing studies.

    doi:10.1177/0741088312466992

September 2012

  1. The Trouble with Outcomes: Pragmatic Inquiry and Educational Aims
    Abstract

    Although outcomes assessment (OA) has become “common sense” in higher education, this article shows that the concept of outcomes tends to limit and compromise teaching and learning while serving the interests of institutional management. By contrast, the pragmatic concept of consequences tends to expand our view of teaching and learning, and contests the technical rationality of the managerial university. Though I challenge outcomes assessment, I recognize that OA is the coin of the educational realm. Therefore, this article outlines ways to frame and use educational aims to minimize the negative tendencies of outcomes assessment and to maximize the positive tendencies of “consequential assessment.”

    doi:10.58680/ce201220677

June 2012

  1. Review Essay: The Point Is to Change It: Problems and Prospects for Public Rhetors
    Abstract

    Books discussed in this essay:
    Reframing Writing Assessment to Improve Teaching and Learning, Linda Adler-Kassner and Peggy O’Neill
    Going Public: What Writing Programs Learn from Engagement, Shirley K. Rose and Irwin Weiser, editors
    The Public Work of Rhetoric: Citizen-Scholars and Civic Engagement, John M. Ackerman and David J. Coogan, editors
    Activism and Rhetoric: Theories and Contexts for Political Engagement, Seth Kahn and JongHwa Lee, editors

    doi:10.58680/ccc201220303

February 2012

  1. Placement of Students into First-Year Writing Courses
    Abstract

    The purpose of the present study is to examine concurrent and predictive evidence used in the validation of ACCUPLACER, a purchased test used to place first-year students into writing courses at an urban, public research university devoted to science and technology education. Concurrent evidence was determined by correlations between ACCUPLACER scores and scores on two other tests designed to measure writing ability: the New Jersey Basic Skills Placement Test and the SAT Writing Section. Predictive evidence was determined by coefficients of determination between ACCUPLACER scores and end-of-semester performance measures. A longitudinal study was also conducted to investigate the grade history of students placed into first-year writing by established and new methods. When analyzed in terms of gender and ethnicity impact, ACCUPLACER failed to achieve statistically significant prediction rates for student performance. The study reveals some limits of placement testing and the problems related to it.

    doi:10.58680/rte201218457
  2. CCC Poster Page 9: Writing Assessment
    doi:10.58680/ccc201218451

September 2011

  1. Book Review: Adler-Kassner and O’Neill’s Reframing Writing Assessment
    Abstract

    “Part scholarly monograph, part handbook, part rallying cry, Reframing Writing Assessment is an important addition to a spate of recent books on assessment that encourage teachers to take back our professional lives.”

  2. From Rigidity to Freedom: An English Department’s Journey in Rethinking How We Teach and Assess Writing
    Abstract

    This essay chronicles an English department overhauling its rubric design, curriculum, and portfolio in order to emphasize a wider range of “real-world” writing.

    doi:10.58680/tetyc201117295

August 2011

  1. Subjectivity, Intentionality, and Manufactured Moves: Teachers’ Perceptions of Voice in the Evaluation of Secondary Students’ Writing
    Abstract

    Composition theorists concerned with students’ academic writing ability have long questioned the application of voice as a standard for writing competence, and second language compositionists have suggested that English language learners may be disadvantaged by the practice of emphasizing voice in the evaluation of student writing. Despite these criticisms, however, voice continues to frequently appear as a goal in guidelines for teaching writing and on high-stakes writing assessment rubrics in the United States. Given the apparent lack of alignment between theory and practice regarding its use, more empirical research is needed to understand how teachers apply voice as a criterion in the evaluation of student writing. Researchers have used sociocultural and functionalist frameworks to analyze voice-related discursive patterns, yet we do not know how readers evaluate written texts for voice. To address this gap in research the present study asked: 1) What language features do secondary English teachers associate with voice in secondary students’ writing and how do they explain their associations? 2) How do such identified features vary across genres as well as among readers? Nineteen teachers were interviewed using a think-aloud protocol designed to illuminate their perceptions of voice in narrative and expository samples of secondary students’ writing. Results from an inductive analysis of interview transcripts suggest that participating teachers associated voice with appraisal features, such as amplified expressions of affect and judgment, that are characteristic of literary genres.

    doi:10.58680/rte201117151

June 2011

  1. Technology-Mediated Writing Assessments: Principles and Processes
    doi:10.1016/j.compcom.2011.04.007
  2. New Spaces and Old Places: An Analysis of Writing Assessment Software
    doi:10.1016/j.compcom.2011.04.004

May 2011

  1. An Outcomes Assessment Project: Basic Writing and Essay Structure
    Abstract

    An outcomes assessment project we conducted at our open admissions institution turned out to be considerably more enjoyable and worthwhile than we anticipated.

    doi:10.58680/tetyc201115235

February 2011

  1. Being There: (Re)Making the Assessment Scene
    Abstract

    I use Burkean analysis to show how neoliberalism undermines faculty assessment expertise and underwrites testing industry expertise in the current assessment scene. Contending that we cannot extricate ourselves from our limited agency in this scene until we abandon the familiar “stakeholder” theory of power, I propose a rewriting of the assessment scene that asserts faculty and student agency and leadership for writing assessment.

    doi:10.58680/ccc201113456

June 2010

  1. Review Essay: Assessment in the Service of Learning
    Abstract

    Effective Grading: A Tool for Learning and Assessment in College, 2nd ed. Barbara E. Walvoord and Virginia Johnson Anderson. San Francisco: Jossey-Bass, 2010. 255 pp.
    A Guide to College Writing Assessment. Peggy O’Neill, Cindy Moore, and Brian Huot. Logan: Utah State University Press, 2009. 218 pp.
    Organic Writing Assessment: Dynamic Criteria Mapping in Action. Bob Broad, Linda Adler-Kassner, Barry Alford, Jane Detweiler, Heidi Estrem, Susanmarie Harrington, Maureen McBride, Eric Stalions, and Scott Weeden. Logan: Utah State University Press, 2009. 167 pp.
    Teaching and Evaluating Writing in the Age of Computers and High-Stakes Testing. Carl Whithaus. Mahwah, NJ: Erlbaum, 2005. 169 pp.
    Composition in Convergence: The Impact of New Media on Writing Assessment. Diane Penrod. Mahwah, NJ: Erlbaum, 2005. 184 pp.

    doi:10.58680/ccc201011337

May 2010

  1. A Usable Past for Writing Assessment
    Abstract

    Writing program administrators and other composition specialists need to know the history of writing assessment in order to create a rich and responsible culture of it today. In its first fifty years, the field of writing assessment followed educational measurement in general by focusing on issues of reliability, whereas in its next fifty years, it turned its attention to validity. Overall, the field has exhibited a tension between reliability and validity, with the latter increasingly being conceptualized as involving a whole set of considerations that need to be theorized.

    doi:10.58680/ce201010801

January 2010

  1. Why Assessment?
    Abstract

    Outcomes assessment is necessary in higher education partly because it can counteract courseocentrism, the assumption that teaching naturally occurs in isolated classrooms that leave teachers knowing little about one another and that leave students vulnerable to confusingly mixed messages as they go from course to course and subject to subject.

    doi:10.1215/15314200-2009-028
  2. Linguistic Features of Writing Quality
    Abstract

    In this study, a corpus of expert-graded essays, based on a standardized scoring rubric, is computationally evaluated so as to distinguish the differences between those essays that were rated as high and those rated as low. The automated tool, Coh-Metrix, is used to examine the degree to which high- and low-proficiency essays can be predicted by linguistic indices of cohesion (i.e., coreference and connectives), syntactic complexity (e.g., number of words before the main verb, sentence structure overlap), the diversity of words used by the writer, and characteristics of words (e.g., frequency, concreteness, imagability). The three most predictive indices of essay quality in this study were syntactic complexity (as measured by number of words before the main verb), lexical diversity (as measured by the Measure of Textual Lexical Diversity), and word frequency (as measured by Celex, logarithm for all words). Of the 26 validated indices of cohesion from Coh-Metrix, none showed differences between high- and low-proficiency essays, and no index of cohesion correlated with essay ratings. These results indicate that the textual features that characterize good student writing are not aligned with those features that facilitate reading comprehension. Rather, essays judged to be of higher quality were more likely to contain linguistic features associated with text difficulty and sophisticated language.

    doi:10.1177/0741088309351547

September 2009

  1. Creating a Culture of Assessment in Writing Programs and Beyond
    Abstract

    As writing-program administrators and faculty are being called upon more frequently to help design and facilitate large-scale assessments, it becomes increasingly important for us to see assessment as integral to our work as academics. This article provides a framework, based on current historical, theoretical, and rhetorical knowledge, to help writing specialists understand how to embrace assessment as a powerful mechanism for improved teaching and learning at their institutions.

    doi:10.58680/ccc20098315

August 2009

  1. Ventriloquation in Discussions of Student Writing: Examples from a High School English Class
    Abstract

    This study examines discussions of model papers in a high school Advanced Placement English classroom where students were preparing for a high-stakes writing assessment. Much of the current research on talk about writing in various contexts such as classroom discourse, teacher-student writing conferences, and peer tutoring has emphasized the social and constructive nature of instructional discourse. Building on this work, the present study explored how talk about writing also takes on a performative function, as speakers accent or point to the features of the context that are most significant ideologically. Informed by perspectives on the emergent and mediated nature of discourse, this study found that the participants used ventriloquation to voice the aspects of the essays that they considered to be most important, and that these significant chunks were often aphorisms about the test essay. The teacher frequently ventriloquated raters, while the students often ventriloquated themselves or the teacher. The significance of ventriloquation is not just that it helps to mediate the generic conventions of timed student essays; it also mediates social positioning by helping the speakers to present themselves and others in flexible ways. This study also raises questions about the ways that ventriloquation can limit the ways that students view academic writing.

    doi:10.58680/rte20097245

February 2009

  1. Online Placement in First-Year Writing
    Abstract

    This essay describes Louisiana State University’s search for an alternative to available placement protocols. Under the leadership of Les Perelman at MIT, LSU collaborated with four universities to develop iMOAT, a program for administering online assessments of student writing. This essay focuses on LSU’s On-line Challenge, which developed from the iMOAT project. The On-line Challenge combines direct and indirect writing assessments with student choice while freeing students from the constraints of time and place to invite new possibilities for assessing writing.

    doi:10.58680/ccc20096969

January 2009

  1. The Difficulty of Raising Standards in Teacher Training and Education
    Abstract

    The New York Times and others regularly implore us to raise the quality of teacher education. This essay explores why it is so difficult to do so, particularly at the urban, public institutions that produce many of our nation's teachers. It describes one such attempt to raise standards in writing. I document the process of building a new writing assessment program, including a writing assessment exam and a remediation program. I discuss our rubric and scoring procedures, samples of student work, and the poor score trends for our exam. I describe the difficulties in working without adequate resources, and I examine the ways in which our program posed a threat to the economics of the university. I conclude that efforts to raise program quality and produce higher-quality graduates are unlikely to succeed without fundamental changes to the economy of education generally and teacher education in particular.

    doi:10.1215/15314200-2008-022
  2. Methods and Results of an Accreditation-Driven Writing Assessment in a Business College
    Abstract

    This article describes a pilot effort for an accreditation-driven writing assessment in a business college, detailing the pilot's logistics and methods. Supported by rubric software and a philosophy of “real readers, real documents,” the assessment was piloted in summer 2006 with five evaluators who were English instructors and four who worked or taught in business environments. The nine evaluators were each given 10 reports that were drawn from a sample of 50 reports completed in a writing-intensive course. They created 88 individual assessments using a 10-category rubric. While the overarching purpose of the pilot was to determine the effectiveness of the methods used, the results may also be of interest to those involved with the assessment of writing.

    doi:10.1177/1050651908324383

December 2008

  1. Scoring Rubrics and the Material Conditions of Our Relations with Students
    Abstract

    This article explores the use of scoring rubrics in the context of deteriorating material conditions of writing instruction.

    doi:10.58680/tetyc20086884
  2. An Inter-Institutional Model for College Writing Assessment
    Abstract

    In a FIPSE-funded assessment project, a group of diverse institutions collaborated on developing a common, course-embedded approach to assessing student writing in our first-year writing programs. The results of this assessment project, the processes we developed to assess authentic student writing, and individual institutional perspectives are shared in this article.

    doi:10.58680/ccc20086868

September 2008

  1. Symposium: Assessment
    Abstract

    “Closed Systems and Standardized Writing Tests” by Chris M. Anson; “Information Illiteracy and Mass Market Writing Assessments” by Les Perelman; “Genre, Testing, and the Constructed Realities of Student Achievement” by Mya Poe; “The Call of Research: A Longitudinal View of Writing Development” by Nancy Sommers.

    doi:10.58680/ccc20086753

July 2008

  1. Contextualize Technical Writing Assessment to Better Prepare Students for Workplace Writing: Student-Centered Assessment Instruments
    Abstract

    To teach students how to write for the workplace and other professional contexts, technical writing teachers often assign writing tasks that reflect real-life communication contexts, a teaching approach that is grounded in the field's contextualized understanding of genre. This article argues that, to fully embrace contextualized literacy and better teach workplace writing, technical writing teachers also need to contextualize how they assess student writing. To this end, this article examines some of workplaces' best assessment practices and critically integrates them into an introductory technical writing classroom through a method called student-centered assessment instruments. This method engages students, as workplaces engage employees, in the assessment process to identify local requirements for writing tasks. Aligned with theory and practice, this method is not only an effective classroom assessment method but also becomes an integrated part of students' genre-learning process within and beyond the classroom.

    doi:10.2190/tw.38.3.e

March 2008

  1. When Timing Isn’t Everything: Resisting the Use of Timed Tests to Assess Writing Ability
    Abstract

    In this study, we compared self-revised essays to timed writing exams written by students in a developmental English course in a community college. Using a multiple-trait rubric, we found that self-revised essays showed greater elaboration than timed writing exams, and that elaboration and focus correlated only for self-revised essays. We argue, based on these findings and on theoretical grounds, for further exploration of the self-revised essay as an authentic portrait of student writing ability.

    doi:10.58680/tetyc20086547

December 2007

  1. Portfolio Partnerships between Faculty and WAC: Lessons from Disciplinary Practice, Reflection, and Transformation
    Abstract

    In portfolio assessment, WAC helps other disciplines increase programmatic integrity and accountability. This analysis of a portfolio partnership also shows composition faculty how a dynamic culture of assessment helps us protect what we do well, improve what we need to do better, and solve problems as writing instruction keeps pace with programmatic change.

    doi:10.58680/ccc20076392

October 2007

  1. Comments on Lab Reports by Mechanical Engineering Teaching Assistants
    Abstract

    Many engineering undergraduates receive their first and perhaps most intensive exposure to engineering communication through writing lab reports in lab courses taught by graduate teaching assistants (TAs). Most of the TAs' teaching of writing happens through their comments on students' lab reports. Technical writing faculty need to be aware of TAs' response practices so they can build on or counteract that instruction as needed. This study examines the response practices of two TAs and the ways the practices shifted after the TAs began using a grading rubric. The analysis reveals distinct patterns in focus and mode, some reflecting best practices and some not. It also indicates encouraging changes after the TAs started using the grading rubric. The TAs' marginalia became more content focused and specific and, perhaps most important, less authoritative and more likely to reflect a coaching mode. The article concludes with implications for technical writing courses.

    doi:10.1177/1050651907304024

May 2007

  1. Organization and Development Features of Grade 8 and Grade 10 Writers: A Descriptive Study of Delaware Student Testing Program (DSTP) Essays
    Abstract

    The primary purpose of this study was to investigate the efficacy of formulaic writing such as the five-paragraph theme (FPT) or essay for the purpose of earning high scores on high-stakes writing assessments. This qualitative descriptive study analyzed more than 1000 essays from Delaware Grade 8 and 10 writers, written for a statewide direct-writing assessment.

    doi:10.58680/rte20076022

October 2006

  1. A Decade of Research: Assessing Change in the Technical Communication Classroom Using Online Portfolios
    Abstract

    Over a period of 10 years, we have developed a sustainable process of online portfolio assessment that demonstrates both reliability and validity, using both qualitative and quantitative measures. The sustainable cycle is that, each semester, we assess a random sampling of the students' work that they have posted, as per our instructions, in an online portfolio. During the reading, the faculty score the documents for 11 variables, including writing, content, audience awareness, and document design. We achieved validity by a modified online Delphi that led to a redefinition of the construct of technical communication itself; we achieved reliability by adjudication resulting in adjacent scores. The results of our assessment meet the requirements of ABET and result in a continual cycle of improvement for our technical communication curriculum. Results from three semesters show an improving correlation between the course grade and the overall, holistic portfolio score.

    doi:10.2190/c481-k214-8472-n377
  2. Writing Into the 21st Century
    Abstract

    This study charts the terrain of research on writing during the 6-year period from 1999 to 2004, asking “What are current trends and foci in research on writing?” In examining a cross-section of writing research, the authors focus on four issues: (a) What are the general problems being investigated by contemporary writing researchers? Which of the various problems dominate recent writing research, and which are not as prominent? (b) What population age groups are prominent in recent writing research? (c) What is the relationship between population age groups and problems under investigation? and (d) What methodologies are being used in research on writing? Based on a body of refereed journal articles (n = 1,502) reporting studies about writing and composition instruction that were located using three databases, the authors characterize various lines of inquiry currently undertaken. Social context and writing practices, bi- or multi-lingualism and writing, and writing instruction are the most actively studied problems during this period, whereas writing and technologies, writing assessment and evaluation, and relationships among literacy modalities are the least studied problems. Undergraduate, adult, and other postsecondary populations are the most prominently studied population age group, whereas preschool-aged children and middle and high school students are least studied. Research on instruction within the preschool through 12th grade (P-12) age group is prominent, whereas research on genre, assessment, and bi- or multilingualism is scarce within this population. The majority of articles employ interpretive methods. This indicator of current writing research should be useful to researchers, policymakers, and funding agencies, as well as to writing teachers and teacher educators.

    doi:10.1177/0741088306291619

September 2006

  1. Instructional Notes: Words to Voice: Three Approaches for Student Self-Evaluation
    Abstract

    Three approaches—engaging first-year writers in naming strengths and weaker areas, determining descriptors that fit their various compositions, and applying a rubric that details all the grade-determinant components—serve to give students the vocabulary they need to wrap their voices around words and to describe their learning.

    doi:10.58680/tetyc20066039

June 2005

  1. Accelerated Classes and the Writers at the Bottom: A Local Assessment
    Abstract

    Assessment, including writing assessment, is a form of social action. Because standardized tests can be used to reify the social order, local assessments that take into account specific contexts are more likely to yield useful information about student writers. This essay describes one such study, a multiple-measure comparison of accelerated summer courses with nonaccelerated courses. We began with the assumption that the accelerated courses would probably not be as effective as the longer courses, but our assessment found that assumption to be largely incorrect. Contextual information made it clear that students were taking summer accelerated courses strategically, for reasons we had been unaware of and in ways that forced us to reinterpret their writing and our courses.

    doi:10.58680/ccc20054822
  2. The Scoring of Writing Portfolios: Phase 2
    Abstract

    Although most portfolio evaluation currently uses some adaptation of holistic scoring, the problems with scoring portfolios holistically are many (far more than for essays), and they are not readily resolvable. Indeed, many aspects of holistic scoring work against the principles behind portfolio assessment. We have from the start needed a scoring methodology that responds to and reflects the nature of portfolios, not merely an adaptation of essay scoring. I here propose a means for scoring portfolios that allows for relatively efficient grading where portfolio scores are needed and where time and money are in short supply. It is derived conceptually from portfolio theory rather than essay-testing theory and supports the key principle behind portfolios, that students should be involved with reflection about and assessment of their own work. It is time for the central role that reflective writing can play in portfolio scoring to be put into practice.

    doi:10.58680/ccc20054823

January 2005

  1. Creating the Subject of Portfolios
    Abstract

    This article presents research from a qualitative study of the way that reflective writing is solicited, taught, composed, and assessed within a state-mandated portfolio curriculum. The research situates reflective texts generated by participating students within the larger goals and bureaucratic processes of the school system. The study finds that reflective letters are a genre within the state curriculum that regulates the substance and tone of students’ reflections. At the classroom level, the genre provides a mode that students adopt with the assurance that their reflections will meet state evaluators’ expectations. At the bureaucratic level, the genre helps to continually validate the state’s portfolio curriculum through its strong encouragement of stylized narratives of progress. The study demonstrates the importance of understanding how large-scale assessments shape pedagogy and students’ writing.

    doi:10.1177/0741088304271831

May 2004

  1. REVIEW: Mind the Gap: Stepping Out with Caution in Assessment and Student Public Writing
    Abstract

    Reviewed are: Public Works: Student Writing as Public Text, edited by Emily J. Isaacs and Phoebe Jackson; (Re)Articulating Writing Assessment for Teaching and Learning, by Brian Huot; and What We Really Value: Beyond Rubrics in Teaching and Assessing Writing, by Bob Broad.

    doi:10.58680/ce20042850

February 2004

  1. Reviews (Re)Articulating Assessment: Writing Assessment for Teaching and Learning by Brian Huot
    Abstract

    Preview this article: Reviews (Re)Articulating Assessment: Writing Assessment for Teaching and Learning by Brian Huot.

    doi:10.58680/ccc20042768
  2. (Re)Articulating Assessment: Writing Assessment for Teaching and Learning
    doi:10.2307/4140701

January 2004

  1. The Impact of Student Learning Outcomes Assessment on Technical and Professional Communication Programs
    Abstract

    Because of accreditation, budget, and accountability pressures at the institutional and program levels, technical and professional communication faculty are more than ever involved in assessment-based activities. Using assessment to identify a program's strengths and weaknesses allows faculty to work toward continuous improvement based on their articulation of learning and behavioral goals and outcomes for their graduates. This article describes the processes of program assessment based on pedagogical goals, pointing out options and opportunities that will lead to a meaningful and manageable experience for technical communication faculty, and concludes with a view of how the larger academic body of technical communication programs can benefit from such work. As ATTW members take a careful look at the state of the profession from the academic perspective, we can use assessment to further direct our programs to meet professional expectations and, far more importantly, to help us meet the needs of the well-educated technical communicator.

    doi:10.1207/s15427625tcq1301_9

2003

  1. Composition’s Akrasia: The Devaluing of Intuitive Expertise in Writing Assessment

December 2001

  1. The Silent Scream: Students Negotiating Timed Writing Assessments
    Abstract

    Discusses how current scholarship argues against one-shot, high-stakes writing tasks. Presents work from students who were part of a team-taught curriculum that coordinated writing and reading classes. Describes activities designed to provide a core of material for students to draw on in their final testing situations.

    doi:10.58680/tetyc20011994

February 2001

  1. Exploring the Impact of a High-Stakes Direct Writing Assessment in Two High School Classrooms
    Abstract

    This semester-long qualitative study explores the effects of a high-stakes, direct writing test on 3 teachers and their students in 1 rural Maryland high school. Out of the 23 students in both classes, 14 students had been identified for special education services for physical or learning problems; all had either failed the test once or had not yet taken it. The researchers conducted interviews with teachers and students, observed their classrooms, and collected samples of student writing and other artifacts to address 3 questions: (a) How did the test influence teacher beliefs about writing instruction? (b) How did these teachers adapt their instruction to respond to the demands of the test? (c) How did students who had not passed the test respond to their writing instruction and how did preparation for the test affect their attitudes/beliefs about writing? Our findings suggest that an emphasis on test preparation diminished the likelihood of the teachers’ engaging in reflective practice that is sensitive to the needs of individual students, that the high-stakes assessment process discounted the validity of locally developed standards for assessing writing, and that the criteria for passing the test failed to take into consideration the rich variety of American culture and the complexity of literacy learning.

    doi:10.58680/rte20011724

January 2001

  1. The Opening of the Modern Era of Writing Assessment: A Narrative
    Abstract

    Assessment is a peculiar field within college English studies. In one sense, every faculty member is engaged directly in it, assigning, responding to, and grading student papers; many members of English departments also participate in one way or another in placement testing for entering students or in mid-career or exit writing assessments for more advanced students. In another sense, external assessment of our work is always there in subtle and unacknowledged ways, defining what we do and how well we do it, how much power we can exert in controlling our curriculum, and how our scholarly work is valued. In this second sense, even more than in the first, assessment affects the way our work is perceived by others inside and outside the academy and hence helps determine the resources we receive for everything from duplicating to new faculty positions. The common misperceptions of our field (that as writing teachers we are picky grammarians and value flowery prose, or that as literature teachers we are irresponsible revolutionaries, for instance) are damaging clichés that arise in large part from assessment gone awry. Once we are evaluated as unable to fulfill our roles, no one in a position of power need take seriously our claims, and our discipline becomes easy to dismiss as an expensive frill. We will defend our private world of assessment as a matter between our students and us, at most a matter to be shared with our colleagues. But that public world of external assessment seems beyond our reach, if not our ken, and our instincts are always to withdraw, to claim professional privilege. Yet with so much at stake, no English faculty member can avoid involvement in assessment, although many of us would prefer to see our work in other terms. In yet another sense, writing assessment has become an important specialty

    doi:10.2307/378995