Key URLs and Links from Talks

Brian Huot:
The Big Test by Nicholas Lemann
On a Scale: A Social History of Writing Assessment in America by Norbert Elliot
Standards for Educational and Psychological Testing (1999) by AERA
Assessing Writing: A Critical Sourcebook by Brian Huot and Peggy O'Neill

Bob Cummings/Ron Balthazor:
No Gr_du_te Left Behind by James Traub
EMMA, UGA's electronic and e-portfolio environment

Marti Singer:
GSU's Critical Thinking Through Writing Project



Tuesday, October 23, 2007

Assessing Writing: Contents

introduction

FOUNDATIONS

  1. Direct and Indirect Measures for Large-Scale Evaluation of Writing

Ramon Veal and Sally Hudson

Veal and Hudson summarize the differences between holistic, analytic, and primary trait scoring, and provide the research basis for comparing different commonly used writing assessment procedures.

  2. Holisticism

Edward M. White

White offers a strong theoretical argument for holistic scoring, reviewing various theories of reading and interpretation from the scholarly literature and applying them to the reading of student writing. He furnishes a theoretical basis for holistic scoring that ranges beyond the need for training readers and producing scores.

  3. Reliability Issues in Holistic Assessment

Roger Cherry and Paul Meyer

Cherry and Meyer provide detailed discussions of both instrument and interrater reliability. The article distinguishes between different kinds of reliability and their uses while supplying relevant formulas and a description of how best to calculate interrater reliability in scoring sessions.

  4. The Worship of Efficiency: Untangling Theoretical and Practical Considerations in Writing Assessment

Michael M. Williamson

Williamson links the history of pedagogical approaches to the development of writing assessment and the value placed on efficiency throughout the twentieth century, and he challenges the traditional importance given to reliability, which cannot drive the most important aspect (validity) of any writing assessment.

  5. Can There Be Validity Without Reliability?

Pamela A. Moss

Moss builds upon work on validity and performance assessment in educational measurement, challenging traditional notions of reliability and arguing for a new, more flexible understanding of reliability as a measurement concept.

  6. Portfolios as a Substitute for Proficiency Exams

Peter Elbow and Pat Belanoff

Elbow and Belanoff demonstrate the value, efficacy, and practicality of using portfolios to assess student writing at the college level. Grounding the use of portfolios in a particular program’s need to assess student writing, they provide writing teachers and program administrators with a strong model for responding to the need to assess in positive, productive ways.

  7. Changing the Model for the Direct Assessment of Writing

Roberta Camp

Camp chronicles the development of writing assessment from an educational measurement perspective and draws upon work in cognitive psychology and literacy studies to make the argument that once researchers were able to furnish a more detailed and complicated picture of reading and writing, writing assessment developers were able to follow suit in developing more direct and authentic forms of writing assessment.

  8. Looking Back as We Look Forward: Historicizing Writing Assessment

Kathleen Blake Yancey

Yancey follows the development of writing assessment over a fifty-year period from a college writing assessment perspective and illustrates the ongoing importance of assessment for the teaching of college writing, even as the assessments themselves change. (This article originally appeared in a fiftieth-anniversary issue of CCC.)

  9. Testing the Test of a Test: A Response to the Multiple Inquiry in the Validation of Writing Tests

Pamela A. Moss

Moss responds to the field of college writing assessment concerning its use of empirical methods and its understanding of test validity, arguing that validity in writing assessment is an ongoing reflective practice: all test use must include a process of inquiry and reflection in which we describe the limitations of the test and the decisions made on its behalf.

  10. Toward a New Theory of Writing Assessment

Brian Huot

Huot introduces college writing assessment to relevant literature from educational measurement, arguing that current theories of test validity can advance and support a new set of theories and practices for writing assessment. He challenges current notions and uses of reliability and validity to foster a new agenda for writing assessment.

MODELS

  1. The Importance of Teacher Knowledge in College Composition Placement Testing

William L. Smith

Smith provides one of the first models for writing assessment that moves beyond holistic scoring, opening up possibilities for a range of assessment models based upon local values and expertise. Equally important, he provides a strong model for validity inquiry in which each use of an assessment requires research into its accuracy, adequacy, and consequences for the program, students, and teachers.

  2. Adventuring into Writing Assessment

Richard Haswell and Susan Wyche Smith

Haswell and Wyche Smith move beyond holistic scoring and interrater reliability to argue that writing teachers and administrators with expertise in writing assessment can learn to change writing assessment and create a productive assessment culture at their institutions.

  3. Portfolio Negotiations: Acts in Speech

Russel K. Durst, Marjorie Roemer and Lucille Schultz

Durst, Roemer and Schultz report on a model for exit testing in which three teacher teams, or “trios,” read student portfolios. This communal approach to making important judgments about students based upon a collection of their work breaks new ground and provides a strong model of a program that makes decisions about students and helps to create an assessment culture in which teachers talk with each other about their students and teaching.

  4. Directed Self-Placement: An Attitude of Orientation

Daniel Royer and Roger Gilles

Royer and Gilles authored this breakthrough piece establishing the possibility for students to make their own decisions about placement into first-year writing. They argue that empowering students to make their own placement decisions is theoretically sound, promotes learning and responsibility, and creates basic writing courses composed of self-selected students.

  5. WAC Assessment and Internal Audiences: A Dialogue

Richard Haswell and Susan McLeod

Haswell and McLeod model a series of “mock” conversations in which a writing assessment researcher and an administrator work through various problems in presenting assessment and outcomes data to various administrators and audiences throughout the academy.

  6. A Process for Establishing Outcomes-Based Assessment Plans for Writing and Speaking in the Disciplines

Michael Carter

Carter offers a practical, hands-on explanation of how to conduct outcomes assessment and argues that outcomes assessment can be valuable beyond the need for accountability, providing institutions and writing program administrators with important information to enhance teaching and learning.

ISSUES

  1. Influences on Evaluators of Expository Essays: Beyond the Text

Sarah Warshauer Freedman

Freedman reports on a study that explores how three variables—essay, reader, and environment—affect the holistic scores given to college students' expository essays, and finds, as Smith did years later, that raters were the chief influence on students' scores.

  2. “Portfolio Scoring”: A Contradiction in Terms

Robert L. Broad

Broad challenges the drive for interrater reliability associated with holistic scoring and procedures such as norming that prize consensus and agreement, arguing for “post-positivist” methods of assessment that are situated and located, which value context and diverse perspectives and are more theoretically aligned with writing portfolios.

  3. Questioning Assumptions about Portfolio-Based Assessment

Liz Hamp-Lyons and William Condon

Hamp-Lyons and Condon report on a study of portfolio readers who evaluated portfolios for exit from a first-year writing practicum, identify five assumptions typically made about portfolio assessment, and discuss them in light of the study’s findings and the authors’ experiences. They conclude that portfolios are not inherently more accurate or better than essay testing; their value depends on how they are used.

  4. Rethinking Portfolios for Evaluating Writing: Issues of Assessment and Power

Brian Huot and Michael M. Williamson

Huot and Williamson explore the connections between assessment and issues of power, politics and surveillance and contend that unless the power relationships in assessment situations are made explicit, the potential of portfolios to transform writing assessment—and positively influence teaching and learning—will be compromised.

  5. The Challenges of Second Language Writing Assessment

Liz Hamp-Lyons

Hamp-Lyons presents an overview of the challenges of assessing the writing of non-native English speakers and identifies some of the key differences in reading and evaluating texts by NNS and native speakers, concluding with a lengthy discussion of portfolio assessment. Since this article was published, the literature on assessing the writing of NNS (or ESL) students has grown much more extensive; however, the article provides a foundation for approaching the more recent work.

  6. Expanding the Dialogue on Culture as a Critical Component When Assessing Writing

Arnetha F. Ball

Ball addresses issues of teacher evaluation of writing produced by ethnically diverse students and reports on two studies: one examining the rhetorical and linguistic features of texts and how they contribute to holistic assessment of student writing, and another involving a reflective discussion with African-American teachers about the evaluation of student writing.

  7. Gender Bias and Critique of Student Writing

Richard Haswell and Janis Tedesco Haswell

Haswell and Haswell provide an empirical study of how knowledge of writers’ gender affects readers’ evaluations of writing and discuss the implications of the findings for teachers and assessors; their findings strongly suggest that readers are influenced by gender stereotypes in complex ways. The article also provides a thorough overview of gender and writing assessment.

  8. Validity of Automated Scoring: Prologue for a Continuing Discussion of Machine Scoring Student Writing

Michael M. Williamson

Williamson positions automated scoring within the broader assessment community and emerging concepts of validity, comparing the way the composition and writing communities have considered these concepts with the educational measurement community’s position. He challenges writing assessment professionals to become conversant with the field of educational measurement so that they can communicate effectively with those outside their own community.

additional readings

about the editors

credits

index
