Rebecca Burnett, Georgia Institute of Technology, School of Literature, Communication, and Culture (LCC)
Nick's Notes:
Restrict Rubrics to Regain Rhetorical Responsibility:
Rubrics offer a huge benefit in terms of workload; they make it possible for time-challenged instructors, placement readers, and other assessors to provide quicker feedback on writing. (Rubrics are often necessary for survival.)
But there is a tension between rhetorical theory and how rubrics are applied.
Rubrics have an inherent caste system:
There is the one-size-fits-all rubric, or there is the custom rubric.
One-size-fits-all example: superficial commercial tools; Rebecca recalled seeing a site that boasted instructors could "create rubrics in seconds." This is not just a commercial issue, however; many higher ed. sites (including lots of WAC programs) offer examples of rubrics that fit this model. Maybe not intentionally, but certainly when an "example" rubric is taken and applied without any thought to making custom changes to it.
custom rubrics:
Let you set item and category criteria; when enacted programmatically, they enable raters to compare artifacts. This use of rubrics makes certain assessments not only easier but possible, e.g., home-grown placement exams where a program designs rubrics to match its curriculum and course goals.
Student benefits of rubrics:
• assignment-specific feedback
• enable one kind of self-assessment (Herb Simon says experts can self-assess, and teaching students to self-apply rubrics can help them learn to do that).
• encourage peer assessment – students use the language and criteria of rubrics as a guide to critically (and constructively) reading peers' essays.
• identify competencies and weaknesses
Teacher Benefits:
• focus and sharpen thinking on expectations
• facilitate movement of instructors to new courses
• provoke new thinking about a process or an artifact
• provide consistency in multi-section courses
• support Composing or Communication Across the Curriculum
Admin Benefits:
• demo that courses deal w/ considerably more than "correctness"
• provide benchmarks for rater consistency, supporting reliability in large-scale assessment.
Yet, for all these benefits, major and important-to-consider complications exist.
To excavate these complications, Rebecca passed out a copy of a rubric she found at the WWW site of a Research I university. The rubric is used to help faculty across the curriculum know what good writing is and to give students feedback on the criteria that go into good writing. (Sorry, the table is not formatted much. nc):
Scale: excellent | good | fair | poor
WRITING QUALITIES:
• accuracy of content
• adaptation to a particular audience
• identification and development of key ideas
• conformity with genre conventions
• organization of information
• supporting evidence
• spelling
• punctuation
• grammar
• usage
Some of the issues with the above, as brought out in the discussion Rebecca led:
No way to know how to apply the rubric: checks, numbers, scores?
Concepts were vague and some things overlapped (e.g., genre and audience).
Forty percent of it is on mechanics and surface errors (four of the ten qualities: spelling, punctuation, grammar, usage).
The reality is that this rubric form, or something akin to it, is reproduced all over the place. And people feel like they're doing a good job because they're using it.
On the plus side, at least a four-point scale does force a choice on the writing (Brian Huot noted from the audience); with an odd number of scale options, people tend to choose the middle-of-the-road option a disproportionate number of times.
Other inherent rubric problems: what will the rubric encourage and reward?
• will it reward risk-taking?
• or will it encourage conformity?
A lot depends on the content of the rubric as to whether it encourages risk-taking or conformity.
NC questions: Can you write a rubric that encourages risk-taking? How do you do that and apply it?
Synergy of Communication is lost when using rubrics:
An argument is not inherently persuasive, nor is it persuasive in isolation.
Instead, an argument is persuasive for a particular situation, to a particular audience.
That synergy is lost in the way rubrics are typically presented.
Rubrics by their very nature create bounded objects of rhetorical elements. That is, they isolate qualities as distinct when in reality, many of those qualities can only be inferred and judged in their synergistic relationship to other qualities. You cannot, in practice, separate a consideration of idea development from one of audience. How much development an idea needs often depends upon who the audience is, what the argument to that audience intends, how much space one has to write in, and other factors.
NC questions: Is it possible to apply rubrics with synergy kept in mind? Can you assess development in light of the intended audience? If so, how do the rubric scale and scoring communicate to the writer that the judgment on development was tied to an understanding/judgment of audience?
How might that look? What if a rubric were based on rhetorical elements? asks Rebecca.
Rhetorical Elements
• sufficiency and accuracy of content PLUS
• culture and context
• exigence
• roles/responsibilities
• purposes
• audiences
• argument
• organization
• kinds of support
• visual images
• design
• conventions of language, visuals, and design
• evaluation criteria
Think about these things in an interrelated way: some at the forefront of the mind, some absorbed/assumed, or even native.
It's not about the specific elements, or that you list them all, but how they interact.
An alternative scale to excellent/good/fair/poor might use these terms:
• exemplary
• mature
• competent
• developing
• beginning
• basic
(These come from the Communication Across the Curriculum program at ISU, where Rebecca taught before joining Georgia Tech.)
GT is now using a WOVEN curriculum:
Written communication
Oral communication
Visual communication
Electronic communication
Nonverbal communication
…individual and collaborative
…in cross-cultural and international contexts
Assessment/Rubric Features:
• idiosyncratic (matched to a specific assignment)
• organic
• synergistic
• self-assessing
• methodologically legitimate
• theoretically supportable
The activity we did:
Develop an assessment plan for an assignment in which students do a health project. The scenario is that they work for a company whose HR department wants to run a health campaign. The students work in teams to develop five pieces:
posters, 15-second "push" phone messages, a PowerPoint w/ voiceover, a memo, and a booklet on better health. Students are sent to appropriate government and health WWW sites and print sources to research the necessary data.
At our workshop, tables worked on what they would do to develop a synergistic rubric for such an assignment. RB said rubrics can be used for formative and/or summative assessment.
I don't have as many notes on the table reports because I was spending too much time listening. I do recall that our table had some disagreement on what we would emphasize: an overall project rubric, or rubrics for the parts of the project. Brian H. noted that because each of the five pieces was different in medium, purpose, and audience (the memo was to company heads as a progress report, for example), each would need its own rubric.
Others felt the whole project needed a unifying rubric. I thought the challenge was finding a way to do both.
NC final thoughts and questions: I remember a room consensus that ideally the feedback would come from seeing what worked. In a real office, the effectiveness of the campaign would be measured by changes in employee behavior and their reception of the campaign. But in fact, a lot of the feedback on such a project wouldn't be rubric-based. It would be discussion-based, meeting-based, and workshop-based. The team and the team's managers would meet to discuss their campaigns, to answer questions on why something was the way it was, on why they thought it was effective. Assessment would take the form of acceptance, rejection, or acceptance with revisions (not so dissimilar from academic journal procedures, only faster).
NC questions: Can you really create a rubric that approximates that dynamic in some way? What if the rubric were used in a feedback meeting w/ each team in a teacher conference?
Key URLs and Links from Talks
Brian Huot:
The Big Test by Nicholas Lemann
On a Scale: A Social History of Writing Assessment in America by Norbert Elliot
Standards for Educational and Psychological Testing (1999) by AERA
Assessing Writing: A Critical Sourcebook by Brian Huot and Peggy O'Neill
Bob Cummings/Ron Balthazor:
No Gr_du_te Left Behind by James Traub
EMMA, UGA's electronic and e-portfolio environment
Marti Singer:
GSU's Critical Thinking Through Writing Project
Wednesday, October 24, 2007