Performance Assessment Requires Coordinating the “Performance” and the “Assessment”

By Susan M. Brookhart, author of Performance Assessment: Showing What Students Know and Can Do, now available at the Learning Sciences bookstore.

Most current learning standards ask students to do more than comprehend content knowledge. Performance assessment is a great way to find out how well students can use or apply content knowledge and how well they can perform skills. It only works, however, if the task (the “performance”) and the rubrics (how the performance is assessed) both match the learning outcome or standard you are trying to assess. All performance assessments have those two parts—a task and a rubric or other scoring scheme—and both are important.

It’s sometimes easier to spot what’s wrong in a poor example. Suppose a middle school teacher found herself teaching state history. In addition to a test about the state’s history, geography, and economy, she wanted to find out what the students could do with that information. So she designed a performance assessment. The students were to plan a five-day itinerary for visitors who had never been to the state, describing what places they would visit and why these places were important to help their visitors get to know the state. Simple but effective, right? The “apply knowledge” part of the assignment comes in students’ selection of sites and in their justification of why they included each site over others. Higher-order thinking is required. Not all students’ work would look alike (as an aside, that’s a good barometer of how much original thinking a task requires), which would be fine as long as the justifications were reasonable and used logic and evidence.

However, suppose the rubric associated with this performance task just counted facts, as many rubrics I have seen do. For example, under Content, the performance level descriptions might read, “Has at least 5 facts about the site,” “Has 3-4 facts about the site,” and so on. Students whose thinking was logical, well founded, and well supported would fare no better on this rubric than students who just listed facts they collected from the internet or their textbook. After students took their finished work home, all the information that would be left from this performance assessment would be scores indicating recall and comprehension—correct listing of facts.

This sad but way-too-common scenario is hard to eradicate. Many teachers like their “counting facts” style rubrics because they are easy to administer and easy to defend if a grade is challenged. (“There you go; count ‘em!”) This approach is shortsighted, however: ease of administration comes at the cost of mismeasuring students’ achievement of the learning standard. It comes at the cost of teaching students that “getting the right answers” really is all the teacher wants. Worst of all, if students are given the rubrics at the time they are given the task—which is recommended practice—most will figure out that they really don’t have to think, just locate and supply facts, and the intentions of the task will be undermined.

The solution is to create rubrics that value what the standard values, and what the teacher valued when she created the task. Criteria should include things like selecting and prioritizing reasonable choices of sites to visit and supporting those choices with relevant historical, geographical, or economic evidence. That’s how the “correct facts” come into play. It’s not “evidence” unless the facts are correct—but evidence is so much more, including how the student explains how the details relate to the importance of the intended tour destinations.

Finally, on a personal note, it breaks my heart to see the results of bad rubrics applied to student work. I have seen instances where students literally failed an assignment, even though their work seemed to show a reasonable amount of understanding and application of the content being assessed. Why? Because the rubrics included too many points for surface characteristics—spelling, following format requirements, supplying a picture (usually copied from the internet), counting facts, and so on—and none, or not enough, for the substance of the work. By the time I see such examples, the assessment is over. And someone’s child has failed, not because they didn’t meet a standard, but because the rubrics didn’t match the intentions of the task and the standard it represented. Those are the children who very soon decide, “(this subject) sucks and I can’t do it.” And we all know what happens next.

Performance assessment requires both a performance task that is a spot-on match with the intended learning outcome and a rubric or other scoring scheme that does the same. Please join me on my crusade to make sure that not another student fails because of faulty assessment!

Be sure to check out Susan M. Brookhart’s new book, Performance Assessment: Showing What Students Know and Can Do, now available at the Learning Sciences bookstore. Dr. Brookhart spoke at Building Expertise 2015 in June!