Rubrics to Measure Satisfactory and Superior Performance

Once you have identified and weighted the activities that have the greatest impact on your department’s ability to meet its operational objectives, the next step is to determine what evidence is sufficient to show whether those activities have been carried out in a satisfactory, superior, or less-than-satisfactory manner. This is true whether you are looking to adopt more intentional metrics for your admissions office, your major gift officers, your faculty, or staff in any other division of the institution.

By identifying and publicizing thoughtful and intentional criteria for measuring the success of staff activity, you avoid relying on purely qualitative or subjective assessments of staff or faculty performance -- and you ensure that the way staff performance is evaluated is aligned with the decisions your unit reached about what activities are truly important in meeting the unit’s goals.

Let's take a closer look at how rubrics might be applied within both an administrative unit and an academic department.

Example: A Rubric to Assess the Quality of Annual Fund Visits

When Scott Peters rolled out more intentional performance metrics for his annual gift officers at the University of Richmond, he wanted to take a more rigorous look at how well his officers were performing during visits. He realized that simply tracking the number of visits completed didn't tell him much about the quality of those visits -- how effective they actually were. So Peters identified four measures that, taken together, would allow him to devise a rubric for determining the level of performance of his direct reports:

  • The overall number of visits
  • The percentage of visits that include an ask
  • The number of upgrades secured
  • The number of volunteers recruited during these visits

Setting specific expectations around satisfactory, unsatisfactory, and exemplary numbers for each of these measures -- and then looking at all four together -- allowed Peters to get a "whole picture" look at the quality of work on a monthly basis. His officers knew that it wouldn't be enough to just seek gift renewals -- they would need to secure a sufficient number of upgrades, as well ("we're fundraisers, not fundmaintainers," Peters remarks). Similarly, his officers knew that they would need to balance solicitations and volunteer recruitment, rather than focusing on one or the other.
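To make the mechanics concrete, here is a minimal sketch of how one month's results for the four measures might be checked against such a rubric. This is not Peters's actual tool; the threshold values, measure names, and rating levels below are illustrative assumptions only.

    # Illustrative thresholds only -- actual targets would be set by the
    # department and tailored to each officer's portfolio.
    # Each measure maps to (satisfactory_minimum, exemplary_minimum).
    THRESHOLDS = {
        "visits": (15, 25),        # total visits completed this month
        "ask_rate": (0.40, 0.60),  # share of visits that included an ask
        "upgrades": (3, 6),        # gifts upgraded rather than merely renewed
        "volunteers": (1, 3),      # volunteers recruited during visits
    }

    def rate(measure: str, value: float) -> str:
        """Rate one measure as unsatisfactory, satisfactory, or exemplary."""
        satisfactory, exemplary = THRESHOLDS[measure]
        if value >= exemplary:
            return "exemplary"
        if value >= satisfactory:
            return "satisfactory"
        return "unsatisfactory"

    def monthly_review(results: dict) -> dict:
        """Score every measure so strengths and gaps are visible side by side."""
        return {measure: rate(measure, value) for measure, value in results.items()}

    officer_month = {"visits": 22, "ask_rate": 0.50, "upgrades": 2, "volunteers": 1}
    print(monthly_review(officer_month))
    # {'visits': 'satisfactory', 'ask_rate': 'satisfactory',
    #  'upgrades': 'unsatisfactory', 'volunteers': 'satisfactory'}

Keeping a separate rating for each measure, rather than collapsing them into a single composite score, is what preserves the "whole picture" view: a supervisor can see at a glance where an officer is strong and where coaching is needed.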

If an officer was underperforming, Peters could use this rubric to pinpoint what was most difficult for that team member, and then work with the officer to identify training opportunities and goals for improvement. Similarly, the rubric gave Peters a rationale for rewarding his highest performers.

What proved especially important in developing the rubric:

  • Taking into account all factors within the staff member's control that contribute significantly to the success of a particular activity (in this case, visits)
  • Ensuring the flexibility to tailor the measures to a particular staff member's activities
  • Adjusting metrics according to the total programmatic needs of the department

Establishing Criteria for Measuring Faculty Performance

The same principles apply in an academic department. Rubrics for measuring levels of performance among faculty need to be quantifiable, as well. “To the extent possible,” Raoul Arreola advises, “you want to ensure that the department isn’t relying too much on qualitative and subjective judgments of a faculty member’s progress.”

Arreola recommends that faculty and administration collaborate in developing checklists of minimum requirements for the success of various faculty activities (for example, what must be included in a syllabus in order for course design to be judged effective?), as well as checklists of elements that, if present, would allow the performance of a faculty activity to be considered “exemplary.” These checklists then form the basis for a performance rubric, lending greater objectivity to the evaluation process and ensuring that faculty evaluation is aligned with the department’s strategic priorities.
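As a rough illustration of how such checklists can feed a rubric, here is a sketch for a single faculty activity (course design, judged from the syllabus). The checklist items and the rating logic are hypothetical assumptions for illustration, not Arreola's instrument; the actual items would be negotiated by faculty and administration.

    # Hypothetical checklists for one faculty activity: course design.
    REQUIRED = {                       # minimum requirements for "satisfactory"
        "learning outcomes listed",
        "grading criteria stated",
        "required texts and materials listed",
        "course schedule included",
    }
    EXEMPLARY_EXTRAS = {               # elements that signal exemplary work
        "outcomes mapped to assessments",
        "weekly learning activities described",
        "instructional strategy explained",
    }

    def rate_course_design(syllabus_items: set) -> str:
        """Rate course design from the checklist evidence found in the syllabus."""
        if not REQUIRED <= syllabus_items:      # any minimum requirement missing
            return "unsatisfactory"
        extras_met = len(EXEMPLARY_EXTRAS & syllabus_items)
        return "exemplary" if extras_met >= 2 else "satisfactory"

The same pattern can be repeated for each faculty activity the department has agreed matters, so that no single checklist dominates the overall evaluation.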

However, it’s critical that this “checklist” approach not become overly reductive or place too much emphasis on any one measure. “The rubric needs to define levels of performance holistically,” Arreola notes, “across those activities within the instructor’s control that contribute to the department’s goals.”

For example, resist the temptation to rely heavily on exam scores for particular courses as the primary criterion of teaching effectiveness for a faculty member. The problem, Arreola cautions, is that a number of factors (ranging from the student’s own aptitude to the student’s life circumstances) that the instructor has no control over can have an impact on these scores. Arreola recommends focusing on those items within the instructor’s control that the research indicates contribute to student learning, such as:

  • Effective course design (as evidenced by the syllabus)
  • Presentations
  • Materials
  • The design of the instructional delivery
  • Student response to the instructional delivery

Arreola does note, though, that exam scores are a valid criterion for assessing student learning outcomes at the program level.
