Significant arbitration decision on use of student questionnaires for teaching evaluation


A recent arbitration award between the Ryerson Faculty Association and Ryerson University has established an important precedent for faculty associations, and lends support to others who have been arguing that student questionnaires are deeply problematic instruments for the purpose of evaluating faculty members’ teaching effectiveness.

It is telling that “student evaluations of teaching” or SETs, as the arbitrator chooses to call them, have been a “live issue” between the university and the Ryerson Faculty Association for fifteen years. In that time, not only have SETs been a recurrent point of contention for other faculty associations for reasons similar to those addressed in the arbitration, but other grounds for concern have come to the fore. In response, OCUFA established a working group to examine “SETs” and their use, broaching a number of issues, including some that were not before the arbitrator for the Ryerson decision.

Arbitrator William Kaplan’s award lends critical momentum to these concerns. He accepted the expert evidence of Professors Philip Stark and Richard Freishtat that student evaluations of teaching cannot be used to assess teaching effectiveness. Kaplan’s award and the pivotal reports by Stark and Freishtat are available online and are summarized below.

While Mr. Kaplan finds that SETs can continue to be used in the context of tenure and promotion decisions, he rules that they cannot be used to measure teaching effectiveness for those purposes.

He accepts that SETs have value as the principal source of information from students about their experience. However, he states that, while SETs are “easy to administer and have an air of objectivity,” when it comes to assessing teaching effectiveness they are “imperfect at best and downright biased and unreliable at worst.”

The evidence provided by Stark and Freishtat shows that SET results are skewed by a long list of factors, including the instructor’s personal characteristics (such as race, gender, accent, age, and physical attractiveness) and course characteristics (including class size, subject matter, and traditional versus innovative pedagogy).

The lack of validity (“reliability” in the arbitrator’s award) of SET results is compounded when those results are reduced to averages and then compared against other faculty members and against departmental, Faculty, and University averages. Mr. Kaplan finds that “the evidence is clear, cogent and compelling that averages establish nothing relevant or useful about teaching effectiveness”; the use of averages is fundamentally and irreparably flawed. He concludes that only frequency distribution reporting is meaningful.

The arbitrator accepted the experts’ conclusion that the best way to assess teaching effectiveness is through careful assessment of the teaching dossier and in-class peer evaluations. SETs may be ubiquitous, but ubiquity does not justify over-reliance on a flawed tool.

In addition to identifying several items for the parties to work on together (developing guidelines, modes of presenting results, and a successor questionnaire) and requiring discontinuation of online questionnaires in stipulated situations, Arbitrator Kaplan ordered that the:

  • Ryerson Faculty Association Collective Agreement be amended to ensure that SET results are not used in measuring teaching effectiveness for promotion or tenure;
  • numerical rating system be replaced with an alphabetical one;
  • summary question of overall effectiveness be removed from the questionnaire;
  • parties ensure that administrators and committee members charged with evaluating faculty are educated in the inherent and systemic biases in SETs.

The arbitrator declared that “a high standard of justice, fairness and due process is self-evidently required” given the impact that SETs can have on faculty. OCUFA also believes that this standard applies given the impact that SETs can have on student learning.

OCUFA has been using the term Student Questionnaires on Courses and Teaching (SQCTs) to describe these evaluations. When it releases its report in October, the OCUFA Working Group on SQCTs will have more to say with respect to the methodological, ethical, and human rights implications of student questionnaires.

Acknowledgement: This story incorporates and adapts a summary prepared by Emma Phillips, Partner at Goldblatt Partners, LLP.

4 Responses to “Significant arbitration decision on use of student questionnaires for teaching evaluation”

  1. John Boylan

I congratulate the RFA for their work in bringing this question to the table and for winning some new guidelines in favour of pedagogy. As a fifteen-year part-time instructor at Ryerson, I have keenly felt the flaws in the current teacher rating process. I thank those individual professors for sticking to their principles and working hard for this positive resolution.

  2. David

Katie, there are student self-evaluation and achievement systems that work much better with high granularity. Serious educational games move the learner toward self-awareness and coaching, rather than playing movie critic at semester’s end.

  3. paula

Agreed, David!
    Student feedback is important, but the current method is deeply flawed.
    As an instructor myself, I’d welcome ways to glean more useful feedback. But, aside from creating a better evaluation, I would also welcome a system that better encourages students to bring their concerns forward in real time rather than waiting until the end of term to secretly share their questions, thoughts, and concerns.
