I don’t think I can do this. I really
can’t.
I’ve been asked to apply for promotion next
year, and one of the mandatory things is to submit at least three ‘Student
Evaluation of Teaching’ reports. These are evaluations, not of the unit, but of
the lecturer, and they are not compulsory for the students to fill out.
While
there is a process for getting us to do 'Unit Evaluation' surveys as a matter
of course, 'Student Evaluation of Teaching' surveys aren’t done automatically: instead,
you request the teaching-management-minions-that-be to do them on your behalf - by
the simple expedient of sending them (the minions) an email.
I have gotten away without doing any of
these for the past nine-and-a-bit years. It isn't actually for the reason I gave
my colleagues the other day: that it is too much bother (after all, I just have to send someone an email). The truth is that I don't
like the whole idea of them. The thought of using them in a promotion
application makes me twitchy in a way people who knew me in high school will
remember.
Why, you might ask?
#1. They don't measure anything relevant.
With all respect to my students – who are
uniformly great people, eminently deserving of HDs and free beer – a student
who has just completed a unit is not yet in any position to evaluate the unit
or the lecturers who have helped them through it. They don’t know if the skills
and knowledge they obtained from it will be useful to them in their career,
they don’t know how it fits into the whole body of knowledge and skills they
will obtain in their degree, and they can’t judge whether it will have a
permanent impact on how they view the world or was just an entertaining intellectual
cul-de-sac. They can't judge whether their lecturer has given them a fatally flawed and bogus take on the topic, or has set them up with a solid basis for an ever-deepening, life-long understanding of it. The immediate impact of the unit or the teacher on the student is
not relevant to the desired educational outcome.
Okay, so they don’t measure anything
relevant. But I can just about put up with all the rigmarole about citation
counts and impact factors – which also aren’t measures of anything relevant.
Why can I swallow irrelevant measures of value in my research, but not in my
teaching?
#2. They measure the irrelevant thing badly.
With research, the irrelevant indicators are at least reasonably transparent and quantitative measures of something. Okay, forget the
goal of measuring how I helped the unit to meet its true educational outcome.
How well did I help the students pass tests and keep them entertained in the
process? This is also something that
student evaluations of teaching can’t really tell me.
You can't step into the same river twice.
So a student can judge how they did in my part of a unit compared to how they
did in other parts of the unit, or how entertaining my part of the unit was
compared to other parts of the unit, but they can only encounter my material
for the first time once. The material and the lecturer are inextricably
entwined, so on the more modest goal of judging how good I was at getting them
to know topic X, or entertaining them while I did it, a student survey is also
flawed. They can only compare me with other lecturers teaching topics Y and Z –
topics which might be intrinsically easier or harder and more or less
entertaining.
And, since these evaluations are not
mandatory, the proportion of students who fill them out is always woefully low,
unacceptable by the standards of a poll or any peer-reviewed work in the social
sciences. The only students who will be bothered to answer them will be the
students who want to drive a stake through my heart and bury me at the
crossroads at midnight, and those who want to have my baby. Normal
middle-of-the-road representative worked-off-their-feet students will not
bother.
There are two other irritating things that only apply to these ‘Student Evaluation of Teaching’ reports:
#3. They are open to abuse.
With research, I can't pick and choose what
part of my oeuvre to display, unless I want to cut my own throat and look
unproductive by leaving a whole bunch of papers out. They are all out there in
the public domain anyway, with quasi-empirical quantitative variables attached
to them telling you how popular they are.
But the rules for the promotion application
are practically begging me to cherry-pick the very best teaching evaluations I
can, with no oversight. That is just bad. Bad! No peer-reviewed journal in the
social sciences would accept a methodology where researchers conducted ten
surveys and reported on the three that gave the results supporting their
theory.
#4. They are an imposition on the students.
I know these surveys don’t measure anything
relevant. And any qualitatively useful information about things I might have
done badly, or any compliments that give me a warm and fuzzy feeling that I am on the
right track, show up on the Unit Evaluation surveys anyway. So I don't need
these Teaching Evaluation surveys to learn anything that might be useful for
current students. Or future students. They are only useful for me. I don’t want
students to waste their time doing something that is only useful for me. I
would rather they spent their time creating new Chemistry Cat memes.