In my institution, we have a system called IPPA (Indeks Pencapaian Prestasi Akademik, or Academic Performance Index). Each semester, a lecturer gets a score that is the average grade of all their students multiplied by 250, so that the maximum score is 1000. It's one of the KPIs used to measure the effectiveness of a lecturer. Do I need to say that not many of the staff are very happy about it? Please share what evaluation system is used in your institution.
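To make the arithmetic concrete, here is a minimal sketch in Python. It assumes grades are on a 0.0-4.0 scale, since an average multiplied by 250 can only reach the stated maximum of 1000 if the top grade is 4.0; the function name and sample grades are illustrative, not part of the official IPPA system:

```python
# Minimal sketch of the IPPA arithmetic described above, NOT an
# official implementation. Assumption: student grades are on a
# 0.0-4.0 scale, since an average times 250 can only reach the
# stated maximum of 1000 if the top grade is 4.0.

def ippa_score(grades):
    """Return the lecturer's IPPA: mean student grade times 250."""
    if not grades:
        raise ValueError("no grades supplied")
    return sum(grades) / len(grades) * 250

# Example: a class averaging a grade of 3.2 yields an IPPA of 800.
print(ippa_score([4.0, 3.5, 3.0, 2.7, 2.8]))  # -> 800.0
```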
Dear Miranda,
I could not find the RG discussion, but there was a similar discussion on LinkedIn about a year ago, and here are some interesting opinions:
Mel Anderson • Sadly, many end-of-term student evaluations are mostly customer satisfaction surveys aimed at getting ideas for enhancing enrollments and tuition revenue.
Students are NOT customers, even if the school finance office sees them that way. When students walk into my classroom, they are purely students with duties and responsibilities for performance that are graded by me.
When a student doesn't perform well, he earns a lower grade. But he might consider himself a customer and seek recourse or even revenge against the teacher who graded him fairly but lower. The wrong person is evaluating the teacher; the right person is the department chairman who meets his responsibilities by evaluating the teacher's work and providing valid feedback to the teacher.
Maureen Hawkins • To follow up on what Mel said, one factor in student evaluations is grade anxiety. I find one good question to ask is "What grade do you expect to receive in this class?" I also ask the students to evaluate how much time they spent on preparation for the class (e.g., 1 hour / week, 2 - 5 hours / week, 5 - 10 hours / week, more than 10 hours / week) & how often they attended class (e.g., all the time, most of the time, more than 1/2 the time, less than 1/2 the time, seldom or never). I can then try to correlate these answers with the responses.
I have 2 things I use these evaluations for. 1) I look to see if there are any complaints or suggestions I can do something about as well as positive comments about qualities I can enhance & plan how to use both to improve. I ignore obviously spiteful comments unless they address complaints also made by more balanced responses; some people, whether from grade anxiety or whatever reason, are not going to be satisfied, and there's not much you can do about it at that late stage after the course is over. I also recognize when comments are negative about something I do for what I consider a good pedagogical reason. If I can't find another way to fulfill that objective, I make sure I talk to the next class on the first day about my reasons for the assignment or other aspect of the course that was objected to. I also recognize that students have different preferences and expectations: e.g., one will object to too much time on class discussion & want more lecturing, while another will praise the amount of class discussion. If a majority object, I will reconsider whether I should shift the balance to achieve the same academic objectives; if it's only a few, I will address why I use the balance I do at the first class so students who want a different balance can decide before Drop and Add is over if they want to commit to the course. I also ask students who have had previous courses with me to tell the class what taking a course with me is like & to answer other students' questions on the first day while I leave the room. I find my evaluations are much better when students have a clear idea of my expectations from the first day, & I find they tend to trust other students' impressions more than what I say.
During the term, I maintain a discussion site on-line where students can anonymously comment on any aspects of the class they don't like or ask questions. I make sure I check the site regularly & respond to the whole class. If it's something I can change, I will explain how I intend to address it. If it's something I can't (or won't for pedagogical reasons), I explain why I can't (or won't) change it. I find if students can get their problems addressed before the end of term, they are less likely to complain on the evaluations.
2) My university requires me to submit evaluations as part of my Professional Activities Report. That report has a section where one discusses one's teaching. In that section, among other things, I address the evaluations. If there are complaints or suggestions that seem valid, I discuss what I intend to do to improve in these areas, & I make sure to try out the changes the next time and report on the results in the next PAR. I also stress the positive comments & how I intend to enhance what I did to earn them. Similarly, if I do something that was objected to that I do for a good pedagogical reason and cannot find a substitute for, I explain my purpose & state that I will explain it more clearly to students in the future. I also point out the discussion board as an example of my efforts to address problems during the class rather than waiting until it is over.
Leslie Bowman • I have an anonymous survey several times during a course asking for suggestions for new activities or changes in current practices. I don't ask what they like or don't like. I ask for suggestions. As for official course evaluations, I agree that it's nothing more than a customer satisfaction questionnaire. Students who make good grades rarely complete the evaluations. Students who made bad grades can't wait to unload on those end-of-course evaluations. Teachers who have high expectations and grade accordingly generally have lower scores on evaluations than faculty who grade without extensive and comprehensive assessments.
Maureen Hawkins • To further follow up on what Mel said, I find that many student evaluations ask questions the students are not qualified to answer (e.g., "Does your professor know the subject matter well?") or which are matters of personal preference (e.g., "Do your professor's voice and mannerisms enhance his or her delivery of the material?"--to that one, I once had a suggestion that I change my hairstyle!). If there are questions like that on your standard evaluation, you should bring it up with your fellow faculty members, put together your reasons for objecting to them, and bring the issue up with your administration. In my own view, there are really only a few really pertinent questions students are qualified to answer: e.g., "Does your professor seem to care about the subject?" "Does your professor seem to care about teaching?" "Does your professor seem to care about students?" I don't think these are issues on which you can fool the students. However, I've not yet been able to persuade my administration to use them.
Richard Coulson • This is a difficult issue. Most of my career I worked in Medical Schools where evaluations were mandatory and student identification was required on faculty evaluations. Students did not do it until after grades were issued. In 30 years I never saw a flippant, disingenuous, or stupid review. Yes, I saw plenty of criticism, most of it positive. The last 6 years have been in public undergraduate institutions with anonymous faculty evaluations. About 1/3 are blank, about 1/3 helpful, and about 1/3 flippant, disingenuous, stupid, and profane.
Mel Anderson • In this and several other forums, educators are ignoring something that happens outside the classroom: the students' emotional environments. Students seldom develop new emotional habits while sitting in lectures; they develop them in their social, professional, family and economic spheres where many other people display their emotional habits.
People come to class spring-loaded in their specific attitudes toward us and learning in general. They carry their personal goals about learning and grades into class, but they also carry their personal fear, anger and opinions about everything. And when a teacher says something that goes against the grain of even one student, that relationship changes.
It's all about relationships, but those relationships mostly reflect how we acknowledge, accept, approve, appreciate and admire one another. This means that the classroom also presents an emotional environment in which learning is best accomplished. And that is really up to us, fellow educators.
Be careful, it's a jungle in here!
Mel Anderson • I remember that about ten years ago, I taught a graduate project management course at a local private college. I thought I did it well, and so did all of the 25 students in the class--except one. That student spent the entire term objecting to everything I taught and said. And she was no doubt the one who graded me at total zeros on the customer satisfaction (student eval) survey.
I always wondered what it was that she thought the class should present, or what I should have said at some moment, or what perhaps I should NOT have said. This event taught me that you can't win them all, and that you'll never know what hit you when it happens. I know a few cute sayings that might fit:
Never get into a squirting contest with a skunk.
Never wrestle a pig; you both get dirty and the pig likes it.
The second mouse gets the cheese.
A ship is safe in the harbor--but that's not what ships are for.
How Come Every Time I Get Stabbed in the Back, My Fingerprints Are on the Knife?
(Jerry B. Harvey--The Abilene Paradox)
I stress that student evaluations of teachers serve mostly the interests of university managers, that is, the business of schooling, and are largely imprints of the marks students got from the teacher in question.
Hi Miranda,
Sounds like the effectiveness of a lecturer is measured by students' grades?
If yes, I would say that it should contribute only a small percentage of the KPI. The other KPIs can include student feedback, interviews, etc., to gather some qualitative data on a teacher's performance.
It is also important to define 'effectiveness': is it based on how many publications per year, how many projects/students you take on, how many students become independent learners, etc.?
Yes, evaluating lecturers on their performance is vital. In many institutions of higher learning, everything the lecturer does (lectures, seminars, workshops, consultancy, etc.) is evaluated to see what to improve and how best to do it. This is because they do all of that on behalf of their university. In the UK, for instance, this is now done at the individual level, the departmental level and the institutional level. I think it is one of the best practices that faculty members should view positively.
I think this depends on what you mean by "teacher effectiveness".
If you mean how effective the teacher/lecturer was in passing on knowledge, motivating students to engage more in class-related work (projects, bonus curricula, etc.) and in general performing his "in class" duties, then student evaluations, marks, and success rates on standardized end-of-class tests (not those defined and directly controlled, i.e. tweaked, by teachers or their assistants), with a few inputs on the side, are probably more than enough for a per-class evaluation. Average, sum, or otherwise weight and aggregate those results and you could get a pretty fair "teacher mark" for the semester.
If you see teachers as a means to approve, confirm, enhance and ensure the legitimacy of your institution or to acquire accreditation (a licence or permit to operate as an educational institution in your country), and if the laws that define the rules of accreditation require your institution's teachers/employees to have a number of published papers in journals, conference proceedings, published books, reviews, analyses, etc., then I suppose you won't be concerned by what students think of those teachers, as long as they are good at "producing" a sufficient number of indexed publication points for your institution to rely on in the next accreditation round.
If, by effectiveness, you mean both of these, then your evaluation of your teachers should include more than just the sum of those two sets of inputs for the two scenarios mentioned; it should also cover the teacher's involvement in projects of national and international importance, studies of both kinds, etc. You should also add an input regarding his/her presentation skills (both as reported by students and by fellow teachers).
Maybe I've made quite a digression in the last paragraph, but I hope my inputs help a bit.
Oh, and to add an answer to the second part of your question... No, teachers aren't happy about it.
And the third part... sorry, but unfortunately the "higher ups" in my institution wouldn't be happy about me sharing info on how the evaluation is conducted and what system is used. I can tell you, though, that the system is custom-made (home-brew, if you will), that it is complex, and that it includes a lot more than I mentioned in my previous post.
I hope that helps in your further research into this subject. I would certainly be grateful for any info you choose to share, and I will gladly contribute within the limits I'm at liberty to, of course.
@Tina, thanks. I agree with your comments. Yes, it does seem that my college considers the performance of students as a prime importance. The lecturers have been asking the admin not to consider this factor only, but to take into account our research efforts, our work to enable students to be info-literate and independent as well. (Quite a few of my questions on RG are related to getting info from our RG peers on these issues. Thanks again :))
@John. It's true that the performance of the lecturing staff is important. Unfortunately, many of my colleagues just concentrate on trying to get a high IPPA score. They are not so concerned about research, innovation etc.
I have asked many of them to become members of RG, but they are not so interested. For a long while, I have been the only RG member; for all I know, that may still be so. But sometimes I feel closer to RG friends who have a lot in common with me: the same research interests, problems, etc. Thanks for your response.
@Milan. Thanks for your response. I have been thinking about what you have expressed. Still thinking about it. Your response takes many factors into consideration; thanks :)
We still don't have anything like a single unidimensional score or grade on whatever scale, but more and more HE institutes have their students fill out questionnaires with a lot of questions about the quality of each lecture, including statements about the individual quality of each lecturer's course. The answers are not aggregated into a single score, though, as I already said above. The reverse also occurs, at least at one of the institutes where I am lecturing: each lecturer fills out a questionnaire about quality characteristics of the class they have been lecturing to. Furthermore, I know that teacher evaluation (not only at HE level) in China has a prominent place in the landscape of accountability measures, and is used to make decisions about salary etc. If you are interested, I can send you references. The worst case is, of course, when students' regular grades are used to make such quality statements, because that is misusing grades in an entirely inappropriate way.
@Paul, yes, we also have the Observation n Evaluation (OnE) once a year. Students rate the lecturer on a 5-point scale. The important evaluation is the one done by our director or his assistant.
In Malaysia, matriculation colleges are one of the routes to university. Perhaps the system wants to ensure that the maximum number of students enter the public universities, and also that the students have some academic quality (?) Thanks again.
I did my undergraduate degree at a private university in Chennai, and yes, my university has a grading system to evaluate the professors. I think the professors are quite happy about it.
As a government agency, we previously had a Performance Evaluation System (PES) in which the evaluators were the supervisor, clients (students), peers and oneself. The average rating was the basis of our performance: Outstanding (O), Very Satisfactory (VS), Satisfactory (S) or Poor. We were entitled to a performance bonus or incentive if we got a rating of O or VS, but faced an administrative sanction if we got an S. Of course, out of humanitarian consideration, hardly anyone got an S; nearly everyone was VS. This evaluation system did not reflect the true picture, because everyone seemed VS even when he/she was not performing well. So recently a new performance-based evaluation system was implemented throughout all government agencies. It is now based on the performance of our major functions. In our agency we are now evaluated in terms of our outputs in Instruction, Research, Extension and Production. The outputs are measured in terms of Quantity and Quality. Consequently, incentives vary depending on performance.
Dear All,
There is a kind of evaluation system at our faculty in which teachers have to give accurate information on the number of teaching hours (according to the subjects, and the ratio of lectures to practicals) and the number of students they teach. We should also report on classes and students we teach using a foreign language. This is only a quantitative approach to our efficiency. There are so-called accreditation processes every fourth year, when some members of the national accreditation committee visit classes; they evaluate the quality of teaching and accordingly prepare an official opinion, which is needed for licensing the universities and colleges.
Students can score the quality of teachers’ performance but there was a discussion on this subject on RG a week ago.
Dear Miranda,
I work in two universities, one public and one private. In the public one there is no teacher evaluation. In the private one, an evaluation is made by the students each semester, and in my opinion it works very well in identifying teachers who have not achieved their educational goals. This helps the teachers identify weak points and improve them.
Dear Miranda,
Just this year, a teacher evaluation system started working in practice, though the matter had been on the agenda ever since the professors had a substantial pay hike. It was simply a questionnaire where students were supposed to give marks on a 5-point scale ranging from 'excellent' to 'very poor'. Only final-year students were involved. Later on, a statistical picture was sent to the concerned teacher from a quality monitoring cell, showing the teacher's maximum and average scores. The exercise was simply to remind the teachers that they are not beyond evaluation. However, at present it has hardly any practical significance. I can mention at least three reasons why this is so.
The place where my University is located is an insurgency-ridden state, and very few teachers from outside wish to join. As the first priority is to run the University (that means course completion, timely exams, result declaration, etc.), ensuring and promoting good quality education takes a back seat.
Except for a few streams, the majority of the subjects are divorced from the actual needs of society. As a result, unless a teacher proves himself/herself a complete fool, nobody questions what is being taught.
There is a lot of corruption in college appointments. So the general logic is, "I have bought the post; who are you to question my credibility?"
Hopefully, things might change due to institutions like "ResearchGate", which can at least show the real caliber of a teacher/scholar/student based on the contributions that really matter. After all, nobody likes to believe themselves inferior in the domain they claim as their principal identity.
Miranda, I tried to respond earlier but my response disappeared into the ether.
My situation is different, as I work in a state-run institution where most teachers are permanent, with no financial incentives or penalties. We do not have a unidimensional measure of any sort. Instead we have continuous monitoring by middle managers (me). Teachers are supposed to be continuously improving. The direction of the improvement is up to the teacher, the school principal, and the middle manager. If I determine that a teacher is not performing adequately, as judged by my standards, then I am required to instigate a remedial action. It is only when that remedial action is exhausted that I may be able to request greater sanctions for a poorly performing teacher. I have never had to do this. Every teacher I have ever been personally responsible for (about 100 or so) has made reasonable efforts to meet expectations for improvement by my standards. I take my primary responsibility as being to set up systems that maximise each teacher's impact and minimise each teacher's weaknesses.
PS this is obviously subjective, but for measures of teaching, subjective seems fine to me, and in general it works.
In my university, students can grade lecturers while staying anonymous themselves. As they have to grade at least four courses but no more, unfortunately most seem to select only two types of courses: those they like a lot and those they don't like at all. So courses often get only 'very good' and 'very bad' ratings and not much in between, and for a lecturer these are not always very informative, as they do not represent the majority.
@Miranda, lecturer evaluation is part of a quality assurance system, and it is performed every semester. The data collected are part of the quality evaluation of the institution and are presented at the next accreditation!
Dear Dr. Miranda Yeoh,
Yes, in our institute we have an evaluation process. It's an online system with a well-thought-out questionnaire and outcome rubrics. As far as satisfaction is concerned, the university asks for opinions and takes corrective measures on the basis of the feedback.
@Andras, can you please give me the link to the discussion that you mention, if it's not too much trouble? I think I have missed it :(
'Students can score the quality of teachers’ performance but there was a discussion on this subject on RG a week ago.'
Thanks all of you. From your responses, I get a clearer picture. It's always better to think more comprehensively. Some of you have taken the trouble to explain the matter to me in detail. I salute You :)
@Bhairavi, is there no such evaluation system in your present institution?
Dear Miranda:
I work in a public University. In my school, the students have to answer a questionnaire for each teacher in the courses they take. The questionnaire is scored automatically and the average grade is calculated and given in a closed envelope to the teacher (unfortunately, the envelope and the information arrive in the middle of the next semester, so it is not possible to correct the necessary things in that semester). The evaluation is done once a year, another bad thing, because you don't have an evaluation each semester. The questionnaire evaluates different aspects, from the teaching techniques used by the teacher to their ability to answer questions, etc.; each question is evaluated on a 1-5 scale. There is no space for comments and the questions are very general, leaving the teacher with very little information with which to take proper actions to improve their classes. I prefer to ask the students directly!
Thanks for asking!!
Good question and great discussion... Well, in our case we have what we refer to as eVALUate - this needs to be filled in by the students on an anonymous basis - they evaluate the unit (course) and the teaching... with the ultimate question: are you satisfied with the unit...
This would become available through the intranet as soon as the results are out... so, the students get their results and we get our own results - which will assist us to develop the unit (course)...
While the teaching is confidential to the lecturer and will not even be seen by the head of school, the unit evaluation is seen by the head of school, and there are percentages of completion (at least 35% of students need to complete) and the overall satisfaction needs to be more than 84% - otherwise you will not receive your monetary token to assist you with your teaching...
While this is great, in some campuses other than Australia the understanding is different, and students might only complete this eVALUate when they are not satisfied with the unit (course) or their local lecturer! Which is a waste of time.
However, I believe this system alone should not be used to evaluate you as a lecturer - but might be one point, and the rest would be your scholarly articles in teaching and learning and your innovative ways of delivery and developments to be taken into account. Another innovation that came through recently is peer-evaluation - where you would invite colleagues to your class to evaluate you...
Hope this adds to the discussion.
Regards
Theodora Issa
@Julio, thanks. I also ask my students for feedback.
@Theodora, thanks. I agree with you that all our RnD and innovations and activities in ICC (Innovative Creative Circles) should be taken into account. Sure, I hope this is so. But there are many of the staff who won't 'waste' their time and resources on RnD etc. :(
We used to have an evaluation system to evaluate our teaching: the students give us scores based on some questions preset by the university (a score of 1 for poor and 5 for best). Those scores are averaged and act as one of the KPIs as well.
But this year, the university has made some changes. The scores from the students will be combined with the assessment from our colleagues. I am not sure how it will work out yet, as the implementation only started this trimester (June 2013).
Very interesting question and discussion above... There is a standard evaluation form filled in for some (but not all!!) of the teaching activities at our center.. At the end you do get a break-up of which parts of the evaluation were scored high or low.. Also, at the end of a three-month rotation, each student/resident fills out a detailed evaluation form for the teacher/teachers they rotated with, to give overall feedback..
Since lectures are not the only teaching activity.. Especially in surgery, there are patient encounters in clinics or the emergency room, and the operating theatre.. And teaching encompasses the cognitive and psychomotor as well as the affective domains.. I'm not sure if an evaluation for every teaching encounter is practical.. Any suggestions on how to assess these activities in an evaluation?
Regards, raza
Dear Dr. Raza Sayyed, nice to interact with you for the first time. Almost all teaching is now embedded with one or a combination of theory, lab work, field work, projects, training, seminars, case studies, ... Teaching in medicine is not much different. However, the sets of questionnaires and evaluation procedures may differ.
Dear Dr. Miranda Yeoh,
It seems someone who doesn't want to debate has judged most of the recent posts as poor content, purely out of weak debating skills and a negative approach.
@ Afaq Ahmad : Yes, happened to one of my questions also, for no apparent reason, and even to the post of a Person with Special Needs who inquired about a discrimination issue. ResearchGate was not able to retrieve these posts, which is bad policy and usability, as I told them. Everybody experiencing the same kind of mishap should fiercely but politely complain with ResearchGate and require that they change their policy and improve their usability.
Several respondents and commentators mentioned the use of evaluation questionnaires as part of an overall system to evaluate lecturer/ teacher 'effectiveness'. The issue however is not so much whether questionnaires are used or not, but how the actual questionnaire has been set up and whether it has been validated or not.
Questionnaire design, especially for such high-stakes purposes, should be conducted in a highly professional way. It is a difficult method and technique, full of pitfalls and hurdles. Someone not trained in it and in the underlying psychological and sociological methodologies should please not set up a questionnaire based just on so-called 'common sense', or worse ...
If such questionnaires are nevertheless used, the results can be unreliable, invalid and devastating for the stakeholders.
Dear friends (Milan, Mark, Ljubomir, Afaq, Paul, Theodora and Hu Ng), I tried to send a message to complain about the action of the person who came along to downvote content; but for some reason the mail could not be sent. Mark has said it nicely; it evaporated into thin air. So I made a 'complaint' on Ljubomir's thread: 'Out of all the RG...' RG Community Support is on that thread, so hopefully it will be read!
I have never down-voted content. Even if I disagree or don't understand the intention of the author, I do not down-vote. In fact, I have a feeling that the RG system should have a method to prevent lack of integrity among those who down-vote mechanically or spitefully, for no good reason. What do you think; should the score of such people decrease? (Perhaps RG already has this, who knows?)
In my Institute a simple Likert-scale questionnaire is used. It contains around 10-12 questions and is filled in by students at the end of each course (semester).
It contains items like preparation of lectures by the instructor, discipline in class, encouragement of student participation, knowledge of the subject, punctuality, whether clear student evaluation criteria were set or not, etc.
A mixed response is observed on the satisfaction of faculty members regarding this evaluation system.
But I think this is just one method, some other ways must exist.
@ Syed Sohaib Zubair : "A mixed response is observed on the satisfaction of faculty members regarding this evaluation system."
One reason may be similar to the reason why the 'warning messages' of software testers are not always received as what they are: constructive contributions to software development. Instead, those extremely important and necessary pieces of feedback are often treated by the person or team who developed the software as negative criticism of them. False: the feedback is about the software, not about any person or team.
A second reaction may be to think or say: testing is easy; finding the cause of the error or, even better, suggesting a correction or solution can be much more difficult, for all sorts of reasons. Again false: testing may be as complicated as developing, requiring a battery of techniques and skills, and it is especially challenging if the software specifications are lacking, which is unfortunately often the case.
Now consider the analogy with teaching and evaluation of teaching. Same procedures! Same psychological effects! Same false reactions! All very human-like!
How can we escape this vicious circle? IMHO there is one way: constructors and users of teacher evaluation questionnaires should stop believing that just giving the answers to a couple of questions, however well-designed (e.g., using Likert scale techniques), will lead to applause and satisfaction on the receiving end and will more or less automatically lead to some improvement.
Rather, there should be a kind of theory behind the questions asked, and a clear statement, following from this theory, of how the answers to those questions are related to behavioral changes which may lead to effective improvement in teaching.
If you have done your questionnaire construction right, building the chain from questions over answers to suggested remedies should be straightforward and ... constructive. Any other approach is IMHO unprofessional, missing the point, and giving away a positive chance of change.
Dear Miranda,
In all government funded Indian Universities, the Academic Performance Index is considered for promotion, from a position of Assistant Professor to an Associate Professor, and from a position of an Associate Professor to a Professor.
This is not counted semester-wise. Further, the teachers are supposed to submit a self-appraisal report once every year.
For appointment to a position of Assistant Professor, the Index is not needed.
No, the teachers are not quite happy about it! As the executive Head of a University, I know that they are not very happy about it!
Thanks for your contributions (Syed, Paul and Hemanta). I think that as teachers we must deliver, but there must be several KPIs to evaluate a teacher. I see that some of you agree with this view.
@Hemanta, do you happen to know if there are other organizations like the Azim Premji Foundation in India? Do you see that this foundation is doing a fine job, a noble one?
Dear Miranda,
The Foundation you have mentioned is a private one. Obviously, they have enough money. I hope you know exactly who Azim Premji is!
Yes, there are other such foundations owned by such corporate bosses. Take, for example, the institutes of teaching and research owned by the Dhirubhai Ambani Group of Industries.
All such foundations are certainly doing fine jobs. As to whether they are doing noble jobs, well, I would like to refrain from making a comment. I am not exactly fit to make that sort of a comment!
@ Miranda Yeoh - KPI to evaluate a teacher.
This is a highly interesting endeavor: to look for really useful Key Performance Indicators to evaluate teachers, either by self-assessment, peer assessment or group assessment, in the spirit of life-long-learning. And if the KPI can be made reliable and valid enough, a KPI-based evaluation may also be run by an external agency, e.g. the Education Quality Center of an university as one of its initiatives and activities aimed at educational accountability and continuous improvement.
As the name KPI implies, performance indicators should be based on low-level observable features of one or more teaching processes, e.g. at course level. As any sort of teaching occurs within and depends upon a given specific context, that context should be part of the evaluation; otherwise you won't know whether what you observe is caused by the teaching as such or is just a reflection of the teaching's context! Several strongly related features may be bundled to form such KPI's. Thus in the end you will have a small set of KPI's, or perhaps a small set of distinct categories of KPI's.
In order to keep such evaluation activities efficient and robust, I suggest that the individual features should be such that they can be measured on an ordinal scale with no more than five quality levels, e.g.: 1, 2, 3, 4, 5. Because these quality levels are numerals, not numbers, it is better to use some symbols, e.g. - - | - | -/+ | + | ++ or even smileys, so that nobody starts calculating averages or the like, which would be nonsensical. Please keep in mind, however, that the restriction to five quality levels is more or less arbitrary; it is not part of the general model that I propose here: you may have two levels only, or 10, or 100, whatever suits your needs.
In order to bundle several related features measured in this way into a single KPI, I have developed a new type of score. This score will be a number between 0 and 1, where KPI = 0 means "worst case of this indicator" and KPI = 1 means "best case of this indicator". Of course, you can think of such a score as a percentage, or convert it to any other scale you prefer.
The purely qualitative judgement system I have developed and used since 2007 allows you to do many more things with such features and KPI's. For instance, you may merge two or more KPI's to build weighted super-KPI's. Or you may define certain features to act on previously derived KPI's as a kind of bonus/malus factor, so that the KPI is slightly increased or slightly decreased depending upon the quality level of the additional feature. Etc.
If you are interested in this approach, please see my website. If you want me to give an example of such an evaluation scheme, I will be pleased to set up one, it doesn't take too much time. Just give me your list of features and/or KPI's, doesn't need to be complete, as you can always expand (or shrink) a given evaluation sheet.
http://www.passorfail.de.vu
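To make the idea concrete, here is a minimal sketch in Python of how ordinal feature levels might be bundled into a normalized KPI and merged into a weighted super-KPI. This is an illustration only, under the assumptions stated in the comments, not the actual judgement system described above (which is documented on the linked website):

```python
# Minimal sketch of bundling ordinal feature levels into a single
# KPI in [0, 1], in the spirit of the post above. Illustration only:
# the level symbols, the linear rank-based normalization and the
# function names are assumptions made for this example.

LEVELS = ["--", "-", "-/+", "+", "++"]  # ordinal quality levels, worst to best

def kpi(feature_levels, weights=None):
    """Map a bundle of ordinal levels to a score: 0 = worst case, 1 = best case."""
    ranks = [LEVELS.index(lv) for lv in feature_levels]   # 0..4
    if weights is None:
        weights = [1.0] * len(ranks)
    best = (len(LEVELS) - 1) * sum(weights)               # best achievable total
    return sum(w * r for w, r in zip(weights, ranks)) / best

def super_kpi(kpis, weights):
    """Merge several KPIs into a weighted super-KPI, still in [0, 1]."""
    return sum(w * k for w, k in zip(weights, kpis)) / sum(weights)

# Example: three features rated "+", "++", "-/+" yield a KPI of 0.75;
# two KPIs of 0.75 and 0.5, weighted 2:1, yield a super-KPI of about 0.67.
print(kpi(["+", "++", "-/+"]))          # -> 0.75
print(super_kpi([0.75, 0.5], [2, 1]))   # -> 0.666...
```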
In our university, evaluation of teachers by students is in practice. I think the teachers are happy about it. I strongly hope this kind of evaluation will improve the teaching standards.
@ G. Rathinasabapathy : "I strongly hope this kind of evaluation will improve the teaching standards."
Teaching and Learning are the two sides of a single coin, called Education, right?
You may also say: they are two roles played simultaneously by the members of a Collaborative Community called Educational Unit (e.g., a class following a course), right?
What is taught and what is learned may be different for tutors and students, but nevertheless there is a constant bi-directional flow of information between the participants, whether one is aware of it or not. For instance, students may be learning 90% about the subject matter and 10% about what it means to teach. For lecturers it may be the reverse.
Also, it is more than natural and sensible that these role-players reflect upon their mutual communication now and then: its effectiveness, efficiency and satisfaction. Whether you call this reflection, feedback, assessment, evaluation or even meta-communication is immaterial.
What is still missing in this picture? A defined process or procedure for coping with unfulfilled expectations on both sides of the coin, or communication!
Of course, if all is well, keep going; but if there are deviations from expectations, a natural reaction should be to put the primary process (here: education) on hold for a moment, to engage together in a meta-communication (here: reflection, dialogue) about possible reasons for the observed deviations, and finally to initiate and roll out some remedies.
As long as the above-defined cycle is not closed, it will remain an open loop, and nothing worth mentioning will happen, except perhaps paying lip service (an easily paid bill) or, in the worst case, an escalation of misfits and miscommunications (which may turn out to be very costly).
IMHO only natural systems provide closed loops, cultural (man-made) systems have to be designed as closed loops in order to provide the means for intrinsic quality management.
Forgetting or neglecting to close the loop is irresponsible and may lead to low satisfaction among all stakeholders and to high costs for society, material and immaterial. Here, by society I mean of course educational units and institutions at all intended levels, from a single class to the ministry of education.
We have had evaluation of professors for ages. It is organized and evaluated by the University. Our University has more than 30 Faculties and Academies. Evaluation is done at the University level, not by the Faculties. It used to be a questionnaire on paper, and the questioning was organized during the obligatory laboratories or exercises. The questioning was led by student representatives and was strictly anonymous. They have now changed to an e-version. A student is asked to fill in the questionnaire before enrolling in the next year, which actually means that the student has already passed the exams for which he gives his opinion. Most of the students do not fill it in.
What is the purpose? a) To serve as evidence during the habilitation procedure, as students give their opinion about the lecturer. b) To serve as feedback to a lecturer.
While it is very good to have feedback, there are several drawbacks with respect to our type of evaluation. The scale goes from 1 to 5, which is ambiguous: before university, marks in our country run 1 (negative), 2 (satisfactory) ... 5 (excellent), while at university they change to 1-5 (negative), 6 (satisfactory) ... 10 (excellent), and the questionnaire does not explain what each mark means. So students interpret the marks in various ways, giving 2 as satisfactory or 3 as undecided. Next, we used to get distributions, but now we get averages only. In addition, students are allowed to write comments, but lecturers do not get them (stupid); that is, it is not actually a proper feedback loop. Also, bad opinions from students usually do not count for much, and there are no serious actions with respect to lecturers with very bad marks. A nice message to students, I would say... ;-(
Students would like the results to be public, but I personally have doubts. Two experiences were very instructive. There was a new leadership of the student organization at our Faculty; they took the evaluation very "seriously", suggesting that students not be generous, and the average went down by 0.5, which was huge. Students then claimed that the institution had become very bad in one year, which contradicted everything else (not many new professors, no changes of rules, etc.). But our dean was very worried. The second experience was that another university made its results public. There were several inconsistencies, such as: a Prof who was on sabbatical for the whole year and consequently had not taught anything got bad marks, but only 5 people gave the marks (which was the threshold for accepting the mark); and several Profs worked for different Faculties, and as the marks were reported per Faculty, the same Profs got very different marks at different places.
What, then, is a measure of quality? I deeply agree with several comments above: evaluation is a tricky thing, and using the results of any evaluation is even trickier.
Nevertheless, many of us use a very simple method: a piece of paper, marked with a plus on one side and a minus on the other, with a simple explanation: please write anonymously what you have liked and what you have disliked during the lectures, seminars... Such feedback is usually helpful.
@ Mojca Čepič : "While it is very good to have a feedback, there are several drawbacks with respect to our type of evaluation."
I agree with you fully. It makes me feel very depressed and helpless, again and again, when I see or hear about such practices as the ones you depict so vividly (please, other readers of this column, add your own experiences; only through such a cumulative effect may we hope to build pressure to change anything at all).
First and foremost, it is unbelievable that even after more than 100 years of psychological measurement, people in academic positions still make such gross and elementary errors as calculating averages of ordinal numbers! How is this possible? Probably because they are using numeric symbols (called numerals) 1 ... 10 and think these can be treated like numbers! Would they also calculate the average of the beauty of two or more different paintings? Yes, they should have no worries about the latter if they do the former!
Also, people responsible for rolling out such evaluations have probably read nothing (or ignored everything) about questionnaire design, quality management and intervention practice. They are really unprofessional and irresponsible. I would say they don't really like their job; they do not have to love their job, just like it a little bit and feel an urge to do it as well as possible.
Evaluation need not be tricky at all if you know what your goal is and what you are doing. You are neither alone nor the first one to do it.
Generations of scholars and practitioners have studied and written about the best ways to conduct evaluative research in practice (inside and outside education) and to avoid all kinds of well-known pitfalls, methodological as well as organisational. For instance, forgetting to return qualitative feedback to teachers is such an organisational pitfall; giving averages of ordinal scores and not their distributions is a double methodological pitfall.
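To make the pitfall about ordinal scores concrete, here is a small illustrative sketch in Python. The labels and ratings are made up for the example, not taken from any survey mentioned in this thread; it reports a distribution and a median, which are admissible summaries for ordinal data, instead of a mean, which is not:

```python
# Small illustration (made-up ratings) of why ordinal scores call for
# distributions and medians rather than means. The labels and data
# below are assumptions for the example only.

from collections import Counter
from statistics import median_low

LABELS = {1: "negative", 2: "poor", 3: "undecided", 4: "good", 5: "excellent"}
ratings = [5, 5, 4, 2, 1, 5, 3, 1, 5, 4]  # hypothetical student ratings

# The distribution shows the polarization that a single average hides.
for level, count in sorted(Counter(ratings).items()):
    print(f"{LABELS[level]:>9}: {'#' * count}")

# A median is an admissible summary for ordinal data; a mean is not,
# because the 'distance' between adjacent levels is undefined.
print("median:", LABELS[median_low(ratings)])
```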
Of course, if one thinks that a cycle of evaluation can be set up and conducted in three days, one is asking for problems: bad results in all points and bad feelings on all sides.
Worst of all: the managers causing this waste of time and money will not be evaluated themselves on their irresponsible attitude and behavior.
Who will be evaluating the evaluators?
Dear Dr. Miranda Yeoh,
The university where I am working has a 4-tier evaluation process: 1. self-appraisal, 2. student evaluation, 3. the recruitment committee's summary, and 4. the promotion committee's recommendations and reports.
Merry Christmas to everyone!
In my institution, lecturers are evaluated on their publications, initiatives, and participation in international and local conferences, as well as on students' evaluation of them. Some of them find this evaluation appropriate, but others don't agree with it; for example, they say students' evaluation is not a good measure of their performance.
Hello folks!
There is the following system at Michigan State University. At the end of each semester, students fill out an assessment sheet about the instructors' work. Here they reflect on the effectiveness of their instructors, their readiness to help students, their enthusiasm, the level of satisfaction with the course, etc. Additionally, students evaluate the instructors' overall work by assigning them a grade (from 2.0 to 4.0). This is a qualitative assessment of faculty members that allows our departments to give informative feedback to instructors.
From my point of view, such an assessment is good for people who are genuinely interested in their teaching and in improving this aspect of their careers. However, I do not think that this method provides strong evidence for departments, because students' evaluations are in many cases biased by their desire for specific grades or treatment.
Generally, instructors seem to be okay with this type of assessment; however, sometimes I see a lot of frustration (due to this bias), especially among young faculty.
Merry Christmas to Everyone!!!
Thank you folks (Mojca, Paul, Afaq, Wajeeh and Pavel), our discussion has brought many things to light. Wajeeh, I see that the evaluation at your college is quite comprehensive, includes many areas of our work:
'evaluated for their publications, initiatives, participation in international and local conferences, as well as students' evaluation of them'
When it comes to student evaluation of us lecturers, I have some reservations. I have come across students who got confused by the simple 5-point scale (1 was confused with 5), and then nothing could be done to correct the error. Also, some students were so haphazard that they didn't realize the teacher had stated the objective for the day, and had also written it on the board; they circled '1' instead of '5'.
In fact, that type of question should be answered with a 'yes' or 'no'. What do YOU ALL think; what other areas should be highlighted to make the evaluation more appropriate? [My Christmas break is just 2 days, the 24th and 25th. Today I'm back at work!]
Scales are communication devices or languages of a particular type, and like other communication devices or languages, their use - i.e., syntax, semantics and pragmatics - has to be learned. Therefore, never assume by default that all respondents will be thoroughly familiar with the scales used in your questionnaire.
It is best practice to validate the questionnaire on an independent group of representative users who are not going to be part of the intended evaluation. Only then will you be able to capture - before it is too late - any flaws, difficulties and other problems with the questions and the scales used. Common problems are: two questions in one, one question apparently repeated, complex or ambiguous formulation, negatively worded questions, too few questions, too many questions, ...
Also, I recommend to start with a simplified pilot questionnaire filled out collectively and interactively so as to probe how well the intended use of the full questionnaire will be understood by the participants in the evaluation.
In India, the University Grants Commission has introduced a system of API (Academic Performance Indicators). It has been accepted by some universities but is still debated in others. Check the following link: http://www.ugc.ac.in/pdfnews/8377302_English.pdf
At some universities in my country, the lecturers are also asked to evaluate their classes. So we have a dual system of peer reviewing: students --> lecturers as well as lecturers --> students. That's fair, I guess. But it is not yet effective, because the evaluation loop is not systematically closed.
Paul, what effective way can you recommend to close the evaluation loop?
@ Miranda : That's policy and politics, and I may not be the best person to answer such questions. There is currently great pressure on universities and other educational institutions (also within the private sector) to prove that they have a good teaching concept and a programme based on that concept, that this concept/programme works well in practice, and that it is continuously improved based on evaluative feedback.
Evaluation is being formalized and has become (big?) business; it is now being institutionalized as certification performed by officially acclaimed accreditation institutes, guaranteeing impartiality and objectivity.
However, these social-political developments are in deep conflict with the century-old 'freedom of education' politics (at least in Western countries). Still, some universities I know of have just set up 'educational quality assessment and assurance' departments as a first, modest answer to the political pressure.
IMHO there is practically nothing we as individual teachers and lecturers can do about that discussion and development - except to communicate honestly and regularly with our students about what's going on in their courses and what they think of it. Just filling in evaluation forms doesn't change a thing; on that point we totally agree.
@Afaq Ahmad, what do you and your colleagues think of the evaluation system? If students evaluate the lecturers, and the heads of department evaluate the lecturers, who evaluates the evaluators to close the loop? Please discuss.
We have a broad-based teaching assessment system - with input from students and peers (fellow lecturers). It's based on teaching skills and teaching materials.
@ CC Ho, do you feel that the evaluation is fair to the teachers? Are the lecturers happy? Multimedia University is a private university, right?
Well, I can't speak for everyone, but I am pretty happy. It's a balanced approach and prevents your 'teaching credit' from being ruined by vengeful students :P. And yes, MMU is the first and, I dare say, the leading private university in Malaysia as far as the SETARA, MYRA and QS/THES rankings go.
@Miranda, something about Your comment: "Who evaluates the evaluators?" I like your approach with loops! :) OK, the question of evaluation is not an infinite loop. There are many feedback loops from which we draw conclusions and become better teachers, but consider evaluation in terms of a continuous, multi-year process! Do You agree?
@Ljubomir, it was Paul who helped put my thinking in place; thanks, Paul.
I agree that evaluation is a continuous process. Each year, about 5% of the staff are given an Excellent Service Award (APC in Malay). In some institutions, the same hard-working and productive people get the APC a number of times. But in some places, once we get an APC, we must continue to produce and not expect another.
Dear Miranda Yeoh,
Could you differentiate between the "Peer Evaluation Committee" and the "APC"?
Dear Afaq Ahmad,
APC is Anugerah Perkhidmatan Cemerlang, meaning Excellent Service Award. It is given based on all the Key Performance Indicators for each staff member. The Peer Evaluation Committee comprises the Director, Deputy and Heads of Departments in my college. Thanks for your question.
Yes, our institution does have a system to evaluate teacher effectiveness. The important dimensions of the feedback are Teaching, Teacher's Attitude, Subject Delivery, Discipline, etc. The feedback is taken from the students for every subject.
Dear Miranda
Actually, this is a recurrent question, and there is no agreement among teachers about which system is best. In my university, Dhofar University, Oman, we use a weighted average of students' evaluation, peer evaluation, self-evaluation, and Dean and Chair evaluation to come up with a score from 1 to 5. We expect teachers to be above the average of their department; if not, we may ask, depending on the case, for some training or staff development.
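Since the weighting itself was not spelled out above, here is a minimal sketch of how such a weighted-average score could be computed. The weights below are purely hypothetical placeholders, not Dhofar University's actual ones.

```python
# Minimal sketch of a weighted-average evaluation score on a 1-5 scale.
# The weights are hypothetical; the actual weighting was not stated.
weights = {
    "students": 0.40,
    "peer": 0.25,
    "self": 0.10,
    "dean_and_chair": 0.25,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1

def overall_score(scores: dict[str, float]) -> float:
    """Combine component scores (each on a 1-5 scale) into one score."""
    return sum(weights[k] * scores[k] for k in weights)

example = {"students": 3.8, "peer": 4.2, "self": 4.5, "dean_and_chair": 4.0}
print(f"Overall score: {overall_score(example):.2f}")  # 4.02 on the 1-5 scale
```

One design point worth noting: because the weights sum to 1, the combined score stays on the same 1-5 scale as its components, so it can be compared directly against the departmental average.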
Perhaps this is relevant & interesting for some of you following this discussion, because evaluation of teachers by students is in some way also "peer review":
Gennaro & Zaccaria 2006 The Fiction of Peer Review ~ Phenomenology of a Catastrophe (see attachment)
However, I have to admit that I didn't really understand it because its readability index is very low (Heidegger-style, for those of you who are familiar with such (putatively outmoded?) philosophy).
Nevertheless, the mere suggestion and discussion that our academic subculture(s) may be intrinsically very flawed and even somehow perverted should wake up everyone who is acquainted with it, i.e. most of us ...
Student evaluations of teaching are an inadequate assessment tool for evaluating faculty performance
Literature is examined to support the contention that student evaluations of teaching (SET) should not be used for summative evaluation of university faculty. Recommendations for alternatives to SET are provided...
If one truly wants to understand how well someone teaches, observation is necessary. In order to know what is going on in the classroom, observation is necessary. In order to determine the quality of instructors' materials, observation is necessary. Most of all, if the actual desire is to see improvement in teaching quality, then attention must be paid to the teaching itself, and not to the average of a list of student-reported numbers that bear at best a troubled and murky relationship to actual teaching performance...
https://www.tandfonline.com/doi/full/10.1080/2331186X.2017.1304016
Ljubomir Jacić wrote: "If one truly wants to understand how well someone teaches, observation is necessary." How true!
Then the next question is: "Who will be the observers?" If students are not acknowledged as potential observers and evaluators of this process (and I guess most of us will agree with that), then who will do the observation and evaluation?
Other teachers? Then we will have real peer review (not of a research paper, but nonetheless of an academic performance) --- and the provocative paper by Gennaro & Zaccaria is fully relevant and applicable!
The general unsolved dilemma behind this approach is well known in quality-assessment circles (cf. the relevant literature): who evaluates the evaluators, and who sets the criteria?
As far as I know, there is no agreed-upon answer to this question. And IMHO there will never be an objective answer to it, except by acclamation - which is ... not objective either. In other words: no matter how you do the observation and assessment, it will remain highly subjective and dependent upon the mutual goodwill of the assessors and the assessees.
For an analogy, consider children assessing their parents' performance in bringing them up. (Well, who knows, maybe this is not that bad an analogy after all...) When the assessment is done *now*, on the spot (i.e., immediately after the course has ended), then various ad-hoc, contextual criteria will inevitably creep in. "I hate Dad, he did not buy me an ice cream." Inevitably. Why should one believe that students are somehow privileged in their ability to assume a detached and "objective" stance? In later life, one might reflect: "Dad was strict with me, but now I can see it was for my own good, and I will be forever grateful for that." Maybe students should only be allowed to cast their vote after they have experienced the genuine effects of teaching/learning in their careers? If, then, a student has nothing to say about teacher X, then the effect of X's efforts is purely neutral. Just that. Neutral. No harm done, which is not that bad a result...