Many institutions run student evaluations only for new faculty members and those seeking reappointment or promotion. However, these evaluations can be a useful feedback tool for all faculty members, regardless of their years of experience.
As a faculty member in Business Administration, I think all faculty members should be evaluated by their students. But the evaluation system should be fair. On the basis of the students' evaluations, a teacher can improve his or her effectiveness.
In my university, all faculty members are indeed evaluated by students. Moreover, in our country, for the sake of quality and transparency, all faculty members at all prestigious universities are evaluated, whether they are young faculty or full professors.
Learning is, after all, a lifelong matter, and it has no stopping point.
In Brazil, this kind of evaluation is provided for in the National Evaluation System of Higher Education (Sistema Nacional de Avaliação da Educação Superior - Sinaes) at all stages of the evaluation process, including institutional self-evaluation and an external evaluation conducted by the Ministry of Education.
However, for purposes of career growth, dismissals, or other management decisions, student evaluations are more common in private institutions. In public universities, students' input into professor evaluations carries less weight.
In general, the use of evaluations for feedback and professional guidance is still limited.
Like the other posters, I agree that all instructional faculty should be evaluated by students (anonymously and in a standardized manner across the institution) at the end of each semester. This is very important both for evaluating a faculty member's teaching performance and for faculty members to evaluate themselves.
I would caution against the increasing over-dependence on a single number to evaluate teaching, though. In many cases in the US, students are asked to rate their teacher's performance on a scale of 1 to 5, for example. These numbers are averaged and compared. There are often very small differences among faculty members, yet these differences are used to decide merit pay raises. This seems like over-dependence on a number that doesn't reflect the majority of student opinion (whereas the median of the distribution could). The written comments on student evaluations, in my experience, are often more telling than the raw numbers themselves.
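To make the mean-versus-median point concrete, here is a minimal sketch with made-up ratings (not real evaluation data): two classes whose typical student rates the instructor identically, yet whose averages differ enough to matter in a merit comparison.

```python
from statistics import mean, median

# Hypothetical 1-5 ratings from two classes (invented numbers for illustration).
# Most students in both classes give 4s and 5s; class B also has two very low ratings.
class_a = [5, 5, 5, 5, 4, 4, 4, 4, 4, 4]
class_b = [5, 5, 5, 5, 4, 4, 4, 4, 1, 1]

print(mean(class_a), median(class_a))  # 4.4  4.0
print(mean(class_b), median(class_b))  # 3.8  4.0
```

The two low ratings pull the mean down by 0.6 points while the median (the typical student's view) stays the same, which is exactly the kind of small gap that can get over-interpreted in pay decisions.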
As for Ian's question about student-facing admin and computer lab staff, I would say it depends on what the goals of those positions are. If they are serving students in an important way, then students should be allowed to evaluate the performance of those staff. If I were in one of those positions and helping students every day, I would certainly want to know, from the students themselves, how I was doing. The students probably know better than my boss.
I agree with most of Balazs' concerns. There are frequently correlations between students' evaluation scores and their own grades: students doing poorly in the class rate the instructor poorly. This may be because of student effort, instructor performance, or both. This is why I advocate looking at median scores, as these correlations then tend to weaken and you get a better aggregate view of all the student views in the class. But you also lose variance across instructors. The written comments are much more revealing in explaining WHY a student has evaluated a faculty member in a particular way, so I advocate that those are what the instructor and the administrators really pay attention to. Consistent patterns in written evaluations can be very informative.
At some schools students are required to turn in evaluations in order to get their grades. At my school it is voluntary, but we get an 80+% return rate from students, and these comments are usually very useful to me as a teacher. If you end up with a very low response rate, I wouldn't weight those comments much, as the reports are not necessarily reflective of all the students in the course. In general, schools get higher return rates from online evaluation systems where students turn these in at the end of the semester but before final exams.
Most universities run student evaluations for new faculty members and those seeking reappointment or promotion. I believe student evaluations should apply to all faculty members, both full-time and part-time.
Student evaluation of teachers is a system already in place at many institutions. It is a necessary requirement, so it is there. But evaluation by students is NOT the only evaluation done by the employing institution; it is just one component. Teachers' contributions to research, consultancy, and institutional administration are the other important components.
Points like the care to be taken and the methods to be adopted have already been well discussed and presented above.
At my university (not King's, but a university in the Netherlands) we have a "student round table" where 2 representatives of each class discuss their concerns with a staff member. This becomes the basis for a rating of teacher performance. Students are also asked to fill out a survey regarding their overall school experience. The survey is part of a national survey in the Netherlands, and is used to rate program performance at all higher education institutions in the country.
Based on the survey, our teachers and the program itself have been the most popular in the Netherlands, or close to it, ever since the program was founded in 2006. According to my former supervisor (who reads Dutch better than I do), we have ranked number one out of 1,300+ programs in a couple of years, and tend to be in the top 5 in the others. However, the student round tables paint a different picture. Round table feedback tends to be critical of teachers, or there is no feedback at all. Overall teacher ratings are good (above 3 on a 5-point scale, or above 7 on a 10-point scale), but the comments distributed to teachers are almost exclusively critical of performance.
As the founding manager of my department, I saw this discrepancy and performed some exploratory surveys with students on my own to better understand what was going on. The answer appeared to be that the purpose of the national survey was to answer the question 'How much do you like your school?' (a lot), but the round table sessions were asking 'What can we do to improve the program?' This latter question inspired a more critical assessment of teachers and the school than the national survey. This is fine, but to use it as the basis for performance reviews makes little sense because the questions are less about performance than about problems encountered.
Another problem with the round tables came from the student representatives. Depending on who was chosen to represent student interests, the feedback could be radically different, shaped by each representative's personal interpretation of what they had heard from other students. Some students had serious problems that they attributed to incorrect information from teachers, when it turned out the information in question had been conveyed by other students who didn't know the correct answer. Teachers were thus blamed for mistakes made by students who thought they knew things like project deadlines and parameters.
We had one teacher who taught a notoriously difficult class. To make it worse, students didn't understand the value of the class because they didn't see the tangible benefits of the knowledge in industry; he was teaching cutting-edge technology that hadn't yet made it into the mainstream. His class evaluations were always very poor, but as his manager I had confidence in the class and let it continue as it was. Then we had our first graduate who had gone through the class. This person had multiple industry job offers before he graduated (using the skills learned in that one class) and took a very highly paid position in the core technology group at the world's largest company in that industry. After that happened, feedback for this notoriously difficult class became much more positive. The class didn't change, but the students' understanding of how it fit in with their goals did. Moreover, this isn't something that could simply be explained by the teacher, as he had always taken pains to do anyway; the students needed to see another student graduate and do well with the training. Indeed, since that first graduate, every graduate who specializes in the same technique has been hired on the day of graduation or shortly thereafter at prestigious companies. The class has not changed much, but students' impression of its utility has changed the way they evaluate it.
Based on these and other experiences, I am wary of student evaluations that become part of teacher performance reviews. I do think these evaluations can be useful and are probably one of the best sources of information on teacher performance, but student comments cannot be accepted as-is. At the least, I think students should be asked to explain their comments, to see whether they understand what they've written and appreciate how those comments will be used. When I told one student that her comments go directly into performance evaluations for teachers, she was horrified. She had the impression that students were supposed to look for any tiny thing that could be interpreted as fixable and then complain about it critically to get it taken care of. That said, she had a very positive impression of her teachers and didn't realize that her comments would be interpreted as criticism instead of as helpful suggestions.
Having said all this, my university has made it a policy to violate our collective bargaining agreement regarding performance evaluations for the purpose of saving money on automatic pay rises for good performance. For that reason, all the managers I've discussed this with are instructed that they are not allowed to rate any teacher as "excellent" in more than one category of performance (to avoid the possibility of an "excellent" overall score), unless that teacher has brought in so much research money that it would be obvious what they were doing by denying the union-contract requirement for a pay rise. What this means is that at my university we get top marks from students in the national survey, and then negative marks are emphasized as an excuse to avoid pay rises mandated by union contracts. This makes the whole performance review pointless as a means of honest appraisal. Its only purpose at this point is to save money for the school, so some teachers simply ignore the entire review as completely meaningless. The point of mentioning this is that, if the performance review process is not perceived as credible (or honest), the teachers being evaluated may well ignore the review entirely. Either way, it is important that the results of an evaluative process are accurate and make sense to the person being reviewed. If not, it can be a waste of time and effort.