I am completing some research into assessment in Physical Education and wondered if anyone had done anything similar on assessing practical activities, and on the considerations that need to be made to ensure its validity and reliability.
Assessment validity and reliability both suffer when the focus is a subjective measure. "Practical performance" needs to be broken down into its relevant components, each with its own measure, and then multiple raters should probably evaluate them and compare their scores.
Daisy Christodoulou argues for comparing students' performances against such a standard and against one another. In this way you can consistently check where they fall relative to one another, so that subjectivity is reduced to some degree.
I am not a PE expert and would defer to those who are, but for assessment purposes I would say that if you have a specific "practical performance task" you should break it down into its essential parts and build a rubric that distinguishes between levels of competence in that performance. Be careful not to let unrelated aspects influence the assessment unless they are part of your goal. For example, a student presentation with a highly engaging visual display and an interactive speaker often outscores one with a bland visual display but an orderly, coherent structure, even when the content of the second is far stronger than that of the first. Style can skew such assessments unless style is itself part of the learning goals.
Anyway, there are some thoughts for you to consider. Good luck!
As an assessment specialist, I would recommend developing formative assessment techniques such as checklists, participation guides, conceptual scales and numerical scales for the formative evaluation of practical work, and rubrics for evaluating performance. In Physical Education you should assess executions or practical activities, and it would be ideal to apply rubrics, which allow you to evaluate students' individual and group performance.
I am writing to tell you that while researching for my doctoral dissertation I came across two articles on your topic. I think they might be worth reading.
Teachers encounter difficulties such as crowded classrooms, insufficient time for assessment, an inadequate learning environment and limited technological resources, and as a result they often do not evaluate objectively.
Building on what Professor Rivera notes above, I can corroborate the use of measurement instruments adapted to qualitative assessment. In my case, I would add to those of the colleague cited student self-assessment, and peer assessment (co-evaluation) when the assessment concerns group work. In the course I teach, the groups evaluate both the other groups (hetero-evaluation) and their own group, and it is extremely illustrative to see the degree of commitment built up within the group itself.
Thank you all for your contributions. For me the key difficulty is what has been mentioned a few times: the subjectivity of practical performance. Would breaking such a performance down into more objectively assessed chunks be an easier way, or could this overcomplicate the process?
In my experience, the high number of students to be assessed limits the time teachers need to do a good job of grading with experiences as close as possible to real life. It is also difficult to get some students to treat these as serious assessments; many of them think tests are "the" way to demonstrate their competences.
1) Develop a written rubric that tells you what components to look for, and that assigns each component a weight in creating an overall score (a sketch of this weighting follows these two points). Test the rubric to see if it gives you intuitively reasonable results, and then modify it until it does. Also, if possible, try video recording performances, and then use the rubric to re-score each one to check that you are consistent.
2) The only results/performances that should be included in the evaluation are ones that are part of the regular operation of the educational program--not ones that are tacked on just for the purpose of evaluation. Students often have little motivation to do their best if they are asked to do "extra" activities just so you can gather data. In other words, the assessment activities should be embedded in the regular course, not added on.
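To make the weighted-rubric idea in point 1 concrete, here is a minimal sketch of how component marks could be combined into an overall score. The component names, weights and mark scales are hypothetical placeholders, not part of any prescribed scheme.

```python
# Minimal sketch of a weighted rubric: each component is marked on its own
# scale, normalised, then combined using the agreed weights.
# Component names, weights and maximum marks below are illustrative only.

RUBRIC = {
    # component: (weight, maximum marks on that component's scale)
    "technique":       (0.4, 5),
    "decision_making": (0.3, 5),
    "teamwork":        (0.3, 5),
}

def overall_score(marks: dict) -> float:
    """Combine per-component marks into a single score out of 100."""
    total = 0.0
    for component, (weight, max_marks) in RUBRIC.items():
        total += weight * (marks[component] / max_marks)
    return round(total * 100, 1)

# Example: one student's marks from a single assessor
print(overall_score({"technique": 4, "decision_making": 3, "teamwork": 5}))  # 80.0
```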
In order to increase the reliability and validity of the scores you assign to your students, you need to ensure that you are measuring the competencies that are outlined in your syllabus objectives, and that you are guided by the weighting given to each module. The weighting is indicated by the number of instructional hours assigned to each module (or unit). So, if Module A is assigned 15% of your instructional time, then 15% of your students' total score should be for the objectives that are outlined in Module A.
You need to ensure also that, if you were to judge a student's performance more than once, the student would receive the same score; and that if someone else were asked to give a second opinion on the quality of the student's performance, their score would be much the same as yours.
A good analytic rubric will be able to do this quite well. On this rubric, you should have the competencies that are being judged, and each competency broken down into its constituent parts.
For example, if your syllabus objective says that your student should be able to run 100 metres in 10 – 12 seconds while carrying 10 – 12 lbs of weights, your rubric could look like this:
Run 100 metres while carrying weights (Total 6 marks)
Time for the run:
12 seconds – 1 mark
11 seconds – 2 marks
10 seconds – 3 marks
Weight carried (10 – 12 lbs):
10 lbs – 1 mark
11 lbs – 2 marks
12 lbs – 3 marks
In this way, you are ensuring that a) your focus is on the skills that are important to the subject; b) your scoring is objective; c) each person who uses the rubric to judge the performance would have a clear understanding of what it is that should be assessed. This type of rubric also helps the students as they would also have a clear understanding of what good performance looks like (in this case how to earn all 6 marks) and so they will be guided in their preparation for assessment.
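As a minimal sketch (assuming whole-second times, whole-pound weights, and the mark bands in the rubric above), the same rubric could be turned into an explicit scoring rule, which makes it easy to check that two assessors apply it identically:

```python
def run_marks(time_seconds: float, weight_lbs: float) -> int:
    """Score the 100 m run rubric above: up to 3 marks for time, 3 for weight carried."""
    # Time component: faster runs earn more marks (12 s = 1, 11 s = 2, 10 s = 3)
    if time_seconds <= 10:
        time_marks = 3
    elif time_seconds <= 11:
        time_marks = 2
    elif time_seconds <= 12:
        time_marks = 1
    else:
        time_marks = 0

    # Weight component: heavier loads earn more marks (10 lbs = 1, 11 lbs = 2, 12 lbs = 3)
    if weight_lbs >= 12:
        weight_marks = 3
    elif weight_lbs >= 11:
        weight_marks = 2
    elif weight_lbs >= 10:
        weight_marks = 1
    else:
        weight_marks = 0

    return time_marks + weight_marks  # out of 6

print(run_marks(10, 12))  # 6 marks
```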
For some activities, a checklist may be appropriate.
Readings on how to achieve the following should provide further clarity: a) content validity evidence; b) construct validity evidence; c) inter-rater reliability; d) intra-rater reliability.
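On inter-rater reliability in particular, a quick sanity check is to have two teachers score the same set of performances with the rubric and compare their agreement, for example with Cohen's kappa. A minimal sketch follows; the rubric levels and scores are made up purely for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical scores to the same performances."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of performances given the same level by both raters
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal distribution of levels
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical rubric levels (1-4) given by two teachers to the same ten students
teacher_1 = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
teacher_2 = [3, 2, 4, 2, 1, 2, 3, 4, 3, 3]
print(round(cohens_kappa(teacher_1, teacher_2), 2))  # 0.71
```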
However, the PE curriculum in the UK is rather open: students must study skills and activities to enable them to achieve specific outcomes, but these are not as clearly prescribed as your example. For example, one outcome is 'pupils should be taught to develop their technique and improve performance in (other) competitive sports', which leaves room for a significant amount of interpretation from teacher to teacher.
Teachers have recently been allowed to design their own assessment programmes to determine students' achievement, which allows for significant differences from school to school during KS3 (up to age 14).
This is standardised during GCSE examinations using a protocol similar to the one you suggested, but due to the time, cost and perceived unreliability of this method, the practical content of the subject has been reduced this year, which is problematic for me, as I see it as a move away from what Physical Education is really about.
What I would like to look at is whether there could be a way to measure and standardise performance in games and activities in the way you have described for the running task above.
How would you break such an activity down into parts? Is there some sort of rubric that could be used to score a student playing football, and similar ones for dance, gymnastics, swimming and all the other activities likely to be taught in PE lessons?
If the syllabus is open to much interpretation, my suggestion would be for you and the colleagues who teach the course to plan your lessons together. The lessons should be aligned to the skills stated in the syllabus.
For students to develop their technique and improve performance in (other) competitive sports, you, the subject experts, would know exactly what they need to learn in order to develop their technique, and you would create learning objectives that, once mastered, would ensure that students' techniques are developed and their performance in competitive sports is improved.
It is these things that you teach them that you would be assessing, and therefore they are what you would place on the rubric. If all the teachers in the school interpret the syllabus in the same way and use common objectives, this will help you standardise the assessment.
Here is a rubric I found for the backstroke in swimming: