Exams with open-ended questions are very useful for understanding students' personal feelings, opinions, and ideas, but they are hard to evaluate. Which methods do you use to evaluate open-ended questions in exams?
We have recently published research on the automated assessment (NLP and neural networks) of open-ended questions in K-12 biology, based on analytic rubrics. The study deals with Hebrew, a morphologically rich language (MRL), as Turkish is as well. You may find it interesting.
We are also looking to collaborate with researchers interested in automated assessment in additional MRLs, and you are more than welcome to write to us if you are.
Article Machine Learning and Hebrew NLP for Automated Assessment of ...
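For readers curious what such a pipeline can look like in practice, here is a minimal, generic sketch of rubric-based automated scoring with a bag-of-words classifier. It is not the method from the article above; the training examples, the rubric category, and the scikit-learn components are my own illustrative assumptions.

```python
# Generic sketch of analytic-rubric scoring: train one binary classifier per
# rubric category (here, a single hypothetical category: "explains the causal
# mechanism"). Illustrative only; not the pipeline from the cited study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled answers: 1 = category satisfied, 0 = not satisfied.
answers = [
    "The plant wilts because water leaves its cells by osmosis.",
    "The plant just looks tired and sad.",
    "Osmosis moves water out of the cells, so the plant wilts.",
    "Plants wilt when they are unhappy.",
]
labels = [1, 0, 1, 0]

# TF-IDF features over word unigrams/bigrams, scored by logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(answers, labels)

# Predict whether a new answer satisfies this rubric category.
print(model.predict(["Water leaves the cells by osmosis and the plant wilts."]))
```

In a full analytic-rubric setting one would repeat this per rubric category and sum the categories into a score; real systems for morphologically rich languages would also need language-specific preprocessing, which is omitted here.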
It is important to determine what kind of explanations are expected from the students: arguments? definitions? causal mechanisms? Based on what is expected, I build a detailed rubric that specifies what should be evaluated and how.
I second Moriah Ariely's answer here. It really depends on the field. While I was teaching English, I tended to take that route myself: the rubric was relatively simple and focused on the language features taught throughout the course. More information-centric fields can still be assessed this way, as long as you design a rubric that is comprehensive and fair. I have also found, with some subjects and professors, that the answer itself isn't always as important as how the student arrived at it, for example by demonstrating their thought process and understanding of procedure.
In my biology classroom, I would give my students an image of a wildlife scene and ask them to explain what they saw, and creative answers were encouraged. I knew that some of the most creative and fantastical answers would likely be farthest from the truth, but creativity was part of my goal: I wanted them to broaden their minds to what could be. As for evaluation, I assessed the degree to which they could "Observe, Infer, Hypothesize, and Propose an Experiment." These are the elements of scientific investigation and the skills they will need. I am not worried about them attaching this process to fantastical ideas that aren't close to the truth, because there is a natural regression towards the mean, and it is creativity and wonder that lead to the greatest discoveries, even if they aren't initially grounded in truth. In this way, I give my students a means to connect emotionally to the questions I pose, while still having salient skills that I can test and give feedback on.
It can be helpful to train tutors, i.e. student assistants who are told which answers are correct and, for example, which words must be included. If this resource exists, I would delegate the task productively. But that is only one possibility.
Another possibility is to ask the questions as precisely as possible, and in such a way that an answer cannot be longer than, say, 4-5 lines. If the question is asked that concretely, students are "forced" to express themselves briefly, and they know that there are essential words/terms that must be mentioned to earn points.
These words/terms then serve as indicators for scoring. In addition, a scoring scheme that is transparent to the examinees gives them an idea of how many terms, or how much content, they should write in total.
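As a rough illustration of this idea (not a prescribed tool), one could count how many of the expected key terms appear in a short answer and cap the result at the question's maximum. The function name, the key terms, and the point values below are hypothetical.

```python
# Minimal sketch: one point per expected key term found in the answer,
# capped at the maximum points for the question. Illustrative assumptions only.
def score_answer(answer: str, key_terms: list[str], max_points: int) -> int:
    text = answer.lower()
    hits = sum(1 for term in key_terms if term.lower() in text)
    return min(hits, max_points)

# Hypothetical 4-point question with four expected terms.
print(score_answer(
    "Enzymes lower activation energy and act as biological catalysts.",
    ["enzyme", "activation energy", "catalyst", "substrate"],
    max_points=4,
))  # -> 3
```

A transparent version of this scheme would also share the term list (or at least the number of expected terms) with the examinees, as suggested above.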
Another possibility is classic gap-fill (cloze) texts. These can be marked very quickly, but they only test memorized knowledge.
I have had good experience with evaluation based on precisely defined qualitative criteria (pieces of information included in students' answers); we could also speak of a kind of 'codes' known from qualitative research methods.
Yes, exams with open-ended questions are good for gauging students' level of knowledge, but it is very hard to evaluate the exam scripts consistently, because there is no single established method for evaluating open-ended questions. In my view, the following steps can be employed:
· Each open-ended question should be divided into small parts.
· The breakdown of marks for each part should be stated next to the question (see the sketch after this list).
· Students who answer specifically and to the point should be given good marks.
· Higher marks should go to students who answer in their own words and use practical examples of their own, rather than to those who simply answer from memory.
· Answers made unnecessarily long with irrelevant points should receive poor marks.
· Higher marks should be given to answers that match the learning outcomes of the content.
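To make the per-part breakdown of marks concrete, here is a small illustrative sketch; the question parts and mark values are invented for the example.

```python
# Hypothetical breakdown of marks for one question, stated part by part.
question_breakdown = {
    "Q1a: define photosynthesis": 2,
    "Q1b: name the reactants and products": 3,
    "Q1c: explain the role of light": 5,
}

# Marks awarded to one (hypothetical) student per part.
awarded = {
    "Q1a: define photosynthesis": 2,
    "Q1b: name the reactants and products": 2,
    "Q1c: explain the role of light": 4,
}

total = sum(awarded.values())
maximum = sum(question_breakdown.values())
print(f"{total}/{maximum}")  # -> 8/10
```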
Yes, I think open-ended questions are a good method, because students can express their knowledge of and thinking about a topic, and we can also gauge their level of knowledge of it.
After identifying the nature of the questions, it is necessary to develop corresponding evaluation rubrics, creating a win-win situation for both students and instructors.
It is preferable to analyze students' answers and record them in an evaluation form following a rubric model with ratings from 1 to 5, or to use peer evaluation according to a model set by the teacher.
As the answer to an open-ended question reflects the respondent's knowledge and feelings in an elaborate manner, evaluation is quite challenging, and the perception of an answer may differ from evaluator to evaluator in some cases. So a good deal of research is needed to come up with a foolproof methodology.
Answers to open-ended questions should be in-depth and specific, but still concise. Avoid lengthy answers, as this increases the risk of going off topic, and the interviewer may also lose attention if your answers are too long.
Thank you for your question. Open-ended questions yield more outcomes relevant to the topic, but managing and evaluating all of them properly, and organizing them scientifically, requires appropriate tools and techniques. So the answers should be categorised well in advance.
I establish elimination criteria and qualification criteria. The former are going off topic, lack of originality (plagiarism), and text length. The quality criteria vary with the respondents, but generally involve the relevance, form, and depth of the text.