https://github.com/mbabisado/DataSets

Academic affect dataset collected during an online examination.

The participants of the study comprised 75 students from a higher education institution in Manila, Philippines, all of whom agreed to take a video-recorded examination. They were enrolled in Intermediate Programming and Data Structures and Algorithms courses.

The authors pre-processed the collected video recordings. Coders then annotated the recordings, and inter-rater agreement yielded a kappa of 62.30%. Indicators for the academic affects (frustrated, relaxed, bored, curious, and distracted) were formulated during the annotation process.

Classification and Regression Trees (CART) were used to train the model. The model yielded an accuracy of 92.66%, a classification error of 7.34%, a kappa value of 0.898, a weighted mean recall of 87.28%, a weighted mean precision of 89.15%, and an absolute error of 0.101 +/- 0.206. These values indicate acceptable performance of the algorithm on the academic affect dataset. The five academic affects (Relaxed, Curious, Bored, Frustrated, and Distracted) were modeled based on the video recordings of the participants. Facial expression and head pose are important features in identifying a learner's academic affect during an online examination.

This study utilized the open-source behavioral analysis toolkit OpenFace 2.0 for feature extraction from the collected video exemplars. The toolkit provides the values needed to model the output of this study: for each processed video, it produces a .csv file containing the head pose parameters (roll, pitch, and yaw) and the facial expression Action Units. A parser is needed to assemble the dataset from all valid OpenFace 2.0 processed videos; the affect annotations are then mapped to the features extracted from OpenFace 2.0. The resulting dataset contains 720 features as headers and 3,665 rows.
Of these values, only the facial landmarks, Facial Action Units, and head pose parameters are used for modeling and training.
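The pipeline above (select head pose and Action Unit columns from OpenFace 2.0 output, attach affect labels, train a CART model) can be sketched as follows. This is a minimal illustration, not the study's actual code: the column names (`pose_Rx`, `AU01_r`, ...) follow the OpenFace 2.0 CSV naming convention, while the data and labels here are synthetic placeholders standing in for the parsed recordings and the coders' annotations.

```python
# Sketch: map OpenFace-2.0-style output to a labeled feature table and
# train a CART classifier. Data and labels are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for one parsed OpenFace 2.0 CSV: head pose parameters and a
# couple of Action Unit intensity columns.
n = 200
frame = pd.DataFrame({
    "success": np.ones(n, dtype=int),  # OpenFace per-frame tracking flag
    "pose_Rx": rng.normal(size=n),     # pitch (radians)
    "pose_Ry": rng.normal(size=n),     # yaw
    "pose_Rz": rng.normal(size=n),     # roll
    "AU01_r": rng.uniform(0, 5, n),    # AU intensity (inner brow raiser)
    "AU04_r": rng.uniform(0, 5, n),    # AU intensity (brow lowerer)
})

# Keep only frames OpenFace marked as successfully tracked.
frame = frame[frame["success"] == 1]

# Select the modeling features: head pose and AU intensity columns.
feature_cols = [c for c in frame.columns
                if c.startswith("pose_R") or c.endswith("_r")]
X = frame[feature_cols]

# Synthetic affect annotation mapped onto the extracted rows (in the
# study this comes from the coders' labels).
labels = ["Relaxed", "Curious", "Bored", "Frustrated", "Distracted"]
y = rng.choice(labels, size=len(frame))

# CART corresponds to scikit-learn's DecisionTreeClassifier with the
# default Gini impurity criterion.
model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.score(X, y))
```

In practice the parser would loop over every valid processed video, concatenate the per-video frames, and join the annotation file on video/frame identifiers before training.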
