My answer to Roza is that you should construct your own instruments to measure what you want to see. For example, if you want to know whether people were satisfied with the teacher, then ask them that; if you want to know whether they were satisfied with the classroom environment, the LMS, or any other technology used, then ask them that. My point is: create your own questionnaires asking people exactly what you want to know, even if that means asking them about the snacks you served during the break, if you consider that an important thing to measure.
The important thing here is that you clarify what satisfaction means to you, and then write questions that capture information that will be useful to you and that will help you make decisions later on.
This is why I must agree with Brian on the point that if what you want to measure is learning, then satisfaction might not be the best way to do it. Finally, a recommendation for Senthinvel: given the way your instrument is constructed, the demographic/personal data should be placed at the end of the questionnaire, to avoid biasing respondents' answers, which can happen when they assume a position from which to answer your questions. It would be very interesting to compare the results of measuring it both ways.
All the best to you. The validated questionnaire to measure student satisfaction in a blended learning environment is in the attachment; kindly see it.
Participants completed a Student Satisfaction Survey Form (SSSF), which had three sections. The first section collected demographic/personal data, while the second consisted of 35 items on a 5-point Likert scale, ranging from ‘1-strongly disagree’ to ‘5-strongly agree’ for positive items and from ‘1-strongly agree’ to ‘5-strongly disagree’ for negative items. The items were based on the outcome of the literature review, addressing elements integral to student satisfaction in blended learning environments. The 35 items addressed the following student satisfaction elements: 1) instructor, 2) technology, 3) class management, 4) interaction, and 5) instruction (see Table 1). The third section included two open-ended questions.
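If you reuse an instrument like this, remember that the negatively worded items have to be reverse-coded before you compute totals or subscale means. A minimal sketch in Python with pandas; the file name, column names, and the list of negative items are only placeholders for illustration, not the actual SSSF keys:

```python
import pandas as pd

# Hypothetical responses: 35 Likert items coded 1-5, columns q1..q35.
responses = pd.read_csv("sssf_responses.csv")

# Items worded negatively (illustrative keys, not the real SSSF item numbers).
negative_items = ["q7", "q12", "q21", "q30"]

# Reverse-code so that higher always means more satisfaction (1<->5, 2<->4, 3 stays 3).
responses[negative_items] = 6 - responses[negative_items]

# Subscale mean, e.g. for the 'instructor' element (again, illustrative item keys).
instructor_items = ["q1", "q2", "q3", "q4", "q5"]
responses["instructor_satisfaction"] = responses[instructor_items].mean(axis=1)
print(responses["instructor_satisfaction"].describe())
```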
If you so wish, you can use a general student satisfaction questionnaire and adapt the individual items to address satisfaction with blended learning. After administering the questionnaire, you can perform a factor analysis to establish which factors explain satisfaction with blended learning, and you can calculate the reliability coefficient of each salient factor. This may be easier than using complicated questionnaires compiled for specific purposes that are not exactly congruent with what you wish to examine.
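If you go that route, the factor analysis and the per-factor reliability can be done in a few lines. A rough sketch, assuming Python with the factor_analyzer and pingouin packages and placeholder item columns; in practice you would check sampling adequacy and choose the number of factors from a scree plot or parallel analysis rather than fixing it in advance:

```python
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer

# Hypothetical data frame of reverse-coded Likert responses, one column per item.
items = pd.read_csv("satisfaction_items.csv")

# Exploratory factor analysis; 5 factors is only an illustrative choice here.
fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)

# For each factor, keep items loading above a cut-off and compute Cronbach's alpha.
for factor in loadings.columns:
    salient = loadings.index[loadings[factor].abs() >= 0.40]
    if len(salient) >= 2:
        alpha, _ci = pg.cronbach_alpha(data=items[salient])
        print(f"Factor {factor}: {len(salient)} items, alpha = {alpha:.2f}")
```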
Why satisfaction? Good teaching makes students uncomfortable because it pushes them to their next level. Those who like novelty will enjoy it; others will complain because it demands more cognitive effort.
Brian: an intellectual challenge is not necessarily uncomfortable or frustrating. Actually, a motivating challenge is better. All the sciences rely on solving challenges, not on the degree of comfort of the challenge, nor on the discomfort of the solution!
The sciences and professional practice offer plenty of examples of motivating challenges; is it not already difficult enough to attain the competence needed for professional practice? The degree of discomfort is only part of the way. Students and teachers usually prefer to write on paper today...
As a joke: please try writing on stone without a brush or pen, just carving, to get enough discomfort to achieve the next level ;)
Alejandra, I enjoyed your response, thanks. My main point was: why satisfaction? Student acceptance would interest me more. The classic technology acceptance model includes ease of use and usefulness. How often students use a service could be one way to measure usefulness.
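If you take the acceptance route, usage frequency can often be pulled straight from LMS activity logs. A hedged sketch, assuming a hypothetical export with one row per student event and columns named student_id and timestamp:

```python
import pandas as pd

# Hypothetical LMS activity export: one row per event, columns 'student_id' and 'timestamp'.
logins = pd.read_csv("lms_activity.csv", parse_dates=["timestamp"])

# Total events and number of active weeks per student as simple proxies for usefulness.
logins["week"] = logins["timestamp"].dt.isocalendar().week
usage = logins.groupby("student_id").agg(
    total_events=("timestamp", "count"),
    active_weeks=("week", "nunique"),
)
print(usage.describe())
```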
Personally, I think strong emotions, either positive or negative, are needed for a deeper level of learning. There is a sweet spot: if a task or lesson is too frustrating, it is not useful; if it is too easy, there is no real learning.
My English is not as good as yours, but I think I see what you mean: a challenge that urges a solution, a question posed in such a way that it cannot be abandoned halfway. Look at this question: around 23,000 people die of hunger every day, and the crop surface available to agriculture has already been fully exploited. It is a world problem. I know you are the one able to cope with the solution. :)
I have a lot of unsolved questions. I know you are the one who knows; it would be useful to solve this one first.
I have used it several times when measuring e-learning.
Lu, C., Zhou, J., Shen, L., & Shen, R. (2008). Techniques for Enhancing Pervasive Learning in Standard Natural Classroom. In Proceedings of ICHL 2008, pp. 202-212.
The “Emotional State Set” of Lu et al. (2008) will enable you to make a start. Build upon the keywords and use other researchers' work to validate, as this will ensure the research community knows what methods and tools you are using. Hope that helps, Rob.
Take a look at the massive work done at the University of Central Florida (UCF) at the RITE center (Research Initiative for Teaching Effectiveness) by Chuck Dziuban and Patsy Moskal. Since the late 1990s they have been using (almost) the same questionnaire, with 16 Likert scales, on nearly all courses across the whole university, and they also settled early on very insightful definitions of levels of ICT use in courses (web-enhanced, blended, online, etc.). They now have a huge database (over 1.5 million surveys) with data from most students in most courses spanning almost 20 years, with background data connected, such as students' evaluations and results in other courses. This makes it possible to study the collected data in relation to new questions as they come up, and it is also today a rich resource for developing learning analytics and adaptive learning. UCF is a huge university but also a "college of tomorrow" (that's what the Chronicle of Higher Education once called them).
Here you can read about some results of their research: https://online.ucf.edu/research/dl-impact-evaluation/ and in this paper you will find the questionnaire they use: http://www.fgcu.edu/FacultySenate/files/2-8-2013_Charles_Dziuban_-_Student_Satisfaction.pdf (see Appendix A for the questionnaire). More information on their questionnaire is in the paper "A course is a course is a course: factor invariance...": http://www.sciencedirect.com/science/article/pii/S1096751611000388
Where other researchers sometimes take data from one class last year and another class in the same course this year, with an ICT intervention in between, and come to conclusions of the kind that students perhaps learn better now... RITE builds its conclusions on a much bigger statistical material and gets very interesting results. Take a look. This is the most serious and massive evaluation approach to blended learning I know of.
I also have the honour of having co-authored, together with Dziuban and Moskal, a paper with a diverging perspective on blended learning; see "A time based blended learning model" if interested.
Article A time based blended learning model
Article A course is a course is a course: Factor invariance in stude...
Article Student satisfaction with online learning in the presence of...