This is a teaching technique based on group dynamics. It ensures student engagement, supports peer learning, and minimizes the direct teaching component. It is very useful in situations with a low teacher-student ratio.
To test the effectiveness of any intervention, one research design that you may apply is experimental. There are many types of experimental design; find the one that best suits your research context.
If it's the design you're looking for, I defer to Dr. Abdullah's response in that there are many out there and they depend on context. However, if you're looking for research-based programming, I point you toward the Classroom Assessment Scoring System (CLASS), used in observations of preschool teaching, and also the Sanford Harmony programming (http://sanfordeducationprograms.org/), which likewise has a focus on early childhood programming in terms of the interventions you'd like to implement.
One of the best ways to inspire students is to determine what motivates or interests them; then one can work on that. Also, when you make teaching real, students will be motivated.
Thanks for the previous answers and the opportunity to comment.
In the case of comparing pedagogical techniques, your final question is the dependent variable, i.e. which type of instruction gives me the best outcome. The question is how is your outcome measured?
Looking at your idea I note there might be several outcomes:
1) After being taught in some fashion, how does the student score in some form of assessment,
2) how many hours are consumed by the teacher in outcome a versus outcome b,
3) a mix of the two (how much output per teacher-hour results from some form of instruction).
Your research design will be based on the outcomes you wish to measure, as well as your ability to move students from one pedagogical group to another. Sometimes you are unable to control who is in what group, so there will be confounding variables (e.g. all the kids good at peer teaching and learning are in one class, all the introverts are in another, confounding your outcome measure).
As you can see, this is a bit complex; if you'd like to exchange ideas via email, feel free: [email protected].
Also, Popham's book on educational measurement is a nice introduction to this, as is McDaniel's Understanding Educational Measurement.
All the answers are useful and relevant to the question. I would just add a small point: doing an experiment requires statistical analyses, and a small number of students (let's say fewer than 30) may not be suitable for such analyses.
Patcharee is spot on with her answer: non-parametric analyses provide a solution for a small sample or a non-normally distributed sample. Thanks for bringing that up!
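To make the non-parametric point concrete, here is a minimal sketch in Python with entirely hypothetical post-test scores for two small groups, using SciPy's Mann-Whitney U test (a non-parametric alternative to the independent-samples t-test):

```python
from scipy.stats import mannwhitneyu

# Hypothetical post-test scores for two small groups (n = 8 each, well under 30)
peer_learning = [72, 85, 78, 90, 66, 81, 74, 88]
lecture_only = [65, 70, 62, 75, 68, 71, 60, 73]

# Mann-Whitney U makes no normality assumption, so it is appropriate
# for small or non-normally distributed samples like these
stat, p_value = mannwhitneyu(peer_learning, lecture_only, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```

With real data you would, of course, substitute your own scores; the choice of a two-sided alternative reflects not knowing in advance which method is better.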
I would use observation before and after introducing the method, and besides that I would also suggest employing grounded theory methods with the staff, from before starting the research (while you are still developing the concept) until the very end. http://www.groundedtheoryonline.com/what-is-grounded-theory
Focus especially on memo writing and the property-defining techniques of interview analysis.
To assess the effectiveness of an intervention, the appropriate 'research design' would be 'experimental design'. There are at least three experimental designs, namely pre-experimental design (without a control group; the weakest design), quasi-experimental design (intact groups, with a control group), and true experimental design (random assignment of participants to control and experimental groups). These are 'designs'. Observation, interviews, administering questionnaires, and tests, by contrast, are ways to collect data.
Thank you for your interesting question. As Professor Nurulhuda says, the research design could be experimental; I think it could be quasi-experimental too. I would also comment that there is a formula for learning gain, applied to pre-test and post-test outcomes for the experimental group(s) after an educational intervention. You can consult Hake's paper on normalized gain to see the fundamentals and experiences; I hope it is useful to you. Normalized gain is used a lot in physics education, but it can be applied to learning-gain evaluation in any field and at any level. If your intervention is efficient, you will get a gain near 0.7.
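Hake's normalized gain is the actual gain divided by the maximum possible gain, g = (post − pre) / (100 − pre), with scores expressed as class-average percentages. A small sketch with hypothetical class averages:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain: actual gain over maximum possible gain.

    pre_pct and post_pct are class-average scores as percentages (0-100).
    """
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical class averages before and after the intervention
g = normalized_gain(pre_pct=40.0, post_pct=82.0)
print(f"normalized gain g = {g:.2f}")  # 42/60 = 0.70
```

A gain around 0.7 would count as high in Hake's terms; values below about 0.3 are usually considered low.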
Instead of using normalized gain scores, I suggest you just use ANCOVA. This will handle any initial differences as well as any other covariates that may exist in the study.
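ANCOVA can be run as a linear model of the post-test on the group factor plus the pre-test covariate. Below is a minimal sketch using statsmodels with simulated (entirely hypothetical) data; the variable names and effect sizes are illustrative only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate hypothetical data: pre-test score as the covariate,
# group as the treatment factor, with a built-in treatment effect
n = 40
pre = rng.normal(60, 10, n)
group = np.repeat(["control", "treatment"], n // 2)
post = 0.8 * pre + np.where(group == "treatment", 8.0, 0.0) + rng.normal(0, 5, n)

df = pd.DataFrame({"pre": pre, "group": group, "post": post})

# ANCOVA: post-test regressed on group, adjusting for pre-test differences
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(model.summary().tables[1])
```

The coefficient on the group term is the treatment effect adjusted for initial differences, which is exactly the "handling of initial differences" ANCOVA offers over raw gain scores.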
It is a great pleasure meeting you on ResearchGate. I have seen a bit of your interesting work, and I will take advantage of your suggestion; I will try ANCOVA as soon as possible.
The design depends on what kind of 'effectiveness' you are measuring. If it is based on scores only, then use a quasi-experimental design with matched treatment/control groups. But if you want to measure other variables of effectiveness, like motivation, or if you want more insights into effectiveness that cannot be explained by numbers, then use a mixed-methods design. You can go for the explanatory sequential approach to mixed methods (Creswell & Clark, 2011).