I assume the effectiveness of the day training would be measured by a pre/post test on the day the person did the training, which might not be ideal, since the benefits of training may not yet be measurable after one (or even a few) days. You also have not provided a comparison group. One of the best study designs would be a trial in which you randomly assign people either to do or not to do the exercise, and then measure your outcome in both groups.
The study design depends on whether we want to create awareness among the trainees or deliver skill-based training. If we want to know the level of awareness after training, a pre- and post-test can be given. If we want to know the effectiveness of skill-based training, we can prepare a checklist and randomly select people who did and did not receive the training; then, by interviewing them about the skill and observing the activities they perform, the trainer can evaluate the effectiveness of the training.
You seem to have decided on a pre- and post-test study design, which is adequate as an evaluation tool to assess any change in knowledge, skills and attitudes of participants. You can administer the data collection tools on the day, or ask participants to complete them online a few days or weeks later to assess whether any change is maintained. If you want more information about the specific effectiveness of the training, then, as others have said, an intervention study with a control group would be the preferred design. Random allocation may be more feasible on a cluster basis rather than at the individual level.
I would go for a crossover design. Between-subject variability can be controlled, allowing high statistical power with a small sample size. Depending on the measurement scale of your outcome measure, a general linear model would fit well for evaluating the trial data.
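To make the crossover idea concrete, here is a minimal sketch of a 2x2 (AB/BA) crossover analysed with a general linear model. All data are simulated for illustration; the variable names, effect sizes, and sample size are assumptions, not part of any real study. Fitting subject and period as fixed effects absorbs between-subject variability, so the `trained` coefficient estimates the training effect:

```python
# Sketch of a 2x2 crossover analysis with a general linear model.
# Data, variable names, and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 20  # subjects, each measured under both conditions

# Simulate: subject baseline + period effect + training effect + noise
subject_effect = rng.normal(0, 2, n)
rows = []
for i in range(n):
    # Half the subjects get training in period 1, half in period 2 (AB/BA)
    first_trained = i % 2 == 0
    for period in (1, 2):
        trained = int((period == 1) == first_trained)
        score = (50 + subject_effect[i] + 1.5 * (period - 1)
                 + 3.0 * trained + rng.normal(0, 1))
        rows.append({"subject": i, "period": period,
                     "trained": trained, "score": score})
df = pd.DataFrame(rows)

# GLM with fixed effects for subject and period; the 'trained' coefficient
# estimates the training effect with between-subject variability absorbed
model = smf.ols("score ~ C(subject) + C(period) + trained", data=df).fit()
print(round(model.params["trained"], 2))  # estimated training effect
```

Because every subject serves as their own control, the residual noise driving the standard error is only the within-subject variation, which is why crossover designs can reach adequate power with few participants.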
More on this subject: https://onlinecourses.science.psu.edu/stat509/node/123
The best design, I would say, is a randomized controlled design, so you can establish causality between your exposure (the training) and skill or behavior change. A pre/post design can establish an association but not causality. So it depends on what exactly you want to investigate. Hope this helps.
Can you explain a little more about what you mean by '..to measure the effectiveness of a day training...'? Do you mean the daytime timing of the training (vs. other timings), or training (vs. no training)? If it is the latter, I would suggest a randomised controlled trial in which participants are randomised into one of two groups (training vs. no training) and their skill is evaluated.
This is where some of Campbell & Stanley's quasi-experimental designs come in useful. I have found that a pre/post/post-post design works best (let current experience with the type of variable you are measuring, and the known decay rate of those items, dictate the actual time lapse). Then construct a parallel group who undergo the same pre-, post-, and post-post testing, but NO intervention. The results are analyzed with paired t-tests within the same group between the pre, post, and later post-post measures; this captures change in a time series (both improvement and decay over time). Then use unpaired t-tests for the between-group comparisons at pre, post, and post-post; these capture the effects of the questionnaire itself and any other extraneous factors. It is important to randomize between study and control groups. This is useful where you have another similar group to be processed next month as the 'control', but of course they will have become familiar with the testing instrument, so the analysis won't be as reliable next time.
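The paired (within-group) and unpaired (between-group) comparisons described above can be sketched with `scipy.stats`. The scores below are simulated purely for illustration (the gain, decay, and practice-effect sizes are assumptions), but the pattern of tests mirrors the design:

```python
# Sketch of the within-group (paired) and between-group (unpaired) t-tests
# for a pre/post/post-post design with a no-intervention control group.
# All scores are simulated illustration data, not from a real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30

# Intervention group: pre, post, and delayed post-post scores
pre = rng.normal(60, 8, n)
post = pre + rng.normal(10, 3, n)         # improvement after training
post_post = post - rng.normal(4, 2, n)    # partial decay weeks later

# Control group: same testing schedule, no intervention
ctrl_pre = rng.normal(60, 8, n)
ctrl_post = ctrl_pre + rng.normal(1, 3, n)  # small practice effect only

# Within-group change over time: paired t-tests
t_gain, p_gain = stats.ttest_rel(post, pre)          # pre -> post gain
t_decay, p_decay = stats.ttest_rel(post, post_post)  # post -> post-post decay

# Between-group comparison at post: unpaired (independent) t-test
t_between, p_between = stats.ttest_ind(post, ctrl_post)

print(f"paired pre/post p = {p_gain:.4f}, "
      f"unpaired post p = {p_between:.4f}")
```

The paired tests isolate change over time within each group, while the unpaired tests at each time point separate the training effect from testing-instrument familiarity and other extraneous factors.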
It is a before-after study. This is a descriptive study that some authors describe as a semi-experimental (quasi-experimental) design. For the rest, the comments made by Tim Sly are excellent. In particular, statistical tests for paired data should be used at the time of analysis.
Very simple: as you are interested in determining the change after training, each participant will act as their own control. You should select an interventional study design. First, the baseline knowledge of the participants is recorded, and then it is recorded again after the training. It then becomes simple to see whether any difference exists; apply a test of significance appropriate to the type of variable under study.
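The choice of significance test by variable type can be sketched as follows. The numbers are hypothetical: a paired t-test suits a continuous pre/post outcome (e.g. a knowledge score), while McNemar's test suits a paired binary outcome (e.g. whether a participant can complete the reporting form correctly before vs. after):

```python
# Sketch: choosing the significance test by variable type in a
# one-group pre/post design. All data below are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(1)
n = 40

# Continuous outcome (e.g. knowledge score): paired t-test
pre_score = rng.normal(55, 10, n)
post_score = pre_score + rng.normal(8, 4, n)
t_stat, p_cont = stats.ttest_rel(post_score, pre_score)

# Binary outcome (e.g. can/cannot complete the form): McNemar's test
# 2x2 table of paired outcomes: rows = pre (fail, pass), cols = post
table = np.array([[5, 22],   # failed before: 5 still fail, 22 now pass
                  [3, 10]])  # passed before: 3 now fail, 10 still pass
result = mcnemar(table, exact=True)
print(f"paired t-test p = {p_cont:.4f}, McNemar p = {result.pvalue:.4f}")
```

McNemar's test uses only the discordant pairs (the 22 who improved vs. the 3 who regressed), which is exactly the paired-data logic the pre/post design calls for.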
I deeply appreciate the comments and insights of all who contributed. I am still studying the answers provided to see how to apply them to our situation.
@Kandasamy, we run a resource-limited pharmacovigilance setting and are testing how to achieve maximum "outcome" with a one-day training on ADR reporting. In other words, "is a one-day training enough to conclude that the participants can correctly complete the ADR reporting form?"
In such a situation you should have two groups (one group with a one-day duration, the other with more than one day). Randomise the participants into one of the groups, use the same evaluation procedure, and compare the results for effectiveness.
With respect, the purpose of selecting an appropriate design is so that the research question can be addressed. In this case I assume that the investigator wishes to determine the effects of the training. That said, I believe Dr. Ravichandran's suggestion is not practical. Several other answers provide a better design.
The Solomon four-group design is the best at guaranteeing the internal validity of the study, but you can apply a one-group pre/post-test design, which is sometimes easier and more practical.
Traditional study designs such as randomized controlled trials (RCTs) can be ideal for testing the efficacy or effectiveness of interventions, given the ability to maximize internal validity.