I also support Farhan's pre-post test method and think that this journal article demonstrates how to do this effectively: http://ijds.org/Volume9/IJDSv9p249-270Ulibarri0676.pdf
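In case it is useful, here is a minimal sketch of how a paired pre-/post comparison of that kind might be analysed. The scores are purely illustrative (not taken from the linked article), and it assumes SciPy is available:

```python
# Minimal sketch of a pre-/post-test comparison for the same group of students.
# The scores below are made up for illustration only.
import numpy as np
from scipy import stats

pre = np.array([52, 61, 48, 70, 55, 63, 58, 66, 50, 60], dtype=float)
post = np.array([60, 68, 55, 74, 62, 70, 59, 75, 58, 67], dtype=float)

diff = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)      # paired t-test
w_stat, p_wilcoxon = stats.wilcoxon(post, pre)    # non-parametric alternative
cohens_d = diff.mean() / diff.std(ddof=1)         # paired-samples effect size

print(f"mean gain = {diff.mean():.2f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {p_wilcoxon:.4f}")
print(f"Cohen's d (paired) = {cohens_d:.2f}")
```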
I agree with my colleagues' replies as well. However, I would like to add the technique I adopted in my study, which is an analytic assessment rubric. You can design a rubric for assessing your trainees'/students' performance by breaking your assessment criteria down into grades (from 4 to 1) or levels (advanced to primary). For each grade you then add descriptors that tell the students how they can achieve that grade.
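To make the idea concrete, here is a minimal sketch of how such an analytic rubric could be represented and used to score a performance. The criteria, descriptors, and Python structure are purely illustrative assumptions, not the rubric from my study:

```python
# Illustrative analytic rubric: each criterion has grade descriptors from 4 to 1.
rubric = {
    "organisation": {
        4: "Ideas are logically ordered with clear transitions throughout.",
        3: "Ideas are mostly ordered; transitions occasionally missing.",
        2: "Some ordering of ideas; transitions rarely used.",
        1: "Little evident ordering of ideas.",
    },
    "accuracy": {
        4: "Content is accurate with no errors.",
        3: "Content is accurate with only minor errors.",
        2: "Content contains several errors.",
        1: "Content contains frequent, serious errors.",
    },
}

def score_performance(grades):
    """Average the per-criterion grades (1-4) into an overall rubric score."""
    return sum(grades.values()) / len(grades)

# Example: one student's performance graded against the two criteria above.
student_grades = {"organisation": 3, "accuracy": 4}
print(score_performance(student_grades))  # 3.5
```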
As for the pre-/post-test, I used a video camera to record the students' performance and, at the same time, assessed them according to the rubric.
The videos also helped me show the students their own performance so they could see their improvement; however, this was only a secondary reason for recording them.
The real use of the videos was for a reliability test on the pre- and post-recordings (to check the reliability of my assessment tool, the rubric, and to check the students' progress). An inter-rater reliability test was conducted by asking 10 teachers to watch both videos and assess the students using the rubric. I also asked them to give a traditional assessment (scores only) so that I could measure which assessment tool was more reliable.
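To illustrate the inter-rater reliability step, here is a minimal sketch of how agreement among the 10 raters on the rubric grades could be quantified. The grades are made up, and the choice of Fleiss' kappa (via statsmodels) is my assumption; any suitable agreement statistic could be substituted:

```python
# Sketch of an inter-rater agreement check: 10 raters each assign a rubric
# grade (1-4) to every recorded performance. The grades below are fabricated
# for illustration only.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = student performances (videos), columns = the 10 raters
rubric_grades = np.array([
    [4, 4, 3, 4, 4, 3, 4, 4, 4, 3],
    [2, 2, 2, 3, 2, 2, 2, 3, 2, 2],
    [3, 3, 4, 3, 3, 3, 4, 3, 3, 3],
    [1, 1, 2, 1, 1, 1, 1, 2, 1, 1],
])

# aggregate_raters converts raw ratings into a subject-by-category count table
table, _ = aggregate_raters(rubric_grades)
kappa = fleiss_kappa(table, method='fleiss')
print(f"Fleiss' kappa for the rubric grades = {kappa:.2f}")

# The same calculation could be repeated on the raters' traditional scores
# (binned into comparable categories) to compare which tool is more reliable.
```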
For details, please check my thesis: The Impact Of Using Scoring Rubric In Peer Assessment On Pro...
Pre- and post-session/course testing is OK as far as it goes, but it may only indicate what the students have remembered and what they know they are expected to say.
In my opinion, the best way to test for impact is to do follow-ups, such as interviews or questionnaires asking for evidence of changes in behaviour after the session or course. This might include a change of research design, or the use of a hitherto unknown or misunderstood research method, for example. If you truly do mean that you want a methodology rather than a method of testing for change, then I think you need to look at qualitative investigation rather than just quantitative measures. This is 'exploratory' in methodological terms, and your quantitative measures are also arguably exploratory, unless you can truly show a cause-and-effect relationship between training and practice change, in which case it becomes a conclusive research methodology.
I've used email follow-ups simply asking students to provide some account of how their practice has changed following training interventions. Usually most are more than happy to share what they have changed and how they have progressed using the techniques and knowledge you shared with them. This is a good way of supplementing pre- and post-testing. Hope this helps you.
You've posed a really important question. Certainly in my field of health professions education, there is renewed interest in the nature and purpose of evaluation, and a recognition that we need to move away from asking 'does an educational program work?' towards asking 'how does it work, why, and for whom?'.
Educational outcomes are generally measured at a number of different levels, as originally described by Kirkpatrick. His model differentiates between participants' outcomes at four levels: Level 1 (reactions); Level 2 (learning); Level 3 (behaviour in the workplace); and Level 4 (results at the organisational level). This model has been further adapted in the health professions education field to differentiate between change in attitudes and perceptions (Level 2a) and change in knowledge (Level 2b), and between impact on organisational practice (Level 4a) and benefits to patients or clients (Level 4b).
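If it helps to operationalise the model, here is a rough sketch of how the adapted Kirkpatrick levels might be mapped to data-collection instruments when planning an evaluation. The instruments listed are only illustrative assumptions, not part of Kirkpatrick's model or of the review paper linked below:

```python
# Illustrative mapping of the adapted Kirkpatrick levels to possible
# evaluation instruments; the instruments are examples, not prescriptions.
kirkpatrick_plan = {
    "1  - reactions": ["post-session satisfaction survey"],
    "2a - attitudes/perceptions": ["pre/post attitude questionnaire"],
    "2b - knowledge/skills": ["pre/post test", "rubric-scored performance task"],
    "3  - behaviour in the workplace": ["follow-up interviews", "workplace observation"],
    "4a - organisational practice": ["audit of local policies and processes"],
    "4b - benefits to patients/clients": ["routinely collected outcome data"],
}

for level, instruments in kirkpatrick_plan.items():
    print(f"Level {level}: {', '.join(instruments)}")
```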
It would be fair to say that most educational evaluation or research projects aim to demonstrate impact/outcomes at the lower levels of Kirkpatrick's model (i.e. participant reactions and changes in knowledge and attitudes). This is mainly for pragmatic reasons, as it is more difficult to measure workplace and organisational impact. However, I agree with Judi that there is a need to develop longitudinal evaluation and research frameworks that explore the transfer or application of learning into practice and its translation into applied settings (as well as the impact on organisational networks and a range of stakeholders), and that using a mix of methods/tools, including qualitative approaches, can provide some very interesting insights.
I have included a link to a recent review paper in which we looked at the nature and purpose of educational evaluation. Although this paper focuses specifically on the outcomes or impact of interprofessional education within health, there is some discussion of Kirkpatrick's model, realist approaches to evaluation, and evaluation methods, which you may find useful.
Finally, I'm not sure which field you are based in, but looking more broadly across disciplines, not just at the literature on evaluating student educational outcomes but also at the evaluation of faculty/staff professional development programs, may also be useful in identifying suitable study designs and methods.
Hope this is of help
Koshila
Article An exploratory review of pre-qualification interprofessional...
It may not be a measure, but in the paper "Theoretical framework for the development of research knowledge in mathematics education" we present a possible approach. I believe it is transferable to other fields.