I'm looking specifically to measure the impact of a learning activity 3-6 months post-learning, as opposed to evaluating the design or delivery of the learning activity itself (which appears to be the common approach).
English in Action (eiabd.com) has been working to support teachers, both primary and secondary, in developing their teaching practice using a CPD model, and to study their classroom practices. You could take a look at the tools used in the study (they might not be directly linked to CPD, though).
I think it depends on what you want to measure. Are you asking about pedagogical shifts in teacher practices? Are you measuring attitude change in teachers? Learning outcomes of students? What domain/skill set are you asking about?
I agree that the mode of measurement really depends on your outcome of interest. That said, in terms of professional development, one excellent resource that weighs some of the issues and presents an abundant reference list of resources is Laura M. Desimone's 2009 article in Educational Researcher laying out a framework for PD research. I'm not aware of existing validated scales, but this article provides guidance in designing observations, interviews, surveys, or some combination.
Article Improving Impact Studies of Teachers' Professional Developme...
Although I'm not in an education role, a component of my work as a Clinical Nurse Consultant within a teaching hospital is the delivery of mandatory and non-mandatory education/learning activities, e.g. mental health assessment skills; suicide risk assessment and management training; medication management; and aggression de-escalation techniques, amongst others.
Typically the Nursing Education department attempts to evaluate these activities, but the evaluation focuses primarily on the design of the workshop, the delivery of the presenter, and what the learner believes the outcome of the learning will be. Obviously these things have little bearing on the actual learning, or on its impact in practice.
I would like to develop a mixed-methods evaluation of the impact of a learning activity on practice once the learner returns to the clinical area. The qualitative aspect would involve interviews, and I'm wondering whether any quantitative measures exist for this purpose that have been validated across a range of specialties/professions (or that at least exist and could be adapted).
In terms of domains, the ones which predominate in health are knowledge, skills, attitudes.
Evaluation entails asking questions, gathering information, drawing conclusions and making recommendations. The rationale for the professional development programme determines the purpose of the evaluation. The information needed to evaluate the programme can be gathered through interviews, observations, document review or questionnaires, but the evaluator should use the method best suited to gathering the information relevant to that particular programme. Stufflebeam (1990), whom I refer to in my book, Teaching Elementary Mathematics (Luneta 2013:108), contends that an evaluation must meet certain standards, which pertain to four attributes of an evaluation: utility, feasibility, propriety and accuracy. Evaluation can be formative - designed to evaluate the programme while it is still running - or summative - designed to support conclusions about the worthiness or merits of the innovation. It is important to note that evaluations of educational programmes are complex because of the number of variables involved - teachers, learners, the school, parents, materials, etc. - and in education they are all interlinked.
Very helpful context. Not knowing the nursing context myself, it's hard to know what kinds of surveys might be most closely aligned with the CPD activities you're presenting. That said, in doing some of my own work this afternoon, I came across an article that may wind up being of some use to you. It's specifically about survey instruments measuring teamwork in a health care setting. I haven't read this article in detail, but I do know some of Amy Edmondson's other work and think it's superb.
Article Measuring Teamwork in Health Care Settings
I was director of education in a 400-plus-bed acute care Magnet-designated hospital. We asked this question over and over as we kept finding that we were assigning nurses and other employees mandatory and non-mandatory education that seemed overwhelming. The standard tool is an evaluation of the immediate learning and content. We implemented processes to link education to the Quality Improvement (QI) process: if we educated on Rapid Response, did we see an improvement in the skilled response over time, and did it "stick"? At the time we identified the education, we tried to identify the intended outcome. Annually we developed an education needs assessment and a QI plan and tried to link them.
To evaluate any CPD programme, one first needs to identify the needs it was developed to meet. What was it developed for? Establishing the programme's aims and objectives will underpin the evaluation tools to be developed; that should be the starting point. Several evaluation tools already exist, but the difficulty will be aligning them to one's own programme aims and objectives, because the evaluation asks whether the aims of the project/programme were achieved and, if not, what the problems were and how they can be addressed in order to optimise the achievement of those aims and objectives.
I did my Masters research thesis on how RNs utilise self-assessment and performance appraisal to inform their practice. That's not exactly what you're looking at, but I'd be happy to share it with you if needed. It was a single-centre qualitative study and is now a few years old, so the work may have been redone more recently.
Hi all, I meant to follow up on this and forgot. I found an instrument in its early testing stages which meets my needs.
It was developed through a collaboration of health faculties in Quebec, Canada, and it's attached for those interested.
We've been using it across a range of CPD activities (Violence Prevention and Management Training; Basic Life Support Workshops; and several other CPD activities) and we think it has fantastic utility within the CPD arena.
Relevant citations for this work are below:
Légaré, F., Borduas, F., Freitas, A., Jacques, A., Godin, G., Luconi, F., & Grimshaw, J. (2014). Development of a simple 12-item theory-based instrument to assess the impact of continuing professional development on clinical behavioral intentions. PLoS ONE, 9(3), e91013.
Légaré, F., Freitas, A., Thompson-Leduc, P., Borduas, F., Luconi, F., Boucher, A., ... & Jacques, A. (2015). The majority of accredited continuing professional development activities do not target clinical behavior change. Academic Medicine, 90(2), 197-202.
Légaré, F., Freitas, A., Turcotte, S., Borduas, F., Jacques, A., Luconi, F., ... & Labrecque, M. (2017). Responsiveness of a simple tool for assessing change in behavioral intention after continuing professional development activities. PLoS ONE, 12(5), e0176678.
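For anyone wanting to tabulate responses from a short multi-item instrument like this, a simple approach is to average the Likert ratings within each theoretical construct per respondent. The sketch below is purely illustrative: the item-to-construct mapping and the 7-point scale are my assumptions, not the published scoring key for the Légaré et al. instrument, so check the 2014 paper for the actual item groupings before using anything like this.

```python
# Hypothetical scoring sketch for a 12-item, Likert-type CPD evaluation
# instrument. The construct names and item groupings below are
# ASSUMPTIONS for illustration only -- consult Légaré et al. (2014)
# for the real scoring key.
from statistics import mean

# Assumed mapping of item numbers (1-12) to constructs.
CONSTRUCTS = {
    "intention": [1, 2],
    "social_influence": [3, 4, 5],
    "beliefs_about_capabilities": [6, 7, 8],
    "beliefs_about_consequences": [9, 10],
    "moral_norm": [11, 12],
}

def score(responses: dict[int, int]) -> dict[str, float]:
    """Return the mean Likert rating per construct for one respondent.

    `responses` maps item number (1-12) to a rating (assumed 1-7).
    Unanswered items are simply skipped within their construct.
    """
    out = {}
    for construct, items in CONSTRUCTS.items():
        answered = [responses[i] for i in items if i in responses]
        if answered:
            out[construct] = round(mean(answered), 2)
    return out

# Example: one respondent's ratings for all 12 items, in item order.
answers = {i: r for i, r in enumerate([6, 7, 5, 5, 6, 7, 6, 6, 5, 6, 7, 7],
                                      start=1)}
print(score(answers))
```

Pre/post comparison then reduces to running this over each respondent before and after the CPD activity and comparing construct means, which keeps the analysis transparent even in a spreadsheet-sized dataset.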