Often, training in health care is assessed through evaluation of learning and evaluation indices. What other ways and means can be used to evaluate the impact and effectiveness of training programmes?
Well, there are various ways, which can be grouped into direct measures and indirect measures.
Direct measures include preceptor evaluation of performance on a technical skill (procedure, surgery, counselling, ...) or on soft skills (communication, ...); they can also take the form of a summative examination of knowledge at the end of the training period or training year. Indirect measures can be self-assessment by the student, or even reflective writing. I hope I was able to answer your question.
Currently in health care, the emphasis is on outcomes evaluation. I agree with Rony Zeenny about self-assessment of learning as well as paper-and-pencil testing and clinical performance. There is a hierarchy of evaluation designs: post-test only, pre-test/post-test, comparative groups, control group versus intervention group, impact on outcomes, and so forth.
What is recommended for hospitals is first to identify an issue, e.g. patient falls. Do a training program on preventing the issue. Then survey to see if the issue, or incidence of the problem, has changed. For example, identify the number of falls in the hospital over 6 months, do the in-service, then identify the number of falls in the hospital over the next 6 months. It is not only whether the participants learned and can apply the material; it is also whether that material makes an impact on a problem in patient care.
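To make that concrete, here is a minimal sketch of the before-and-after arithmetic in Python. The fall counts and patient-day denominators are hypothetical placeholders, and normalising to falls per 1,000 patient-days is an assumption of mine (it keeps the two 6-month periods comparable if occupancy differs), not part of the recommendation above.

```python
# A minimal sketch of the before/after comparison described above.
# All numbers below are hypothetical placeholders, not real hospital data.

def fall_rate(falls: int, patient_days: int) -> float:
    """Falls per 1,000 patient-days, a common way to normalise the count."""
    return falls / patient_days * 1000

# Hypothetical 6-month counts before and after the in-service training
falls_before, days_before = 42, 30_000
falls_after,  days_after  = 27, 31_000

rate_before = fall_rate(falls_before, days_before)
rate_after  = fall_rate(falls_after,  days_after)

print(f"Rate before training: {rate_before:.2f} falls / 1,000 patient-days")
print(f"Rate after training:  {rate_after:.2f} falls / 1,000 patient-days")
print(f"Relative change:      {(rate_after - rate_before) / rate_before:+.1%}")
```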
Training should have defined learning outcomes; we may then evaluate the training by assessing these learning outcomes and how much the training contributes to health care practice, for example safer practice.
The other way is to measure the learning outcomes using key performance indicators (KPIs) related to the specific training workshop; then we may evaluate the clients' benefit from the workshop outcomes.
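As a rough, hypothetical sketch of what tracking workshop-specific KPIs could look like (the indicator names, baselines and targets below are illustrative assumptions, not taken from any real workshop):

```python
# Hypothetical sketch: comparing workshop-related KPIs against targets.
# KPI names, baselines and targets are illustrative assumptions only.

kpis = [
    # (indicator, baseline, post-training value, target)
    ("Hand-hygiene compliance (%)",          62.0, 81.0, 80.0),
    ("Prescriptions following protocol (%)", 70.0, 77.0, 85.0),
    ("Average client waiting time (min)",    45.0, 38.0, 30.0),
]

for name, baseline, post, target in kpis:
    change = post - baseline
    # For waiting time, lower is better; for the other indicators, higher is better.
    met = post <= target if "waiting time" in name.lower() else post >= target
    status = "target met" if met else "target not met"
    print(f"{name:<40} {baseline:>6.1f} -> {post:>6.1f} ({change:+.1f}); {status}")
```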
Evaluation tools following training should be tailored to the expected outcomes, based on the impact on patient care interventions. They may not be standard, as the objectives of each training vary according to the problem being addressed.
Impact of training in health care and tools for its evaluation:
Evaluation of the impact of training in health care will depend on:
1. What was the aim of training? Was the training imparted to improve theoretical knowledge or was it for enhancing the trainees' skills or was it for behavior change?
2. Who were the trainees/audiences? Were they health workers or were they community people?
Keeping the above-mentioned points in mind, one could use the appropriate tool for evaluating the impact of training. Usual methods are a pre-test, a post-test, a repeat post-test after 3-6 months, and a post-training-and-intervention KAP (knowledge, attitudes, practices) survey compared with the baseline KAP survey on the desired subject.
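For the pre-test/post-test part, a minimal sketch of the comparison might look like the following; the scores are invented for illustration, and the use of a paired t-test (via SciPy) is just one of several reasonable choices, not a prescribed method.

```python
# Minimal sketch of a pre-test / post-test comparison.
# The scores below are hypothetical; any paired-sample method could be used.
from scipy.stats import ttest_rel

pre_scores  = [12, 15, 10, 14, 11, 13, 9, 16, 12, 14]   # knowledge scores before training
post_scores = [16, 18, 13, 17, 15, 16, 12, 19, 15, 17]  # same trainees after training

mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
t_stat, p_value = ttest_rel(post_scores, pre_scores)  # paired t-test on the gain

print(f"Mean gain: {mean_gain:.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```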
Impact of training could also be assessed in terms of an increase in the number of OPD patients. Here I would like to give an example: we trained Community Mental Health Workers (CMHWs) in a slum community. The CMHWs learnt to recognize mental disorders through symptoms. The CMHWs started working in the Sunder Nagari (a slum in Delhi) community. In 10 months' time, some 600 patients with mental disorders have come to the new psychiatry clinic under the Community Mental Health Programme, which is supported by The HANS Foundation.
So each one of us can think of some way of assessing & evaluating our training programme.
.... I think Joyce gave a good response to this question. Some of the tools include certain indicators, which may take the form of service delivery improvement in the case of training of health care workers; the number of staff with good knowledge of the training content, as they may respond well on the post-test; and a reduction in the health problems that were being addressed during the training in the community. In case of no change in the indicators, this might mean that the training was not effective.
Increasingly, education and training of physicians is migrating from being mostly an assessment of knowledge to assessment of a broader set of competencies. For physicians in the U.S., the Association of American Medical Colleges has now specified a set of 13 "entrustable professional activities" (EPAs). An initial survey of graduating medical students and faculty responsible for residency programs (the next step in training) indicated that only about half of the graduates have attained the desired level of competence in those 13. It is now about 15 years since the ACGME (Accreditation Council for Graduate Medical Education) in the U.S. began to promote a set of six competencies that all residents should attain. More recently, the ACGME has begun to put more rigor into the process. Work has been proceeding on defining "milestones", or intermediate steps, in the attainment of the competencies. Knowledge is still an important competence, as are skills, as is the ability to behave professionally, communicate effectively with patients and in teams, etc.
On the whole, this seems to be a very positive trend; and as you can infer from the above, there is still an enormous amount of work to be done, both in specifying the outcomes and in ensuring that the learning objectives of each stage of education and training are aligned with them.
It depends upon whether you want to test cognitive, psychomotor or affective skills. For example, you might want to see whether primary health workers can diagnose pneumonia based on intercostal space movement, or whether they are trained in the skill of demonstrating ORS preparation, and accordingly you would have to devise an appropriate method of assessment. Some methods of assessment are simple, as they only test knowledge, whereas others may require a person to correctly perform all the steps of a procedure, e.g. correct use of an inhaler.
Quality of service delivery, among others, is the best measure to evaluate the impact of health training. If the training helped providers to deliver services on the basis of improved knowledge, skill and professional commitment and ethics, I would say it has made an impact. The tools to use to determine quality are (i) supervision using a checklist over a period of time, which could be done by supervisors; (ii) a survey on client satisfaction; (iii) interviews with the providers who received the training; (iv) client volume and wait time, which would also indicate effectiveness of service delivery.
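As an illustrative sketch only, the supervision-checklist and client-satisfaction data could be summarised along these lines; the checklist items, observations and rating scale below are hypothetical assumptions, not a standard instrument.

```python
# A rough sketch of turning supervision-checklist observations into a
# quality-of-service indicator. Items and scores are hypothetical.

checklist = {
    "greets and identifies the client":     [1, 1, 0, 1, 1],  # 1 = done, 0 = not done,
    "takes history per protocol":           [1, 0, 1, 1, 1],  # one entry per observed visit
    "explains treatment and side effects":  [0, 1, 1, 1, 0],
    "records the encounter correctly":      [1, 1, 1, 0, 1],
}

satisfaction_scores = [4, 5, 3, 4, 5]  # client exit-interview ratings on a 1-5 scale

for item, observations in checklist.items():
    compliance = sum(observations) / len(observations)
    print(f"{item:<42} {compliance:.0%} compliant")

overall = sum(sum(obs) for obs in checklist.values()) / sum(len(obs) for obs in checklist.values())
print(f"\nOverall checklist compliance: {overall:.0%}")
print(f"Mean client satisfaction:     {sum(satisfaction_scores)/len(satisfaction_scores):.1f} / 5")
```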
This question is timeless and pertinent to all professions. I recommend the 3rd edition of the ANA's Nursing: Scope and Standards of Practice, page 45: Evaluating Competence "can be evaluated by the individual nurse (self-assessment), peers, and nurses in the roles of supervisor, coach, mentor, or preceptor. In addition, other aspects of nursing performance may be evaluated by professional colleagues and healthcare consumers. Evaluation of competence involves the use of tools to capture objective and subjective data about the individual's knowledge base and actual performance. Those tools must be appropriate for the specific situation and the desired outcome of the competence evaluation."
My organization is working to streamline and improve nurse competence; I am interested in anyone's tools for assessing rehabilitation nurse-specific competencies.
It's important to revise the ways of assessment, regarding both knowledge and experience, for medical students. So this question should be asked at the start and end of every academic year. As well, there should be a committee to evaluate the evaluators and train them in how to perform the evaluation properly.
Educators talk about high-stakes assessment, such as formal, particularly standardized, tests, vs. informal assessment. The informal assessment should be much more frequent. When it is, I believe that the learners begin to take responsibility for their own assessment and for the quality of supervision/teaching they are receiving. They know that the assessment helps them acquire competence.
And... do you use a model for evaluating the efficacy of training not only for professionals, but for patients and the community too? And if yes, what is the model?