The materials we develop are single-objective SCOs, with the objectives determined from observed performance gaps in the target group. Once a performance-based, criterion-referenced objective is formulated, test development follows. Depending on the nature of the competency, this may be done with questions, a simulator, or, in the case of skills, observable behaviors judged by qualified observers. From there, we produce the images, video, animations, simulations, and narration that guide the individual to the desired behavior. Because the procedure is based on classic, research-proven methodologies, the results are never in doubt.
Validating multimedia content depends on various factors, such as the technology you have employed, the interactivity of the content, cognitive load theory principles, and so on. Unlike in earlier days, it now depends much more on how the newer technologies are put to use.
Dear Nachi muthu, multimedia content has many dimensions: technical, content, presentation, language, and so on. To validate multimedia content it is necessary to consult experts in all of these fields, and finally to consider its impact. It all depends on the content and the level of the target group.
Actually, we have largely been discussing the technology or knowledge transfer process rather than assessment. My personal opinion is that the real efficiency and efficacy problems of training and education lie with assessment. Consider medicine. Until post-diagnosis and post-treatment assessment of results is complete, the success of the treatment is a matter of opinion. The outcome of most training and education is just as empirically measurable as in medicine and other sciences, yet assessment is perhaps the single worst failure of our profession in practice.
Recently, my boss asked me how much I could cut out of our three-day safety, environmental, and health indoctrination for new employees. I told him, "Based on pre- and post-test scores, all of it!" He looked at me in surprise, and I explained that the difference between the pre- and post-test scores was statistically insignificant; therefore the time spent was unsupportable from a business standpoint and quite possibly a liability from a legal standpoint. In defense of my staff, I should state that this test was developed by HSE professionals rather than my group.
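As a rough illustration of what "statistically insignificant" means here, this is one way such a check can be done, using made-up scores and SciPy's paired t-test; the numbers below are invented purely for the example.

```python
from scipy import stats

# Invented pre/post scores (percent correct) for the same ten trainees
pre  = [52, 61, 48, 70, 55, 63, 58, 66, 49, 60]
post = [54, 60, 50, 71, 53, 62, 61, 65, 49, 59]

# Paired t-test: is the mean improvement distinguishable from zero?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a large p-value means no measurable learning gain
```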
My friends, "the tail is wagging the dog," as we say in America. Assessment should mold the training, not vice versa. In my department, we start with objectives at the atomic level, that is, those fundamental components of the desired behavior at the lowest possible level. Once they are stated, we develop the test. Once we are satisfied that it is not possible to pass the test without exhibiting mastery of the objectives, then, and only then, do we develop the training. Each and every audio, visual, and experiential component proposed for the training is examined in the context of the objective it supports. Anything not required to achieve the objective is excluded.
I have developed a functional flowchart design for a Competency Assessment System (CAS) that I hope to have built in the next year or two. I would include the flowchart here, but the file attachment service for this site appears to be disabled, so I'll provide a brief description instead. The core concept is based on the IEEE proposed specification for Reusable Competency Descriptions and Reusable Competency Maps. The RCD is fundamentally a single-concept learning objective with a unique ID and metadata for search purposes. The RCM is a sequence of RCDs that describes a competency. An example RCD would be "Able to use a torque wrench to torque a bolt to 75 ft/lbs with a tolerance no greater than plus or minus 2 ft/lbs," stored in digital form with search terms like "tools," "wrench," "torque," and so on. This RCD, along with several others, is then sequenced in another digital file that describes the knowledge, skills, and abilities needed to perform a complete task, such as replacing the piston in a particular pump. Stored with a unique identifier and appropriate metadata, this becomes a Reusable Competency Map, or RCM.
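To make that concrete, here is a minimal sketch of how an RCD and an RCM might be represented; the field names, class layout, and identifiers are my own assumptions for illustration, not the schema from the IEEE drafts or from my CAS design.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class RCD:
    """Reusable Competency Description: a single-concept objective with metadata."""
    rcd_id: str                                         # unique identifier
    statement: str                                      # performance-based objective
    keywords: List[str] = field(default_factory=list)   # search metadata


@dataclass
class RCM:
    """Reusable Competency Map: an ordered sequence of RCDs describing a full task."""
    rcm_id: str
    title: str
    rcds: List[RCD] = field(default_factory=list)


# The torque-wrench example from above
torque_rcd = RCD(
    rcd_id="RCD-0147",  # hypothetical identifier
    statement=("Able to use a torque wrench to torque a bolt to 75 ft/lbs "
               "with a tolerance no greater than plus or minus 2 ft/lbs"),
    keywords=["tools", "wrench", "torque"],
)

pump_rcm = RCM(
    rcm_id="RCM-0032",  # hypothetical identifier
    title="Replace the piston in pump model X",
    rcds=[torque_rcd],  # plus the other RCDs needed for the complete task
)
```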
So, the CAS engine starts either with an existing RCD (for which it includes search tools) or by creating one. Once the RCD exists, the instructional designer then creates assessment tools within the system. These may be drawn from a wide variety of question types or may incorporate simulations linked from an outside source. Each is assigned a criticality level. Further, for industrial use, each job classification is assigned a different required competency level. That is, a basic laborer might mainly need to be competent to assist on the job, with knowledge of the safety requirements for the procedure or device. A higher-level employee tasked with routine maintenance and inspections would require a higher passing level, while a mechanic or electrician would need a higher level of competency still. For questions, the system is designed to require a sufficient number of each type to ensure maximum resistance to a chance passing score. For instance, it might require a minimum of 12 true-false questions in a bank to validate an objective, when a single drag-and-drop sequence might be adequate to ensure competency. I say "might" because I intend to use the services of a rather eminent authority in the science of assessment to validate the rules of the test engine for each question type. The engine will also require a sufficient number of iterations of each question to ensure a random test for each user.
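To show what "resistance to a chance passing score" means in numbers, here is a rough binomial calculation; the 80% cut score is an assumption for the example only, and the real rules will come from the assessment authority I mentioned.

```python
from math import ceil, comb


def chance_pass_probability(n_questions: int, pass_fraction: float, p_guess: float) -> float:
    """Probability of reaching the passing score purely by guessing (binomial model)."""
    min_correct = ceil(n_questions * pass_fraction)
    return sum(
        comb(n_questions, k) * p_guess ** k * (1 - p_guess) ** (n_questions - k)
        for k in range(min_correct, n_questions + 1)
    )


# 12 true-false questions, assumed 80% cut score, 50% chance of guessing each item
print(chance_pass_probability(12, 0.80, 0.5))  # about 0.019, i.e. roughly a 2% chance
```

With fewer items or a lower cut score, that guessing risk climbs quickly, which is the reason for requiring per-type minimum counts.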
Once the question bank for an RCD is complete, the same CAS software guides the designer in sequencing RCDs into an RCM to test a complete competency in some procedure or task. This is then saved by the CAS as a Position Critical Competency, or PCC, again with a unique identifier and metadata. In common usage: a test.
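Continuing the sketch above, a PCC might then be little more than an RCM packaged with its question banks and a per-user random draw; again, the structure and the five-items-per-RCD figure are assumptions for illustration, not the CAS design itself.

```python
import random
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PCC:
    """Position Critical Competency: an RCM packaged as a deliverable test."""
    pcc_id: str
    rcm_id: str
    question_banks: Dict[str, List[str]] = field(default_factory=dict)  # RCD id -> item ids
    items_per_rcd: int = 5

    def generate_test(self, user_id: str) -> List[str]:
        """Draw a random selection of items for one user."""
        rng = random.Random(user_id)  # seeded per user so each user gets a different draw
        test: List[str] = []
        for rcd_id, bank in self.question_banks.items():
            test.extend(rng.sample(bank, min(self.items_per_rcd, len(bank))))
        return test
```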
Of course, all of the above is from the perspective of my own profession, that of an industrial trainer. However, these concepts are equally adaptable to academics, and as online education becomes more pervasive, such systems will play a critical role in ensuring high standards and desired outcomes.
I invite any comments or questions about this concept from my learned peers.
Hello Nachi, you can try the Learning Object Review Instrument (LORI). In my experience with this instrument, you go straight to your target. The link is http://thunderbolt.iat.sfu.ca/eLera/Home/Articles/LearningObjectEvaluation.pdf.