The operative concept here is "measure". If you accept the notion that a series or panel of Yes/No questions is, in aggregate, representative of specific knowledge, then a weighted summed score distribution would, by definition, represent the presence (or absence) of that knowledge. The higher the total summed score, the greater the "level" of knowledge "measured". However, this strategy for measuring "level of specific knowledge" is, in and of itself, limited, even if quite useful. Perhaps that is a limitation of that particular design (dichotomous Yes/No-only questions) and of psychometric instruments in general.
It is interesting to note, however, that multiple-choice (radio-button) questions are also "dichotomous" in that the response to each item is ultimately either True (Yes) or False (No). Multiple-choice questions can be worded to require much more cognitive input to answer correctly than a simple "knee-jerk" Yes/No response provides. When correctly combining multiple options is required to arrive at the "best" answer, the aggregate panel of questions (in my opinion) becomes a much more accurate representation of knowledge level, provided a weighted summed score with a true calibration of zero is used.
The same concept applies when a psychometric instrument uses Likert scales, with a gradient range of responses, to measure human affect (opinion, frequency, agreement, etc.). As long as the lowest point in the gradient is calibrated to a true zero ("0") as the minimum weight in the scale, the resulting summed score for the instrument is a valid method to "quantify" knowledge level. Take a good look at the many examples of professional nursing licensure practice tests: a mixture of Yes/No and multiple-choice items. Why? Because a "passing-level score" is well documented as part of the design of the test instrument (a psychometric tool).
Is there a difference in the nursing knowledge of a person who takes such a test and scores 95% or more of the possible maximum score and a person who scores below 50%? Consider comparing 1,000 nursing students from one nursing college who collectively score at 95% or above of the maximum possible score against 1,000 students from another nursing school (same year and a similar demographic class) who score only around the 60% level. Does the higher aggregate score indicate that the first school is doing a better job of nursing education?
Many statistical methods are available to make these types of comparison measurements, as well as to test the hypothesis that the groups differ by more than pure chance. But those comparisons require a valid minimum possible score weight of true zero (0) in order to avoid artifactual inflation of the distribution centroids being compared by the hypothesis tests (parametric as well as non-parametric).
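To make the comparison concrete, here is a minimal sketch of one such hypothesis test: a two-sample permutation test on the difference of mean summed scores between two cohorts. The cohort data are entirely hypothetical (invented for illustration), and a permutation test is just one of the many parametric and non-parametric options mentioned above.

```python
import random
from statistics import mean

def permutation_test(a, b, n_perm=10_000, seed=42):
    """Two-sample permutation test on the absolute difference of means.

    Repeatedly shuffles the pooled scores into two groups of the original
    sizes and counts how often the shuffled difference is at least as
    large as the observed one. Returns an approximate two-sided p-value.
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    n_a = len(a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical summed scores (percent of maximum) for two small cohorts:
school_a = [96, 97, 95, 98, 96, 99, 97, 95, 96, 98]
school_b = [61, 58, 64, 60, 59, 62, 57, 63, 60, 61]
p = permutation_test(school_a, school_b)
```

A small p-value here would support the claim that the two cohorts' score distributions differ by more than chance; note that the test operates on the summed scores themselves, which is why a properly calibrated zero minimum matters.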
That said, measuring is one thing; assessment is quite another. Do you want to measure, or do you wish to assess, nursing knowledge? This is one of the reasons that essay (free-text) questions are also included in some psychometric instruments, along with personal interviews by multiple interviewers and triangulation of findings across those interviews. Assessment of this kind is ultimately a value judgement, and as such cannot be tested with mathematical hypothesis-test statistics. Both quantitative and qualitative methods have been deemed valid ways to "measure" or "codify" attributes of populations.
I hope this helps provide some insight toward improving how we measure and assess nursing knowledge. Perhaps a mixed-methods approach will provide an ideal solution to this complex, complicated, and ongoing issue we face worldwide in the profession of nursing.
I agree with Dr. Richard. What I would like to add is that dichotomous questions carry a high probability of being answered correctly by pure chance. To compensate for this you need a minimum number of questions.
That might be up to 60 to 75 questions.
Some psychometric approaches suggest a correction for guessing as well.
Hi, Mr. Jabir A. It would be better to use multiple-choice questions for assessing knowledge and an observational checklist for assessing practice. All the best for your study.
I think it is better to use either a 5- or 7-point Likert scale, then a mean benchmark of 2.5 (if you use 5 points) or 3.5 (if you use 7 points) for decision-making. That implies a mean score equal to or above the benchmark signifies high knowledge, while one below it signifies low knowledge.
The same can be used for practice, but if you have enough time, you can use an observation method for practice.
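The benchmark rule above can be sketched in a few lines. The responses below are hypothetical, and the 2.5 cut-off for a 5-point scale is simply the benchmark suggested in this post, not a universal standard:

```python
from statistics import mean

def knowledge_level(item_scores, benchmark):
    """Classify a respondent as 'high' or 'low' knowledge by comparing
    the mean of their item scores against a fixed benchmark value."""
    return "high" if mean(item_scores) >= benchmark else "low"

# Hypothetical responses on a 5-point scale, benchmark 2.5 as suggested:
responses = [4, 3, 5, 2, 4]               # mean = 3.6
level = knowledge_level(responses, 2.5)   # -> "high"
```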
In my opinion, you can assess the knowledge level of the participants by using Yes/No questions. But one thing you should remember is that the questions must cover all of the levels of knowledge.
Yes. Please see an example of a survey tool that I use to educate nursing students about the development of questionnaires. It uses the summed-score method to measure a level of latent (otherwise hidden) knowledge, in this case about health-care workplace and patient-safety hazards that are undisputed and well known. Each element of the tool measures "a sentiment" of agreement or disagreement with the logic about the specific hazard: disagreement (a wrong answer) is given a low weight, agreement (a correct answer) a high weight. The five-point weighting scale in this teaching and training example ranges from Strongly Disagree = 0, Disagree = 1, Neutral = 2, Agree = 3, to Strongly Agree = 4. There are 10 items in the tool, rendering a possible total score per survey participant that ranges from a minimum of 0 to a maximum of 40, with a score of 20 being the midpoint of the possible range. I think you can see for yourself that any experienced (professionally informed) registered nurse in the world would respond "Strongly Agree" to every one of these statements. Any other response would indicate that the RN (or person taking the survey) lacks critical latent knowledge about the extreme hazard risks posed by each item. "Strongly Agree" is the only correct answer and can be regarded as "Yes", whereas any other response can be regarded as "No". This is an example of how a range of scores can be configured as a dichotomous binary response (Yes/No) as a method to measure knowledge.
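The scoring and dichotomization described above can be sketched as follows. The weights and item count come straight from the description (10 items, 0–4 each, total 0–40); the example respondent is hypothetical:

```python
# Weights exactly as described for the teaching tool:
WEIGHTS = {"Strongly Disagree": 0, "Disagree": 1, "Neutral": 2,
           "Agree": 3, "Strongly Agree": 4}

def summed_score(responses):
    """Sum the item weights; with 10 items the total ranges 0 to 40."""
    return sum(WEIGHTS[r] for r in responses)

def dichotomize(responses):
    """Collapse each Likert item to Yes/No: only the single correct
    response ('Strongly Agree') counts as Yes."""
    return ["Yes" if r == "Strongly Agree" else "No" for r in responses]

# A hypothetical respondent who strongly agrees with 8 of the 10 items:
answers = ["Strongly Agree"] * 8 + ["Agree", "Neutral"]
score = summed_score(answers)        # 8*4 + 3 + 2 = 37
yes_count = dichotomize(answers).count("Yes")  # 8
```

Note how the two views coexist: the summed score (37 of 40) preserves the gradient, while the dichotomized view (8 of 10 "Yes") recovers the binary knowledge indicator described in the post.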
The utility of such an approach to KR (knowledge representation) emerges when participant attributes are collected during the survey process (age, gender, race/ethnicity, educational attainment, years in nursing, years in a nursing specialty, and so on). These demographics and attributes can then be used as independent predictor variables of the dependent target variable "summed survey score", using a wide variety of available statistical modeling methods (binary logistic regression, CHAID (chi-square automatic interaction detection), Pearson's chi-square and Fisher's exact tests, categorical regression, and so on).
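As a minimal sketch of one of the listed methods, here is a Pearson chi-square test of association on a 2x2 table, e.g. dichotomized score (high/low) against a binary demographic attribute. The table counts are invented for illustration; for 1 degree of freedom the p-value can be computed from the complementary error function:

```python
from math import erfc, sqrt

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 contingency table [[a, b], [c, d]].

    Uses the standard shortcut statistic
        n * (a*d - b*c)^2 / ((a+b) * (c+d) * (a+c) * (b+d))
    and converts it to a p-value via the chi-square survival
    function for 1 degree of freedom: P = erfc(sqrt(stat / 2)).
    """
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(stat / 2))
    return stat, p

# Hypothetical counts: rows = high/low summed score,
# columns = two levels of some demographic attribute.
stat, p = chi_square_2x2(40, 10, 20, 30)
```

A statistic above the 1-df critical value of 3.84 (p < 0.05) would indicate an association between the attribute and the dichotomized score; larger tables or continuous predictors would call for the regression methods listed above instead.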
So, the short answer (in my opinion) is a resounding "Yes". It all depends on how you set up the experimental model.