What is this table used for? The table entries in the first column ("Weighted Mean") look quite arbitrary. Moreover, a weighted mean, as the name implies, is a single number, not an interval (a range of values). The table seems to suggest that a response of, e.g., "Uncertain" may be replaced by a number somewhere between 2.60 and 3.39. But how meaningful would this be? Thus, even if someone can tell you where this table comes from, it will be of no value to you for your own research. Besides, there are much better and safer ways of analyzing data measured on a Likert scale (i.e., ordered categories). See the numerous discussions here on RG as well as my own RG project "Educational Assessment Engineering".
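For what it's worth, interval tables of this kind are usually produced by a purely mechanical recipe: divide the 1-5 range into five equal bins of width (5 - 1) / 5 = 0.8, then shave 0.01 off the upper bounds so the bins don't overlap (hence 2.60-3.39 for the middle category). A minimal sketch of that recipe, as an illustration of where the numbers come from, not an endorsement of the practice:

```python
# Sketch: the mechanical recipe behind the common (but questionable)
# "weighted mean" interval table for a k-point Likert scale.
def likert_intervals(points=5):
    # Divide the range [1, points] into `points` equal bins.
    width = (points - 1) / points  # 0.8 for a 5-point scale
    intervals = []
    lo = 1.0
    for _ in range(points):
        intervals.append((round(lo, 2), round(lo + width, 2)))
        lo += width
    return intervals

print(likert_intervals())
# -> [(1.0, 1.8), (1.8, 2.6), (2.6, 3.4), (3.4, 4.2), (4.2, 5.0)]
```

Subtracting 0.01 from each upper bound turns the third bin into the table's 2.60-3.39. The recipe silently treats the ordered categories as equally spaced interval-level numbers, which is exactly the assumption being questioned here.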
Do you mean: "Arnold, W. E., McCroskey, J. C., & Prichard, S. V. (1967). The Likert-type scale"? I couldn't find an author named "Best" who published a paper about Likert scales in 1967...
Is the opinion 'uncertain' (2.60 -- 3.39) on the Likert scale relevant in the calculation of the data values? I ask because in social research 'uncertain' means undecided: no decision has been taken. Please comment.
Opara Oguchialu : It seems you are mixing up measurement and decision-making. Measurement involves quantity, at least at the ordinal level ("there is more or less of a certain quantity"), often at the interval level ("the difference between a and b on a given quantity can be calculated and is relevant"). Thus, measurement aims at locating given objects on a certain scale, as precisely as possible given our measurement techniques and tools. Decision-making on the basis of measurements is quite different. First, it assumes that measurement of the relevant objects on certain quantities is possible at all. Second, its goal is to test whether a given object "scores" higher or lower than a given test criterion (a given value on the scale, e.g., the score of an agreed-upon "standard" object). Third, it takes appropriate actions if the score is higher, and other actions, or no action, if not.
Thus, when measuring objects on a Likert scale with appropriate techniques and tools, one should accept *all* measurements for further calculations, unless there is serious reason to believe that there have been faults or failures in the measurement procedure as such. In the latter case, one should repeat the measurement, or, if that is not possible, exclude the object completely from further analysis. However, that has nothing to do with the observed measurement, i.e., the score on the scale. In principle, any object whose measurement appears to be mistaken should then be discarded, whatever its score.
Some researchers have designed their survey such that respondents may opt out while answering the items on the test. Of course, if an item is *not* answered by a given respondent, i.e., left blank or marked with a cross in an additional check box, that item will be discarded for this person. It falls under the label "missing data": no score for that item for that person. So, yes, it will be ignored completely in all further calculations (but it should be reported as such!). Again, this has nothing to do with decision-making or with dropping a certain observed value (score) on the scale. Of course, if there happen to be a lot of missing data in your survey, that may be a good reason to reflect upon your methodology: perhaps there is something wrong with the wording of the items/questions, or with the response format not allowing the kind of answer that respondents may expect to find. If that's the case, there is also reason to doubt the correctness of the answers given by all other respondents, even those who answered all items/questions (they may be "polite", or may not want to disappoint the researcher).
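To make the point concrete, the handling described above can be sketched in a few lines: missing answers are excluded item-wise rather than replaced by some score, and the number of missings is reported alongside the result. The data here are made up for illustration:

```python
# Sketch: item-wise handling of missing Likert responses.
# A blank / opted-out answer is marked None, excluded from the
# calculation, and reported separately -- never replaced by a score.
from statistics import mean

# Hypothetical responses of seven respondents to one item (1-5 scale).
responses = [4, 2, None, 5, None, 3, 4]

answered = [r for r in responses if r is not None]
n_missing = responses.count(None)

print(f"n = {len(answered)}, missing = {n_missing}, mean = {mean(answered):.2f}")
# -> n = 5, missing = 2, mean = 3.60
```

A high missing count relative to n is the methodological warning sign discussed above; it should prompt a look at item wording and response format, not imputation of an "uncertain" value.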