Some scholars use the mean to analyse responses measured on a Likert scale. But a Likert item records distinct categories such as Strongly Agree, Agree, Undecided, Disagree, Strongly Disagree. What is the right statistic for this kind of measure?
The literature offers several statistical approaches for analysing Likert data: the mode, regression, ANOVA, and the t-test. See these works to shed more light; I hope they are useful.
Emma is quite right. Many statistical tools can be used to analyse Likert-scale measures; it generally depends on what is being analysed. For example, the t-test and z-test can be used for a comparative study, a chi-square test can be used to examine relationships between variables, and so on.
There is no such thing as a sum or a mean of "Agree" and "Undecided". The only operation that can be done directly with such data is to count the occurrences and analyse the frequency distribution.
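For instance, a minimal sketch in Python (the response labels and counts here are made up purely for illustration):

```python
from collections import Counter

# Hypothetical Likert responses for a single questionnaire item
responses = ["Agree", "Agree", "Undecided", "Strongly Agree",
             "Disagree", "Agree", "Strongly Disagree", "Undecided"]

# Count how often each category occurs
counts = Counter(responses)
n = len(responses)

# Print the frequency distribution: count and relative frequency per category
for category in ["Strongly Agree", "Agree", "Undecided",
                 "Disagree", "Strongly Disagree"]:
    freq = counts.get(category, 0)
    print(f"{category:<18} {freq:>3}  ({freq / n:.1%})")
```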
What you are describing requires numeric values. These are obtained by constructing a random variable that assigns a numeric value to each Likert value:
> Likert value -----> numeric value
BUT BUT BUT: the mapping from Likert categories to numbers is arbitrary (which does not mean nonsensical). It is up to the researcher to find a sensible mapping rule that has some kind of "real-world meaning" and will be practically relevant (useful). THEN you have a random variable, and you may ask what probability model can sensibly be assigned to it. IF that model is at least roughly symmetric (which is unlikely, because values tend to pile up near the boundaries) OR you have many values to average, then an analysis using standard statistical techniques may be useful and appropriate. But the interpretation of the results still depends on the mapping: if the mapping is nonsense, the oh-so-significant and nice-looking results will also be nonsense, or at least useless (or, worse, prone to misinterpretation).
Most people simply take the ranks as the mapped values, but it is questionable whether this mapping is generally useful. So: if you can provide a plausible mapping rule, and if the normal probability model is plausible for the mapped values, then a standard analysis is OK and the results will be interpretable in a meaningful way (and thus useful). Otherwise it is just digging in the dirt, producing essentially meaningless numbers that can be useful only by accident.
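As an illustration of such a mapping, here is a minimal Python sketch. The 1–5 rank coding used below is only the common default, not a recommendation; the responses are invented for the example:

```python
# A hypothetical mapping rule: the usual rank coding 1..5.
# Whether this coding has any real-world meaning is exactly the point at issue.
mapping = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Undecided": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

responses = ["Agree", "Agree", "Undecided", "Strongly Agree", "Disagree"]

# The "random variable": each Likert label is replaced by its mapped number
scores = [mapping[r] for r in responses]

# Only after this (arbitrary) step do a mean and standard deviation exist at all
mean = sum(scores) / len(scores)
var = sum((x - mean) ** 2 for x in scores) / (len(scores) - 1)
print(f"mapped scores: {scores}")
print(f"mean = {mean:.2f}, sd = {var ** 0.5:.2f}")
```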
Could you please give an example of what you mean by a mapping rule, so that we can understand how to construct such a random variable?
Thank you, Wilhelm, for your detailed answer, and Alberto for prodding for further elaboration. It is exactly this mapping that my question is about. I have noticed that people take these assigned numeric values, multiply each by the frequency of the corresponding response (e.g. the number of respondents choosing Agree), and then compute a mean for each questionnaire item relative to the total number of cases (the sample). Of course, a decision rule has usually been predetermined, e.g. accept if the mean is 3.0 or above and reject if it is lower. My worry is that these Agree-Disagree options are measuring different attitudes or responses; summing them up on the basis of the mapping and an arbitrary decision rule is incomprehensible to me. Again, what is the rationale for the arbitrary decision rule, or even for assigning the numeric values in the first place? You have said, of course, that these are arbitrary.
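To make the practice being questioned concrete, here is a sketch of that calculation. The frequencies, the 1–5 coding, and the 3.0 cut-off are all made up for illustration:

```python
# Hypothetical frequencies of responses to one item,
# keyed by the assigned numeric value (5 = Strongly Agree ... 1 = Strongly Disagree)
freq = {5: 12, 4: 20, 3: 8, 2: 6, 1: 4}

n = sum(freq.values())

# Item mean: each assigned value multiplied by its frequency, divided by n
item_mean = sum(value * count for value, count in freq.items()) / n

# Arbitrary predetermined decision rule: accept if the mean is 3.0 or above
decision = "accept" if item_mean >= 3.0 else "reject"
print(f"n = {n}, item mean = {item_mean:.2f} -> {decision}")
```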
It is better to use the median for Likert-scale data, because these are ordinal (not interval) data; and for similar reasons, non-parametric statistics should be used for the analysis . . .
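For example, a sketch of the non-parametric route, assuming ordinal 1–5 codes for two hypothetical groups and using the Mann-Whitney U test from scipy:

```python
from statistics import median
from scipy.stats import mannwhitneyu

# Hypothetical ordinal codes (1 = Strongly Disagree ... 5 = Strongly Agree)
group_a = [4, 5, 3, 4, 4, 2, 5, 3]
group_b = [2, 3, 3, 1, 2, 4, 2, 3]

# The median respects the ordinal nature of the data
print("median A:", median(group_a), " median B:", median(group_b))

# Mann-Whitney U: a non-parametric comparison of the two groups
stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```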