It depends on the phenomenon you are going to study and the analysis you want to carry out. Generally speaking, if you consider a continuous phenomenon (for example, income), it is better to include a continuous variable in the survey. The opposite holds when discrete or categorical phenomena are considered. Nevertheless, if you want to study a specific topic that doesn't need a continuous approach, you may opt to discretize the variable into categories.
I agree with Gabrielli. It depends entirely on the nature of the phenomenon you're measuring. In education and psychology, the fields where I work, we often attempt to measure unobservable "constructs," which by their very nature are difficult to measure (e.g., motivation, satisfaction, resilience). When doing so, we often use Likert-type response scales that, although not strictly continuous, are treated as such during the analysis phase.
Psychological variables are usually qualitative variables on ordinal scales, so any classification expressed in numerical terms corresponds more to a measure of position than to a quantitative relation in the strict sense of the word. Thus, scales such as the Likert scale or the semantic differential, among others, take position values in terms of levels of agreement or hierarchies that cannot be expressed continuously.
However, the metric analysis used to determine the quality of the instruments can operate at another level. For example, IRT models for the metric analysis of polytomous items on graded response scales (Likert, for example) tend to apply continuity adjustments to give greater precision to the measurement.
When the option is available (continuous rating scale vs. Likert scale), I believe opting for the continuous measure would be more advantageous for statistical analysis purposes.
I agree with the above. The scale response format should match the shape of the concept being measured: if the shape is groups, use nominal scaling; for ranks, use ordinal scaling; for a continuous frequency distribution, use interval scaling; for counting the quantity of something, use ratio scaling. Generally, the scale should balance the respondent's ability to discriminate against the desire to get as much information about the concept as you can.
I agree with most of the previous responses. I would also add that using existing scales with known reliability and validity is a consideration (this is something that reviewers and editors like to see). At any rate, you need to examine the distribution of scores (primarily for continuous scales) for deviations from normality (skewness and kurtosis, among others). If you are measuring new constructs, you should first test the items for psychometric properties and factor structure, then use confirmatory factor analysis if there are multiple factors. Finally, check for composite reliability, convergent validity, and discriminant validity.
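To make the normality-screening step concrete, here is a minimal Python sketch using simulated scale scores. The data, the variable names, and the |skewness| < 2 / |kurtosis| < 7 cutoffs are illustrative assumptions (those cutoffs are one commonly cited rule of thumb for maximum-likelihood factor analysis, not a universal standard).

```python
# Sketch: screening a "continuous" scale score for deviations from
# normality before factor analysis. Data are simulated for illustration.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
scores = rng.normal(loc=3.5, scale=0.8, size=500)  # simulated scale scores

sk = skew(scores)
ku = kurtosis(scores)  # excess kurtosis: 0 for a normal distribution

# Illustrative rule of thumb: |skewness| < 2 and |kurtosis| < 7 is often
# treated as acceptable for maximum-likelihood estimation.
ok = abs(sk) < 2 and abs(ku) < 7
print(f"skewness={sk:.2f}, kurtosis={ku:.2f}, acceptable={ok}")
```

With real data you would run the same checks on each scale score (and, for new instruments, follow up with exploratory and then confirmatory factor analysis as described above).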
Agree with the above. Of course, sex cannot be continuous (there are really only two sexes), but sexual orientation could be construed as continuous, from highly heterosexual to highly homosexual, so a continuous scale could be appropriate depending on how you understand the construct.
P.S. A significant problem with the conventional Likert scale is that there are only two positive options. If people are likely to be biased toward a positive attitude (for multiple reasons, including social desirability, cultural norms, and responses toward authority), then it makes more sense to offer more options. Rather than increasing options on both sides of zero, I prefer to give more positive options. This is called positive packing, and there is evidence for the validity of this response scale. See:
Brown, G. T. L. (2004). Measuring attitude with positively packed self-report ratings: Comparison of agreement and frequency scales. Psychological Reports, 94, 1015-1024.
Hancock, G. R., & Klockars, A. J. (1991). The effect of scale manipulations on validity: Targetting frequency rating scales for anticipated performance levels. Applied Ergonomics, 22(3), 147-154.
Klockars, A. J., & Yamagishi, M. (1988). The influence of labels and positions in rating scales. Journal of Educational Measurement, 25(2), 85-96.
Lam, T. C. M., & Klockars, A. J. (1982). Anchor point effects on the equivalence of questionnaire items. Journal of Educational Measurement, 19(4), 317-322.
Deneen, C., Brown, G. T. L., Bond, T., & Shroff, R. (2013). Understanding outcome-based education changes in teacher education: Evaluation of a new instrument with preliminary findings. Asia-Pacific Journal of Teacher Education. doi: 10.1080/1359866x.2013.787392
As others have mentioned, the type of scale you use depends on the phenomenon you are trying to measure. However, there are a couple of other things you should keep in mind. First, what kind of analysis are you going to do? Are you trying to do a correlation, a logistic regression, an analysis of variance? Second, consider whether this question is part of a validated index or set of questions. If so, I don't recommend changing a thing: those items have likely been validated to work as a unit.
Thanks for responding; the answers clarify many of the possibilities. My research calls for multivariate analysis, specifically factor analysis and cluster analysis.
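For that combination, one common workflow is to extract factor scores from the item responses and then cluster respondents in the factor-score space. A minimal Python sketch, using simulated Likert-type data and scikit-learn (the data, number of factors, and number of clusters are illustrative assumptions, not recommendations):

```python
# Sketch: factor analysis followed by cluster analysis on the factor
# scores. All data and parameter choices here are illustrative.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Simulate 200 respondents answering 6 Likert-type items (values 1-5)
X = rng.integers(1, 6, size=(200, 6)).astype(float)

# Extract two latent factors from the item responses
fa = FactorAnalysis(n_components=2, random_state=0)
factor_scores = fa.fit_transform(X)

# Cluster respondents in the factor-score space
km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(factor_scores)

print("factor scores shape:", factor_scores.shape)
print("cluster sizes:", np.bincount(labels))
```

In practice the number of factors would be chosen from the data (scree plot, parallel analysis) and the number of clusters from fit indices or silhouette scores, not fixed in advance as here.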