That's a difficult question. Most important would be to evaluate whether the original set of questions truly implies a common factor structure. Most questionnaires are developed using principal component analysis, which simply computes a composite that maximizes explained variance. A factor model, in contrast, implies that the factor represents an existing entity which is the cause of the item responses. The essential testable implication of the factor model is conditional independence of the items given the factor (local independence). Most scales violate these assumptions; hence you do not know whether the reason is only some slight and unimportant violation or a fundamental problem with the structure (which is a problem for the scale's validity).
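The local independence idea above can be illustrated with a small simulation (a minimal sketch, not from the thread; assumes numpy, and all variable names are illustrative): if a single factor really causes the item responses, residualizing the items on the factor should leave them (near) uncorrelated.

```python
# Illustrative sketch: local independence in a simulated one-factor model.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
eta = rng.normal(size=n)                      # latent factor (unobserved in practice)
loadings = np.array([0.8, 0.7, 0.6, 0.5])
items = eta[:, None] * loadings + rng.normal(size=(n, 4)) * 0.5

def residual(y, x):
    """Residual of y after a simple regression on x."""
    beta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
    return y - beta * x

# Under local independence the residuals of two items, given the factor,
# should correlate near zero; a clearly nonzero value signals a violation.
r12 = np.corrcoef(residual(items[:, 0], eta),
                  residual(items[:, 1], eta))[0, 1]
print(r12)  # close to 0 when the one-factor model holds
```

In real data the factor is not observed, so this check is done via the residual covariances of a fitted factor model rather than against the factor scores directly.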
IF the model holds, it has some advantages for cross-cultural research, because you can test for "measurement invariance". I'll post some papers that may interest you.
The other possibility is that the set of items simply forms a "collective set", meaning that each item (or some items) measures different things, but the "construct" is simply the set of these things (like an index or umbrella term). Actually, I do not know how to evaluate the cross-cultural equivalence of such a composite. I could imagine placing the set of items in a network together with validation criteria; the problem then is how to evaluate the overall match. A topic for future research :)
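To give a feel for the measurement invariance idea mentioned above: metric invariance implies equal item loadings across groups. A proper test uses multi-group confirmatory factor analysis; the following is only a crude numpy illustration (simulated data, illustrative names) using first-principal-component loadings in two groups generated with identical true loadings.

```python
# Crude illustration of (metric) invariance: equal loadings across groups.
# A real test would use multi-group CFA with equality constraints.
import numpy as np

rng = np.random.default_rng(1)
true_loadings = np.array([0.8, 0.7, 0.6])

def simulate(n):
    """Simulate one-factor data with the same loadings in every call."""
    eta = rng.normal(size=n)
    return eta[:, None] * true_loadings + rng.normal(size=(n, 3)) * 0.5

def first_pc_loadings(X):
    """Loadings of items on the first principal component of R."""
    R = np.corrcoef(X, rowvar=False)
    w, v = np.linalg.eigh(R)              # ascending eigenvalues
    pc = v[:, -1] * np.sqrt(w[-1])
    return pc * np.sign(pc.sum())         # fix arbitrary sign

l_a = first_pc_loadings(simulate(4000))   # "culture A"
l_b = first_pc_loadings(simulate(4000))   # "culture B"
print(np.abs(l_a - l_b))                  # small differences = invariant loadings
```

When the groups' estimated loadings diverge clearly, comparing factor means or regression paths across cultures becomes hard to justify.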
Schaffer, B. S., & Riordan, C. M. (2003). A review of cross-cultural methodologies for organizational research: A best-practices approach. Organizational Research Methods, 6, 169-215. doi:10.1177/1094428103251542
Taras, V., Rowney, J., & Steel, P. (2009). Half a century of measuring culture: Review of approaches, challenges, and limitations based on the analysis of 121 instruments for quantifying culture. Journal of International Management, 15(4), 357-373. doi:10.1016/j.intman.2008.08.005
Steenkamp, J.-B. E. M., & Baumgartner, H. (1998). Assessing measurement invariance in cross-national consumer research. Journal of Consumer Research, 25, 78-90.
Vandenberg, R. J. (2002). Toward a further understanding of and improvement in measurement invariance methods and procedures. Organizational Research Methods, 5(2), 139-158.
Edwards, J. R. (2001). Multidimensional constructs in organizational behavior research: Towards an integrative and analytical framework. Organizational Research Methods, 4(2), 144-192.
Edwards, J. R. (2011). The fallacy of formative measurement. Organizational Research Methods, 14(2), 370-388.
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between constructs and measures. Psychological Methods, 5(2), 155-174.
And on the difference between factor models and composites:
Podsakoff, P. M., MacKenzie, S. B., Podsakoff, N. P., & Lee, J.-Y. (2003). The mismeasure of man(agement) and its implications for leadership research. The Leadership Quarterly, 14, 615-656.
Bandalos, D. L., & Boehm-Kaufman, M. R. (2009). Four common misconceptions in exploratory factor analysis. In C. E. Lance & R. J. Vandenberg (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences (pp. 61-87). New York: Routledge.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272-299. doi:10.1037/1082-989X.4.3.272
Holger Steinmetz, thank you for your response. I am trying to draft the proposal for my dissertation. I plan to use Rosenberg's Self-Esteem Scale, Spreitzer's Psychological Empowerment Scale, Konczal et al.'s Leader Empowering Behavior Questionnaire, and Timmins and McCabe's Assertive Workplace Behavior Survey Questionnaire among nurses in the Philippines. I have read in some cross-cultural validation models about the need to run a factor analysis; however, some scholars do not mention it. I plan to compute the CVI as a validity measure and run Cronbach's alpha for reliability, but I am not sure whether these are enough.
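For readers unfamiliar with the two indices mentioned in the question, here is a minimal sketch of how each is computed (illustrative, simulated numbers; assumes numpy): the item-level CVI is the proportion of expert raters judging an item relevant, and Cronbach's alpha is computed from item variances and the variance of the sum score.

```python
# Illustrative sketch of I-CVI and Cronbach's alpha (made-up data).
import numpy as np

# Ratings of one item by 5 experts on a 1-4 relevance scale;
# "relevant" is conventionally a rating of 3 or 4.
expert_ratings = np.array([4, 3, 4, 4, 2])
i_cvi = np.mean(expert_ratings >= 3)
print(i_cvi)  # 0.8

def cronbach_alpha(items):
    """items: n_respondents x n_items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated 4-item scale for 200 respondents driven by one true score.
rng = np.random.default_rng(2)
true_score = rng.normal(size=200)
scores = true_score[:, None] + rng.normal(size=(200, 4)) * 0.7
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

Note that neither index addresses the factor-structure question discussed above: a high alpha does not show that the items follow a common factor model.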
Whether you really have to compare your measurement model with the original versions and original samples depends simply on the opinion of the reviewer.
First, be sure that there are no existing translations of these scales. It would be a waste of time if you are not the first to analyze these constructs.
If you are indeed the first, I would start by closely inspecting the original questionnaires and selecting appropriate indicators. As I guess that these questionnaires are expected to follow a common factor model, the indicators are interchangeable (see Bollen & Lennox, 1991); hence, you can select indicators that perfectly match the supposed latent variable.
Then translate the items and apply cognitive probing (Willis, 2005) to check whether the meaning of the translated indicators matches the latent variable(s). If problems occur, change the translation and test it again; this is an iterative loop until you get a well-translated measure. It may be that this will not satisfy reviewers, but you cannot avoid rejections anyway :) If you learn that your community has expectations about formal invariance tests, you should do them (and by the way: I would also agree with the demand that you formally compare your measure with the original versions). You wouldn't necessarily collect English data on your own but rather contact the authors and ask whether they will forward their raw data to you.
HTH
Holger
Bollen, K. A., & Lennox, R. (1991). Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin, 110(2), 305-314. doi:10.1037/0033-2909.110.2.305
Willis, G. B. (2005). Cognitive interviewing: A tool for Improving questionnaire design. Thousand Oaks, CA: Sage.
Voss, K. E., Stem Jr., D. E., Johnson, L. W., & Arce, C. (1996). An exploration of the comparability of semantic adjectives in three languages: A magnitude estimation approach. International Marketing Review, 13(5), 44-58.
Holger Steinmetz Thank you for those responses. I don't intend to translate the scale as nurses here in the Philippines are good English speakers. My concern is the validity of using the scale among nurses in a different culture or country hence I feel the need for a pilot validation study.
Ah, I see. That makes it a lot easier. As I said: you could contact researchers who have conducted research in other cultures and ask for their data sets. It would be a nice cooperation project, and if these researchers become co-authors, nobody will disagree and you get people on board with publication experience :)
I actually have not conducted collaborative research with scholars in other countries, but I am very interested if given the chance. I hope this platform (RG) can make that a reality. Thank you very much for your responses; I really appreciate it.
Ideally, you should, as factor analysis is a more comprehensive analysis than, for example, test-retest reliability. If factor analysis was not carried out for the original questionnaire, I suppose it is all right not to do it.
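As a first, rough look at whether a factor structure is even plausible, one can inspect the eigenvalues of the item correlation matrix (a minimal sketch on simulated data; the eigenvalue-greater-than-one rule used here is only a heuristic, and parallel analysis is generally preferred):

```python
# Quick heuristic check of dimensionality via eigenvalues of R
# (simulated one-factor data; Kaiser's rule is a rough guide only).
import numpy as np

rng = np.random.default_rng(3)
n = 1000
eta = rng.normal(size=n)                       # one common factor
items = (eta[:, None] * np.array([0.8, 0.7, 0.7, 0.6])
         + rng.normal(size=(n, 4)) * 0.6)

R = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending
n_factors = int((eigvals > 1.0).sum())
print(n_factors)  # 1 for this one-factor simulation
```

A dominant first eigenvalue is consistent with, but does not prove, a common factor model; the local-independence checks discussed earlier in the thread remain the sharper test.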