The questionnaire has a Cronbach's alpha of .838. However, I am concerned that the questionnaire may not measure the intended construct correctly. So, how can I validate a questionnaire's construct using SPSS? Thanks in advance.
A Cronbach's alpha of .838 is a very good result. So could you please clarify what you mean by "the questionnaire is not having a correct construct"? Is it a questionnaire you have just designed, or a well-established, psychometrically sound instrument?
Cronbach's alpha measures (roughly) how well all the items stick together. Validity is more complex, and there are different notions of what it means. A good discussion is in the linked paper.
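For intuition about what alpha actually summarizes, it can be computed by hand from the raw item scores. A minimal sketch in Python (the function name and the Likert responses below are made up for illustration, not from any post in this thread):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses: 5 respondents x 4 items
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 3, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 3))  # → 0.958
```

Note that the items here were deliberately made to covary strongly, which is exactly what alpha rewards; it says nothing about whether they measure the *right* construct.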
What Daniel said is worth noting. In addition, I think you are asking about testing convergent and discriminant validity(?). It looks like you have a satisfactory level of reliability (alpha value); however, without validation, high reliability is of little value.
The first link briefly explains the meaning of construct validity. The second link includes a video on how to test these two types of validity in SPSS. On another note, it is extremely important that content validity be assessed through literature and/or expert opinion before working with data. Hope they help.
It is a driver behaviour questionnaire (DBQ) modified from the Manchester Driver Behaviour Questionnaire to suit the Malaysian driving culture. Therefore, I am uncertain whether it is correctly constructed, and, as Khandoker mentioned, without validation, high reliability is of little value.
Besides literature and/or expert opinion, are there other ways to validate the construct of a questionnaire, given that mine is a modified one?
If the measure is "a theoretical infant", use a principal components analysis (PCA) or an exploratory factor analysis (EFA; see for example the following YouTube video: https://www.youtube.com/watch?v=-6tw7ebr860). If the measure is a grown-up adult, use a confirmatory factor analysis (CFA; to run it with SPSS you have to buy a separate module called Amos).
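As a rough illustration of the exploratory side outside SPSS, the sketch below extracts the eigenvalues of the item correlation matrix and counts how many exceed 1 (the Kaiser criterion often used to decide how many factors to retain). The simulated one-factor data and function name are my own assumptions for the example:

```python
import numpy as np

def pca_eigenvalues(items: np.ndarray) -> np.ndarray:
    """Eigenvalues of the item correlation matrix, sorted descending.
    Under the Kaiser criterion, components with eigenvalue > 1 are retained."""
    corr = np.corrcoef(items, rowvar=False)
    return np.linalg.eigvalsh(corr)[::-1]

# Simulate four items all loading on a single underlying factor
rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))
items = factor + 0.5 * rng.normal(size=(200, 4))

ev = pca_eigenvalues(items)
print((ev > 1).sum())  # components retained; one factor is expected here
```

With strongly modified items, a multi-factor eigenvalue pattern (several values above 1) would be a hint that the modified scale no longer behaves like the original one-construct DBQ subscale.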
Raymond, high reliability does not necessarily reflect validity; it only shows that the measurement items correlate well among themselves. It is also possible that the items (or indicators) miss a large portion of the construct yet show high reliability, because they tend to measure similar low-impact aspects of it. The first suggestion, and I think you have already done this, is to rely on an established and previously validated scale. Minor modifications can be made, and it can reasonably be expected that validity will not be compromised by modifications whose purpose is to maintain context relevance. In that case, since the scale can be assumed to be a grown-up adult (as Niklas Hansen explained nicely in his foregoing post), you can do a CFA and see how it goes. Besides SPSS Amos, you can use PLS-SEM (e.g., SmartPLS) to do a CFA as an alternative. However, if you think you have modified the scale in a major way, then EFA would be recommended.
Construct validity is more conceptual than statistical in nature. Even exploratory factor analysis, which looks at relationships among items that supposedly measure the same concept or construct (and, by the same token, poorer relationships, or lower correlations, among items that measure different concepts/constructs), depends on a contextual judgment as to the commonality of the underlying construct. You can address that through a thorough literature review of your topic, and if there are already tools out there that measure concepts related to your topic, you can collect pilot data and run a convergent/discriminant analysis of responses on those tools compared to responses to your questionnaire.
You must use a confirmatory factor analysis (CFA). The objective of CFA is to test whether the data fit a hypothesized measurement model (that is to say, a construct). In standard SPSS, there is no CFA. Stata 11 and 13 have CFA...
1. If you have a theory interpreting the construct of the scale, you must use confirmatory factor analysis.
2. If you have a poorly fitted model, you must test competing models with CFA; the best-fitting one is your construct.
3. If you don't have a theory interpreting the construct of the scale, use exploratory factor analysis, then confirm it using CFA.
4. If CFA and EFA fail to interpret your construct and you still have a poorly fitted model, you must use ESEM (exploratory structural equation modeling) in Mplus.
You can check inter-item reliability: divide the whole questionnaire into subsets for each dimension, then check the reliability of each subset with Cronbach's alpha. If your scores are high, it suggests the items within each dimension are internally consistent (though, as noted above, high reliability alone does not establish construct validity).
Prior to conducting any formal statistical analyses, preliminary steps were taken to ensure the data were of sufficient quality to warrant further analysis (Sekaran, 2003). The amount of missing data was negligible and not substantial enough to warrant any action.
Secondly, a construct reliability test using Cronbach's alpha was conducted in SPSS version 20. The purpose of this test is to assess the internal consistency reliability of the instrument used. The Cronbach's coefficient alpha values for the two variables are above 0.7, which is considered acceptable; hence the instrument is appropriate for use in this study (Field, 2009; Hair, 2006; Nunnally, 1978; Sekaran, 2003; Smith, 2011).
Thirdly, tests of normality were performed. Although the Kolmogorov-Smirnov statistics indicated that all variables are significant, and hence non-normal, tests of normality are sensitive and "often signal departures from normality that do not really matter" (Tabachnick & Fidell, 2007, p. 46). To verify that the data are approximately normal, the study identified outliers using boxplots. Skewness and kurtosis statistics showed that the z-scores of the variables were within +/-1.96, suggesting an approximately normal distribution.
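The skewness/kurtosis z-score check can be reproduced outside SPSS. A small sketch (my own, using the common large-sample standard-error approximations sqrt(6/n) and sqrt(24/n); SPSS reports slightly different exact standard errors for small n):

```python
import numpy as np

def skew_kurtosis_z(x: np.ndarray):
    """Approximate z-scores for sample skewness and excess kurtosis.
    z-scores within +/-1.96 are commonly read as consistent with normality."""
    n = len(x)
    m = x - x.mean()
    skew = (m**3).mean() / x.std() ** 3        # population (g1) skewness
    kurt = (m**4).mean() / x.std() ** 4 - 3    # excess kurtosis
    return skew / np.sqrt(6 / n), kurt / np.sqrt(24 / n)

z_skew, z_kurt = skew_kurtosis_z(np.array([1.0, 2.0, 3.0, 4.0, 5.0]))
print(z_skew, z_kurt)  # a symmetric sample gives z_skew = 0 exactly
```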
Next, evaluate convergent and discriminant validity. I attached a guideline for this purpose.
Assessing validity is quite different from assessing reliability, which is what alpha represents. In line with the concept of "construct validity", you want to examine whether your measure performs as you expect it to (hence Orly's statement that validity is conceptual). Does your measure correlate with things it is supposed to be similar to (i.e., convergent validity)? Is it distinct from other concepts (discriminant validity)? Etc.
You could use confirmatory factor analysis to assess these relationships, but with such a high alpha, CFA would mostly be a refinement on simply summing your items and doing your testing with regression. (In general, CFA will only make a difference if there is substantial variance in the loadings on your items, and in this case, the high value of your alpha clearly suggests that "unit weighting" is good enough.)
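The unit-weighting idea above, together with a convergent/discriminant check, can be sketched very simply: sum the items, then correlate the sum with a measure it *should* relate to and one it should not. All data below are simulated assumptions, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
trait = rng.normal(size=n)                            # underlying construct
items = trait.reshape(-1, 1) + 0.6 * rng.normal(size=(n, 5))
total = items.sum(axis=1)                             # unit-weighted sum score

related = trait + 0.8 * rng.normal(size=n)            # criterion that should converge
unrelated = rng.normal(size=n)                        # conceptually distinct measure

r_conv = np.corrcoef(total, related)[0, 1]            # expect this to be high
r_disc = np.corrcoef(total, unrelated)[0, 1]          # expect this to be near zero
print(round(r_conv, 2), round(r_disc, 2))
```

A high convergent correlation alongside a near-zero discriminant correlation is the pattern one hopes to see; with real questionnaire data the "related" and "unrelated" criteria would be established external measures collected in a pilot study.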