I have a SEM model that I'm having difficulty interpreting. It has one direct effect with a beta/standardised regression weight that appears to be high (0.80) and statistically significant.
Before you can interpret the estimates and their standard errors (and hence the t statistics), the model has to be correctly specified. That is tested by the chi-square test. I know that there is a practice of using fit indices, but optimally you would use the test as a rigid and strong falsification tool. Without knowing the fit, I would regard the specification of the variables as common factors of various indicators as doubtful. This concern is most pressing for environmental behavior. Modeling it as the common underlying cause of specific behaviors (which I assume is the case here) probably changes the meaning of the resultant latent variable towards some sort of disposition (vs. behavior as an output). I would instead either model a composite of these behaviors or use the behaviors as specific outcomes in their own right (see the sketch below).
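To make these alternatives concrete, here is a minimal lavaan sketch. The names (beh1-beh4 for behavior indicators, pred for a predictor, dat for the data frame) are placeholders of mine, not taken from Alex's model:

```r
library(lavaan)

# (a) The doubtful specification: behaviors as effects of one common
#     underlying disposition (reflective factor model).
m_factor <- '
  behavior =~ beh1 + beh2 + beh3 + beh4
  behavior ~ pred
'
fit_factor <- sem(m_factor, data = dat)

# (b) Composite alternative: a unit-weighted mean score used as an
#     observed outcome -- no common-cause assumption about the behaviors.
dat$beh_comp <- rowMeans(dat[, c("beh1", "beh2", "beh3", "beh4")])
fit_comp <- sem('beh_comp ~ pred', data = dat)

# (c) Each behavior as a specific outcome in its own right.
fit_spec <- sem('beh1 + beh2 + beh3 + beh4 ~ pred', data = dat)
```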
In this recent thread, you can find more about the fallacies of specifying behaviors as latent variables: https://www.researchgate.net/post/What_is_the_best_approach_to_examine_the_dimensionality_of_related_behaviours_and_to_develop_theory#view=5fea158a4fc94d6d751d3f7d
Modeling self-transcendence as a common cause of the more specific values may be more appropriate, as it is a higher-order value type. But please consider that applying factor analysis to Schwartz's value types may be the routine approach, but it does not reflect the proposition of Schwartz (who does not regard value types as factors). I made that error in my first paper in 2008, where we applied CFA to the Schwartz value types. Hence, you can go with that or--again, better--use the three types as singular predictors (which will probably show that universalism is the relevant predictor).
Finally, wellbeing is rather the sum of its specific forms and not their underlying cause, isn't it? This is probably a final place where you may err. But this is testable, and such an error is at least not unlikely.
Specifying the correct number of latent variables is important in an SEM because otherwise you will get upwardly biased estimates (e.g., the .80 looks suspicious).
Steinmetz, H., Schmidt, P., Tina-Booh, A., Wieczorek, S., & Schwartz, S. H. (2009). Testing measurement invariance using multigroup CFA: Differences between educational groups in human values measurement. Quality and Quantity, 43(4), 599-616. doi:10.1007/s11135-007-9143-x
In the preparation of that paper (and for years after), I had many discussions with Shalom and was confused that we had absolutely no common ground regarding the ontological stance behind the entities overall. While I approached his value theory with my notion of a pure demarcation between distinct entities (which is the basis of factor models), the discussions showed that he simply does not think in terms of factors and distinct entities. This is most obvious in his idea that the value orientations form a continuum of motivational orientations (like a color circle). The demarcations of distinct value types that emerged from his smallest space analysis (SSA) were simply convenient drawings (in the literal sense: he drew lines with a pencil on the printout of the SSA). I was shocked at the time because, again, I tend to think in ontologically distinct entities. In that moment, I knew that he did not conceptualize the value types as factors. The reason why he had never objected to our approach, or to the prior projects involving him and one of our co-authors, Peter (his friend and partner and my dissertation supervisor), was simply that he did not care too much about the factor thinking and (I guess) the fact that psychologists do factor analysis all the time, whether it is appropriate or not.
I then told him what factors ontologically mean, which led me to formulate my suspicion that he did not regard the value types as factors, and he said clearly that this was absolutely true. This was enlightening to me. Later he refined the 10-value-type theory into a 19-value theory, but by that time I already knew that this, again, was just a convenient increase in granularity. It is just cutting a piece of cake: there is no truth in whether you cut it into 2 halves or 20.
Since then, I have been really astonished that factor-analytic thinking about his value types is still so present. When I complain, as I often do, about the "factor reflex" (the tendency to only be able to think in factorial terms), these experiences from that time were strongly influential. The 2009 paper is still a nice tutorial on invariance, but I would strongly detach from the content of the analysis done there (the value types). If I had been more attentive to the chi-square tests in these models (which were all highly significant), I would probably have noticed that :)
The same holds true when you move to the two higher-order "dimensions". Again, these are theoretical dimensions, not necessarily ontologically existing, quantitative entities. That is, a bunch of value types share a common theme or aspect, which is often equated or confused with the notion that this dimension is an entity in itself. For instance, the axis self-enhancement vs. self-transcendence reflects the focus on the self vs. sacrificing some self-interests for the good of others. This MAY of course reflect a higher-order entity and should not be excluded from testing. But as with any other second-order construct, this requires that the causal dynamics with other variables involve the higher-order construct (as a cause or a consequence). Otherwise it is just a fancy factor model without any epistemological use (like the myriad of factor models in psychology). In Alex's model, this is the case, and I would love to see whether this part of the model fits (i.e., whether the covariances between the three value types and, say, nature connectedness can be fully explained by a single, intermediate higher-order factor "self-transcendence"). In other words, an essential implication of this part of the model is that the covariances between the three values and nature connectedness become zero once the second-order factor is "held constant". I doubt that this is the case. But we will see (one way to set up this test is sketched below). Alex Gaut, if you like, you can forward the raw data or covariance matrix to me via PN and I will run the model myself.
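For concreteness, the implied test could be set up as a nested-model comparison in lavaan. The indicator names (u1-u3 etc.) and the observed nature-connectedness score ncon are placeholders of mine, so treat this as a sketch of the logic, not the final specification:

```r
library(lavaan)

# Model A: the second-order factor transmits ALL of the covariance
# between the three value types and nature connectedness.
mA <- '
  univ =~ u1 + u2 + u3
  bene =~ b1 + b2 + b3
  sdir =~ s1 + s2 + s3
  strans =~ univ + bene + sdir   # second-order self-transcendence
  ncon ~ strans                  # values reach ncon only via strans
'

# Model B: the three value types predict nature connectedness directly;
# if B fits clearly better, the second-order factor does not fully
# account for the value-ncon covariances.
mB <- '
  univ =~ u1 + u2 + u3
  bene =~ b1 + b2 + b3
  sdir =~ s1 + s2 + s3
  ncon ~ univ + bene + sdir
'

fitA <- sem(mA, data = dat)
fitB <- sem(mB, data = dat)
anova(fitA, fitB)   # chi-square difference test
```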
In this thread, I wrote some things about the typical misconceptions about second-order factors, if you are interested.
Holger Steinmetz I have so many questions! But first some info for you. The three endogenous latent variables are derived from pre-existing, validated published surveys developed by others, which I then used in a survey in an action research project. And for Schwartz's values I used a short 10-question version of the original survey. I constructed each of those three latent variables based on the factors they identified in their papers.
The model fit indices for this model aren't great but they're not terrible, and they're way better than the other model I was using:
- χ² = 133.34 (p = 0.02)
- χ²/df = 1.32
- CFI = 0.92
- TLI = 0.91
- RMSEA = 0.06
- SRMR = 0.09
I have a small sample size for SEM, just 101, and I know that the chi-square test can be sensitive to sample size, so I can't rely on it as a sole indicator of fit. I know there are problems with all model fit indices, but I have to use something.
Perhaps I have completely misunderstood these models. Your comment on wellbeing is making me rethink. You are right that wellbeing is not a cause but the sum of the factors attached to it. My supervisor would also agree with you that behaviour is not a latent variable, but I want to have both behaviour and wellbeing as outcomes, and that is what I tested for. So how do I include these as outcomes in the model if not by using the survey factors as observed variables?
I think it's fascinating that you got to work with Shalom Schwartz. I'm a big fan of his model. I'm not sure I understand how to include the three predictor values in the model as singular predictors. And yes, I agree that universalism will almost certainly come out as the main predictor. To confirm the value structure, I conducted an EFA on my values data, and it fit the circumplex dimensions very closely, with high loadings (except for hedonism). Then I used the factors as they fell out of the EFA to develop the latent variable that I called self-transcendence for consistency with Schwartz's model, although in his model self-transcendence comprises only universalism and benevolence and does not include self-direction.
Perhaps it is not possible to create an SEM with the models and data that I have? I will send you some data. I have to admit I have been teaching myself SEM over the past year or so, as my supervisors have no idea how to help me and haven't pointed me in the direction of any help. So I have no idea about much of what you said, but you can see from what I write where I'm at. I have to submit in February and am starting to panic a bit, so any help is wonderful. Thank you.
Holger Steinmetz I also need to ask: I found the attached paper where they did the same thing, i.e. turning 10 values into four latent variables and then plugging those into an SEM. This is what I would like to do, but do your comments mean that this is not really plausible? Thanks.
1) It is important to detach a model and its fit from any emotional response. Hence, a fit is not "excellent" vs. "terrible". Such a practice is one of the many reasons why people tend to explain away misfit: it hurts them, or they see the misfit as some kind of personal failure. Testing models and responding to the misfit practically is at the heart of a Popperian way of doing science. We learn by acting, getting negative feedback, and adjusting. Model testing is no different. I am sure Karl Popper is rotating in his grave at the way SEM is used.
2) Your model shows a significant misfit. That is, there are some beyond-random deviations from the data. Please avoid the sentence about the sensitivity of the chi-square test. If a model is correctly specified, then a large sample will not lead to a significant chi-square test. Yes, with a large sample size, trivial misspecifications will lead to a significant chi-square test, but that's no problem or failure of the test. On the contrary, the sole goal of the test is to detect beyond-chance deviations; a trivial misspecification is nonetheless systematic and will correctly lead to a significant test. Further, people tend to commit the Bayesian conversion error and turn p("significant chi-square" | "trivial misspecification") into p("trivial misspecification" | "significant chi-square"). In words: the fact that trivial misspecifications (in large samples) lead to a significant test does not imply that a significant chi-square test means the misspecification is trivial. I have had many models where I learned a lot by taking the test seriously.
With regard to a SMALL sample size (as in your case), you are right: the chance of incorrectly rejecting a correct model is higher than 5 percent. That's correct, but it can be remedied by applying the Swain correction to the chi-square. I will do that with your data. This can easily be done in R by running a function and then simply typing in some values (e.g., the number of variables, the chi-square value, etc.). The function then spits out the corrected test statistic (a sketch follows after the reference). The source is:
Herzog, W., & Boomsma, A. (2009). Small-sample robust estimators of noncentrality-based and incremental model fit. Structural Equation Modeling, 16(1), 1-27. doi:10.1080/10705510802561279
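Here is a small R sketch of the Swain correction as I recall it from Herzog and Boomsma (2009); please verify it against the paper or their published script before relying on it (in particular whether the denominator uses N or N − 1). The df below is backed out of the reported values (χ²/df = 1.32 and χ² = 133.34 give df ≈ 101); p and k are placeholders that depend on the actual model:

```r
# Swain small-sample correction for the ML chi-square (sketch based on
# Herzog & Boomsma, 2009 -- double-check the formula before use).
swain_correct <- function(chisq, df, N, p, k) {
  # chisq: ML chi-square; df: model degrees of freedom; N: sample size;
  # p: number of observed variables; k: number of free parameters
  # (for a covariance-only model, k = p * (p + 1) / 2 - df).
  q <- (sqrt(1 + 8 * k) - 1) / 2
  s <- 1 - (p * (2 * p^2 + 3 * p - 1) - q * (2 * q^2 + 3 * q - 1)) /
           (12 * df * (N - 1))
  chisq_s <- s * chisq
  c(swain.chisq = chisq_s,
    swain.p     = pchisq(chisq_s, df, lower.tail = FALSE))
}

# Placeholder call; p = 17 and k = 52 are illustrative values only:
swain_correct(chisq = 133.34, df = 101, N = 101, p = 17, k = 52)
```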
3) The fit indices are not informative (despite their widespread use). There are two reasons for that: a) the fit indices measure the degree of difference between the model-implied covariances and the empirical covariances, which cannot be equated with the degree of causal misspecification. There are situations in which there is only a slight difference despite a fundamental model error. And b) the recommended cut-off values from Hu and Bentler's famous study (which NOBODY who applies SEM has ever read!) stem from a simulation study in which the model was a factor model and one of the tested misspecifications was an erroneously omitted double loading. Their results depend on this kind of scenario and cannot be generalized to other (more realistic) scenarios in which the model is completely messed up.
4) The term "SEM" is most often used for latent variable models but that does not mean that you have to push factor models into it even if they are wrong. As I said, you could either form a composite of the wellbeing scales and behavior scales, which is quite handy and sparse but you loose information. The alternative is to use the different scales as specific outcomes. The fact that they are observed (and contain measurement error) does not matter for the unstandardized effects, standard errors and test statistics, only for the standardized effects (they will be lower). With regard to the value type factor, I would start by keeping that and than see how far that goes.
We have not yet talked about the mediator. Can you tell me what its indicators refer to? I have the feeling that there are some issues with the factor structure there as well....