SEM is essentially multiple regression extended: you may need to check various relations and their impact on one another, since one variable might be impacting the other. For details you can refer to the book by Barbara Byrne, as attached.
Can you elaborate on the model variables? Further, if you are testing a hypothesis and the p-value is high, it denotes an insignificant relation and your hypothesis stands rejected. But if your objective is achieving goodness of model fit, then you can delete insignificant relations to achieve it (Arbuckle and Wothke, 2004).
Dear Saurav, your research objective is not clear from your question. In general, while testing relationships, we formulate the null hypothesis that the regression weight is zero for the assumed relationship. A p-value higher than .05 (at the 95% level) signifies that the regression weight is not different from zero, and hence we conclude that the hypothesised relationship is insignificant. If you are testing moderating relationships through SEM, then in such cases we delete all insignificant relationships and retain the significant ones. I hope your query is answered.
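To make the mechanics concrete, here is a minimal sketch of that test in plain regression terms. It uses simulated data and ordinary least squares rather than AMOS/SEM, so treat it as an analogue of the logic, not the SEM workflow itself; the effect size and sample size are illustrative assumptions.

# Plain-OLS analogue (not AMOS/SEM output) of the test described above:
# H0 says the regression weight is zero; a p-value above .05 only means the
# data do not let us reject that H0. Data and effect size are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(size=n)   # deliberately weak effect

fit = sm.OLS(y, sm.add_constant(x)).fit()
weight, pval = fit.params[1], fit.pvalues[1]
print(f"estimated weight = {weight:.3f}, p = {pval:.3f}")
# p > .05 -> "fail to reject" the zero-weight hypothesis; it is not proof
# that the true weight is zero.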
The question is not about SEM but rather about statistical hypothesis testing. Currently, the treatment of hypothesis testing in standard textbooks is a mash-up of contradictory frameworks. It seems not to make sense because it DOESN'T make sense. See Gerd Gigerenzer's (2004) Mindless Statistics paper (attached), or for more background read Ziliak & McCloskey's (2008) book, The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice and Lives.
Your parameter estimates are your best available estimates of the values of the parameters. Being "not significant" does not turn your best estimates into 0. Your estimates are also not significantly different from a host of other values--including the values that you estimated with your data--so what's so special about 0?
Don't discard the estimates just because they are not "statistically significant." But don't ignore the fact that your estimates may be weak or your statistical power may be low.
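A small illustration of that point, assuming simulated data and a plain OLS slope standing in for an SEM parameter: the confidence interval that fails to exclude zero also fails to exclude the estimate itself and plenty of other non-zero values.

# Sketch of the "what's so special about 0?" point, with simulated data and a
# plain OLS slope standing in for an SEM parameter estimate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 60                                  # small sample -> low power, wide interval
x = rng.normal(size=n)
y = 0.25 * x + rng.normal(size=n)

fit = sm.OLS(y, sm.add_constant(x)).fit()
estimate = fit.params[1]
lo, hi = fit.conf_int()[1]              # 95% CI for the slope
print(f"best estimate = {estimate:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
# If the interval spans zero the slope is "not significant", but the same
# interval also spans the estimate and many other non-zero values; failing
# to reject zero does not turn the best estimate into zero.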
Ed's arguments are great. Really clear points that everybody should take in and learn.
Thanks for adding books to my reading list, Ed!
I review a lot of papers for a lot of journals. Many are cross-disciplinary journals. Better journals will often circulate the collective reviewers' narratives back not only to the authors but ALSO to the reviewers. It is interesting to see how others review and respond compared to what you focus on! Part of the learning.
What is really noticeable to me (and it may be more the case with complex formative models, or they are just the ones I have been reviewing lately)... is that reviewers and even authors see a couple of non-significant parameter results when the parameter estimates make perfect sense... AND THERE ARE THEN MAJOR PROBLEMS WITH EVERYTHING! Reject or Major Revision for the manuscript.
People instantly think that the results are not of merit, that it is a negative-results paper, or even that model misspecification is the main problem (which may or may not be the case).
Power is barely reported, or even discussed by authors as a limitation or possible cause of the issue at hand.
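For what it is worth, a basic power check is only a few lines. The sketch below uses statsmodels' t-test power calculator with an illustrative effect size and alpha (not values from any study under review), just to show how cheap it is to report.

# Illustrative power check: effect size, alpha and sample size here are
# assumptions chosen for the example, not figures from any reviewed study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a small effect (d = 0.2) at 80% power.
n_needed = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"n per group for d = 0.2 at 80% power: {n_needed:.0f}")

# Conversely, the power actually achieved with 50 cases per group.
achieved = analysis.solve_power(effect_size=0.2, alpha=0.05, nobs1=50)
print(f"power with n = 50 per group and d = 0.2: {achieved:.2f}")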
BUT Ed's comments do not address the prevailing issue of positive-results publication bias that exists in journals, and I would argue that the better or higher-ranked the journal, the more the bias exists in most cases.
Some forward-thinking journals have devoted sections in their issues to replication studies and even negative-results papers in recent times. ... Social science will be better for it...
BUT learning the principles Ed reinforces will ... lead to better research and more understanding reviewers too.
Periodically (to take this thread off on a tangent), people have suggested reviewing papers without results, to avoid this bias toward positive results. Sadly, the idea has not caught on. It is easy to believe that the replication crisis affecting so many fields is partly driven by this bias, which pushes researchers to engineer "positive" results in non-replicable ways.
That is a very interesting idea and perspective in reviewing that I had not considered.
Nobody trains you to be a reviewer either, I believe... Maybe you do a few conference reviews or one journal review under the guidance of your supervisory team, if you are lucky, as a PhD student. OR YOU LOOK OVER THE SHOULDER... BUT IT IS BLIND REVIEWING... Nobody talks much around the water cooler about reviewing issues either (with topic and authors hidden, of course)...
In our day-to-day job we are paid to provide some form of student feedback.
In our reviewing duties... no pay, etc.; goodwill dominates, within the constraints of our other work duties and life...
Until Ed mentioned it... I had never thought to construct research that would be engineered for entirely positive results and have no chance of being replicated!
What I meant was researchers who fish through their data, or modify their model (same thing), looking for a form that yields the positive result that they think reviewers will want. Like as not, they are capitalizing on chance, and the odds are against another dataset having the same chance variation. If you have not read it (lately), Ioannidis' paper, Why Most Published Research Findings are False, is worth the time.
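A quick simulation shows why capitalizing on chance is so easy: screen twenty pure-noise predictors and most datasets will hand you at least one "significant" result. The numbers below are arbitrary choices for the illustration, not taken from any real study.

# Simulate "fishing": 20 candidate predictors of pure noise, tested one at a
# time at alpha = .05. The per-dataset chance of at least one false hit is
# roughly 1 - 0.95**20, i.e. about 64%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, trials = 100, 20, 1000
false_hits = 0
for _ in range(trials):
    y = rng.normal(size=n)
    X = rng.normal(size=(n, k))        # predictors with no real relation to y
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(k)]
    if min(pvals) < 0.05:
        false_hits += 1
print(f"datasets with at least one 'significant' predictor: {false_hits / trials:.0%}")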
Joreskog and Sorbom, in their earlier work, talked of SC, AM and MG: strictly confirmatory, alternative models, and model-generating approaches. It is as if everything, even for new research domains, implicitly follows an SC style of reporting (and the journals favour that). Absurd really, when you think about it for a moment.
I think that the first question one should answer is: am I trying to identify patterns of relations between constructs in a specified population (data-driven research)? Or am I trying to test a model that applies to any population (model-driven research)?
If your research is in data-driven mode, then you could (not should) discard the coefficients, but pay attention to your hypothesis. If your research is in model-driven mode, then you should not discard the coefficients, considering that you are trying to check whether the conceptual basis that you used to propose your hypothesis is applicable to different contexts.