It is always better to use probabilistic sampling techniques because of some of the requirements for running SEM. However, I think that if your data meet the criteria of normality tests, show high internal reliability, and have strong factor loadings, then you can go ahead and use them.
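For illustration, here is a rough sketch in Python of the kind of preliminary screening I mean (the array name `items`, the placeholder data, and the choice of the Shapiro-Wilk test plus a hand-computed Cronbach's alpha are my own assumptions, just one possible way to do it):

```python
# Rough sketch of pre-SEM screening checks (illustrative only).
# Assumes `items` is an (n_respondents x n_items) numpy array of scale scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
items = rng.normal(loc=3, scale=1, size=(200, 5))  # placeholder data

# 1) Univariate normality of each item (Shapiro-Wilk).
for j in range(items.shape[1]):
    w, p = stats.shapiro(items[:, j])
    print(f"item {j}: Shapiro-Wilk W={w:.3f}, p={p:.3f}")

# 2) Internal consistency (Cronbach's alpha), computed from item variances.
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```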
But even then, you cannot expect your model to give reliable predictions. Forget about it, too, if you plan to use your model to make inferences about the population.
Why?
Try to imagine a reasonable way to justify your alpha value (the level of risk) when the time comes to draw classical inferences from your model. How will you do that without exposing your results to harsh criticism?
Take care! This is a very common mistake, and it can easily send any modeling work straight into the garbage.
I stand by what I said to you before. But I apologize for not giving you any alternative; I only said you should not do it.
One solution could be to use a little trick of language, stating: "Referential significance level of 95% (assuming a referential risk of alpha = 0.05, had the sampling method applied been probabilistic)." I have used this trick several times when I could not guarantee that the data were obtained by means of a probability sample.
By the way, all the results generated are conditional as well. So the inferences that could have been carried out under normal circumstances should not be treated as valid here.
On the other hand, I would do everything possible to verify that my data comply with the rest of the assumptions required by the methods to be used, for example normality, and so on. A sketch of one such check follows below.
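For instance, multivariate normality (which maximum-likelihood SEM estimation typically assumes) can be screened with Mardia's skewness and kurtosis coefficients. The sketch below is only one possible check, written against a hypothetical data matrix `X` with placeholder values:

```python
# Sketch of Mardia's multivariate skewness and kurtosis tests (illustrative).
# Assumes X is an (n x p) numpy array with n > p and a non-singular covariance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 4))  # placeholder data

n, p = X.shape
Z = X - X.mean(axis=0)
S = (Z.T @ Z) / n                      # biased sample covariance
D = Z @ np.linalg.inv(S) @ Z.T         # Mahalanobis cross-products d_ij

b1 = (D ** 3).sum() / n**2             # multivariate skewness
b2 = (np.diag(D) ** 2).sum() / n       # multivariate kurtosis

skew_stat = n * b1 / 6                 # ~ chi-square with p(p+1)(p+2)/6 df
skew_df = p * (p + 1) * (p + 2) / 6
skew_p = stats.chi2.sf(skew_stat, skew_df)

kurt_stat = (b2 - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)  # ~ N(0, 1)
kurt_p = 2 * stats.norm.sf(abs(kurt_stat))

print(f"Mardia skewness: stat={skew_stat:.2f}, p={skew_p:.3f}")
print(f"Mardia kurtosis: stat={kurt_stat:.2f}, p={kurt_p:.3f}")
```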
Also, in my conclusions I would make it very clear that the sample is not a probability sample and that the results and the respective inferences are conditional on this situation.
I hope you like this solution. It is based on honesty, with a sound perspective and professional judgment.
Finally, it is worth noting here what George Box once said: "… all models are approximations. Essentially, all models are wrong, but some are useful. However, the approximate nature of the model must always be borne in mind."
In relation to the passage you quote as a fact, "… the vast majority of published studies in the social sciences (management, psychology, sociology, etc.) are based on convenience (nonprobability) samples. Using convenience samples does not keep studies from being published. Just mention the lack of generalizability in the limitations section of the Discussion, and understand that significance tests with convenience samples are crude approximations," I can only say that if a lot of people walk to the top of a building and start jumping off to kill themselves, that is their problem, not mine.
Convenience samples are "nonprobability samples". Moreover, taking your second point, "2) Strictly speaking, all significance testing assumes the sample is a probability sample from some target population," and considering the fact that all statistical tests are based on probability models (even the non-parametric ones, which use approximate models to obtain the distribution of the particular statistic used in the test), I do not know why we keep discussing this.
What I suggest is a more honest position than continuing to do something that we all know is wrong. Don't you think so?