I am currently trying to fit the same models in lavaan (R) and Mplus. However, I keep getting slightly different parameter estimates and goodness-of-fit statistics.
Have you checked the default convergence criteria? Perhaps they are not exactly the same in lavaan and Mplus. Normally, with the exact same loglikelihood value, you should also get identical parameter estimates. How discrepant are the parameter estimates?
Fit statistics may be calculated (slightly) differently in the two programs (you could find out by studying the program manuals/technical appendices).
Thank you for your answer! lavaan uses the nlminb optimizer (max 10,000 iterations), while Mplus uses a maximum of 1,000 iterations and a 0.500D-04 convergence criterion, as well as a maximum of 20 steepest descent iterations. I have tried finding out what 0.500D-04 means, but I haven't gotten far. :(
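My best guess is that 0.500D-04 is Fortran-style scientific notation for 0.5 × 10^-4, i.e. 5e-5. Assuming that, here is a rough sketch of how one might push lavaan's settings toward Mplus's (lavaan's control list is passed to its default optimizer, nlminb; I don't know whether the two programs define their criteria on the same quantity, so this may still not give exact agreement):

```r
library(lavaan)

# Sketch only: 'model' and 'mydata' stand in for my actual syntax and data.
# The control list is handed straight to nlminb; see ?nlminb for the options.
fit <- sem(model,
           data      = mydata,
           estimator = "MLMV",
           control   = list(iter.max = 1000,   # mirror Mplus's max iterations
                            rel.tol  = 5e-5))  # guess at the 0.500D-04 criterion
```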
I have attached my lavaan and Mplus outputs for the model parameters. The weirdest thing is that when I do not specify the fixed variance in lavaan (HiTOP_ges ~~ 1*HiTOP_ges), I get exactly the same parameter estimates as in Mplus with the fixed variance. The fit indices are still slightly different, though (see third attachment). Could this have to do with the way MLMV works in the two packages?
The three models in your table are not the same. You can see this from the fact that lavaan and Mplus report different numbers of parameters and df (the lavaan model has one additional df). In the lavaan output that you posted, it appears that you fixed both the factor variance and the first loading to 1.0. This is most definitely not what you want (you would fix either one loading or the factor variance, but not both).
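In lavaan syntax, the difference looks like this (a sketch with hypothetical indicator names y1-y3):

```r
# Over-restricted (what the posted output implies): sem() already fixes the
# first loading to 1.0 by default, and the second line fixes the variance too
m_both <- '
  HiTOP_ges =~ y1 + y2 + y3
  HiTOP_ges ~~ 1*HiTOP_ges
'

# Option A: fix the first loading (the lavaan/Mplus default),
# estimate the factor variance freely
m_loading <- '
  HiTOP_ges =~ y1 + y2 + y3
'

# Option B: free all loadings (NA* overrides the default marker),
# fix the factor variance to 1.0 instead
m_variance <- '
  HiTOP_ges =~ NA*y1 + y2 + y3
  HiTOP_ges ~~ 1*HiTOP_ges
'
```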
For all the other newbies to SEM who have a similar problem: I realized that if you fix the variance in Mplus, it seems to recognize that this is redundant, because it treats the variance of a single latent variable as 1 anyway. If you do the same in lavaan, it really messes with the residual variances of the manifest variables, and lavaan does not catch the "mistake" the way Mplus does.
Additionally, lavaan applies different underlying model constraints. With the mimic = "Mplus" argument in the sem() call, you get a different number of free model parameters. E.g., without this argument my model has 11 free parameters; with mimic it has 16, the same as Mplus. Another page online (https://stackoverflow.com/questions/59817678/number-of-free-parameters-reported-in-mplus-vs-lavaan) says this is because Mplus uses a different number of equality constraints than R!
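A quick sketch of what I mean (model and mydata are placeholders for my actual objects):

```r
library(lavaan)

fit_default <- sem(model, data = mydata, estimator = "MLMV")
fit_mimic   <- sem(model, data = mydata, estimator = "MLMV", mimic = "Mplus")

# Compare the number of free parameters each setup reports
lavInspect(fit_default, "npar")   # 11 in my model
lavInspect(fit_mimic,   "npar")   # 16, matching Mplus
```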
I'm not sure about your statement concerning "the variance of a single latent variable". It is neither per se unreasonable to fix a factor variance to 1.0, nor is it the case that a factor variance must always equal 1.0 anyway. That is, a factor variance can be estimated as a free parameter and can take on values other than 1.0 in the unstandardized solution. If you decide to estimate a factor variance as a free parameter (this is the default in Mplus), you need to fix one loading on that same factor to a non-zero value for identification. Typically, a fixed value of 1.0 is chosen for ease of interpretation, but you could pick other non-zero values as well. Mplus by default fixes the first loading on the factor to 1.0 and estimates the factor variance as a free (unrestricted/unconstrained) parameter.
If you decided to instead fix the factor variance to a value > 0 for identification (again, typically 1.0 is chosen for ease of interpretation), you would (in almost all cases) not also fix a loading on that same factor. Instead, you would then estimate all loadings as free parameters.
Both parameterizations (fixed variance vs. fixed loading) lead to equivalent solutions. That is, regardless of whether you fix the factor variance or one loading, you should get the same chi-square, df, and also the same completely standardized solution (STDYX in Mplus).
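If you want to verify this for your own model, here is a minimal sketch (hypothetical one-factor model; mydata stands in for your data):

```r
library(lavaan)

model <- 'f =~ y1 + y2 + y3 + y4'

# Fixed-loading identification (the default in both lavaan and Mplus)
fit_loading  <- sem(model, data = mydata)

# Fixed-variance identification: std.lv = TRUE frees all loadings
# and fixes each factor variance to 1.0
fit_variance <- sem(model, data = mydata, std.lv = TRUE)

# Same chi-square and df...
fitMeasures(fit_loading,  c("chisq", "df"))
fitMeasures(fit_variance, c("chisq", "df"))

# ...and the same completely standardized solution (STDYX in Mplus)
standardizedSolution(fit_loading)
standardizedSolution(fit_variance)
```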
Fixing both (the variance and one loading for the same factor) typically leads to a misspecified (overly restricted) "nonsense" model. This can result in a large chi-square as well as potentially obscure, nonsensical, or uninterpretable parameter estimates.
Why would we ever fix a factor variance rather than going with the typical default of fixing one loading per factor? One reason that comes to mind is when the indicators of a given factor are measured on very different scales (have very different metrics). The indicator variances may then differ strongly, and the model may not converge under the fixed-loading approach to identification. Fixing the factor variance instead can sometimes resolve such convergence problems.