16 latent variables is a lot, but having many latent variables will not usually make your model saturated. On the contrary, large models with many indicator variables tend to have many degrees of freedom, because of the large number of observed covariances being modeled. This can make model-fit assessment difficult due to the so-called "model-size effect." See:
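To see how quickly the degrees of freedom pile up, here is a back-of-the-envelope sketch in Python for a hypothetical CFA with 16 latent variables and 3 indicators each (all numbers are illustrative assumptions, not taken from the question):

```python
# Degrees-of-freedom bookkeeping for a hypothetical CFA:
# 16 latent variables, 3 indicators each (assumed numbers).
p = 16 * 3                      # 48 observed variables
moments = p * (p + 1) // 2      # distinct variances/covariances: 1176

loadings = 16 * 2               # 2 free loadings per latent (1 fixed for scaling)
residuals = p                   # indicator residual variances
latent_var = 16                 # latent variances
latent_cov = 16 * 15 // 2       # latent covariances: 120

free = loadings + residuals + latent_var + latent_cov
df = moments - free
print(moments, free, df)        # 1176 216 960
```

With 960 degrees of freedom, the model embodies 960 testable constraints, which is precisely the "mass of constraints" problem discussed below.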
When I hear someone wanting to use a lot of latent variables I am reminded of the following quote:
\begin{quote}
The reader already familiar with factor analysis may be surprised that our emphasis, in theory and examples, is on models with only one or two latent variables. On the other hand, we pay more attention to questions of sampling variability and goodness of fit than is usual. This shift of emphasis is deliberate because we wish to stay within the bounds of what is statistically defensible. As we proceed, it will become apparent that serious questions of identifiability, precision and interpretation arise as the complexity of the models increases. We think the advance of knowledge is best served by building on relatively secure, if modest, base rather than risking becoming lost in a morass of ambiguity and uncertainty. \citep[pp.~xi-xii]{BartholomewKnott1999} \label{foot:Barthquote}
\end{quote}
With 16 latent variables you have no chance of getting the implied mass of constraints right. As a consequence, the model will not fit, and you will have no realistic chance of diagnosing and improving it.
Over the years, my experience has converged on the following two convictions:
1) Focus on at most 2-3 causal effects and think hard about reasonable control variables and instrumental variables in order to identify those effects. Further: really test the constraints with a chi-square test.
2) If you have many variables, rather start with a full-blown exploratory model (e.g., using TETRAD) and then use this as the basis for a subsequent confirmatory SEM.
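The chi-square test of constraints mentioned in point 1 is, for nested models, a chi-square difference test. A minimal sketch in Python, assuming scipy is available; the fit statistics below are made-up illustration values, not real software output:

```python
from scipy.stats import chi2

# Hypothetical fit statistics (illustration only):
# restricted model imposes extra zero-effect constraints on the full model.
T_restricted, df_restricted = 58.4, 26
T_full, df_full = 41.1, 22

# Likelihood-ratio (chi-square difference) test of those constraints.
T_diff = T_restricted - T_full
df_diff = df_restricted - df_full
p_value = chi2.sf(T_diff, df_diff)   # upper-tail probability

print(round(T_diff, 1), df_diff, round(p_value, 4))
```

A small p-value here would say the imposed constraints are inconsistent with the data, which is exactly the kind of diagnostic that becomes uninterpretable when hundreds of constraints are tested at once.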
AMOS does not have a fixed limit on the number of latent variables that can be included in a model. However, including too many latent variables can make the model very complex and may result in convergence issues or poor model fit. Therefore, it is recommended to limit the number of latent variables in a model and to use parsimonious models that explain the data adequately without including unnecessary complexity.
That said, you could check out the trick in Example 4, as highlighted in the attached capture, which is a direct quote from the AMOS 26 manual, downloadable via the link below.
I think it is about more than wanting parsimony for parsimony's sake (i.e., Occam's razor, or, as Lyes Rahmani says, the principle of parsimony). The points above are more related to the bet on sparsity (i.e., that you would not trust the results of the complex model).
I would agree that smaller models are preferable, but not for the sake of parsimony. A model is intended to reflect reality, and if reality is complex, then a model that does not reflect this complexity is misspecified and yields biased effects. My point was that you can cut out a piece of that reality, incorporate all surrounding variables *that are important* for estimating the effects in this piece, and get the correct effects *if your assumptions about the effects and their surroundings were correct*. And because the model is small, you can test the constraints, and these tests are meaningful.
That is what economists do all the time: focusing (at the extreme) on one single effect. Complex models have so many constraints reflecting what Pearl and Bollen call "strong assumptions" (statements that certain effects do not exist); these are part of the model test and will bias the target effects when violated. Simply use dagitty.net and draw a model. The upper left frame in the browser software prints the list of testable implications.
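The testable implications dagitty lists are conditional independencies implied by d-separation in the drawn graph. A minimal, self-contained sketch of the d-separation check, using the standard moralization algorithm, on a toy instrument-style graph Z -> X -> Y (the graph and variable names are illustrative assumptions):

```python
import itertools

def ancestral_set(dag, nodes):
    """All nodes in `nodes` plus their ancestors. `dag` maps child -> set of parents."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for parent in dag.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def d_separated(dag, xs, ys, zs):
    """True if every path between xs and ys is blocked given zs (moralization method)."""
    keep = ancestral_set(dag, set(xs) | set(ys) | set(zs))
    # Moralize: undirected parent-child edges plus edges between co-parents.
    adj = {node: set() for node in keep}
    for child in keep:
        parents = [p for p in dag.get(child, ()) if p in keep]
        for p in parents:
            adj[p].add(child)
            adj[child].add(p)
        for a, b in itertools.combinations(parents, 2):
            adj[a].add(b)
            adj[b].add(a)
    # Delete the conditioning set, then check reachability from xs to ys.
    for z in zs:
        for neighbour in adj.pop(z, set()):
            if neighbour in adj:
                adj[neighbour].discard(z)
    seen, stack = set(xs), list(xs)
    while stack:
        node = stack.pop()
        if node in ys:
            return False
        for neighbour in adj.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return True

# Toy graph Z -> X -> Y, encoded as child -> set of parents.
dag = {"X": {"Z"}, "Y": {"X"}}
print(d_separated(dag, {"Z"}, {"Y"}, {"X"}))   # True: implies Z independent of Y given X
print(d_separated(dag, {"Z"}, {"Y"}, set()))   # False: Z and Y are dependent marginally
```

Each `True` corresponds to one testable implication of the model, i.e., one of the "strong assumptions" that a chi-square test of the fitted SEM implicitly evaluates.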