If I want to perform a factor analysis on a questionnaire based on a theory of personality whose constructs are conceptually and empirically very close, which is the better choice: Procrustes rotation or confirmatory factor analysis?
Factor analysis is getting kind of old school for the reasons above – your results are very much method-dependent. I'd tackle this with structural equation modelling. That way you are in control, and you can build knowledge about your data (such as shared error variances between items) into your model.
Structural equation modelling (SEM) is a very precise technique, and arguably too precise for personality research. In the McCrae article mentioned earlier, the authors provide some excellent explanations for why SEM may be less than appropriate for fuzzy constructs. A more current reference is:
Hopwood, C.J., & Donnellan, M.B. (2010). How should the internal structure of personality inventories be evaluated? Pers Soc Psychol Rev, 14(3), 332-346. doi: 10.1177/1088868310361240
One alternative is exploratory structural equation modelling (ESEM), which can be a little more flexible. I have no experience with it myself, but colleagues of mine have used it successfully in personality research. I believe Mplus can be used for ESEM.
Procrustes rotation can be used in two ways. In the first, you specify a target matrix (usually a perfect matrix but not always) which might look like this:
             Actual factor loading      Target factor loading
Item         Factor 1    Factor 2      Factor 1    Factor 2
A1             0.8         0.1            1           0
A2             0.7         0.1            1           0
A3             0.6         0.2            1           0
A4             0.4         0.3            1           0
B1             0.1         0.9            0           1
B2             0.3         0.8            0           1
B3             0.1         0.8            0           1
B4             0.1         0.7            0           1
The target loadings are hypothesised to be perfect: each item loads only on its intended construct and not at all on the other. In this example that holds for most items, but item A4 would be flagged as showing significant misfit. If you had a theoretical reason for specifying a different target loading for an item you could do so, although I have rarely seen this in the literature. This approach gives us a measure of model fit, factor fit, and item fit.
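The target-matrix approach above can be sketched in a few lines of NumPy. This is a minimal illustration, not a full program: the function names are mine, the loadings are the ones from the table, and Tucker's congruence coefficient is used as the fit index (one common choice; rough rule of thumb is that values below about .85 suggest misfit).

```python
import numpy as np

def procrustes_rotate(loadings, target):
    """Orthogonal Procrustes: rotate `loadings` to best match `target`
    in the least-squares sense (rotation T = UV' from the SVD of L'T)."""
    u, _, vt = np.linalg.svd(loadings.T @ target)
    return loadings @ (u @ vt)

def congruence(a, b, axis=0):
    """Tucker's congruence coefficient between matched columns (axis=0,
    factor fit) or rows (axis=1, item fit) of two loading matrices."""
    num = np.sum(a * b, axis=axis)
    den = np.sqrt(np.sum(a**2, axis=axis) * np.sum(b**2, axis=axis))
    return num / den

# Obtained loadings and the hypothesised perfect target from the table.
loadings = np.array([
    [0.8, 0.1], [0.7, 0.1], [0.6, 0.2], [0.4, 0.3],   # A1-A4
    [0.1, 0.9], [0.3, 0.8], [0.1, 0.8], [0.1, 0.7],   # B1-B4
])
target = np.array([[1, 0]] * 4 + [[0, 1]] * 4, dtype=float)

rotated = procrustes_rotate(loadings, target)
factor_fit = congruence(rotated, target, axis=0)
item_fit = congruence(rotated, target, axis=1)
print("factor congruence:", np.round(factor_fit, 3))
print("item congruence:  ", np.round(item_fit, 3))
```

Running this, item A4 comes out with a clearly lower congruence than the other items, which is exactly the misfit the table makes visible by eye.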
The alternative use of Procrustes rotation is to evaluate the fit of a model between samples. Here the target is not a perfect matrix but the actual factor loading table obtained in another study. For example, if you wished to compare how a scale validated in a US sample behaves when used in a different country, you would borrow that study's factor loading table for your comparison, and it might look like this:
             Current sample             Previous sample
Item         Factor 1    Factor 2      Factor 1    Factor 2
A1             0.8         0.1           0.7         0.1
A2             0.7         0.1           0.7         0.1
A3             0.6         0.2           0.7         0.2
A4             0.4         0.3           0.6         0.2
B1             0.1         0.9           0.1         0.9
B2             0.3         0.8           0.2         0.8
B3             0.1         0.8           0.2         0.7
B4             0.1         0.7           0.2         0.8
This comparison tells us how well our instrument performs relative to the previously published version. Again, item A4 doesn't look good in the current sample, yet it performed better in the previously published sample, leading us to conclude that something (translation, cross-cultural equivalence, who knows) has caused the difference in performance for that item.
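The between-sample comparison works the same way computationally: rotate the current sample's loadings toward the previous sample's table and inspect the per-item congruence. A minimal NumPy sketch, using the numbers from the two tables above (the variable names are mine):

```python
import numpy as np

# Factor loadings from the two tables: current sample vs the
# previously published (target) sample.
current = np.array([
    [0.8, 0.1], [0.7, 0.1], [0.6, 0.2], [0.4, 0.3],   # A1-A4
    [0.1, 0.9], [0.3, 0.8], [0.1, 0.8], [0.1, 0.7],   # B1-B4
])
previous = np.array([
    [0.7, 0.1], [0.7, 0.1], [0.7, 0.2], [0.6, 0.2],
    [0.1, 0.9], [0.2, 0.8], [0.2, 0.7], [0.2, 0.8],
])

# Orthogonal Procrustes rotation of the current loadings toward the
# previous sample's loadings (rotation T = UV' from the SVD of C'P).
u, _, vt = np.linalg.svd(current.T @ previous)
rotated = current @ (u @ vt)

# Tucker's congruence per item (rows): values near 1 mean the item
# behaves the same in both samples.
item_congruence = np.sum(rotated * previous, axis=1) / np.sqrt(
    np.sum(rotated**2, axis=1) * np.sum(previous**2, axis=1))
for name, c in zip(["A1", "A2", "A3", "A4", "B1", "B2", "B3", "B4"],
                   item_congruence):
    print(f"{name}: {c:.3f}")
```

Because congruence compares the direction of the loading profiles rather than their raw size, A4 still shows the lowest value here, though less dramatically than against a perfect target: both samples agree it loads mainly on Factor 1.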
There is an excellent little program (free) available that can conduct a Procrustes rotation once you have your EFA/PCA loadings in a table.
I agree with Ertugrul that the goodness-of-fit statistics in CFA are a major advantage. They are especially useful when you need to test comparisons between models via differences in fit, such as models that do and do not contain cross-loadings or correlated errors.
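For nested models, the comparison described above is typically a chi-square difference test. A minimal sketch with SciPy; the fit statistics here are invented purely for illustration (e.g. one model frees a single cross-loading, costing one degree of freedom):

```python
from scipy.stats import chi2

# Hypothetical fit statistics for two nested CFA models:
# a constrained model (no cross-loading) and a model that
# frees one cross-loading (one extra estimated parameter).
chisq_constrained, df_constrained = 85.4, 19
chisq_free, df_free = 78.1, 18

# The chi-square difference is itself chi-square distributed,
# with df equal to the difference in degrees of freedom.
delta_chisq = chisq_constrained - chisq_free
delta_df = df_constrained - df_free
p = chi2.sf(delta_chisq, delta_df)
print(f"delta chi-square({delta_df}) = {delta_chisq:.1f}, p = {p:.4f}")
```

A significant result (here p < .05) would indicate that freeing the cross-loading significantly improves fit; a non-significant one would favour the more parsimonious model.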
From a psychometric point of view I recommend using CFA with a clear a priori latent model. Exploratory analysis can lead you to capitalise on chance and to find very different factor structures for the same test across different samples. I am not a fan of the 16PF because it was built with a non-theory-driven approach; a very strong symptom of the problem can be seen in the names of the 16PF factors.
In sum:
1) analyze your construct and the personality theory behind it,
2) in line with 1), analyze the content of the items and their capability to operationalize the construct,
3) generate a hypothetical latent model, and
4) test it with CFA, or with a more complex SEM model that includes a nomological net for your questionnaire.
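Step 3 — the a priori latent model — is usually written down in a model-specification syntax before any fitting happens. A hypothetical specification for the two-factor questionnaire used in the tables above, in the lavaan/semopy style (the factor names and syntax choice are mine, just to show the shape):

```python
# A minimal a priori measurement model for a two-factor questionnaire
# with items A1-A4 and B1-B4, written in lavaan/semopy-style syntax.
model_desc = """
F1 =~ A1 + A2 + A3 + A4
F2 =~ B1 + B2 + B3 + B4
F1 ~~ F2
"""
# F1 =~ ... : items A1-A4 are indicators of latent factor F1
# F2 =~ ... : items B1-B4 are indicators of latent factor F2
# F1 ~~ F2  : the two constructs are allowed to correlate, which
#             matters precisely when they are conceptually close
```

This string would then be handed to a CFA/SEM package together with the raw data; the point is that the structure is committed to in advance rather than discovered post hoc.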