Factor score indeterminacy (FSI) is a well-known phenomenon in factor analysis. Since IRT can be conceived of as factor analysis with categorical indicators, the same phenomenon must be present in IRT as well, yet the IRT literature (to my knowledge) never mentions it explicitly. Why is that?

I suspect the reason is that FSI cannot be distinguished from measurement error. In the factor-analytic approach to test theory this seems to make sense (if I'm correct, the factor score determinacy coefficient equals McDonald's omega for factor scores computed by the regression method). What puzzles me is that factor score determinacy is defined at the group level, while measurement error in the IRT context is conditional on the latent trait value, so the two concepts are not (completely) equivalent in IRT.

Is factor score determinacy (roughly) equivalent to the "marginal reliability" reported by some IRT software?
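For concreteness, here is a minimal sketch (plain numpy, with made-up standardized loadings) of the group-level quantity I mean by the determinacy coefficient: for regression-method factor scores in a one-factor model, the squared determinacy is the correlation squared between the estimated and the true factor scores.

```python
import numpy as np

# Hypothetical one-factor model: 5 indicators with made-up standardized loadings
lam = np.array([0.8, 0.7, 0.6, 0.5, 0.4])   # loadings (illustrative values)
psi = 1.0 - lam**2                          # uniquenesses (standardized solution)
Sigma = np.outer(lam, lam) + np.diag(psi)   # model-implied correlation matrix

# Determinacy coefficient for regression-method factor scores:
# rho = sqrt(lambda' Sigma^{-1} lambda); rho^2 is the squared correlation
# between the regression factor scores and the true factor (a group-level
# reliability, analogous to marginal rather than conditional reliability).
rho2 = lam @ np.linalg.solve(Sigma, lam)
rho = np.sqrt(rho2)
print(rho, rho2)
```

For a one-factor model this reduces to the closed form `rho^2 = s / (1 + s)` with `s = sum(lam**2 / psi)`, which makes clear that determinacy is a single marginal number, not a function of the latent trait value.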