In a decomposition like PCA, the percentage of explained variance can be calculated as the ratio of each eigenvalue to the sum of all eigenvalues. The number of dimensions to discard should depend on how much variance you want to retain. In practice, the retained variance should never be less than 60%, although 80% or more is preferred. The less variance the model explains, the less confidence you can place in it.
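As a rough sketch of the idea, here is how you could compute the explained-variance ratios from the eigenvalues of a covariance matrix and pick the number of components that keeps at least 80% of the variance (the data, the diagonal scaling, and the 80% threshold are just illustrative choices, not anything prescribed):

```python
import numpy as np

# Toy data matrix: 100 samples, 5 features with very different variances.
rng = np.random.default_rng(42)
x = rng.standard_normal((100, 5)) @ np.diag([5.0, 3.0, 1.0, 0.5, 0.1])
x -= x.mean(axis=0)  # center before PCA

# Eigenvalues of the covariance matrix = variance along each principal axis.
eigvals = np.linalg.eigvalsh(np.cov(x, rowvar=False))[::-1]  # sort descending

# Each eigenvalue divided by the sum gives the fraction of variance explained.
explained = eigvals / eigvals.sum()
cumulative = np.cumsum(explained)

# Keep the smallest number of components whose cumulative share is >= 80%.
n_keep = int(np.searchsorted(cumulative, 0.80) + 1)
```

Everything after `n_keep` components would be the part of the data you discard.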
If you have tensor data, i.e. data arranged as a cube or hypercube, I think you should look at methods such as PARAFAC or Tucker3 instead of "plain" PCA. You can then more easily interpret the model along the different directions of the data cube.
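To make the PARAFAC (CANDECOMP/CP) idea concrete, here is a minimal alternating-least-squares sketch for a 3-way tensor in plain numpy. This is a bare-bones illustration, not a production implementation (no convergence check, no normalization of factors; in practice you would use a library such as TensorLy):

```python
import numpy as np

def khatri_rao(u, v):
    """Column-wise Khatri-Rao product: u (m, R), v (n, R) -> (m*n, R)."""
    m, r = u.shape
    n, _ = v.shape
    return (u[:, None, :] * v[None, :, :]).reshape(m * n, r)

def cp_als(x, rank, n_iter=100, seed=0):
    """Rank-`rank` CP/PARAFAC factors of a 3-way tensor via ALS."""
    i_dim, j_dim, k_dim = x.shape
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((i_dim, rank))
    b = rng.standard_normal((j_dim, rank))
    c = rng.standard_normal((k_dim, rank))
    # Mode unfoldings, with the earlier mode varying fastest along columns
    # so they line up with the Khatri-Rao products below.
    x1 = x.transpose(0, 2, 1).reshape(i_dim, -1)
    x2 = x.transpose(1, 2, 0).reshape(j_dim, -1)
    x3 = x.transpose(2, 1, 0).reshape(k_dim, -1)
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem for one factor
        # while the other two are held fixed.
        a = x1 @ khatri_rao(c, b) @ np.linalg.pinv((c.T @ c) * (b.T @ b))
        b = x2 @ khatri_rao(c, a) @ np.linalg.pinv((c.T @ c) * (a.T @ a))
        c = x3 @ khatri_rao(b, a) @ np.linalg.pinv((b.T @ b) * (a.T @ a))
    return a, b, c
```

The payoff for interpretation is that each factor matrix (`a`, `b`, `c`) describes one direction of the data cube, e.g. samples, variables, and time points, which is exactly what "plain" PCA on an unfolded matrix loses.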