In this scenario, if the standardised scale is well established and widely used by other researchers, then the problem probably lies with the translation. Therefore, the ideal thing to do would be to improve the translation of the problematic items/questions in the scale and try the analysis again.
Removing items from a standardised scale will affect the scale's validity and reliability.
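One way to see why removal matters is to recompute an internal-consistency estimate such as Cronbach's alpha with and without an item. The sketch below is a minimal illustration with fabricated response data, not data from any real instrument:

```python
# Sketch: Cronbach's alpha as a reliability estimate, to illustrate why
# dropping items from a scale changes its measured reliability.
# All response data here are made up purely for illustration.
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in the
    same order. Returns the standard alpha estimate:
    k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(items)
    item_vars = sum(statistics.pvariance(it) for it in items)
    totals = [sum(vals) for vals in zip(*items)]
    total_var = statistics.pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Three items, five respondents (fabricated illustration data)
items = [
    [4, 5, 3, 4, 2],
    [3, 5, 2, 4, 2],
    [4, 4, 3, 5, 1],
]
print(round(cronbach_alpha(items), 2))  # -> 0.92 for these made-up data
```

Recomputing alpha on the reduced item set shows the shift directly, but of course a changed alpha is only the visible symptom: the published validity evidence simply no longer applies to the shortened scale.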
Did you use a back-translation approach, where one translator first goes from the original language to the new language, and a second translator then translates the converted version back into the original language? This can help spot difficulties that are due to the translation process.
Even then, problems may be due to culturally specific terms in either of the two languages. If you think this is the case, then I would explore reactions to the translated questions using "cognitive testing". Here is a link to a rather detailed "how to" description of cognitive testing.
To give you one rather extreme example, a commonly used measure of depression in the U.S. asks the respondent how often they felt blue during a given time period. In U.S. culture, feeling blue is a synonym for being mildly depressed, but when the questionnaire was translated into Polish absolutely no one reported feeling blue -- for presumably obvious reasons.
The more I use questionnaires with "instruments" that have been tested and validated in English and thereafter translated, the more I doubt the wisdom of this practice. The more complex the instrument, the more problems seem to arise.
There are multiple things that can go wrong here. Cultural adaptation of the questions themselves or of the response alternatives is one such thing; reduction of items, often on unclear grounds, is another; reversal of items is a third.
On top of that, factor analysis of the different translated versions of an instrument often reaches different conclusions about the instrument's underlying construct than factor analysis of the original did.
And as if that weren't enough, the description of the original instrument or of its scoring is quite often incomplete or ambiguous. This can concern such things as which items should be reversed and how scores should be calculated.
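To make the reversal and scoring step concrete, here is a minimal sketch. The item names, the set of reversed items, and the 1-to-5 response range are all assumptions for illustration; the actual list must come from the instrument's own manual, which, as noted, is often exactly what is missing or ambiguous:

```python
# Sketch: reverse-scoring flagged items on an assumed 1-5 Likert scale
# before summing to a total score. Which items are reversed is a
# hypothetical choice for this example, not taken from any real manual.

SCALE_MIN, SCALE_MAX = 1, 5
REVERSED_ITEMS = {"item2", "item4"}  # assumption for illustration

def reverse(score: int) -> int:
    """Map 1->5, 2->4, ..., 5->1 on the assumed 1-5 scale."""
    return SCALE_MIN + SCALE_MAX - score

def total_score(responses: dict) -> int:
    """Sum item scores, reversing the flagged items first."""
    return sum(
        reverse(v) if item in REVERSED_ITEMS else v
        for item, v in responses.items()
    )

answers = {"item1": 4, "item2": 2, "item3": 5, "item4": 1}
print(total_score(answers))  # item2 -> 4, item4 -> 5, so 4+4+5+5 = 18
```

If two research groups disagree on `REVERSED_ITEMS` or on the scale range, they compute different totals from identical answers, which is precisely why ambiguous scoring documentation breaks comparability.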
So the answer to your question is that you CAN do it, but it is not advised. Your results will then not be comparable with other studies using any other version of the instrument, since it is no longer the same instrument.
This is the point where I find that most researchers seem to fail in their understanding. If you do not ask the same questions, in the same way, with the same response alternatives, and analyse the data in the same way, comparisons with other studies are not meaningful.