I have adopted the items of my questionnaire from different scales, and it has reached 100 items. Is there any logical and valid way to cut down the items without harming the validity?
Since you don't have any data yet, your options are limited. As others have noted, if you want to retain the reliability of the original scales, you can't modify them substantially.
One other option is to work out your theoretical model and determine where all your scales fit into it. Are some of them measuring the same concept twice? If so, drop one of them.
You might also look into alternative measures of your concepts to see if you can find any shorter scales that capture the same thing.
You could run an exploratory factor analysis using Mplus (you could also use SPSS, though there are some differences) and look at the factor loadings. However, this only applies when all your items pertain to one construct (e.g. self-efficacy, PTSD, etc.). In SPSS you could also run an internal-consistency analysis and obtain Cronbach's alpha; if it is above .7, it is generally considered good. Maybe some items are decreasing Cronbach's alpha substantially. Hope it helps a bit.
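For readers without SPSS, the alpha check described above can be sketched in a few lines of Python. This is a minimal illustration of the standard formula, not a replacement for SPSS's reliability procedure; the function names are my own, and the input is assumed to be a respondents-by-items NumPy array of item scores.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items: np.ndarray) -> np.ndarray:
    """Alpha recomputed with each item removed in turn (what SPSS reports as
    'Cronbach's Alpha if Item Deleted')."""
    k = items.shape[1]
    return np.array([cronbach_alpha(np.delete(items, i, axis=1))
                     for i in range(k)])
```

Items whose "alpha if deleted" value is higher than the full-scale alpha are the ones dragging internal consistency down, and so are natural candidates for removal.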
There are statistical ways, which can be seen as logical and valid, if you are using Likert scales for measurement and you already have data, i.e. you have already collected responses. In that case, please follow Witold Orlik's advice. If you are not using Likert scales, or you do not have data yet, then you can use experts' views to get the questionnaire validated after removing repetitive questions, if there are any. Please note that I would need more details about your questionnaire to give you any solid advice on it.
Thanks, Witold Orlik and Anand Agrawal. I have an idea about this, but it would require a pilot study. I was wondering whether, since I have adopted the items from different studies, there might be a process to cut them down without any data. This is my confusion.
As you said, you constructed the questionnaire from different scales, so all the items were presumably validated by other researchers before. Now, you have two options. First, go for a factor analysis, and many items will be dropped automatically (that is what exploratory factor analysis may yield). Or, second, go back and research the scales that you adopted in the first place. In many cases, alternative shorter versions of those scales may be available. This second route is more preemptive, because you keep your scales short yet previously validated, which should hopefully yield a shorter but still valid questionnaire.
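To make the first option concrete, here is a sketch of the loading-based screening step using scikit-learn's `FactorAnalysis`. The function name, the .40 cutoff (a common rule of thumb, but a judgment call), and the standardization step are my choices for illustration, not a prescribed procedure:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def weak_items(responses: np.ndarray, n_factors: int, cutoff: float = 0.40):
    """Return indices of items whose largest absolute loading is below
    `cutoff` -- candidates to drop after an exploratory factor analysis."""
    # Standardize items so loadings are on a correlation-like scale.
    z = (responses - responses.mean(axis=0)) / responses.std(axis=0, ddof=1)
    fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(z)
    loadings = fa.components_.T              # shape: (items, factors)
    return [i for i, row in enumerate(loadings) if np.abs(row).max() < cutoff]
```

Note that this still requires response data (e.g. from a pilot), and the number of factors should come from your theoretical model, not from the software.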
On the other side of the coin, you have a perfect question to be addressed by a pilot - ARE 100 items too many for this survey? I have certainly seen longer surveys (and used a few myself). You can sit down with a helpful volunteer, ask that person to take the survey, see how long it takes, then debrief your volunteer about the survey, the items, the length, etc.
By combining items from different questionnaires, you have actually created a new measure, in my opinion. Therefore, you cannot assume it is valid just because it comes from validated questionnaires. The validity of your measure has to be established separately.
If you want to cut down the number of items before your validation study, you can use a consensus process (e.g. Delphi), where experts from the field decide whether to retain or drop items.
You can conduct an item analysis through expert opinion (researchers in the field) to retain the theoretically relevant items, keeping your target sample in mind. Then you can conduct a discussion session with 5-10 respondents to delete items or change their wording. For details see:
Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21(5), 967-988.