If you're looking for a checklist, there is an AMEE Guide that has some very nice tables and a step-by-step approach. Very practical and shorter than a full book. Happy validating!
http://www.ncbi.nlm.nih.gov/pubmed/24661014
Artino AR et al. Developing questionnaires for educational research. Med Teach. 2014. 36(6):463-74.
You will have face validity - the items look like they measure what you claim they measure. You will have reliability - the participants' responses to items are consistent. But you will not have content validity or construct validity unless you specifically design your pilot to test for them.
I like Newton & Shaw's book on validity in Educational and Psychological Assessment (because that is my field), but there are other sources for more info on these areas. Basically, you don't want to run your survey until you are reasonably certain that the survey items are measuring what you claim that they are measuring - that the content is solid and that the constructs are being captured correctly.
After developing the items, you can administer the questionnaire to a sample and run an exploratory factor analysis (EFA) to see whether the data are consistent with your theoretical construct. Item-total correlations will flag the "weak" items. You can strengthen the survey's validity (though I do not know what you plan to do) by conducting a focus group study with members of the target group, or by using other qualitative methods such as interviews. All of this depends on the purpose, the context, the target group and, of course, your resources.

You can also use a similar (or dissimilar), previously validated survey, or an outcome variable, and calculate correlations to check for convergence with (or divergence from) your results. For instance, if you are measuring organizational commitment, you can correlate your scale with a scale of organizational alienation; a correlation of -.70 would be evidence of divergent validity. You can also conduct a retest to assess reliability over time. If you are using a one-factor survey, you can check whether the upper and lower 27% of scorers have significantly different totals; this shows whether the survey differentiates the sample in a meaningful way. I would also suggest examining validation papers relevant to your area of study.
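Two of the checks mentioned here - corrected item-total correlations and the upper/lower 27% extreme-groups comparison - can be sketched in a few lines of Python. This is a minimal illustration with simulated data; the 0.30 cut-off and the sample sizes are illustrative assumptions, not part of the advice above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 6
# Simulated Likert-style responses driven by a single latent trait
trait = rng.normal(size=n_respondents)
items = np.clip(
    np.round(3 + trait[:, None] + rng.normal(scale=0.8, size=(n_respondents, n_items))),
    1, 5,
)

totals = items.sum(axis=1)
# Corrected item-total correlation: correlate each item with the total of
# the *other* items, so the item is not correlated with itself
item_total = [
    np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
    for j in range(n_items)
]
weak = [j for j, r in enumerate(item_total) if r < 0.30]  # common cut-off

# Extreme-groups check: do the top and bottom 27% of scorers differ?
order = np.argsort(totals)
k = int(0.27 * n_respondents)
low, high = totals[order[:k]], totals[order[-k:]]
print("item-total r:", np.round(item_total, 2))
print("weak items:", weak)
print("mean(low 27%) =", low.mean(), "mean(high 27%) =", high.mean())
```

In a real pilot you would of course follow the extreme-groups comparison with a significance test (e.g. an independent-samples t-test) rather than just comparing the means.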
It really depends on your questionnaire type, but you may take advantage of more advanced steps. For instance:
For reflective constructs:
- Internal consistency reliability
  - Cronbach's alpha
  - Composite reliability
- Convergent validity
  - Factor outer loadings
  - Average variance extracted
- Discriminant validity
  - Cross-factor loadings
  - Fornell-Larcker criterion

For formative constructs:
- Collinearity issues
  - Tolerance
  - Variance inflation factor
- Significance of outer weights
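Several of the statistics in this list are simple enough to compute directly. Here is a minimal numpy sketch with made-up data; the loadings, seed, and thresholds are illustrative assumptions, not values from any real study:

```python
import numpy as np

# --- Reflective construct: alpha, composite reliability, AVE ---
def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents x items matrix
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    # loadings: standardized outer loadings of one construct
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    # Average variance extracted; >= 0.50 is the usual benchmark
    return float((loadings ** 2).mean())

# --- Formative construct: variance inflation factor ---
def vif(X: np.ndarray, j: int) -> float:
    # Regress indicator j on the remaining indicators; VIF = 1/(1 - R^2).
    # VIF above roughly 5 (some say 3) signals a collinearity problem.
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

# Demo with simulated data and made-up loadings
rng = np.random.default_rng(1)
base = rng.normal(size=200)
items = 3 + base[:, None] + rng.normal(scale=0.8, size=(200, 4))
X = np.column_stack([base + rng.normal(scale=0.5, size=200) for _ in range(3)])
loadings = np.array([0.82, 0.76, 0.71, 0.88])

print("alpha =", round(cronbach_alpha(items), 2))
print("CR  =", round(composite_reliability(loadings), 3))   # ~0.87
print("AVE =", round(ave(loadings), 3))                      # ~0.63
print("VIF(x0) =", round(vif(X, 0), 2))
```

In practice you would take the loadings from your EFA or PLS-SEM output rather than typing them in, but the formulas themselves are just these one-liners.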
You may also find this paper helpful: "Construct measurement and validation procedures in MIS and behavioral research: Integrating new and existing techniques".
Thank you all for your input. Noted that the EFA and item-total correlations will help me remove weak items after administering the pilot. Will check out Newton and Shaw's book as well as the paper on construct measurement and validation procedures. Thanks again.
A comment which may or may not apply to your research: sometimes researchers use existing questionnaires in English and translate them into their native language. For example, I have conducted research in French with translated questions.
Be aware that the questionnaire in the target language must be validated as if it were a new questionnaire. Moreover, researchers conducting multilingual research should ensure that the linguistic versions are equivalent to each other.
We wrote two papers on this topic, a book chapter in English and an article in French. The English paper may be downloaded here:
You are getting very good advice here. I especially appreciate Francine's contribution, because I do research using very large multi-country surveys (such as the World Values Survey), and in my opinion even if the translations of individual items (questions) are good, that does not mean that the combined indices mean the same things in different cultures. The larger point is that even if a multi-item scale has been validated repeatedly in one culture (and one language), that does not mean it is valid for the population that you are studying.
Assessing the psychometric properties of a questionnaire and assessing the validity of a questionnaire are two different things, though some researchers use the terms interchangeably (they often call both "validation"). First you have to be clear about which one you want to do.