Yes. First, it would be appropriate to propose a framework/model of what you intend to measure before finalizing the questionnaire. Based on that, several variables are chosen in the form of questionnaire items. Once the questionnaire is built, it needs to be validated using both exploratory and confirmatory factor analysis (EFA and CFA). In addition, Structural Equation Modelling (SEM) is usually carried out to validate the proposed model.
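For the EFA step specifically, a minimal sketch of how this is often done in Python is below, assuming the pilot responses sit in a pandas DataFrame with one column per item. The file name, the three-factor choice, and the thresholds in the comments are illustrative assumptions, not prescriptions.

```python
# Minimal EFA sketch using the factor_analyzer package.
# Assumes pilot responses in a CSV ("pilot_data.csv", hypothetical)
# with one column per questionnaire item.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

df = pd.read_csv("pilot_data.csv")  # hypothetical pilot dataset

# Check whether the data are suitable for factor analysis at all.
chi_square, p_value = calculate_bartlett_sphericity(df)  # want p < 0.05
kmo_per_item, kmo_overall = calculate_kmo(df)            # want KMO > 0.6

# Extract factors with an orthogonal (varimax) rotation.
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(df)
print(fa.loadings_)              # item-factor loadings
print(fa.get_factor_variance())  # variance explained per factor
```

Items that load weakly on every factor, or heavily on more than one, are the usual candidates for revision or removal before the final survey.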
Sir, I have framed my objectives accordingly, built a model, and prepared the questionnaire. For EFA and CFA, can I run a pilot survey first and then make changes before proceeding with the final analysis?
To measure your objectives, you need to select/develop the variables to be included in the questionnaire, i.e., decide which elements are to be measured. These elements are called 'items', and they are usually derived from the relevant literature (through content validation). Once the questionnaire is built, you conduct a pilot study to collect data for its validation. A minimum of 150 respondents, or 10% of your total population, whichever is higher, is chosen for the pilot study. Once the pilot data are collected, the required statistical tests are carried out to validate the questionnaire. Please see the attached model article, which explains the methodology.
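As a tiny illustration of that sample-size rule (the population figures are made up):

```python
# Sketch of the "150 or 10% of the population, whichever is higher"
# rule described above. The populations below are invented examples.
def pilot_sample_size(population: int, floor: int = 150,
                      fraction: float = 0.10) -> int:
    """Return the larger of the fixed floor and the population fraction."""
    return max(floor, round(population * fraction))

print(pilot_sample_size(2400))  # 10% of 2400 = 240 > 150, so 240
print(pilot_sample_size(900))   # 10% of 900 = 90 < 150, so 150
```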
Yes, Dr. Arun is right up to the point of questionnaire formation. After the questionnaire is formed, the researcher needs to do a pre-test; then the researcher can do the pilot study. However, I disagree with Dr. Arun regarding the minimum sample size for the pilot study. The minimum should be 100 (Awang, 2012, 2014, 2015; Hair, 2010).
Seconding the excellent advice from Dr Arun on choosing items. I'd add that if there is already a validated instrument in the literature, I'd use it rather than make up my own questions for that topic. Obviously, all of this has to work within the time and length constraints you have for a participant to complete the survey.
For example, if you wanted a measure of depression, the Beck Depression Inventory is a gold standard, so one could use its questions to get a widely accepted and credible depression score. But it has many questions, and if depression is only one small datum in your study, you can't afford to spend two whole pages of your survey on the topic or ask 21 questions about it. In that case there are shorter depression screens that are also well validated and involve a much shorter list of questions, e.g. the 9-item Patient Health Questionnaire (PHQ-9).
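To show why a short screen like the PHQ-9 is attractive, here is a sketch of its conventional scoring: nine items rated 0-3, summed to a 0-27 total. The severity bands below follow the commonly published cut-points, but verify them against the scoring guide you actually adopt.

```python
# Sketch of conventional PHQ-9 scoring: nine items, each rated 0-3,
# summed to a 0-27 total. Severity bands are the commonly published
# cut-points; confirm them against your adopted scoring guide.
def phq9_score(responses: list[int]) -> tuple[int, str]:
    assert len(responses) == 9 and all(0 <= r <= 3 for r in responses)
    total = sum(responses)
    if total >= 20:
        band = "severe"
    elif total >= 15:
        band = "moderately severe"
    elif total >= 10:
        band = "moderate"
    elif total >= 5:
        band = "mild"
    else:
        band = "minimal"
    return total, band

print(phq9_score([1, 2, 1, 0, 2, 1, 1, 0, 1]))  # (9, 'mild')
```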
Other considerations when choosing a validated instrument are whether you prefer to err on the side of detection (in which case use a more sensitive test, which yields fewer false negatives) or on the side of ruling something out (in which case use a more specific test, which yields fewer false positives). Many validated English-language instruments that were first developed in American or UK populations have since been translated or adapted for different cultural contexts and validated for those populations as well.
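For readers unfamiliar with the sensitivity/specificity trade-off mentioned above, a quick sketch of the underlying definitions; the counts are invented for illustration.

```python
# Sensitivity and specificity as computed from counts obtained by
# comparing a screening instrument against a reference ("gold standard")
# diagnosis. The numbers below are hypothetical.
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN). High => few false negatives."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: TN / (TN + FP). High => few false positives."""
    return tn / (tn + fp)

# Hypothetical validation counts for a depression screen:
print(sensitivity(tp=45, fn=5))   # 0.90: misses few true cases
print(specificity(tn=80, fp=20))  # 0.80: flags some healthy people
```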
The pre-test is a critical step, especially if the survey will be self-administered. As researchers, we are often poorly placed to have a good sense of how a survey participant will understand a question: we know too much and make too many assumptions that we can't be sure our participants share.
If you are doing an interviewer-administered questionnaire, I'd highly recommend observing 5-10 test interviews to see how easily the interviewer can ask each question appropriately, and how they respond when the test participant asks them to clarify the meaning of a question or the response options. This helps ensure the wording flows well and uses language that can be spoken reliably and without confusion, and it flags any interpretation/intent problems that your interviewer training needs to cover so you get consistent quality and understanding across interviewers.
If the survey is self-administered, I'd highly recommend observing 5-10 test runs in which each participant does a "talk-aloud": as they complete the survey, they read the questions out loud and verbalise how they interpret each question and decide among the response options before settling on one. In other words, they speak aloud the mental processes they go through as they work their way through the survey. This helps clarify ambiguous question wording, identify questions whose intent can be read in more than one way, uncover cases where the listed response options do not cover all the possible answers, and check whether the options are sufficiently distinct from one another and consistently understood to capture the data in the way the researcher intends.
On a simple logistical note, I've learned the hard way that it pays to think about the actual conditions under which the questionnaire will be completed. If it's out in the field (not in your nice office), and possibly even outdoors, I'd print on only one side of each sheet of paper and double-staple the pages of each questionnaire together. This makes it much easier for the person writing in the answers to use their lap or a clipboard as a writing surface, and it prevents multi-page questionnaires from coming apart as they are pulled out of and stuffed back into bags in the field or in windy conditions.
I'd also recommend putting a unique identifier on each questionnaire - many times I have needed to go back and find the original paper to check whether something that looked strange was a data-entry error. I made up a composite identification number that included the interviewer's initials, the day of the interview, and a sequence number, e.g. WM-07-001 for my first interview on, say, 7 March. This helped me know which interviewer to ask when I needed to clarify something they wrote on a questionnaire at the end of an interviewing shift, and it let me check for things like gender bias between male and female interviewers in the numbers of male and female participants they interviewed, in surveys where the interviewers were also doing the recruiting.
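If you generate these identifiers by script rather than by hand, a sketch of that scheme might look like the following; the exact format is just the example above, so adapt it to your own study.

```python
# Sketch of the composite identifier scheme described above:
# interviewer initials + day of month + per-interviewer sequence number.
import itertools
from collections import defaultdict

def make_id_generator():
    # One independent counter per (initials, day) pair.
    counters = defaultdict(lambda: itertools.count(1))
    def next_id(initials: str, day: int) -> str:
        seq = next(counters[(initials, day)])
        return f"{initials}-{day:02d}-{seq:03d}"
    return next_id

next_id = make_id_generator()
print(next_id("WM", 7))  # WM-07-001
print(next_id("WM", 7))  # WM-07-002
print(next_id("AB", 7))  # AB-07-001
```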
There are two categories of recommendations regarding minimum sample size for factor analysis. One category says that the absolute number of cases (N) is important, while the other says that the subject-to-variable ratio (N:p) is important. Check the following web link for more clarification:
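As a rough illustration of how the two schools combine in practice, one can take the larger of an absolute floor and a ratio-based minimum. The thresholds below (a floor of 150 and a 10:1 ratio) are common rules of thumb, not settled requirements.

```python
# Rough illustration of the two schools of thought on sample size for
# factor analysis: absolute-N rules vs. subject-to-variable (N:p) ratio
# rules. The thresholds are common rules of thumb, not settled law.
def required_n(num_items: int, absolute_floor: int = 150,
               ratio: int = 10) -> int:
    """Larger of a fixed minimum N and ratio * number of items."""
    return max(absolute_floor, ratio * num_items)

for items in (12, 25, 40):
    print(items, "items ->", required_n(items),
          "respondents (10:1 rule, floor 150)")
```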
Good luck with your research. We develop a model from existing theory, and then the model is tested. A questionnaire can be used to test the model; to develop the questionnaire, we can use existing scales, or adapt scales for each construct, and then test the model.