If you are unsure about the structure of a measure and wish to evaluate it, you certainly may do so (typically with exploratory factor analysis) after data collection. Doing so may inform you about the most suitable way to handle scores and/or subscores from your measure.
On the other hand, if you'd like to confirm whether a previously asserted structure for a measure holds in your sample (typically checked using confirmatory factor analysis), then you would, of course, have to collect (actual!) data to do so.
All of which is to say: of course, these inquiries can legitimately take place after data collection. Test developers would consider this data collection part of the tryout phase of the measure, and that may be where your impression that such questions are addressed in a pilot study originated.
I always recommend - explore your data. The more you get to know your data, the better.
Conducting Exploratory Factor Analysis (EFA) on pilot data can certainly inform how you move forward in the study (e.g. deleting items from a questionnaire once you discover they are not measuring what you intend to measure).
However, EFA should not be discarded after the main study data have been collected. EFA can be a step in getting to know your main study data - e.g. to see whether any items of the main study questionnaire might need to be removed before commencing analysis.
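As a rough illustration of this item-screening idea (not any specific package's workflow), here is a sketch using scikit-learn's `FactorAnalysis` on synthetic data. The sample size, the true loading pattern, and the 0.4 loading cut-off are all my own illustrative assumptions, not established standards:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300  # hypothetical pilot sample size

# Simulate two latent factors; items 0-2 load on F1, items 3-5 on F2,
# and item 6 is mostly noise - the kind of item EFA can flag for removal.
factors = rng.normal(size=(n, 2))
true_loadings = np.array([
    [0.80, 0.00], [0.70, 0.00], [0.75, 0.00],
    [0.00, 0.80], [0.00, 0.70], [0.00, 0.75],
    [0.10, 0.10],
])
items = factors @ true_loadings.T + rng.normal(scale=0.5, size=(n, 7))

# Fit a two-factor EFA and inspect the estimated loadings.
fa = FactorAnalysis(n_components=2).fit(items)
est = fa.components_.T  # shape (7 items, 2 factors)

# Flag items whose largest absolute loading falls below an assumed cut-off.
weak = [i for i in range(7) if np.abs(est[i]).max() < 0.4]
print("candidate items to drop:", weak)
```

With data like these, only the noise item should fall below the cut-off; in real pilot data you would of course also weigh item content, not just loadings.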
There is no hard and fast rule - it all depends on your RQs, i.e. the aim of what you are trying to do :)
Actually, this mainly depends on your research objectives as well as what is known about the dimensions of the construct. More precisely, if the factors are ambiguous, EFA is recommended. In contrast, if the factors are well established in the field, you may move directly to CFA. However, it is better to conduct EFA after your pilot study to achieve robust results.
The problem with doing EFA on a pilot study is almost always the instability of the results when using a small N. I agree with others that EFA with a small sample is most likely to be useful when your goal is to determine whether a set of items forms a single factor. It is less likely to be useful if you want to evaluate a more complex factor structure.
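One rough way to gauge whether a set of items behaves unidimensionally, even in a smallish sample, is to look at the eigenvalues of the item correlation matrix: a dominant first eigenvalue is consistent with a single factor. A minimal sketch on synthetic data, where the sample size and loadings are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150  # smallish pilot-style sample

# Five items all driven by one latent factor plus noise.
latent = rng.normal(size=n)
items = np.column_stack([
    0.7 * latent + rng.normal(scale=0.6, size=n) for _ in range(5)
])

# Eigenvalues of the item correlation matrix, largest first.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print("share of variance on first component:", eigvals[0] / eigvals.sum())
```

If the first eigenvalue carries most of the variance and the rest fall below 1 (the usual Kaiser cut-off), a single-factor reading is plausible; a more complex structure would spread variance across several eigenvalues, which is exactly where small-N results get unstable.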
I'd like to echo the comments of Muhammend Farrukh. If the results of the pilot test show variability between the measured variables and the constructs, then the need for EFA becomes inevitable.
In continuation of the question: what is the minimum sample size to perform CFA? Also, can CFA be done in SPSS, and which is the best software for CFA?
Regarding your question about sample size, there is a rule of thumb that calls for 5 to 10 respondents for each item in the questionnaire you use, excluding the variables used for descriptive statistics, such as gender, education, age, etc.
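The rule of thumb above amounts to simple arithmetic; a tiny sketch, with the questionnaire length purely hypothetical:

```python
# Rule of thumb: 5 to 10 respondents per substantive item
# (demographic variables such as gender, education, age excluded).
n_items = 24  # hypothetical questionnaire length

low, high = 5 * n_items, 10 * n_items
print(f"target sample size: {low} to {high} respondents")
```

So a 24-item questionnaire would call for roughly 120 to 240 respondents under this rule; other guidelines (e.g. absolute minimums for stable covariance estimates) may push the target higher.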
Regarding software, as far as I know, SPSS itself does only EFA, not CFA. For CFA you need dedicated software.
I have used AMOS and SmartPLS most frequently, and STATA once in a while, with quite good results. People tend to treat these packages like sects: those who love AMOS say all the others are bad; those who love SmartPLS say all the others are bad, and so on.
I prefer to be more eclectic and use the software most appropriate for the kind of research I'm facing. For instance, PLS works well with formative constructs, which AMOS does not; PLS also works well with small sample sizes, while AMOS does not.
AMOS and STATA are covariance-based packages, which are believed to be more precise; SmartPLS is a variance-based package, which is believed to be more lenient about the relationships studied.
So, the choice depends upon what you want to do and what kind of data you have in hand.