Once you have completed your data entry, you can run virtually any statistical analysis with little or no further manipulation of the data. So yes, you can run your factor analysis and then multiple regression as you see fit.
One important note when running multiple regression, however: your categorical independent variables need to be (re)coded as 0 and 1.
As Mohammed indicated, once data are entered into SPSS, you are free to analyze them in multiple ways. However, if you are really asking how to reduce a set of variables into linear composites for further use as independent variables in multiple (linear) regression, then common factor analysis (in an EFA) or principal components analysis may be used at the front end. For your chosen extraction method, decide on the appropriate number of factors/components to extract. You may then either have factor/component scores saved to the data set (Analyze/Dimension Reduction/Factor/Scores/Save as variables), or simply give each variable on a factor equal unit weight and compute the scores yourself (e.g., COMPUTE f1 = var1 + var7 + var9 + var10.). Then use the resulting scores as variables in your regression analyses.
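The unit-weight approach is just a row-wise sum over the items assigned to a factor. A minimal sketch with made-up data (the item names var1/var7/var9/var10 mirror the SPSS COMPUTE example; the 100x10 Likert-type data set is invented):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 100 respondents, 10 Likert-type items (var1..var10),
# each scored 1-5.
data = rng.integers(1, 6, size=(100, 10))

# Unit-weight composite: sum the items that load on the same factor,
# mirroring the SPSS syntax COMPUTE f1 = var1 + var7 + var9 + var10.
# Columns are 0-indexed, so var1, var7, var9, var10 -> 0, 6, 8, 9.
f1 = data[:, [0, 6, 8, 9]].sum(axis=1)

print(f1.shape)  # one composite score per respondent
```

The resulting `f1` column can then be used as a predictor in the regression, just like a saved factor score.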
A factor analysis requires that the input variables be continuous and correlated with each other; otherwise FA is of no use. In multiple regression, by contrast, the regressor variables (or "independent variables") should be independent of each other, or your model may be affected by multicollinearity.
One approach to combining FA and regression is to run the FA and save the factor scores. If an orthogonal rotation is used, the resulting factors remain uncorrelated with each other. You can then use the factor scores in a multiple regression model.
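A sketch of this idea, using principal component scores as a stand-in for saved orthogonal factor scores (the data, seed, and outcome `y` are invented; the point is only that the scores come out uncorrelated, so multicollinearity disappears):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical correlated predictors: 200 cases, 6 variables built
# from 3 underlying dimensions plus noise.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 6)) \
    + 0.1 * rng.normal(size=(200, 6))

# Principal component scores (standing in for saved factor scores).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T          # keep the first two components

# The component scores are uncorrelated, so no multicollinearity:
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr[0, 1], 10))  # ~0

# Use the scores as regressors for a hypothetical outcome y.
y = rng.normal(size=200)
design = np.column_stack([np.ones(200), scores])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
```

In SPSS the equivalent is simply saving the rotated factor scores and entering them as predictors; the orthogonal rotation guarantees the near-zero correlation shown above.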
For future reference, have a look at the references in the Wikipedia article on data dredging. Here is a quote from the article:
"A key point in proper statistical analysis is to test a hypothesis with evidence (data) that was not used in constructing the hypothesis. "
The statement could be more specific:
"A key point in null hypothesis testing (p-values against a criterion) is to test a hypothesis with evidence (data) that was not used in constructing the hypothesis. "
There is no reason you can't use delta AIC (change in AIC) in place of p-values, after exploratory analyses to construct hypotheses.
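To illustrate a ΔAIC comparison, here is a minimal sketch that computes the Gaussian AIC for two OLS models by hand and takes the difference; the data, seed, and the helper `aic_ols` are all invented for the example (statistical packages report AIC directly):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 150
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)   # x2 is irrelevant by construction

def aic_ols(X, y):
    """Gaussian AIC (up to an additive constant) for an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1              # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

ones = np.ones(n)
aic_small = aic_ols(np.column_stack([ones, x1]), y)
aic_big = aic_ols(np.column_stack([ones, x1, x2]), y)

# A positive delta would mean the extra parameter is not worth the
# (small) improvement in fit; a delta below about -2 would favor
# the bigger model.
delta_aic = aic_big - aic_small
print(round(delta_aic, 2))
```

Note that adding a regressor can never increase the RSS, so ΔAIC for the larger model is bounded above by the 2-point penalty per extra parameter.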
Rather than take my word for it, have a look at the 4 references in the article.
Yeah, sure you can! The confusion may stem from studies that do both EFA and CFA; in that case you should randomly split your sample and do the CFA on the second half (confirmatory analyses should not be done on the same sample as the exploratory analyses).
Good idea, split the data set. One of my students had a large data set on methyl mercury in fish. It seemed a shame to blow it all on a single confirmatory analysis. So he did the exploratory analysis on the first year of data, then a confirmatory analysis on the next year of data relative to the model from the first year. Then he revised the model and ran a confirmatory analysis on the following year of data relative to the revised model. We both learned more from this than from a single confirmatory analysis, or from a single exploratory analysis on the entire data set.
However, if your data are not ordered by time, it is better to split the data at random, as John-Kåre suggested (and don't look at the data held in reserve for the confirmatory analysis!). From my experience with the mercury-in-fish data: if you do not have time-ordered data and you have enough of it, you might want to consider a random split into three groups. That way you get two confirmatory runs.
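A random three-way split like this is a one-liner once the row indices are shuffled. A minimal sketch, assuming a hypothetical sample of 90 cases:

```python
import numpy as np

rng = np.random.default_rng(3)
n_cases = 90                      # hypothetical sample size

# Shuffle the row indices, then cut them into one exploratory and
# two confirmatory subsets (only sensible when the rows are not
# time-ordered).
idx = rng.permutation(n_cases)
explore, confirm1, confirm2 = np.array_split(idx, 3)

print(len(explore), len(confirm1), len(confirm2))  # 30 30 30

# Fit/explore only on the `explore` rows; do not look at the two
# held-out subsets until each confirmatory analysis is run.
```

Fixing the seed makes the split reproducible, which matters if you need to document which cases went into which subset.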