Coding them as zero would heavily bias any of your estimates. That said, Ehsan's suggestion to simply delete these records would be better, but it may also introduce unexpected biases, particularly if the pattern of missingness is correlated with the dependent variable or with the set of independent variables.
For example, if values are missing because they are censored on the left, then regression coefficients would be biased in predictable ways (intercept too high and slope too low).
I would suggest getting up to speed on statistical methods for censored data and on multiple imputation.
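To make the censoring point concrete, here is a small simulation (my own sketch with arbitrary made-up parameters, not anything from the answers above) comparing ordinary least squares on the full data against zero-coding and listwise deletion when y is left-censored:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" data-generating process: y = 1 + 2*x + noise (parameters are arbitrary).
n = 4000
x = rng.uniform(0, 3, n)
y = 1 + 2 * x + rng.normal(0, 1, n)

def ols(xv, yv):
    """Slope and intercept from a simple least-squares line."""
    slope, intercept = np.polyfit(xv, yv, 1)
    return slope, intercept

# Left-censoring: values of y below 3 are not recorded.
censored = y < 3

# Strategy 1: code the censored values as zero.
y_zero = np.where(censored, 0.0, y)
slope_zero, _ = ols(x, y_zero)

# Strategy 2: listwise deletion (drop the censored records entirely).
slope_del, icept_del = ols(x[~censored], y[~censored])

slope_full, icept_full = ols(x, y)
print(f"full data:        slope={slope_full:.2f}, intercept={icept_full:.2f}")
print(f"zero-coded:       slope={slope_zero:.2f}")
print(f"listwise deleted: slope={slope_del:.2f}, intercept={icept_del:.2f}")
```

In this particular setup the deleted sample is truncated from below, so the listwise slope comes out below the true value and the intercept above it, matching the "intercept too high, slope too low" pattern; zero-coding distorts the fit in its own way.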
Dear Zerihun, you can only exclude them; treating them as zero is not an appropriate strategy. Nowadays statistical software can handle missing data correctly. You can simply analyse the data in SPSS or SAS, with missing values coded as dots or other symbols depending on the software.
Like John, I disagree with Ehsan's and Abdelrahman's recommendations. SAS *can* do multiple imputation, and that may be what Abdelrahman had in mind, but SPSS, to my knowledge, cannot. All are correct, though, that zero- or mean-substitution is a poor choice.
Elaborating on John's point: with a simple missing-data approach, such as listwise deletion, in which SAS or SPSS simply omits those records from the analysis, you are making a very strong assumption about the mechanism of missingness -- specifically, that the data are "Missing Completely at Random" (MCAR).
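As a quick illustration of why that assumption matters (a toy simulation of my own, not output from any of the packages mentioned), compare listwise deletion under MCAR with listwise deletion when missingness depends on an observed variable:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two correlated variables; we want to estimate the mean of y (true mean is 0).
n = 5000
x = rng.normal(0, 1, n)
y = x + rng.normal(0, 1, n)

# MCAR: each y has a 50% chance of being missing, independent of everything.
mcar_obs = rng.random(n) > 0.5
mean_mcar = y[mcar_obs].mean()

# Not MCAR: y is missing whenever x is above its mean.
mar_obs = x < 0
mean_mar = y[mar_obs].mean()

print(f"full-sample mean of y:          {y.mean():+.3f}")
print(f"listwise mean under MCAR:       {mean_mcar:+.3f}")  # close to the truth
print(f"listwise mean, missing if x>0:  {mean_mar:+.3f}")   # badly biased
```

Under MCAR the deleted records are a random subsample, so only precision is lost; once missingness tracks x, the surviving cases are systematically unrepresentative and the listwise estimate is biased.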
There are fairly easy-to-use techniques (and some harder ones) that do not require the MCAR assumption. For most simpler models, estimators have been developed (most typically maximum likelihood from the raw data) that require a weaker assumption, Missing At Random (MAR). These are implemented in any modern latent variable software, including AMOS, which is attached to SPSS. They will transparently compensate for much of any bias.
More generally, you can use, as John suggested, multiple imputation. It is more broadly applicable and more versatile than ML estimation, but much less transparent. PROC MI in SAS can do this when the variables with missing data are continuous and normal. Procedures in other software (R, Mplus) are more flexible.
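To show the mechanics, here is a bare-bones multiple imputation sketch in plain Python/NumPy (my own illustration; a real analysis should use PROC MI, the mice package in R, or similar). It imputes missing y values from a regression on the observed x, repeats that m times with residual noise added, and pools the estimates of the mean of y with Rubin's rules:

```python
import numpy as np

rng = np.random.default_rng(2)

# True mean of y is 0; y is missing more often when x is high (MAR given x).
n = 5000
x = rng.normal(0, 1, n)
y = x + rng.normal(0, 1, n)
missing = rng.random(n) < 1 / (1 + np.exp(-2 * x))  # P(missing) rises with x
obs = ~missing

# Complete-case estimate for comparison: biased, because low-x (hence low-y)
# records are over-represented among the complete cases.
cc_mean = y[obs].mean()

# Imputation model fitted on the complete cases: y ~ a + b*x.
b, a = np.polyfit(x[obs], y[obs], 1)
resid_sd = np.std(y[obs] - (a + b * x[obs]))

m = 20  # number of imputed data sets
estimates, variances = [], []
for _ in range(m):
    y_imp = y.copy()
    # Draw imputations from the fitted model *plus* residual noise,
    # so the imputed values have realistic variability.
    y_imp[missing] = a + b * x[missing] + rng.normal(0, resid_sd, missing.sum())
    estimates.append(y_imp.mean())
    variances.append(y_imp.var(ddof=1) / n)

# Rubin's rules: pool the m estimates; total variance combines the
# within-imputation and between-imputation components.
q_bar = np.mean(estimates)
w = np.mean(variances)            # within-imputation variance
b_var = np.var(estimates, ddof=1) # between-imputation variance
total_var = w + (1 + 1 / m) * b_var
print(f"complete-case mean: {cc_mean:+.3f}  (biased)")
print(f"MI pooled mean:     {q_bar:+.3f} +/- {np.sqrt(total_var):.3f}")
```

Because the missingness depends only on the observed x, and x is in the imputation model, the pooled MI estimate lands near the truth while the complete-case estimate does not; the between-imputation term in the variance is what keeps the standard error honest about the uncertainty due to imputation.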
There are also cases where you cannot plausibly argue even for the MAR assumption, but MAR methods are still almost always better than listwise deletion. If it is a major issue, there are techniques for data that are Missing Not At Random, but they take a lot of homework.
All of this is detailed in both technical references and didactic articles.
I definitely would not code them as 0. That could be the worst thing to do, short of coding them as the mean. You might look into missing data techniques; multiple imputation is probably the best option. If you follow the link below and look under Feb 8th, you'll find a presentation by Little on multiple imputation. I found it helpful.