Of course, there is software for the best and most complicated approach, which uses an EM algorithm to do full-information imputation. But if you only want something simple --
The simplest way, for a continuous variable, is to substitute the mean for missing values. Or, if the variable is categorical, and the modal category includes a great majority of the cases, you could assign missing cases to the mode. This will reduce the variance of a continuous variable. It is further problematic for a t-test if there is substantial bias in item non-response (which, in general, you can't easily detect). But with a slight increase in complication, you can address such problems by doing your significance tests in a regression framework instead of simple t-tests.
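Here's a quick sketch of that in Python with pandas (the variable names are made up for illustration): mean substitution for a continuous variable and modal substitution for a categorical one.

```python
import pandas as pd

# Toy data with item non-response; column names are hypothetical.
df = pd.DataFrame({
    "income": [42.0, None, 55.0, 61.0, None, 48.0],
    "region": ["north", "north", None, "south", "north", "north"],
})

# Mean substitution for the continuous variable.
df["income_filled"] = df["income"].fillna(df["income"].mean())

# Modal substitution for the categorical variable
# (mode() ignores missing values by default).
df["region_filled"] = df["region"].fillna(df["region"].mode().iloc[0])

print(df[["income_filled", "region_filled"]])
```

Note that after the fill, `income_filled` has a smaller standard deviation than the observed values of `income` -- that's the variance reduction mentioned above.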
For any of the independent variables (IVs) in a regression-type model, you could include, for each IV, a dummy variable scored 1 for cases where you have substituted the mean (or mode), and scored 0 for cases that did not have missing data on that variable. The dummy variables will help control for bias that might be introduced by the mean/mode substitution. Your sample size will need to be fairly large, since the dummy variables could double the number of IVs in your regression model. Grilich's Law (n >= 5 × the number of IVs) is always a wise guideline to follow.
Another handy trick, if you are adding several items together to form a scale, is to use the mean of the items instead of the sum. Recode the items with missing values to zero, and when you calculate the mean of the items, divide by the number of non-missing items instead of the total number of items. Then you get a non-missing value for the scale if at least one of the items is non-missing. Or, to be safe, you might decide to require that at least a majority of the items be non-missing. If you are using SPSS for the data analysis, the MEAN function in the Compute command lets you specify how many of the items must be non-missing to generate a non-missing mean for the scale. But pretty much any software can do the same thing, if you program it appropriately.
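For anyone not on SPSS, here's one way to do the same thing in pandas (the item names are hypothetical): take the mean over non-missing items, then blank out any case with too few answered items.

```python
import numpy as np
import pandas as pd

# Four hypothetical scale items; NaN marks item non-response.
items = pd.DataFrame({
    "q1": [4, 3, np.nan, np.nan],
    "q2": [5, np.nan, np.nan, 2],
    "q3": [4, 4, np.nan, np.nan],
    "q4": [3, 5, 2, np.nan],
})

min_items = 3  # require a majority of the four items

# mean(axis=1) already divides by the number of non-missing items.
scale = items.mean(axis=1)
# Set the scale to missing where too few items were answered.
scale[items.notna().sum(axis=1) < min_items] = np.nan
print(scale)
```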
With due respect to Burke's other suggestions, I argue against mean substitution for continuous data. It does terrible things to normality of residuals.
Patrick is right, of course. And as he notes in another post, it also reduces correlations. Those are some of the reasons EM imputation is preferred. On the other hand, mean substitution is quick and easy. In a regression framework, inclusion of the dummy terms for missing data will help some with the residuals, and OLS regression is pretty robust to the normality assumption. So, pick your poison -- quick and dirty, or complicated but cleaner.
Here's another thought: (1) Often, the cases that are missing on one of your variables are pretty much the same cases that are missing on the others. In that situation, listwise deletion will not eliminate many cases, so you might decide not to worry about it. (2) On the other hand, sometimes one variable has quite a bit of missing data while the other variables have much less. In survey data, income might have 10% missing or more, while none of the other survey questions has more than 2% missing, and the missing cases on any one of those variables overlap a lot with the missing on the others. In those circumstances, listwise deletion will cost you a lot of cases, but there's a (simple and partial) fix available. You can do imputation (or mean substitution plus a dummy variable) only on the variable that's the biggest problem. That will reduce the problem to the equivalent of situation (1), so you might decide that the remaining missing data is not worth fixing.
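A quick way to check which situation you're in, sketched in pandas on an invented example: count missing values per variable, count cases lost to listwise deletion, then see how many cases you'd save by fixing only the worst variable.

```python
import numpy as np
import pandas as pd

# Hypothetical survey data: income has most of the missing values.
df = pd.DataFrame({
    "income":    [50.0, np.nan, 62.0, np.nan, 45.0, np.nan],
    "education": [12,   16,     np.nan, 14,   18,   11],
    "age":       [34,   51,     np.nan, 29,   42,   60],
})

# Per-variable missing counts versus cases lost to listwise deletion.
print(df.isna().sum())
lost_before = len(df) - len(df.dropna())
print(lost_before, "cases lost to listwise deletion")

# Impute only the biggest problem variable (mean substitution here).
df["income"] = df["income"].fillna(df["income"].mean())
lost_after = len(df) - len(df.dropna())
print(lost_after, "cases lost after imputing income only")
```

If `lost_after` is close to `lost_before`, you're in situation (1) and the extra effort probably wasn't needed; if it drops sharply, as here, you were in situation (2).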
Suppose I have two columns of data: one is Age, and the other is a variable with some missing values. I want to check whether the missing mechanism is MAR. I think there is a way to detect the kind of mechanism. I can create a binary variable (0 if the second variable is missing, 1 otherwise). Then I compare the mean of Age between the two groups (missing and non-missing). If the mean of one group (missing or non-missing) is significantly higher than the other, the missing mechanism will not be MCAR.
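That comparison is just a two-sample t-test of Age by the missingness indicator. A sketch on simulated data (the dependence of missingness on age is invented so the test has something to find); a significant difference is evidence against MCAR, though it cannot by itself distinguish MAR from MNAR.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 300
age = rng.normal(40, 10, size=n)

# Hypothetical mechanism: older respondents are more likely
# to skip the second question.
p_miss = 1 / (1 + np.exp(-(age - 45) / 5))
missing = rng.random(n) < p_miss

# Compare mean Age between the missing and non-missing groups.
t, p = stats.ttest_ind(age[missing], age[~missing])
print(f"t = {t:.2f}, p = {p:.4g}")
```

With a mechanism like this one, the missing group is clearly older and the test rejects MCAR; under true MCAR you would expect a non-significant result.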