First, if by 'control firm' you mean a categorical variable, and you are trying to find how much additional variance it can explain, then you can perform multiple regression. But before that you need to verify the following assumptions:
(a) VARIABLES ARE NORMALLY DISTRIBUTED
You can check this by visual inspection of data plots, skewness, kurtosis, and P-P plots. In addition, you can run a Kolmogorov-Smirnov test, which is an inferential test of normality.
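A minimal sketch of such a normality screen in Python, using hypothetical simulated data (the variable names and sample are illustrative, not from your study):

```python
# Screening a variable for normality: skewness, kurtosis, and a
# Kolmogorov-Smirnov test against a normal distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=50, scale=10, size=500)  # hypothetical, roughly normal sample

print("skewness:", stats.skew(x))
print("excess kurtosis:", stats.kurtosis(x))

# KS test against a normal with the sample's own mean and SD.
# (Estimating the parameters from the data makes the plain KS test
# conservative; the Lilliefors correction adjusts for this.)
stat, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
print("KS statistic:", stat, "p-value:", p)
```

A large p-value here means the test found no evidence against normality; it does not prove the data are normal, which is why the visual checks remain useful.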
(b) ASSUMPTION OF A LINEAR RELATIONSHIP BETWEEN THE INDEPENDENT AND DEPENDENT VARIABLE(S)
If the relationship between independent variables (IV) and the dependent variable (DV) is not linear, the results of the regression analysis will under-estimate the true relationship. A preferable method of detection is examination of residual plots (plots of the standardized residuals as a function of standardized predicted values, readily available in most statistical software).
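The residual-plot check described above can be sketched numerically as well; in a correctly specified linear model the standardized residuals should form a patternless cloud around zero across the standardized predicted values (data below are hypothetical):

```python
# Standardized residuals vs. standardized predicted values for an OLS fit.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200)

# OLS via least squares, with an intercept column
A = np.column_stack([np.ones(len(y)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ beta
resid = y - fitted

# Standardize both axes (this is the pair one would scatter-plot)
z_fitted = (fitted - fitted.mean()) / fitted.std(ddof=1)
z_resid = resid / resid.std(ddof=1)

# With an intercept in the model, residuals average exactly zero;
# systematic drift of z_resid across z_fitted would indicate nonlinearity.
print("mean standardized residual:", z_resid.mean())
```

In practice you would scatter-plot `z_fitted` against `z_resid`; most statistical packages produce this plot directly.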
(c) VARIABLES ARE MEASURED WITHOUT ERROR (RELIABLY)
In the case of multiple regression or partial correlation, the effect sizes of other variables can be over-estimated if the covariate is not measured reliably, because the full effect of the covariate(s) is then not removed. You can use reliability estimates such as Cronbach's alpha; a value of 0.8-0.9 or above is generally considered reliable and acceptable.
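Cronbach's alpha is straightforward to compute from an items matrix; here is a sketch using a simulated 4-item scale (the data and function name are illustrative):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/variance(total score))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with shape (n_respondents, k_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-item scale driven by one latent trait -> high alpha expected
rng = np.random.default_rng(2)
trait = rng.normal(size=300)
items = np.column_stack([trait + rng.normal(scale=0.5, size=300)
                         for _ in range(4)])
alpha = cronbach_alpha(items)
print("Cronbach's alpha:", round(alpha, 3))
```

Because the simulated items share a strong common factor, alpha here comes out well above the 0.8 threshold mentioned above.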
(d) ASSUMPTION OF HOMOSCEDASTICITY
Homoscedasticity means that the variance of the errors is the same across all levels of the independent variable(s). Heteroscedasticity, by contrast, is indicated when the residuals are not evenly scattered around the line; it can take many forms, such as a bow-tie or fan shape. When the plot of residuals appears to deviate substantially from this even scatter, more formal tests for heteroscedasticity should be performed. The Goldfeld-Quandt test is appropriate when the error variance consistently increases or decreases with the value of the DV, as in the fan-shaped plot; the Glejser test is appropriate when the error term has small variance at central observations and larger variance at the extremes, as in the bow-tie-shaped plot. Where skew is present in the IVs, transforming the variables can reduce the heteroscedasticity.
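The Goldfeld-Quandt idea can be sketched directly: order the observations by the predictor, drop a central band, fit OLS separately on each half, and compare the residual variances with an F test. The data below are simulated with a fan shape (error variance growing with x), so the test should flag heteroscedasticity:

```python
# Goldfeld-Quandt-style test, implemented by hand on hypothetical data.
# (statsmodels also provides this as het_goldfeldquandt.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 300
x = np.sort(rng.uniform(0, 10, size=n))
y = 2.0 + 1.5 * x + rng.normal(scale=0.2 + 0.3 * x)  # fan-shaped errors

def ols_ssr(x_part, y_part):
    """Fit y = a + b*x by least squares; return SSR and residual df."""
    A = np.column_stack([np.ones(len(x_part)), x_part])
    beta, *_ = np.linalg.lstsq(A, y_part, rcond=None)
    resid = y_part - A @ beta
    return resid @ resid, len(y_part) - 2

drop = n // 5                          # omit the central fifth
lo, hi = (n - drop) // 2, (n + drop) // 2
ssr1, df1 = ols_ssr(x[:lo], y[:lo])    # low-x half (small variance)
ssr2, df2 = ols_ssr(x[hi:], y[hi:])    # high-x half (large variance)

F = (ssr2 / df2) / (ssr1 / df1)
p = stats.f.sf(F, df2, df1)
print(f"GQ F = {F:.2f}, p = {p:.4g}")  # small p => heteroscedasticity
```

The Glejser test follows the same spirit but regresses the absolute residuals on the predictor and tests whether that slope is zero.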
If your data comply with these four important assumptions, then you can go ahead with your desired multiple regression analysis. The following are some links that will show you how to perform multiple regression analysis using SPSS:
If I understood you well, by 'control firms' you mean introducing into the empirical analysis other firms without the effect you are analyzing, in order to make your results on the 'interesting' firms more robust.
My suggestion: do a matching exercise. For each firm you have, search for another firm, a 'twin', and include it in the empirical analysis. These twins must be determined on the basis of what is of interest in the analysis (say, a firm in the same size range and sector, maybe the same region; it depends on the topic).
Once you have your firms and the ones that match them, the latter can serve as control firms. Check (in a regression with a dummy variable, for instance) whether the effect is different in your sample than in that of the 'twins'.
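The matching-plus-dummy procedure above can be sketched as follows. All firm data here are simulated and all variable names are hypothetical; the treated firms are given a built-in extra effect of +1.0 so the dummy coefficient has a known target:

```python
# Nearest-neighbour matching on size within sector, then OLS with a
# treatment dummy to compare 'interesting' firms against their twins.
import numpy as np

rng = np.random.default_rng(4)
n_treated, n_pool = 30, 200

# Hypothetical firm characteristics and outcomes
treated_size = rng.normal(5.0, 1.0, n_treated)
treated_sector = rng.integers(0, 3, n_treated)
pool_size = rng.normal(5.0, 1.5, n_pool)
pool_sector = rng.integers(0, 3, n_pool)

y_treated = 2.0 + 0.5 * treated_size + 1.0 + rng.normal(0, 0.5, n_treated)
y_pool = 2.0 + 0.5 * pool_size + rng.normal(0, 0.5, n_pool)

# For each treated firm, pick the closest-sized firm in the same sector
# (matching with replacement: a pool firm may be used as a twin twice)
match_idx = []
for s, sec in zip(treated_size, treated_sector):
    candidates = np.where(pool_sector == sec)[0]
    match_idx.append(candidates[np.argmin(np.abs(pool_size[candidates] - s))])
match_idx = np.array(match_idx)

# Stack treated firms and their twins; dummy = 1 for treated, 0 for twins
size = np.concatenate([treated_size, pool_size[match_idx]])
dummy = np.concatenate([np.ones(n_treated), np.zeros(n_treated)])
y = np.concatenate([y_treated, y_pool[match_idx]])

A = np.column_stack([np.ones(len(y)), size, dummy])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated treatment effect (dummy coefficient):", round(beta[2], 2))
```

The dummy coefficient recovers roughly the built-in +1.0 effect; in a real application the matching variables (size, sector, region, and so on) should be chosen to reflect what matters for your research question.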