I've used Spearman's rank correlations on multiple imputed data. This has generated a pooled correlation coefficient, but no p value. Does anybody know how I can calculate the p value from Spearman's rho?
I don't know which software package you use; I'll show this in R.
First, generate some random data:
> set.seed(1)
> x <- rnorm(100)
> y <- rnorm(100)
> cor.test(x, y)
Pearson's product-moment correlation
data: x and y
t = -0.0098, df = 98, p-value = 0.9922
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.1973739 0.1954620
sample estimates:
cor
-0.0009943199
The Spearman correlation is:
> cor.test(x, y, method="spearman")
Spearman's rank correlation rho
data: x and y
S = 160198, p-value = 0.7017
alternative hypothesis: true rho is not equal to 0
sample estimates:
rho
0.03871587
One way to compute a z-value, and from it a two-sided p-value, is:
> r <- unname(cor.test(x, y, method="spearman")$estimate)
> n <- length(x)
> z <- sqrt((n - 3) / 1.06) * atanh(r)
> p <- 2 * pnorm(-abs(z))
> cbind(r, z, p)  # results same as above
              r         z         p
[1,] 0.03871587 0.3705434 0.7109776
Using -abs(z) ensures that you get the area under the curve from the left side up to your value, regardless of the z-value's sign; multiplying by 2 then gives the two-sided p-value.
Actually, the Spearman correlation is just a Pearson correlation of rank-transformed data, so you can apply the rank() function to your variables and call cor.test() with method = "pearson"; it returns the same correlation and p-value as method = "spearman":
> cor.test(rank(x), rank(y), method="pearson")
Pearson's product-moment correlation
data: rank(x) and rank(y)
t = 0.3836, df = 98, p-value = 0.7021
alternative hypothesis: true correlation is not equal to 0
I kicked SPSS off my machines ages ago, so I can't tell whether this works, but I've found that under Transform > Rank Cases you can rank-transform your variables. I don't know how tied ranks are handled there; in R, the average rank is assigned to equal values of a variable (there is a "Ties" button in the SPSS rank dialog, so you may have to set this there). Once you have rank-transformed your variables, you can compute simple bivariate (Pearson) correlations of your variables and get p-values.
However, if you have used some imputation method for your variables, don't you get a set of new variables with imputed values (I don't know what SPSS does here)? If so, you could compute Spearman correlations right away (with p-values, etc.) and save yourself a lot of pointing and clicking. ;)
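If you do get m complete imputed datasets, one way to combine the per-imputation Spearman correlations into a single pooled estimate and p-value is to apply Rubin's rules on the Fisher z scale. A minimal base-R sketch, where the simulated "imputations" and all variable names are purely illustrative:

```r
# Pool Spearman correlations from m imputed datasets via Rubin's rules
# on the Fisher z scale. Here we simulate m independent datasets to
# stand in for the imputations.
set.seed(2)
n <- 100; m <- 5
rhos <- replicate(m, cor(rnorm(n), rnorm(n), method = "spearman"))

q   <- atanh(rhos)     # Fisher z of each imputation's rho
W   <- 1 / (n - 3)     # within-imputation variance of Fisher z
B   <- var(q)          # between-imputation variance
qbar <- mean(q)        # pooled estimate on the z scale
tot <- W + (1 + 1/m) * B                       # total variance

df <- (m - 1) * (1 + W / ((1 + 1/m) * B))^2    # Rubin's degrees of freedom
p  <- 2 * pt(-abs(qbar / sqrt(tot)), df)

c(pooled_rho = tanh(qbar), p = p)
```

Back-transforming qbar with tanh() gives the pooled correlation on the original scale; packages such as mice automate this kind of pooling.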
One approach to testing whether an observed value of ρ (note that −1 ≤ r ≤ 1 always holds) is significantly different from zero is a permutation test: calculate the probability that a value greater than or equal to the observed r would arise under the null hypothesis. But this is difficult to do by hand.
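It is easy in R, though. A sketch of that permutation test, reusing the simulated data from the example above (the number of permutations, 10000, is an arbitrary choice):

```r
# Permutation test for Spearman's rho: shuffle one variable to break
# any association, and count how often the permuted |rho| is at least
# as large as the observed |rho|.
set.seed(1)
x <- rnorm(100)
y <- rnorm(100)

obs <- cor(x, y, method = "spearman")
B <- 10000
perm <- replicate(B, cor(x, sample(y), method = "spearman"))

# Two-sided p-value; adding 1 to numerator and denominator avoids
# a p-value of exactly zero.
p_perm <- (sum(abs(perm) >= abs(obs)) + 1) / (B + 1)
p_perm
```

The result should be close to the p-value of about 0.70 that cor.test() reported above.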
Another approach parallels the use of the Fisher transformation:
F(r) = 0.5 * ln((1 + r) / (1 - r)), then use the normal test statistic z = sqrt((n - 3) / 1.06) * F(r).
Or use a t-test: t = r * sqrt((n - 2) / (1 - r^2)) with df = n - 2.
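Applied to the example data above, both approximations are easy to check in R (a sketch; note that atanh() is exactly the Fisher transformation F(r)):

```r
# Compare the Fisher-z and t approximations for the Spearman p-value.
set.seed(1)
x <- rnorm(100)
y <- rnorm(100)

r <- cor(x, y, method = "spearman")
n <- length(x)

# t statistic with n - 2 degrees of freedom
t_stat <- r * sqrt((n - 2) / (1 - r^2))
p_t <- 2 * pt(-abs(t_stat), df = n - 2)

# Fisher-transform variant for comparison
z <- sqrt((n - 3) / 1.06) * atanh(r)
p_z <- 2 * pnorm(-abs(z))

round(c(t = t_stat, p_t = p_t, z = z, p_z = p_z), 4)
```

The t-based p-value matches what cor.test() reports for the Pearson correlation of the rank-transformed data, since that test uses exactly this t statistic.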