I remember a statistical procedure called the "V" statistic, but I cannot find it now. It is used to compare several correlation coefficients at the same time while controlling the Type I error rate.
To gauge the significance of a single correlation coefficient you can Fisher-transform the estimated correlation, compute its standard error, and then use the cumulative distribution function of the Normal to obtain either a two-tailed or one-tailed P-value. The reason for the transformation is that the correlation is bounded between -1 and 1 (it does not range over minus to plus infinity), so its sampling variance is not constant; the transformation stabilizes the variance. (See http://en.wikipedia.org/wiki/Fisher_transformation)
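A minimal R sketch of that calculation, with made-up values for r and n (if you have the raw data, cor.test(x, y) reports this kind of p-value directly):

r <- 0.35                      # hypothetical estimated correlation
n <- 120                       # hypothetical sample size
z <- atanh(r)                  # Fisher transformation
se <- 1 / sqrt(n - 3)          # approximate standard error of z
p <- 2 * pnorm(-abs(z / se))   # two-tailed p-value; halve for one-tailed
p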
To account for multiple comparisons (four tests in your case), assuming the tests are independent and come from a similar family of questions (e.g., coin flips), a Bonferroni correction should suffice (test each at 0.05/4). Bonferroni is conservative, and a more relaxed correction may be appropriate, for example if the hypotheses are correlated. Of course, there are many other solutions to this problem, including Bayesian ones. As you may know, the multiple-comparisons problem is an active area of research.
Do you mean comparing several correlation coefficients with each other? The answer may depend on what you mean, and even more on the purpose for which you want to test these hypotheses. So, more detail might help you get more responses.
In general, the p.adjust function in R's stats package is often useful. If you have, say, four p-values, .01, .02, .05, .10, you can do (in R)
p.adjust(c(.01,.02,.05,.10))
and it adjusts those p-values using the chosen adjustment method (Holm's is the default). But as said above, more details will help people know what answers might be appropriate.
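For comparison, a quick sketch of the same four p-values under a few of the built-in methods of p.adjust:

p <- c(.01, .02, .05, .10)
p.adjust(p, method = "holm")         # 0.04 0.06 0.10 0.10 (the default)
p.adjust(p, method = "bonferroni")   # 0.04 0.08 0.20 0.40
p.adjust(p, method = "BH")           # controls the false discovery rate instead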
I do not know the tests you mentioned, but if you want to compare the coefficients with one another, given that the tests are independent and that by definition the correlation coefficient ranges from -1 to 1, you could try a nonparametric statistic such as the Kruskal-Wallis test, because it does not require normality or homogeneity of variance. Comparing the statistic with the chi-square distribution will reveal whether the groups are statistically identical or different.
What I have is four independent correlations, and I need to test whether they are statistically equal. The situation here is exactly like the t-test and ANOVA, but instead of comparing means, I want to compare correlations.
I think you should do a multivariate analysis, like EFA, CFA, or SEM... Try to find out something about the "omega" function in the 'psych' R package, written by William Revelle.
Couldn't your problem be rewritten as an "analysis of covariance", with the sample identifier (1 to 4, categorical of course) as an additional variable? Since correlation coefficients are related to slopes, this should be equivalent to a test of the interaction. And if you need pairwise comparisons, use suitable follow-up contrasts.
Of course, this assumes the variables X and Y are the same for all 4 samples, and that you have a preferred Y = f(X) direction. But since you did not describe your experiment/context exactly, one cannot say very much and has to guess...
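To make the idea concrete, here is a rough R sketch of that interaction test, with hypothetical variable names (y, x, and a 4-level factor group); it is only an illustration of the approach, not the poster's analysis:

dat$group <- factor(dat$group)                  # sample identifier, 4 levels
fit_common <- lm(y ~ x + group, data = dat)     # common slope across samples
fit_separate <- lm(y ~ x * group, data = dat)   # sample-specific slopes
anova(fit_common, fit_separate)                 # F-test of the x:group interaction
# Caveat: equal slopes is not identical to equal correlations, because a
# correlation also depends on the spread of x and y within each sample.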
What I am trying to explore is whether the correlation coefficient between students' academic achievement (GPA) and students' engagement in school activities (questionnaire) differs according to family income (4 levels).
So, I have four correlation coefficients between students' academic achievement (GPA) and students' engagement in school activities, one for each level of income: R1, R2, R3, R4, and I want to test whether they are statistically equal.
(Note: I am assuming you want to stick with comparing correlations. If not, then other approaches such as ANCOVA or mixed-effects modeling may be useful on the original data, as Emmanuel suggests, provided they don't violate the assumptions of those models.)
Comparing correlations is a common problem, and there is a lot of literature on it. Please note that the major issue with comparing correlations using standard tests such as the t-test, ANOVA, or ANCOVA is that correlations naturally violate the Normality assumption of those models because the correlation is bounded between -1 and 1. A common solution is to transform the correlations. The transformation was first proposed in 1921 by Fisher, who also developed ANOVA.
If you look online, you will find that this violation of the Normality assumption is what most approaches deal with; second to that, they deal with dependence between correlations. A common approach for comparing independent correlations is to transform them with the Fisher transformation and use 1/(n - 3) as the variance of each transformed correlation (the 3 reflecting the degrees of freedom lost in estimating the correlation), which gives the standard error for a z-test. I believe the calculator that Simon Hunter pointed out earlier in this thread does this.
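As a concrete illustration, a small R sketch of that z-test for two independent correlations (the function name and the numbers are mine, not from any of the calculators linked below):

compare_two_cors <- function(r1, n1, r2, n2) {
  z1 <- atanh(r1)
  z2 <- atanh(r2)
  se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))   # SE of the difference in Fisher z
  z <- (z1 - z2) / se
  2 * pnorm(-abs(z))                        # two-tailed p-value
}
compare_two_cors(0.40, 100, 0.15, 120)      # illustrative values only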
There is also recent research on this problem (comparing correlations) that uses approaches different from Fisher's. One of these papers was recommended by Kate Levin earlier in this thread. I have reposted the links, along with others that I found through a quick search on the net.
Here are some links that may be useful:
from Simon Hunter:
http://vassarstats.net/rdiff.html
similar to Simon's link but gives the formulation:
Comparison of independent (and dependent) correlation coefficients: several calculators are available on the internet, free of charge. To control for alpha-error inflation, just use the Bonferroni adjustment or the less conservative sequentially rejective Bonferroni test by Holm.
This will be far too late to help Mahmoud Alquraan with the original problem, but Karl Wuensch and I discussed this situation in our 2013 article. One approach is to use the standard Q-test for heterogeneity that meta-analysts use. We provided code for both SPSS and SAS. Use the appropriate link on this page, and look for syntax file #5.
https://core.ecu.edu/wuenschk/W&W/W&W.htm
If you use some other software, you can probably translate the code fairly easily.
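For example, a rough R translation of the usual Q-test for heterogeneity of k independent correlations (this is my own sketch of the generic meta-analytic test, not the authors' SPSS/SAS syntax):

q_test_cors <- function(r, n) {
  z <- atanh(r)                 # Fisher z for each correlation
  w <- n - 3                    # inverse-variance weights
  zbar <- sum(w * z) / sum(w)   # weighted mean of the z values
  Q <- sum(w * (z - zbar)^2)    # heterogeneity statistic
  df <- length(r) - 1
  c(Q = Q, df = df, p = pchisq(Q, df, lower.tail = FALSE))
}
q_test_cors(r = c(.30, .25, .45, .10), n = c(80, 95, 70, 110))   # illustrative values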
Here is the article.
Erratum to: SPSS and SAS programs for comparing Pearson corr...
Daniel Wright, it always amuses me that everyone writes their own r-to-z function rather than using the built-in hyperbolic tangent functions (and yes, I've done it myself):
atanh(r) for the Fisher z-transformation, and tanh(z) for its inverse
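A quick sketch confirming that the built-ins match the textbook formulas (my own check, not part of the original comment):

r <- 0.6
atanh(r)                        # built-in Fisher r-to-z
0.5 * log((1 + r) / (1 - r))    # textbook formula, same value
tanh(atanh(r))                  # back-transform recovers r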
In practice it won't matter much, but it could save time and be more efficient if you are doing simulations, etc. It's also useful to remind people that the connection between r and Fisher's z is a hyperbolic slope: r is the Euclidean slope of y = rx, and fitting a hyperbola to the graph rather than a straight line gives a hyperbolic increase per unit increase in standardised y, which is Fisher's z.
Bond, C.F., Richardson, K. Seeing the FisherZ-transformation. Psychometrika 69, 291–303 (2004). https://doi.org/10.1007/BF02295945