1. Using r as an effect size (ES) instead of something like Cohen's d or Glass' g (As in testing the regression coefficient for the IV of group membership on the DV of interest)?
2. Comparing the correlation observed between two variables for one batch (r1) with the correlation observed between the same variables for another batch (r2)?
3. Something else?
Either #1 or #2 can be done. If your intention was something else, perhaps you could elaborate on your query.
As you know, that is network meta-analysis (not plain meta-analysis). I need to run a comparison meta-analysis for two continuous variables, whereas all the methods I have found so far are for dichotomous or ordinal data. Thanks again for your time and consideration.
You could compare pairwise. If R is the correlation coefficient, 1 and 2 denote the column (batch), and j the jth row (comparison), then with z = artanh(R) the standard error of z is se = 1/sqrt(n-3). The difference in means is then md_j = artanh(R1j) - artanh(R2j). Because n is the same for each jth comparison, the se can be assumed equal as well. If both are independent, the se of the difference would be se = sqrt(1/(n-3) + 1/(n-3)) = sqrt(2/(n-3)), giving Wald intervals. The issue is that they are not independent, as I understand it? I do not know how to adjust for this.
For example, if R11 = 0.5 and R21 = 0.7, then artanh(0.5) = 0.55 and artanh(0.7) = 0.87, and with n = 30 the se is sqrt(2/(30-3)) = 0.27. The difference in means is then 0.55 - 0.87 = -0.32, with a standard error of 0.27.
Then it is still possible to model every md_j as normally distributed with the standard error above.
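A minimal sketch of that calculation in Python (the function name fisher_z_diff is just for illustration; it assumes the two samples are independent and of equal size n, as in the Wald-interval reasoning above):

```python
import numpy as np
from scipy import stats

def fisher_z_diff(r1, r2, n):
    """Difference of two Fisher-z-transformed correlations, treating the
    two samples as independent and of equal size n (Wald-type interval)."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    md = z1 - z2                    # difference on the z scale
    se = np.sqrt(2.0 / (n - 3))     # sqrt(1/(n-3) + 1/(n-3)) with equal n
    z_stat = md / se
    p = 2 * stats.norm.sf(abs(z_stat))
    return md, se, z_stat, p

# Worked example from the post above: r = 0.5 vs r = 0.7, n = 30
md, se, z_stat, p = fisher_z_diff(0.5, 0.7, 30)
print(f"md = {md:.3f}, se = {se:.3f}, z = {z_stat:.2f}, p = {p:.3f}")
# -> md = -0.318, se = 0.272
```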
Wim Kaijser, the most common method (I think) for comparing two non-independent correlations with one variable in common is the t-test described by Williams (1959). I am on holidays right now and don't have time to tinker around with things, but I would be curious to know how the method you describe compares to Williams' test.
PS- My 2013 article with Karl Wuensch includes SPSS and SAS code for Williams' (1959) test.
Hi Bruce Weaver, I see I am reinventing the wheel, as the approach above is the same as eq. 10 in your nice article, where se = sqrt(1/(n1-3) + 1/(n2-3)) = sqrt(2/(n-3)) because n = n1 = n2. Perhaps I will have time tomorrow or over the weekend to simulate some comparisons with 3 variables against Williams' approach.
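In case it helps, here is one way such a simulation could be set up (a rough sketch only; the population correlation matrix, n, and number of replications are arbitrary illustrative choices). It draws trivariate normal data with equal population correlations r12 = r13 and checks how often the "independent" z test from above rejects:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population correlation matrix: rho12 = rho13 = 0.5 (so the
# null of equal correlations is true) and rho23 = 0.3; purely illustrative.
P = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.3],
              [0.5, 0.3, 1.0]])
n, reps = 30, 5000
rejections = 0

for _ in range(reps):
    X = rng.multivariate_normal(np.zeros(3), P, size=n)
    R = np.corrcoef(X, rowvar=False)
    r12, r13 = R[0, 1], R[0, 2]
    # naive test that (wrongly) treats r12 and r13 as independent
    z_stat = (np.arctanh(r12) - np.arctanh(r13)) / np.sqrt(2.0 / (n - 3))
    rejections += abs(z_stat) > 1.96

print(f"Empirical rejection rate of the 'independent' z test: {rejections / reps:.3f}")
# Because r12 and r13 share variable X1 and are not independent, this rate
# will generally differ from the nominal 0.05.
```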
Hello Wim Kaijser. First, yes, feel free to contact me privately. But bear in mind that Karl Wuensch and I worked on that article 10 years ago, and I'm sure I don't remember all of the details. But maybe Karl does. ;-)
Second, Equation 10 in our article is for comparing two independent correlations--e.g., comparing the rXY values for two independent groups of observations. But Williams' (1959) test, shown in Equation 17, is for comparing r12 with r13 in the same sample. These correlations are not independent of each other--they are two non-independent correlations with one variable (X1) in common. So Equation 10 is not appropriate. Furthermore, the SE depends on the value of r23. I confess that I have thus far only glanced quickly at your simulation results, but I did not notice how (or if) the value of r23 was taken into account.
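For anyone following along, here is a rough sketch of what I understand Williams' (1959) test to look like, following the presentation in Steiger (1980); the function name williams_test and the value r23 = 0.3 are purely illustrative, and the published SPSS/SAS code in the 2013 article should be taken as authoritative:

```python
import numpy as np
from scipy import stats

def williams_test(r12, r13, r23, n):
    """Williams' (1959) t test for comparing two non-independent correlations
    (r12 vs r13) that share variable X1; note that the SE depends on r23."""
    det_R = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23  # |R|
    r_bar = (r12 + r13) / 2
    t = (r12 - r13) * np.sqrt(
        ((n - 1) * (1 + r23))
        / (2 * ((n - 1) / (n - 3)) * det_R + r_bar**2 * (1 - r23) ** 3)
    )
    df = n - 3
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

# Same correlations as the earlier worked example, now treated as dependent,
# with an assumed (illustrative) r23 = 0.3:
t, df, p = williams_test(0.5, 0.7, 0.3, 30)
print(f"t({df}) = {t:.2f}, p = {p:.3f}")
```

Unlike the independent-samples SE above, r23 enters the denominator here, which is precisely the adjustment for the non-independence of the two correlations.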