Alternatively, you can take T = r_s * sqrt(n-2) / sqrt(1 - r_s^2), where r_s is the sample Spearman's rho, take t to be the 1 - α/2 quantile of the t-distribution with n-2 degrees of freedom, and in the usual way of (null-hypothesis) significance testing reject the null if |T| is greater than or equal to t.
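For what it's worth, that recipe takes only a few lines of Python. This is a minimal sketch assuming SciPy is available; the function name, the example data, and the default two-sided alpha = 0.05 are my own choices:

```python
import math
from scipy import stats

def spearman_t_test(x, y, alpha=0.05):
    """t-approximation test for Spearman's rho, as described above."""
    n = len(x)
    rs = stats.spearmanr(x, y)[0]                    # sample Spearman's rho
    T = rs * math.sqrt(n - 2) / math.sqrt(1 - rs**2)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)    # 1 - alpha/2 quantile, n-2 df
    return rs, T, abs(T) >= t_crit                   # reject the null if |T| >= t

# Toy data: nearly monotone association (one adjacent pair swapped).
x = list(range(1, 11))
y = [1, 2, 3, 4, 5, 6, 7, 8, 10, 9]
rs, T, reject = spearman_t_test(x, y)
```

Note that the approximation breaks down for very small n, which is part of the concern raised below.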
In general, I never use t-tests or anything that relies so much on variance defined in terms of the squared errors about the mean. So my most honest answer would be that the sample size is less important to me than the fact that I'm using a test which was a compromise with the ideal when it was first formulated:
"Many of the statistical methods routinely used in contemporary research are based on a compromise with the ideal (Bakeman et al., 1996). The ideal is represented by permutation tests, such as Fisher’s exact test or the binomial test, which yield exact, as opposed to approximate, probability values (P-values). The compromise is represented by most statistical tests in common use, such as the t and F tests, where P-values depend on unsatisfied assumptions.... metric distance functions such as Euclidean distance are recommended to avoid distorted inferences resulting from nonmetric distance functions such as the squared Euclidean distance associated with the t and F tests." (emphasis added; italics in original)
Mielke, P. W., & Berry, K. J. (2007). Permutation Methods: A Distance Function Approach (2nd Ed). (Springer Series in Statistics). Springer.
I had no idea that quote would be in the book, but had I known, and had the book been mostly garbage (it isn't at all), I still would have bought it just to quote those lines.
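In the spirit of that passage, an approximate (Monte Carlo) permutation test for Spearman's rho is easy to sketch. The function name, seed, and number of resamples below are my own choices, and SciPy is assumed only for computing rho itself:

```python
import random
from scipy import stats

def spearman_perm_test(x, y, n_perm=2000, seed=0):
    """Two-sided Monte Carlo permutation test for Spearman's rho."""
    rng = random.Random(seed)
    observed = stats.spearmanr(x, y)[0]
    y = list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(y)                          # break any association under the null
        r_perm = stats.spearmanr(x, y)[0]
        if abs(r_perm) >= abs(observed):        # as extreme as, or more than, observed
            count += 1
    return observed, (count + 1) / (n_perm + 1) # add-one correction for a valid P-value

# Toy data: strong positive rank association.
rho, p = spearman_perm_test([1, 2, 3, 4, 5, 6, 7, 8],
                            [2, 1, 4, 3, 6, 5, 8, 7])
```

With enough resamples (or full enumeration for small n) this approaches the exact permutation P-value, with no distributional assumptions.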
Many thanks for your important answer. But I have two ranked variables, and I want to measure the strength of association between them, while Fisher's exact test is used when you have two nominal variables.
I had a look into Howell (2013) - Statistical Methods for Psychology (8th ed.).
On p. 280 he gives the formula to test the Pearson correlation for significance, t = r*sqrt(N-2)/sqrt(1-r^2), which is a rearrangement of the formula I initially provided and identical to Andrew's.
On p. 314 he shows the similarity of r and rs and also urges caution:
"There is no generally accepted method for calculating the standard error of rS for small samples. As a result, computing confidence limits on rS is not practical. Numerous textbooks contain tables of critical values of rS, but for N > 28 these tables are themselves based on approximations. Keep in mind in this connection that a typical judge has difficulty ranking a large number of items; therefore, in practice, N is usually small when we are using rS. There is no really good test of statistical significance for rS, but most people would fall back on treating it as a normal Pearson correlation and being cautious about borderline cases."
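Howell's "fall back on treating it as a normal Pearson correlation" advice can be seen in practice: as far as I can tell, SciPy's spearmanr computes its P-value from that same t approximation, so the manual calculation and the library output agree. A quick cross-check (the example data are made up):

```python
import math
from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [3, 1, 4, 2, 6, 5, 9, 7, 10, 8]

rs, p_scipy = stats.spearmanr(x, y)

# Manual Pearson-style t statistic with n-2 degrees of freedom.
T = rs * math.sqrt(len(x) - 2) / math.sqrt(1 - rs**2)
p_manual = 2 * stats.t.sf(abs(T), df=len(x) - 2)   # two-sided P-value
```

The two P-values match, which is reassuring for moderate n but does nothing to address Howell's worry about small samples.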
I agree with Andrew's statements, but I think the topic he mentioned goes beyond your question, doesn't it?
Referring to what Andrew posted here: permutation tests, though quite old, have come back into fashion as computing power has increased.
If you have no measured values but only rankings, then the Spearman formula is the appropriate way to get a measure of the correlation between the two variables.
Another approach parallels the use of the Fisher transformation in the case of the Pearson product-moment correlation coefficient. That is, confidence intervals and hypothesis tests relating to the population value ρ can be carried out using the Fisher transformation:

F(r_s) = (1/2) ln((1 + r_s)/(1 - r_s)),

where, under the null hypothesis of independence, z = sqrt((n - 3)/1.06) * F(r_s) is approximately a standard normal variable.
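A sketch of that approach (the function name, example data, and alpha default are mine): transform r_s with arctanh, scale by sqrt((n-3)/1.06) for the test, and invert the transformation to get an approximate confidence interval back on the rho scale.

```python
import math
from scipy import stats

def spearman_fisher(x, y, alpha=0.05):
    """Fisher-transformation test and CI for Spearman's rho (1.06 adjustment)."""
    n = len(x)
    rs = stats.spearmanr(x, y)[0]
    F = math.atanh(rs)                                # Fisher transformation of rs
    z = math.sqrt((n - 3) / 1.06) * F                 # approx. N(0,1) under the null
    p = 2 * stats.norm.sf(abs(z))                     # two-sided P-value
    half = stats.norm.ppf(1 - alpha / 2) * math.sqrt(1.06 / (n - 3))
    ci = (math.tanh(F - half), math.tanh(F + half))   # back-transform to rho scale
    return rs, z, p, ci

rs, z, p, ci = spearman_fisher(list(range(1, 11)),
                               [1, 2, 3, 4, 5, 6, 7, 8, 10, 9])
```

Like the t approximation, this is an asymptotic method and should be treated cautiously for small n.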