Another approach that works when you're comparing a slope with 1 (not other numbers) is to compare the linear regression model with one that has an offset. An offset is a term you add to a model where the slope is not estimated, but instead fixed at 1.
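A minimal R sketch of that comparison (the data frame dat and the variables x and y are placeholder names, and the data are simulated just for illustration):
# Fit the usual regression (slope estimated) and a model whose slope is
# fixed at 1 via offset(), then compare the two fits with an F-test.
set.seed(1)
dat <- data.frame(x = 1:20)
dat$y <- 0.9 * dat$x + rnorm(20)                # toy data, for illustration only
fit_free  <- lm(y ~ x, data = dat)              # slope estimated from the data
fit_fixed <- lm(y ~ 1 + offset(x), data = dat)  # slope fixed at 1, intercept still estimated
anova(fit_fixed, fit_free)                      # a significant F means the slope differs from 1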
Re Daniel's suggestion about using the GraphPad 1-sample t-test calculator: I believe it gives the correct result PROVIDED that you enter n-1 as the sample size. Remember that for the test Jianguo asked about, df = n-2, not n-1. The calculator sets df = (entered sample size) - 1, so entering n-1 gives df = n-2, which is what you want. ;-)
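As a quick illustration of the df issue (using the t value of about -1.667 and n = 12 from the worked example further down), you can compare the two p-values in R:
t_val <- -1.667   # t value from the example below
n <- 12           # sample size
2 * pt(t_val, df = n - 1)   # df the calculator would use if you entered n as the sample size
2 * pt(t_val, df = n - 2)   # correct df for a test on a regression slope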
Hi Jianguo Chen. I don't know how it works in R or SPSS; anyway, I have run this test many times, mostly by calculating the slope and the standard error of the slope.
Then you must calculate a t value as follows: t = (b1 - b2) / (Sb1 - Sb2), where b1 is the value of slope 1 (in your example, 0.81), b2 is the value of slope 2 (in your example, 1), Sb1 is the standard error of slope 1, and Sb2 is the standard error of slope 2 (in this case 0, because a slope fixed at b = 1 has zero standard error); the formula therefore reduces to t = (b1 - 1) / Sb1. In this way you calculate a t value and then compare it with the t value from a t table with n-2 df and alpha = 0.05 or 0.01. If the calculated t is less than the table value, you can assume that the calculated slope is not different from 1. See Zar (1999). I think you should first understand the principles of the analysis before you run it in a statistical package. I hope this information is useful for you.
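A short R sketch of Frank's calculation; only the slope of 0.81 was given, so the standard error and sample size below are assumed values for illustration:
b1  <- 0.81    # estimated slope
b2  <- 1       # hypothesised slope (its standard error Sb2 is 0)
Sb1 <- 0.05    # standard error of the estimated slope (assumed)
n   <- 20      # sample size (assumed)
t_calc <- (b1 - b2) / Sb1          # Sb2 = 0, so the denominator is just Sb1
t_crit <- qt(0.975, df = n - 2)    # two-tailed critical value at alpha = 0.05
abs(t_calc) < t_crit               # TRUE -> slope not significantly different from 1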
Hi Frank, I find that approach quite useful and easy to follow. Unfortunately, I don't have access to Zar (1999), so I would like to ask whether that approach is Zar's own method, or whether he simply gathered it into his book.
This approach was proposed by other authors. E.g. you can see the reference (Paternoster et al 1998) here http://www.udel.edu/soc/faculty/parker/SOCI836_S08_files/Paternosteretal_CRIM98.pdf
Hi. This is quite simple in R without much coding:
Suppose you have a regression line with a slope of 1.005 and a standard error of 0.003 (sample size was 12 animals); you want to test it against a slope of 1 (which, IF you're doing log-log regression analysis in biological scaling, indicates an isometric rather than allometric relationship between the two variables; so it's an interesting question to answer...). And you're working at the alpha = 0.05 probability level (of committing a Type I error, i.e. rejecting the null hypothesis when it's true).
You use a one-sample t-test to test this slope against the value of 1. The test is two-tailed because you didn't know in advance whether your slope would be greater or smaller than 1.
First you get the critical value for t (the value you would look up in a table in "the old times") by doing:
qt(0.975,10)
# 0.975 because it's a two-tailed test (alpha/2=0.025); 10 stands for the degrees of freedom (sample size=12, d.f. =n-2)
# critical value = 2.228139
Then you compute the t value of your slope:
(1-1.005)/0.003
# = -1.6666... ; compare this to the critical value. Simple rule: "larger reject, smaller accept (the null hypothesis)". Our null hypothesis in this one-sample t-test is: "the population slope equals the hypothesised value of 1". With the t-test you take the absolute value of t for the comparison(!).
-> Your |t| (1.67) is smaller than the critical value (2.23), so you accept the null hypothesis.
To get the p value (the probability of observing data as extreme as or more extreme than yours by pure chance IF the null hypothesis were true) you do this:
2*pt(-1.666666666666, 10)
# 2*pt because it's a two-tailed test; -1.6666666 is your t value; 10 is the d.f.
which gives you a p value = 0.1265473. (There's about a 13% probability of getting data like these by chance when the true slope really is 1. You work at the 5% probability level; for you the p value is not significant.)
That's it. Pretty simple. I double-checked the results with http://www.graphpad.com/quickcalcs/oneSampleT1/?Format=SEM mentioned above. It works out.
To recap the two steps without too many comments with a second example:
slope 0.995, 0.01 s.e., n=15, alpha=0.05; testing slope against a slope of 1:
qt(0.975,13)
# critical value=2.160369
(0.995-1)/0.01
# t value = -0.5 , 0.5 < 2.16 --> accept the null hyp
2*pt(-0.5,13)
# p-value= 0.6254313
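If you do this often, the two steps can be wrapped in a small helper function (my own wrapper, not taken from any of the references in this thread):
slope_vs_one <- function(b, se, n, mu = 1, alpha = 0.05) {
  t_val <- (b - mu) / se                # t value of the slope against mu
  df    <- n - 2                        # degrees of freedom for a regression slope
  c(t = t_val,
    t_critical = qt(1 - alpha / 2, df),
    p = 2 * pt(-abs(t_val), df))
}
slope_vs_one(1.005, 0.003, 12)   # first example:  p = 0.1265
slope_vs_one(0.995, 0.010, 15)   # second example: p = 0.6254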
I recommend Crawley MJ (2013) The R Book; 2nd edition, Wiley.
Covering the above:
Section 7.3.3, Maximum likelihood with the normal distribution, p. 282ff.
Using the confidence interval for the slope (coefficient) is the simplest way to go; it is reported automatically by most statistical software.
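For example, in R (fit, dat, x, and y are placeholder names):
fit <- lm(y ~ x, data = dat)
confint(fit, level = 0.95)   # 95% confidence intervals for the intercept and the slope
# If the interval for x does not contain 1, the slope differs from 1 at alpha = 0.05.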
@Frank, can you please tell me more about why Sb becomes 0 when b = 1? I am unable to understand that bit; I looked it up but could not find an apt explanation anywhere.