Can we get different statistical results by rounding? Definitely! Should we? I'd say definitely not! Here's a made-up result that is statistically significant (at .05):
Group A: avg=100 sd=25 n=23
Group B: avg=115 sd=25 n=23
t(44)=2.0347, p=.0479 *
But what if the means had been rounded and they really were like this:
Group A: avg=100.49 sd=25 n=23
Group B: avg=114.50 sd=25 n=23
t(44)=1.9004, p=.0639
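The two results above can be reproduced from the summary statistics alone. Here's a stdlib-only Python sketch of the pooled two-sample t-test (equal standard deviations, equal group sizes); the function names are my own, and the p-value is obtained by numeric integration of the t density just to avoid outside dependencies (in practice `scipy.stats.ttest_ind_from_stats` does all of this in one call):

```python
import math

def t_statistic(mean1, mean2, sd, n):
    # Pooled two-sample t statistic for two groups sharing the same
    # standard deviation, with n observations per group.
    se = sd * math.sqrt(2.0 / n)
    return (mean2 - mean1) / se

def t_two_tailed_p(t, df, upper=10.0, steps=100_000):
    # Two-tailed p-value by trapezoidal integration of the t density
    # over [|t|, upper]; the tail beyond upper=10 is negligible here.
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda x: c * (1.0 + x * x / df) ** (-(df + 1) / 2)
    t = abs(t)
    h = (upper - t) / steps
    area = 0.5 * (pdf(t) + pdf(upper))
    area += sum(pdf(t + i * h) for i in range(1, steps))
    return 2.0 * area * h

df = 23 + 23 - 2  # 44
for m1, m2 in [(100, 115), (100.49, 114.50)]:
    t = t_statistic(m1, m2, sd=25, n=23)
    # Matches the t and p values quoted above (t=2.0347, p=.0479
    # for the rounded means; t=1.9004, p=.0639 for the unrounded ones).
    print(round(t, 4), round(t_two_tailed_p(t, df), 4))
```

A difference of only 0.99 in the means, introduced by rounding, is enough to move the result across the .05 line.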
I would say playing games with rounding like this is dishonest, because rounding has no bearing on whether the hypothesis is true or false.
Naturally, when we write about our studies we round to a number of decimals reflecting a reasonable degree of precision for the original observations. It also adds clarity for the audience not to be distracted by too many digits. But that is rounding only for presentation. When actually doing the calculations, I would say to always use the most precise numbers we have (i.e., all the decimal places). Best wishes with your project, Amel.
389 +/- 1 may be acceptable for a final result presented to the public, but if it is a factor in a larger problem, you keep more digits along the way until you are finished. For the final result, one should consider all kinds of error. Having worked for many years in the field of Official Statistics, I believe that many people make spurious decisions by assuming that a result, especially a change from one period to another, is more accurate than it really may be. In the above, is a change from an estimate of 389 to 391 actually meaningful? Maybe not.
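The point about carrying extra digits through intermediate steps can be seen with a toy example (the data here are made up, and a plain sum stands in for the "larger problem"): rounding each intermediate value can shift the total, while rounding only the final result does not.

```python
# Five hypothetical measurements of the same quantity (made-up data).
vals = [389.4] * 5

round_at_end = round(sum(vals))             # keep full precision, round once
round_each_step = sum(round(v) for v in vals)  # round every intermediate value

print(round_at_end, round_each_step)  # → 1947 1945
```

The two totals differ by 2 even though each individual rounding error was under half a unit; errors like this accumulate with every intermediate rounding.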
I see that part of my note above already appears under Kevin's note, but I disagree with using 0.05 as a universal threshold for any p-value, or even with relying on any lone p-value at all. Sample size and standard deviation are more meaningful on their own. But because Kevin used the same standard deviation and sample size in each case, the p comparison does have some meaning there. Otherwise, not so much.