If the statistical software renders a p-value of 0.000, it means that the value is very low, with several zeroes before any other digit. In SPSS, for example, you can double-click on the cell and it will show you the actual value.
So the interpretation would be that the results are significant, the same as for any other value below the selected significance threshold.
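A displayed 0.000 is a rounding artifact, not a true zero. A minimal Python sketch of the point (the value 2.3e-11 is hypothetical, chosen only for illustration):

```python
# A p-value displayed as 0.000 is a rounding artifact, not a true zero.
# The value below is hypothetical, chosen only for illustration.
p = 2.3e-11

# Most packages print three decimal places, so the value appears as 0.000 ...
print(f"p = {p:.3f}")    # prints "p = 0.000"

# ... but the stored value is strictly positive:
assert p > 0

# A common reporting convention is simply "p < .001":
print("p < .001" if p < 0.001 else f"p = {p:.3f}")    # prints "p < .001"
```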
The interpretation of small p-values varies by context (as does the [over-?]use of p-values). As Jochen says, reporting p < .001 will usually suffice. Fisher (1925) himself said there was "no practical importance whether p is .01 or .000001" (p. 89), and in the contexts he worked in (and I work in) this is true (see p. 125 of the article Making friends with your data: Improving how statistics are ...
). However, some branches of physics use different criteria for discoveries (e.g., https://arxiv.org/pdf/0811.1663.pdf), and there is some discussion in other disciplines of adopting the thresholds used in physics. So my "answer" is that it would be worth knowing the context in order to provide more useful answers. If you were announcing the discovery of the Higgs boson, you would do something different than if you were testing whether happiness correlated with the weather.
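The physics thresholds are usually quoted in sigmas rather than p-values; converting between the two is a one-liner with the complementary error function. A minimal sketch (the 1.645-sigma and 5-sigma levels are the conventional one-sided .05 and particle-physics "discovery" thresholds):

```python
import math

def one_sided_p(sigma: float) -> float:
    """One-sided tail probability of a standard normal at `sigma` deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Common social-science threshold: ~1.645 sigma one-sided gives p ~ .05
print(one_sided_p(1.645))   # ~0.05

# Particle-physics "discovery" threshold: 5 sigma gives p ~ 2.9e-7
print(one_sided_p(5))       # ~2.9e-07
```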
It is also worth stressing that Iulian and Ette have given very different answers. Iulian is saying that if the p-value is below THE selected threshold (singular) it is significant, while Ette appears to be saying there are degrees of significance. This is tangential to the main question; I am sure it is discussed elsewhere on ResearchGate and has been debated in statistics for decades. Since Ette wrote his answer after reading Iulian's, I assume this was the purpose of his note.
Oh, and a p-value of .0000 COULD be a computational problem, but this would usually not be the first thing I would think of. Always check whether the estimates make sense. More often I find p-values of 1.000 and realize I need to look more carefully at the modeling/computation because there may be an issue. You added that possibility at the end, and I imagine most responders will focus on other aspects. If you think it may be a computational problem, it is worth creating a minimal working example (and probably using a different message board, like Stack Exchange).
Actually, there should be no controversy over the precision used in reporting p-values, since a p-value just gives the significance level of a statistical test. Reporting three, four, or more zeroes adds nothing to that conclusion.
Dear Prof. Debopam, it may be useful for a researcher to indicate that the precision of the p-value computation has been obtained from accurate and improved software. This would resolve the differences between highly specialised disciplines.
Distinguishing between very low and very, very low p-values makes sense only if two conditions are fulfilled:
-- the assumed model of the probability distribution of the test statistic is sufficiently exact (e.g. verified historically by many researchers, etc.);
-- the decision implied by this parameter is really very (or very, very) sensitive to its value. For example, suppose one has to decide about producing a single example of a unique, very expensive medical device that is supposed to work in very hard conditions; it is then expected to fulfil very rigorous requirements, and its reliability must be very high. Obviously, in such cases the test statistic(s) is (are) a combination of the results of many different experiments on parts of the device or their mockups, in various conditions, using a model of the impact of each part on the reliability of the device, usually with the participation of many experts.
In many cases, however, the second requirement is not applicable. Then there is no need to care whether the level is 0.001 or 0.0001; it simply does not matter. For example, for mass products, the costs of the eventual failure of some examples can, due to their rarity, be put into the warranties suitably included in the price paid by the customer. This can be justified by the fact that the cost of increasing the reliability (here: decreasing the p-value) could be too high to introduce into production.
It is funny that it sometimes makes sense to diminish the statistical reliability: the arrival rate of failed examples cannot be too low if the repairs require opening a special division, which should then work at a rather constant intensity to be worthwhile! In such cases it is also important to have a better (more exact) p-value.
I also agree with Iulian that we should allow for what the statistical software gives.
However, would it not be innovative to allow reporting of precise results in exponential format? In that case the zeroes would be replaced with negative exponents.
This would accommodate virtually every discipline that needs it and would want to use it.
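The exponential-format suggestion is easy to illustrate; here is a minimal Python sketch (the p-values are hypothetical, chosen only to contrast the two displays):

```python
# Hypothetical p-values, chosen only to illustrate the two formats.
p_values = [0.03, 0.0007, 2.3e-11]

for p in p_values:
    # fixed-point display collapses very small values to 0.000;
    # exponential notation preserves their magnitude
    print(f"fixed: {p:.3f}   exponential: {p:.2e}")
```

The last value prints as `fixed: 0.000   exponential: 2.30e-11`, showing exactly why exponential notation would replace the uninformative zeroes.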
I agree with the main idea by Iulian and the improvement by Harold. But, let me stress again, it would be meaningless if the probabilistic model is not extremely precise in its tails. Compare it, please, to a measuring instrument scaled to 0.0001 of a unit when its standard deviation (caused by sensitivity to random perturbations) equals 0.1 of the unit :-)
Let me be more precise via the following example. If the test statistic T under H0 has cdf F(t) = 1 - 2^(-t) and the observed value is a = 10, then the p-value for the right-sided critical interval equals circa 0.001. But suppose the model approximates reality well only within the interval [0, 7], and the more exact cdf equals 1 - 2^(-t) on [0, 7] and 1 for t > 7; this perfectly agrees with the first model within [0, 7], yet the p-value for any a > 7 equals 0. Thus, if we would be satisfied with both models, any information on p-values less than 0.001 becomes of no importance. Regards, Joachim
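Joachim's two-model example can be sketched numerically; a minimal Python version (the cdf forms and the observed value a = 10 are taken from his post):

```python
# Two CDF models for the test statistic T that agree on [0, 7]
# but give different right-tail p-values for an observed value a = 10.

def p_model_1(a: float) -> float:
    # F(t) = 1 - 2**(-t) for t >= 0, so the right-tail p-value is 2**(-a)
    return 2.0 ** (-a)

def p_model_2(a: float) -> float:
    # More exact model: same CDF on [0, 7], but F(t) = 1 for t > 7,
    # so the tail probability vanishes beyond 7
    return 2.0 ** (-a) if a <= 7 else 0.0

a = 10
print(p_model_1(a))   # 2**(-10) ~ 0.000977, i.e. circa 0.001
print(p_model_2(a))   # 0.0
```

Both models are indistinguishable on the region where the data were used to fit them, yet they disagree completely about p-values below 0.001, which is the point of the example.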
Very clear explanations by all researchers. I'd just add that, if you'd like to know the actual p-value, you can indicate in your software package that you want p-values to be displayed with more precision. How to do this obviously depends on the software you use. I hope this is useful to you. Regards from Madrid.
I guess that if you used a parametric method for the comparison, it might be easy to test for a type II error by getting the 95% CI. However, if the comparison was between medians, testing the probability of a type II error is problematic, especially with SPSS software.
If the p-value supplied by statistical software is found to be 0.000, it is to be understood that this value has been rounded off to three decimal places. The actual value is not zero. It is greater than zero, though it may be very small and close to zero. Therefore, it would not be proper to conclude that the impact of the respective variable is absolutely significant. One can simply conclude that the impact of the respective variable is significant or highly significant according to the prefixed level of significance.
It is a fact that a p-value cannot be zero. If it is found to be zero, it is to be understood that there is a computational error (or approximation) or some other error.
I agree with the interpretation "that the p-value is 0.000 means the results are significant" provided by Prasad D.K.V. However, I would like to mention again that a p-value cannot be zero, and if it is found to be zero, it is to be understood that there is a computational error (or approximation) or some other error.
If the null hypothesis is rejected, then the decision on the question of interest is taken against the statement contained in the null hypothesis.
However, if the null hypothesis is not rejected, then the decision is not taken against the statement contained in the null hypothesis. In this case, the decision is stated as follows:
"There is no evidence against the statement contained in the null hypothesis."
It is to be noted that a null hypothesis is usually not accepted. It is either rejected or not rejected.
A software output of "P = 0.000" means P < 0.0005; any value of 0.0005 or above would be displayed as P = 0.001 (or larger).
Noyez, L., Janssen, D. P., van Druten, J. A., Skotnicki, S. H., & Lacquet, L. K. (1998). Coronary bypass surgery: what is changing? Analysis of 3834 patients undergoing primary isolated myocardial revascularization. European journal of cardio-thoracic surgery, 13(4), 365-369.
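The rounding convention (a displayed 0.000 meaning p < 0.0005) can be checked directly with fixed-point formatting; a minimal Python sketch:

```python
# Values just below and above the half-unit of the third decimal place.
# 0.0004 displays as 0.000, while 0.0006 displays as 0.001.
print(f"{0.0004:.3f}")   # prints "0.000"
print(f"{0.0006:.3f}")   # prints "0.001"

# So a displayed 0.000 only tells us the p-value is below about 0.0005.
```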
The SAS manual says that 0.000 is the result of rounding to three decimal places. However, it is common sense in information processing to read 0.000 as "less than 0.0005", and all statistical software follows this rule. As has been pointed out, this problem is not that important for statistics as a whole.
What does a p-value of 0.000 mean, and how should it be interpreted?
Whether you use Microsoft Excel, a TI-84 calculator, SPSS, or some other software to compute the p-value of a statistical test, often the p-value is not exactly 0.000 but rather something extremely small, like 0.000000000023. Most software displays only three decimal places, though, which is why the p-value shows up as 0.000.
This video explains what a p-value of .000 means and how to determine whether the result is statistically significant.
If a value is rounded off to 0.000, it does not mean that the value is 0. One can only, in this case, infer that the value does not differ significantly from 0. This is nothing but inference based on statistical philosophy.
Any data collected for a study are certain to suffer from error, at least due to chance (random) causes.
Accordingly, for any set of data, it is certain that a p-value of exactly 0 cannot be obtained.
However, the p-value can be very small in some cases. This small value may be approximated by 0 if the digits after some decimal place are neglected. But this does not mean that the p-value obtained is 0.
"If the correlation among vegetation indices is analyzed, there is a high chance of getting 0 as the p-value, as those indices are to some extent ratios of each other."
As I understand it, there is no chance of getting exactly 0 as the p-value in this case either.