It means that the observed data are relatively consistent (compared with the same study yielding a low p) with the model that you proposed. The attribute you tested is not extreme enough for you to reject the model for these data (and even if it were, that would still not mean rejecting the theory). So, if you believed the model you proposed, you should keep believing it. Depending on what you did, this might be evidence for the theory, or it may relate to measurement/instrument issues.
You might want to provide more background information to get a less abstract response.
That the information provided by your data is not sufficient to interpret the sign of (at least one) regression coefficient.
I respectfully disagree with Daniel that "... the data observed are relatively consistent with the model ...". The degree of consistency that can be shown depends on the sample size. It's easy to get a miserable (large) p-value even if the model is actually completely wrong, simply because the sample size is way too small.
It means that the predictor(s) do(es) not contribute significantly to the variation in the response variable(s). The suggested model is not appropriate for whatever you wanted to do.
@Jochen, first, I want to check what you meant (and where is the quote from, without the parenthetic clause?). The quote is not saying that the data are consistent in an absolute sense, but consistent relative to finding p < .05, ceteris paribus. The ceteris paribus conditional is important here. So I assume you are saying that you disagree with this. There is obviously the issue that a poorly designed study isn't going to yield much useful information about anything. That is a related issue, and it is good that you added it, because often this is when p is large.
Hello, In the majority of analyses, an alpha of 0.05 is used as the cutoff for significance. If the p-value is less than 0.05, we reject the null hypothesis that there's no difference between the means and conclude that a significant difference does exist. If the p-value is larger than 0.05, we cannot conclude that a significant difference exists.
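For what it's worth, here is a minimal sketch of that decision rule in Python (the two groups are made-up data and alpha is fixed at 0.05, as above; this is just an illustration, not anyone's actual analysis):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)  # hypothetical measurements
group_b = rng.normal(loc=11.0, scale=2.0, size=30)

t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis of equal means")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis")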
That's pretty straightforward, right? Below 0.05, significant. Over 0.05, not significant.
You did not indicate what the reported p-value refers to. The usual p-value provided by statistical regression packages refers to the association between a predictor variable and the response variable. A large p-value indicates that changes in that predictor are not detectably correlated with changes in the response.
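As a hypothetical illustration of what such a coefficient p-value looks like in practice, here is a sketch with statsmodels in Python (the simulated weak, noisy relationship below is my own assumption for the example):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=100)              # predictor
y = 0.05 * x + rng.normal(size=100)   # weak, noisy relationship with the response

X = sm.add_constant(x)                # add an intercept column
fit = sm.OLS(y, X).fit()

print(fit.pvalues)  # p-values for the intercept and the slope
# A large p-value for the slope means no clear (linear) association was detected
# between this predictor and the response, given the noise and the sample size.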
I agree with Jochen that the explanation by Daniel Wright is unclear, "The attribute you tested is not so extreme for you to reject the model for these data." What attribute? What model? Who is rejecting anything?
Joseph L Alvarez, Jochen Wilhelm was disagreeing with what I said, not with its clarity, but I'll address the clarity issue first, since there are differences in terminology across disciplines, and in quickly writing comments I am not as clear as I could be! The attribute might be how much kids have grown, and the null model might be no difference between milk and non-milk groups. In practice, when people get a really low p, they often reject the model (here, that milk makes no difference), and this model is often called the null hypothesis. But there are lots of other considerations and problems with this null hypothesis significance testing approach.
The example here comes from Student, who notes that a large sample is only one consideration (https://www.jstor.org/stable/pdf/2332424.pdf?seq=1#page_scan_tab_contents).
The point on which Jochen and I differ, if I understand him correctly, is as follows. Assume that the study has some value (it can be way underpowered, or whatever) and the result is either p = .01 or p = .50. Does the person react differently to these? I believe the p = .50 case says the data are more consistent with the no-growth (null) model than the p = .01 case. Of course it is wrong to say that p = X shows the data are consistent in any absolute sense with any model (in the milk study, random allocation was messed up). This is not what Jochen (again, if I understand correctly) is arguing for or against. He is saying that observing p = .50 rather than p = .01 for the same study would not lead to a difference in how consistent one thought the data were with the model.
I should clarify even more, perhaps. I am NOT saying that the p value should be what is used to judge this, and it certainly should not be used on its own (as numerous authors point out). But that is different from saying that the p is uninformative about the data conditional on the model. It will be interesting to hear whether Jochen and I have some very different views on this, whether it is something more subtle, or whether it is just a clarification problem.
Hmmm. Consider two sets of data (A and B) from two similar experiments. Both sets have the same n and are tested with the same test (on the same full and restricted model). Both give a p-value. Let's say pA = 0.01 and pB = 0.5. For the sake of concreteness, let's say that the test is about a difference in means (t-test) between treated and control, so the full model is Y = b0 + b1*1[group=treated], the restricted model is Y = b0 (that is, b1 == 0), and b1 is the coefficient of interest (the expected difference between means).
If I understood correctly, you say that data set B fits the restricted model better than data set A, because pB > pA. Am I right?
If so, then what if the noise in B happened to be larger than in A? What if b1 for set A is 0.5 and b1 for set B is 1? The estimate of the mean difference for set A is closer to the null than for set B, but (obviously) the noise in B must have been larger, because (with the same n) the obtained p-value was higher. I find it strange to claim in this case that data set B fits the restricted model better. In my view, the p-value does not contain this information, because it is not a signal but rather a signal-to-noise ratio.
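A small simulation along the lines of this scenario can make it concrete; the means, standard deviations, sample size, and seed below are illustrative assumptions of mine, not values from the thread (Python, using numpy and scipy):

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 30  # same n in both experiments

# Set A: smaller mean difference (~0.5) but small noise
control_A = rng.normal(loc=0.0, scale=0.6, size=n)
treated_A = rng.normal(loc=0.5, scale=0.6, size=n)

# Set B: larger mean difference (~1.0) but much larger noise
control_B = rng.normal(loc=0.0, scale=3.0, size=n)
treated_B = rng.normal(loc=1.0, scale=3.0, size=n)

for label, ctrl, trt in [("A", control_A, treated_A), ("B", control_B, treated_B)]:
    b1_hat = trt.mean() - ctrl.mean()          # estimated difference in means
    t_stat, p = stats.ttest_ind(trt, ctrl)
    print(f"Set {label}: estimated difference = {b1_hat:.2f}, p = {p:.3f}")

# Typically set A gives the smaller p despite the smaller estimated difference,
# because the p-value reflects a signal-to-noise ratio, not the signal alone;
# so the larger p for B does not by itself say B's estimate is closer to the null.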
Jochen Wilhelm, glad we agreed, since otherwise I assumed I was wrong. :o)
If the studies are the same, the expected noise would be the same, BUT if the sample SD is much larger or smaller than expected, this will affect p; and if you are measuring consistency in raw, non-standardized units, then you could create sample data where the p-values and this consistency measure flip.
a) There is no real relationship between the predictor(s) and the model's response variable.
b) The relationship exists, but the problem occurs because you are working with a problematic sample (small, biased, with little variability, ...) or because the model is not appropriate to represent the existing correlation.
Since we have established only a tolerable probability of error (alpha) for rejecting the null hypothesis, when we get a p-value greater than alpha we cannot reject the null hypothesis, since the error involved in rejecting it would be high (at least greater than alpha).
On the other hand, we can't confirm the null hypothesis based on this statistic, since we do not control the error involved in confirming the null hypothesis, only the error involved in rejecting it.
So, you have an inconclusive result. Go back to the theory on which the model was based and/or analyze the consistency of the data used in the analysis.
Thank you guys for your answers. @Marcelo Correa Alves, I believe that my problem is the problematic sample (for a particular country), because the same model works fine for other countries.