No. It is more subtle. It means that the null cannot be rejected based on the data. That is different from accepting the null, because one can also fail to reject a null simply because the power of the test is poor and the data do not contain enough information to reject it. Depending on the type of test, you can check the power of the test or calculate the sample size necessary to get reasonable power (at least 85%).
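To make the power point concrete, here is a minimal sketch (my own illustration, not part of the original answer) using Python's statsmodels for a two-sample t-test; the effect size of 0.5 and the 85% power target are illustrative assumptions:

```python
# Hedged sketch: power / sample-size check for a two-sample t-test.
# The effect size (Cohen's d = 0.5) and the 85% power target are assumptions
# chosen for illustration, not values from the original answer.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power achieved with n = 30 per group at a medium effect size
power = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05, ratio=1.0)
print(f"power with n=30 per group: {power:.2f}")

# Sample size per group needed to reach at least 85% power
n_needed = analysis.solve_power(effect_size=0.5, power=0.85, alpha=0.05, ratio=1.0)
print(f"n per group for 85% power: {n_needed:.1f}")
```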
No hypotheses can be accepted. We can only fail to reject the null (p > 0.05) or reject the null (p ≤ 0.05).
Just want to add that the cut-off of 0.05 is completely arbitrary. It just managed to become quite commonly used to "reject the null".
The p-value from a significance test is just a random value summarizing a particular property of the data relative to a statistical model and the specified null hypothesis. One may see this value as a normalized pivotal quantity for a signal-to-noise ratio. The only aim is to check whether the signal is visible clearly enough in front of the noise to take it (the signal) seriously for interpretation or the planning of further studies. But it is a difficult and, after all, subjective decision what signal-to-noise ratio is deemed sufficient. It actually makes no sense to state a common, general cut-off, like p < 0.05.
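As a rough illustration of the signal-to-noise reading (my own sketch, with simulated data and assumed parameters, not the answerer's example): the one-sample t statistic is the estimated effect (signal) divided by its standard error (noise), and the p-value is the tail probability of that ratio under the null.

```python
# Hedged sketch: the t statistic as a signal-to-noise ratio.
# The data are simulated; the true effect (0.3) and sample size are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=1.0, size=25)   # small true signal, sd 1

signal = x.mean()                             # estimated effect (null value 0)
noise = x.std(ddof=1) / np.sqrt(len(x))       # standard error of the mean
t = signal / noise                            # signal-to-noise ratio
p = 2 * stats.t.sf(abs(t), df=len(x) - 1)     # two-sided p-value

print(f"t = {t:.2f}, p = {p:.3f}")
print(stats.ttest_1samp(x, popmean=0))        # same result from scipy
```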
When you perform a hypothesis test in statistics, a p-value helps you determine the significance of your results. Hypothesis tests are used to test the validity of a claim that is made about a population. This claim that’s on trial, in essence, is called the null hypothesis.
The alternative hypothesis is the one you would believe if the null hypothesis is concluded to be untrue. The evidence in the trial is your data and the statistics that go along with it. All hypothesis tests ultimately use a p-value to weigh the strength of the evidence (what the data are telling you about the population). The p-value is a number between 0 and 1 and interpreted in the following way:
A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
p-values very close to the cutoff (0.05) are considered to be marginal (could go either way). Always report the p-value so your readers can draw their own conclusions.
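A minimal sketch of that decision rule (illustrative only, with simulated data and the conventional 0.05 cut-off assumed), using scipy:

```python
# Hedged sketch: applying the reject / fail-to-reject rule described above.
# The two groups are simulated; group means, spreads, and sizes are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=40)
group_b = rng.normal(loc=11.0, scale=2.0, size=40)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

alpha = 0.05                       # conventional, and arbitrary, cut-off
if p_value <= alpha:
    print("small p-value: reject the null hypothesis")
else:
    print("large p-value: fail to reject the null hypothesis")
# In either case, report the p-value itself, not just the decision.
```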
I would not use the word "evidence", which mixes a Bayesian way of thinking with the purely frequentist p-value, and sounds a bit like "bullshitting" since there is no clean definition of "evidence".
If my null hypothesis is that someone just guesses the lottery numbers next week, I would still not consider it "extremely strong evidence" (against that null) if that person wins, for which p < 0.0001.
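For concreteness (the 6-of-49 lottery format here is my assumption, not the commenter's), the p-value of a single winning guess under the null of pure guessing would be

$$ P(\text{win} \mid \text{guessing}) = \binom{49}{6}^{-1} = \frac{1}{13{,}983{,}816} \approx 7.2 \times 10^{-8} \ll 0.0001. $$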
In my opinion, it's better to say that small p-values indicate that the data are statistically incompatible with the null.
To come to "evidence" one must consider the plausibilities of the possible outcomes (in the lottery example: having mental powers is highly implausible, so the evidence from the data amounts to close to nothing). These plausibilities can be formalized in a prior distribution, and the evidence (against the null) is the amount by which that prior is modified by the data "away from the null". It's quite unclear how to measure this, because usually the distributions span a continuous range of hypotheses, where the null is just one point. Should one use a posterior mean or mode? What about other moments of the posterior?
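One possible way to formalize "the amount by which the prior is modified away from the null" for a point null is the Savage–Dickey density ratio; the following hedged sketch (my own construction, with made-up data and an assumed Beta(1, 1) prior, not something from the comment) computes it for a binomial proportion:

```python
# Hedged sketch: Savage-Dickey density ratio for a point null H0: theta = 0.5
# on a binomial proportion, with a Beta(1, 1) prior over the alternative.
# The data (14 successes in 20 trials) are made up for illustration.
from scipy import stats

theta0 = 0.5            # point null
a, b = 1.0, 1.0         # Beta(1, 1) prior (uniform)
successes, trials = 14, 20

prior_at_null = stats.beta.pdf(theta0, a, b)
posterior_at_null = stats.beta.pdf(theta0, a + successes, b + trials - successes)

bf01 = posterior_at_null / prior_at_null   # posterior vs. prior density at the null
print(f"BF01 = {bf01:.2f}  (values < 1 favour the alternative)")
```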
And another clear no-go is your statement "so null hypothesis may be accepted for p>0.05". Never ever do that!