In my study, two out of three null hypotheses have been accepted; for instance, that the app has no effect on learning motivation. What does the app contribute, then? Or does it make any difference?
I feel that your example of a null hypothesis does not really work. If somebody decides to code an app, there must have been an idea (i.e. a hypothesis) of what that app might be useful for. If it turns out that the app does not contribute to increasing learning motivation, then either the idea that any app could do this is wrong, or the app itself did not match the goal.
Or did I misunderstand the meaning of a null hypothesis altogether?
That's not entirely correct. You are right when we are talking about a significance test. Such a test concerns only a single hypothesis (the "null hypothesis"), which is either rejected or not. The whole test is designed to use the information in the data to discredit the tested hypothesis. Not having enough evidence in the data to discredit the hypothesis can have several different reasons, and the test provides no information about which reason applies, so in this case the data remain inconclusive.
In contrast, a hypothesis test weighs two alternative hypotheses, A against B. In practice, one of these two alternatives is often identical to the null hypothesis. But the hypothesis test is designed as a strategy to decide between A and B, so the result of this test is to accept one of the two hypotheses (and to reject the other). This strategy requires some additional thought and consideration, eventually leading to the selection of sensible error rates and the determination of the required sample size. Once the sample size is fixed, the decision based on the test statistic is either to accept A or to accept B, which statistically maximizes the utility associated with the actions taken in response to these possible decisions.
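To make the accept-A-or-accept-B logic concrete, here is a minimal sketch with made-up numbers (two simple hypotheses about a mean, a known sigma, and a fixed sample size of 36). Note that the decision rule always accepts exactly one hypothesis; it never returns "inconclusive":

```python
from statistics import NormalDist, mean

# Hypothetical setup: decide between A (mu = 0, no effect) and
# B (mu = 0.5, a meaningful effect) for motivation-score differences,
# assuming a known sigma = 1 and a fixed sample size n = 36.
alpha, sigma, n = 0.05, 1.0, 36
mu_A = 0.0

# Critical value chosen so that P(accept B | A is true) = alpha.
c = mu_A + NormalDist().inv_cdf(1 - alpha) * sigma / n ** 0.5

def decide(sample):
    """Accept exactly one of the two hypotheses, never 'maybe'."""
    return "accept B" if mean(sample) > c else "accept A"

print(decide([0.6] * 36))  # sample mean 0.6 exceeds c -> "accept B"
print(decide([0.0] * 36))  # sample mean 0.0 is below c -> "accept A"
```

With alpha and beta fixed in advance, whichever decision the statistic delivers is simply acted upon; that is the behavioral interpretation of "accepting" a hypothesis described above.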
@Amna:
If you really did a hypothesis test (which I doubt, however), then "accepting the null hypothesis" means that "you should act as if the null hypothesis was true" (whatever this practically means should follow from the context and the research question).
If you conducted a significance test, then you get a measure of the statistical significance of the data relative to the tested hypothesis (the null hypothesis). High statistical significance (= a low p-value) indicates that it may be worth continuing research in this direction, and worth believing that the experiment was able to demonstrate an effect. Larger p-values indicate that your experimental possibilities are inadequate to analyze the relation you focused on. This means you may leave it at that, or consider changing your experimental setup or the sample size to take a closer look. How small a p-value should be before you become convinced that your experiment was able to show an effect depends on a great many things. This has to be judged wisely, based on expert knowledge of the subject matter.
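One way to see what a p-value measures in a significance test is a permutation test, which needs no distributional assumptions. The sketch below uses entirely made-up motivation scores for an "app" and a "control" group; the p-value is simply the share of random relabelings that produce a group difference at least as extreme as the observed one:

```python
import random

# Toy data (invented for illustration): motivation scores
# with the app and without it.
app     = [7, 6, 8, 5, 7, 6, 8, 7]
control = [5, 6, 4, 6, 5, 7, 5, 6]

def perm_p_value(x, y, n_perm=10000, seed=0):
    """Two-sided permutation test: p = share of label shufflings whose
    mean difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    pooled = x + y
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
            hits += 1
    return hits / n_perm

p = perm_p_value(app, control)
print(f"p = {p:.3f}")  # a small p means the data are surprising under "no effect"
```

A small p here says only that the observed difference would be unusual if the labels were exchangeable; how small is "convincing" remains, as stated above, a matter of subject-matter judgement.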
To perform a hypothesis test, it is first necessary to define an assumed set of conditions identified as the null hypothesis, H0. Additionally, an alternative hypothesis, Ha, is, as the name implies, an alternative set of conditions that will be assumed to hold if the null hypothesis is rejected. The statistical procedure consists of assuming that the null hypothesis is true and then examining the data to see whether there is sufficient evidence that it should be rejected. H0 cannot actually be proved, only disproved. If the null hypothesis cannot be disproved, we should state that we "fail to reject", rather than "prove" or "accept", the hypothesis.
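The asymmetry in the wording above can be captured in a tiny helper (the alpha of 0.05 is just an illustrative convention, not a recommendation): the test outcome is reported as "reject" or "fail to reject", never as "accept" or "prove":

```python
def report(p_value, alpha=0.05):
    """Phrase the outcome in standard 'fail to reject' language;
    a significance test never 'accepts' or 'proves' H0."""
    if p_value <= alpha:
        return f"reject H0 at alpha = {alpha}"
    return f"fail to reject H0 at alpha = {alpha}"

print(report(0.03))  # -> "reject H0 at alpha = 0.05"
print(report(0.47))  # -> "fail to reject H0 at alpha = 0.05"
```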
There is an easy read called "Science as Falsification" by Karl Popper that gets at this and the scientific process as a whole. We set up hypothesis tests to either falsify or fail to falsify, not to accept or fail to accept. Think of a court case as an example: when a person is found not guilty, we don't say they are innocent. In the formal language of the court, we say they are not guilty. There is a difference between being innocent and being not guilty. The same goes for a null hypothesis: there is a difference between accepting it and failing to reject it. It is less about the language of the particular hypothesis test you are doing and more about the language of the philosophy of science and falsification. In your specific case, it looks like the evidence is insufficient to suggest that your app had an impact (either good or bad) on motivation.
Be sure to consider the possibility of setting an alpha level for one hypothesis and a beta level for its alternative. N can then be varied up to the point where an evidence-based decision can be made between the two alternatives.
“…, no p-value can reveal the plausibility, presence, truth, or importance of an association or effect. Therefore, a label of statistical significance does not mean or imply that an association or effect is highly probable, real, true, or important. Nor does a label of statistical nonsignificance lead to the association or effect being improbable, absent, false, or unimportant. Yet the dichotomization into “significant” and “not significant” is taken as an imprimatur of authority on these characteristics.” (quotation from
Wasserstein, R. L., Schirm, A. L. & Lazar, N. A. (2019). Moving to a World Beyond "p < 0.05". The American Statistician, 73(sup1), 1–19.)
It means that your app, which was supposed to make some kind of difference in learning motivation, failed to do so as far as your findings show. In other words, if this app is not contributing anything, then it does not need to be used either.
I used null hypotheses in my research. How could I state that the analysis of the data shows a failure to reject the null hypotheses? Could you give us an introductory statement?