If there is no significant difference (p-value greater than 0.05) and the null hypothesis is not rejected, is this finding clinically valid, or only statistically valid?
In other words, when should a significant difference be taken seriously?
"In other words, when should the significant difference be taken seriously?"
That is inverted logic!
Whether or not a result is "significant" is a judgement call. It is NOT simply the result of comparing a p-value with some arbitrary cut-off (like 0.05). If you state that the observed difference is "significant", then you mean that you take it seriously. It would be nonsensical to call a difference "significant" if you wouldn't take it seriously, and it would be equally nonsensical to call a result that you do take seriously "non-significant".
If someone told me: "See, I measured some data, and the difference (treatment effect, whatever) has p = 0.02. Is this significant?" I don't know; I can't judge this. The p-value alone does not provide sufficient information (in fact, the p-value provides no helpful or additional information beyond what is already given by the important facts: the aim of the study, the design and context of the experiment, the sample size, the statistical model used, the estimated effect size, and the uncertainty of that estimate).
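To illustrate that point with some made-up numbers (the estimates and standard errors below are invented, and the z-test is a plain normal approximation): two studies can produce the very same p = 0.02 while telling completely different substantive stories.

```python
import math

def z_test(estimate, se):
    """Two-sided p-value and 95% CI for an estimate with standard error se,
    using a normal approximation."""
    z = estimate / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    ci = (estimate - 1.96 * se, estimate + 1.96 * se)
    return p, ci

# Two hypothetical studies with the same z (hence the same p-value) but
# very different effect sizes and uncertainties.
p1, ci1 = z_test(estimate=10.0, se=4.3)    # large effect, wide CI
p2, ci2 = z_test(estimate=0.10, se=0.043)  # tiny effect, narrow CI

print(f"study 1: p = {p1:.3f}, 95% CI = ({ci1[0]:.2f}, {ci1[1]:.2f})")
print(f"study 2: p = {p2:.3f}, 95% CI = ({ci2[0]:.3f}, {ci2[1]:.3f})")
```

Both studies report p ≈ 0.02, yet the estimated effects differ by a factor of 100. Whether either result matters is decided by the effect size and its uncertainty, not by the p-value.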
If, and only if(!!), the whole experiment was designed to perform a hypothesis test to decide between two alternative hypotheses (A or B), based on given values for alpha and beta (the type-I and type-II error rates), and if I knew the desired alpha, then, and only then, could I conclude that we may act as if B were true, with a confidence of (1 - alpha), when p ≤ alpha.
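That decision-theoretic setting can be sketched in a few lines (the observed test statistic below is invented; the point is only that alpha is fixed before the experiment, and the comparison of p to alpha is what licenses the decision):

```python
import math

ALPHA = 0.05  # pre-specified type-I error rate, fixed before the experiment

def normal_p_two_sided(z):
    """Two-sided p-value under a normal approximation."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical pre-registered test: hypothesis A vs hypothesis B,
# with alpha chosen in advance. Only in this setting does comparing
# p to alpha license a decision to act as if B were true.
observed_z = 2.4  # assumed test statistic from the designed experiment
p = normal_p_two_sided(observed_z)

if p <= ALPHA:
    decision = "act as if B is true (reject A)"
else:
    decision = "retain A (insufficient evidence at this alpha)"
print(f"p = {p:.4f}, alpha = {ALPHA}: {decision}")
```

Outside such a pre-specified design, the same comparison of p to 0.05 has no decision-theoretic justification.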
Essam, this is an old controversy that dates back to the middle of the past century; perhaps the attached paper can help you.
Also, I recommend Andrew Vickers's book “What is a p-value anyway? 34 stories to help you actually understand statistics” (2010, Pearson Education, Inc.). It's quite funny but very instructive.
I think Jochen sums it up pretty well. It takes more than a p-value to evaluate your effects and your data and to make a decision. A good starting point, in my opinion, is Geoff Cumming's Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. The ESCI software delivered with the book visualizes very well why a p-value is quite a bad marker for decision making. But have a look at the link below.
Edit: I remembered that there is a YouTube video made with the ESCI program, "dance of the p-values", which nicely shows the p-value problem (second link).
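The "dance" is easy to reproduce yourself. A minimal sketch (my own simulation, not the ESCI program): replicate the identical experiment twenty times with the same true effect, and watch how wildly the p-values jump around.

```python
import math
import random

random.seed(42)  # fixed seed so the run is reproducible

def one_sample_z_p(xs, mu0=0.0):
    """Two-sided p-value from a one-sample z-test (normal approximation)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    se = math.sqrt(var / n)
    z = (mean - mu0) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Replicate the very same experiment 20 times:
# true effect 0.5 SD, sample size n = 30 each time.
p_values = []
for _ in range(20):
    sample = [random.gauss(0.5, 1.0) for _ in range(30)]
    p_values.append(one_sample_z_p(sample))

print(sorted(round(p, 3) for p in p_values))
```

Even though every replication draws from the same population with the same real effect, the resulting p-values are scattered across orders of magnitude, which is exactly why a single p-value is such an unstable basis for a decision.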
I would look at it through the lens of statistical vs. practical significance.
For example, I can take a couple thousand people and record their weight. I can then shave off their eyebrows and weigh them again. I will find a statistically significant difference in weight. Does this mean eyebrow shaving should become the next weight-loss fad? Absolutely not! The few grams every single person dropped by having their eyebrows removed does not constitute a practically significant difference.
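Here is that eyebrow example in numbers (all values invented for illustration: an average loss of 3 g, measurement noise of 20 g, and an assumed minimal practically important change of 500 g):

```python
import math

def paired_z_test(mean_diff, sd_diff, n):
    """Two-sided p-value for a mean paired difference (normal approximation)."""
    se = sd_diff / math.sqrt(n)
    z = mean_diff / se
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical numbers: eyebrow shaving removes ~3 g on average,
# measured in 2000 people with 20 g of scale noise.
p = paired_z_test(mean_diff=3.0, sd_diff=20.0, n=2000)

MCID = 500.0  # assumed minimal practically important weight change, in grams
print(f"p = {p:.2e}")                          # vanishingly small: "significant"
print(f"practically relevant: {3.0 >= MCID}")  # but far below any meaningful change
```

With a large enough sample, the tiny 3 g difference yields a vanishingly small p-value, yet it sits nowhere near any threshold of practical importance.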
If you understand the system you are dealing with, you can look at the difference and determine whether it is meaningful in any way. If the difference is not meaningful, you could argue that it's not worth wasting your time on the analysis.
P-values relate only to statistical significance. The interpretation of statistical results should be phrased to make clear the relevance of the statistical significance to the research question, but statistical significance alone warrants no claims about the clinical, financial, process-improvement, or other real-world meaningfulness of those results.