I don't know what you mean by "overlapping" here. However, because the 95% confidence interval contains one, the OR is not statistically significant when using the traditional alpha level of 0.05.
No. This is not statistically significant, because 1 is included in the confidence interval. That means the interval is compatible both with the exposure being a risk factor and with it being a protective factor, so we do not have enough evidence to say which it is. The most accurate statement is that there is no evidence of a relationship between the variables.
MH stands for Mantel-Haenszel, which can be used only for dichotomous data. As stated above, a CI that includes 1 is not significant. For a nice summary of the types of analysis, this table is useful: http://handbook.cochrane.org/chapter_9/table_9_4_a_summary_of_meta_analysis_methods_available_in.htm
According to your figure (meta-analysis), the CI of the summary OR crosses the central line (OR = 1), which means we cannot conclude that the risk in the case group differs from the risk in the control group. Thus, it is not significant.
Hi, I agree with the answers above; I would like to add some points.
The p-value is the probability of obtaining a result at least as extreme as the one observed if the null hypothesis were true; here, a p-value greater than 0.05 means the null hypothesis cannot be rejected.
A 95% confidence interval is constructed so that, over repeated sampling, 95% of such intervals would contain the true population value; in this case the inclusion of 1 in the interval means that a population odds ratio of 1 (no association) cannot be ruled out. Also, the broad range of the CI may have been caused by a small sample size.
So, taken together, the findings (p-value and CI) agree with each other (no significant risk association). However, if you are confident the association should be real, increasing the sample size may clear things up; a rough power calculation is sketched below.
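A minimal sketch of such a power calculation, assuming hypothetical control and exposed recurrence proportions (0.10 and 0.26, chosen only for illustration) and using statsmodels:

```python
# Hypothetical sample-size calculation: how many subjects per group would be
# needed to detect a difference between two proportions with 80% power?
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control, p_exposed = 0.10, 0.26   # assumed proportions, for illustration only
effect_size = proportion_effectsize(p_exposed, p_control)  # Cohen's h

n_per_group = NormalIndPower().solve_power(effect_size=effect_size,
                                           alpha=0.05, power=0.80,
                                           alternative='two-sided')
print(f"Approximately {n_per_group:.0f} subjects per group are needed.")
```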
It is not significant. For a risk factor to be significant, the odds ratio should be greater than 1 and the lower bound of its CI should also be greater than 1. In your result, the p-value of 0.055 correctly shows that it is not significant. Since the CI touches 1, we cannot claim an association; an interval lying entirely below 1 would instead suggest a protective effect.
Hi Essam. The figure shows no significant association because the CI crosses the line of no effect/association (1.00). However, you may need to look for any methodological issues between studies. Try to appraise the quality of evidence using standardized tools so you can exclude outliers or studies with less robust designs. Then recalculate the pooled OR.
To be specific to your question, I would agree with the other contributors that the p-value shows no significance. This can also be appreciated from the wide confidence interval. However, I think you should go on to look into reasons for the lack of statistical significance, such as the sample size. Sometimes there appears to be no statistical significance even when biological plausibility/significance is present.
The null value for an odds ratio is 1.0, so a 95% CI of 0.975 to 10.901 includes the null value and therefore indicates that the OR is not statistically significantly different from 1.0, so not significant. The p-value of 0.055 also indicates that the OR is not statistically significantly different from 1.0. So the answer is that your OR of 3.2 is not significant. As pointed out earlier your sample size may be too small to have enough power to detect a statistically significant result if one exists.
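As a quick illustration of the arithmetic behind these numbers, here is a minimal sketch; the 2×2 cell counts are hypothetical, chosen only so the figures come out near the reported OR of 3.25:

```python
# Minimal sketch: odds ratio, 95% CI and Wald p-value from a 2x2 table.
# Cell counts a, b, c, d are hypothetical, for illustration only.
import numpy as np
from scipy.stats import norm

a, b = 13, 12   # exposed: events, non-events   (assumed counts)
c, d = 5, 15    # unexposed: events, non-events (assumed counts)

log_or = np.log((a * d) / (b * c))
se = np.sqrt(1/a + 1/b + 1/c + 1/d)

or_point = np.exp(log_or)
ci_low, ci_high = np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se)
p_value = 2 * (1 - norm.cdf(abs(log_or) / se))

print(f"OR = {or_point:.2f}, 95% CI {ci_low:.3f} to {ci_high:.3f}, p = {p_value:.3f}")
# If ci_low <= 1 <= ci_high, the OR is not significantly different from 1.
```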
I would agree with the other contributors that the p-value of 0.055 shows no significance, maybe because of a small sample size. The 95% confidence interval contains 1 (the null value for an odds ratio), so the OR is not significant.
Your attached diagram shows that you are employing the Mantel-Haenszel procedure, but the "n" is not shown. For small cell sizes, even reasonably large odds ratios are unreliable. The confidence limits around the OR, however, are revealing: they all include the null value (1), so nothing is statistically significant. I suspect with such results the sample sizes are all too small. (A sketch of the Mantel-Haenszel pooling arithmetic follows at the end of this reply.)
Are you trying to do a meta-analysis across several studies? If so, you may be limited to the study sizes used by others, and you may have found a fault (very common) in the other authors' studies where they have not performed a proper power analysis. Nothing you can do except report in detail what you have found, and hope that other authors do a proper sample size determination and power analysis.
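For readers curious how the pooled Mantel-Haenszel OR is actually computed, here is a minimal sketch; the three stratum 2×2 tables are hypothetical:

```python
# Minimal sketch of the Mantel-Haenszel pooled odds ratio across strata (studies).
# Each stratum is a 2x2 table [[a, b], [c, d]]; the counts below are hypothetical.
import numpy as np

strata = [
    np.array([[13, 12], [ 5, 15]]),   # study 1 (assumed counts)
    np.array([[ 8, 20], [ 4, 22]]),   # study 2 (assumed counts)
    np.array([[ 6, 14], [ 3, 17]]),   # study 3 (assumed counts)
]

num = 0.0   # sum of a_i * d_i / n_i
den = 0.0   # sum of b_i * c_i / n_i
for t in strata:
    a, b = t[0]
    c, d = t[1]
    n = t.sum()
    num += a * d / n
    den += b * c / n

or_mh = num / den
print(f"Mantel-Haenszel pooled OR = {or_mh:.2f}")
# statsmodels' StratifiedTable gives the same pooled estimate plus a CI,
# if that package is available.
```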
I agree with most of what has been said by our colleagues, but I would add a couple of comments. Although the 0.05 p-value is conventional, once the statistical specifications are made (before data collection) it becomes an inviolable boundary: that is one of the rules of the game. As said before, since your odds ratio suggests a strong association, the first option, although not an elegant one, is to increase the sample size (which means two assessments of the results). Nevertheless, this does not guarantee that the additional cases will render the p-value significant; moreover, the OR might fall as well. This applies to results coming from a single study, which ought to have a proper design before data collection. In other circumstances, a test for homogeneity (Mantel-Haenszel's, for example) might shed some light on the situation (the meaning of a borderline OR). Since we do not know what you are really doing (whether or not it is a meta-analysis), what stands is the inviolable boundary of the p-value. There are other procedures to cope with your problem, but they would require more information from you.
I also agree with most of what colleagues have said, but first of all I would recommend, Essam, that if you have a CI you not focus on the significance level, because the CI is more informative. Moreover, if you have the opportunity to increase the sample size, most of the time you will get a statistically significant result that may have no clinical or public health importance. That is the reason the significance level is tricky: as we all know, statistical significance depends heavily on sample size.
Taking a look at the CI limits you reported, one can see that the lower bound is near 1 and the upper bound is near 11. With an OR of 3.2, the point estimate is far from the upper bound, almost three times its distance from the lower bound, indicating an imbalance around the central estimate on the ratio scale; this situation makes one think of a small sample size.
Could you please tell us what the study sample size was?
Because at this point in the study it is not possible to increase the sample size, one more thing you can do is to estimate a new CI at 90%; by doing so you increase the power for the fixed sample size. A sketch of that recalculation follows below.
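A minimal sketch of that recalculation, back-calculating the standard error of the log OR from the reported 95% limits (0.975 and 10.901) and re-expressing the interval at 90%:

```python
# Recompute a 90% CI for the reported OR of 3.25 from its 95% limits.
# The standard error of log(OR) is recovered from the width of the 95% CI.
import numpy as np
from scipy.stats import norm

or_point, ci95_low, ci95_high = 3.25, 0.975, 10.901

se_log_or = (np.log(ci95_high) - np.log(ci95_low)) / (2 * norm.ppf(0.975))

z90 = norm.ppf(0.95)   # 1.645 for a 90% interval
ci90_low = np.exp(np.log(or_point) - z90 * se_log_or)
ci90_high = np.exp(np.log(or_point) + z90 * se_log_or)
print(f"90% CI: {ci90_low:.3f} to {ci90_high:.3f}")
```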
I don't agree. Since the 1970s, epidemiology should no longer be qualitative (i.e., significant or not significant based on a p-value) but quantitative. For more details, see K. Rothman's book "Modern Epidemiology" and D. Kleinbaum et al., "Epidemiologic Research: Principles and Quantitative Methods".
Do you really think there is a difference between a p = 0.049 and a p = 0.051?
Do you really think there is a difference between a lower CI limit of 0.99 and a lower CI limit of 1.01?
To answer Mr Essam A. Al-Moraissi, I will say that, yes, your results are significant, since the point value of the OR is high and the lower limit of the 95% CI is very near 1. Your sample size is probably weak and, if possible, you should try to include more persons in your study. If that is not possible, you should try to publish it nevertheless.
This interesting discussion has to do with the "small study groups" issue, a matter of my concern for a long time and one I have explored empirically. A study group like the one we are talking about turns out to be a "small study group" (and propositions are made to add another, smaller study group), so it falls into the troubles that affect this kind of data set. When a hypothesis lies beneath the study procedures (i.e., study design, methods for statistical significance and association analysis), some specifications have to be made, one of them being the alpha error level (i.e., one- or two-tailed 0.05; I never saw an alpha error of 0.055 or 0.06), which determines a given sample size over which you can try some trade-offs but without moving below the selected alpha error level. If an OR will be the association statistic, then its size enters the calculation too, as does the selected beta error value. Beyond mathematical technicalities, experience will show you that, for a fixed OR value at the time of calculation, choosing a high OR is more prone to beta error if small sample sizes are selected.
Too often, two mistakes are made. One is to carry out the analysis of a given data set (patients and some of their variables), including its manipulation (i.e., splitting the set into two groups as if they were cases and controls), without a working hypothesis; in this case chance will play a demonstrably misleading role. The second is to do everything as prescribed but to select a high OR in order to make the sample size smaller (this is a temptation when using statistical packages without full knowledge of what you want and what their outputs mean). Moreover, even with well-defined cases and controls, the lack of an a priori working hypothesis may lead to the same flaws; in other words, the risks of false outcomes are the same. (I assume you know the difference between a "conceptual" hypothesis and a "working" one.)
To keep chance from intruding in clinical epidemiological research, it is mandatory to set the values needed to calculate the sample size so as to avoid both alpha and beta errors. Given these considerations, I would thoroughly review the study whose results are bothering you, and if there is doubt about any of the above points I would discard it from my analysis. Note that I am not proposing to reject such a study from publication (although as a referee I would make caveats), but if what you want is validity, accuracy and precision, do not include it in your task.
To answer Mr Essam A. Al-Moraissi's second question: in this situation you can say that the high point value of the OR, the lower limit of the CI being very close to 1 (and the p-value close to 0.05, which is another way of saying the same thing) and, I speculate, the rarity of the condition under study allow you to conclude there is a link between the condition studied and the exposure. By the way, if the condition studied is more frequent, I suggest replicating this study with 2 or 3 (not more) controls per case and calculating a McNemar OR and, if you like p-values, a McNemar chi-squared (a small sketch of that test follows).
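For a 1:1 matched design, the McNemar statistic is driven by the discordant pairs; here is a minimal sketch with hypothetical pair counts, using statsmodels, which provides this test:

```python
# McNemar test for matched case-control pairs; counts are hypothetical.
# Rows = case exposed / unexposed, columns = control exposed / unexposed.
from statsmodels.stats.contingency_tables import mcnemar

table = [[10, 15],   # case exposed:   control exposed / unexposed (assumed)
         [ 5, 20]]   # case unexposed: control exposed / unexposed (assumed)

result = mcnemar(table, exact=False, correction=True)
print(f"McNemar chi-squared = {result.statistic:.2f}, p = {result.pvalue:.3f}")

# The matched-pair (conditional) OR is the ratio of discordant pairs:
mcnemar_or = table[0][1] / table[1][0]
print(f"McNemar OR = {mcnemar_or:.1f}")
```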
Thank you for your answer, but I want to ask what you meant by replicating this study with 2 or 3 (not more) controls. What about the study group?
One way to increase the power of a study is to have more than one control per case, but if you can replicate your study with a larger number of cases you don't need more than one control per case, bearing in mind that it is always more difficult to find controls than cases.
I think the direction this dialogue has taken is not helping Essam solve his question, and because you still insist on using the p-value, I would recommend that you read Rothman's book again, chapter 4, pages 115 to 125.
Let me quote some paragraphs from this chapter:
"The ubiquitous use of 'p-values' and references to 'statistically significant' findings in the current medical literature demonstrate the dominant role that statistical hypothesis testing plays in data analysis in the biomedical sciences. Many researchers believe that it would be fruitless to submit for publication any paper that lacks statistical tests of 'significance'."
Later he says: "Though the p-value is a reasonably meaningful continuous measure, it is often used to force a qualitative decision about rejection of the null hypothesis. An arbitrary point, usually 5 percent, is selected as a criterion by which to judge the p-value."
"It is generally incorrect, for reasons that will be clear later, to think of 'accepting' the null hypothesis in favor of the alternative when it cannot be rejected; therefore, 'not rejected' is not equivalent to 'accepted' with regard to the null hypothesis."
He poses a central question: if 'significance' testing is misleading, how should results be presented? He advises: "It is best to conceptualize the problem as a measurement problem rather than as a problem of decision making." That is the reason we have to pay attention to the values of the lower and upper limits, because this range represents a set of values for the point estimate that is consistent with the observed data. This range of values is known as a confidence interval.
He goes on to say that the best way to answer that question is by using confidence intervals: "The confidence interval does much more than assess the extent to which the null hypothesis is compatible with the data. It provides simultaneously an idea of the magnitude of the effect and the inherent variability in the estimate."
To keep this discussion short and fruitful, I would like to end my contribution by adding Rothman's final remark, on page 125: "Indeed, since statistical 'significance' testing promotes so much misinterpretation, it would be reasonable to avoid it and the use of p-values entirely; routine avoidance of testing might have the desirable effect of accelerating its inevitable demise as a method for inference."
So, based on Rothman's remarks (the same author Michel recommended), it seems that if we keep talking about the p-value and significance level, we are wasting time and not helping Essam solve his problem.
Thank you for your answer. The included studies are retrospective; my research question compares two surgical methods with regard to recurrence rate.
Of course, they are clinical studies, but retrospective ones.
Because you are dealing with a retrospective clinical trial, and your response variable should be time to event, I would recommend using Kaplan-Meier curves and the Cox proportional hazards model to analyse your study.
This will be better than analysing it as a case-control study, and your results will be neater and more robust.
Do you have the dates of the main events?
I mean the date when the researchers randomly allocated the study subjects, the date when they applied the intervention (treatment), the date when they measured the main end point of the study, and so on.
If you have those dates, you will have no problem using these methods; a minimal sketch is shown below.
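A minimal sketch of such an analysis, assuming a table with hypothetical columns time_to_recurrence (months), recurred (1/0) and method (0/1), and assuming the lifelines package is available:

```python
# Minimal time-to-event sketch: Kaplan-Meier curves and a Cox model.
# Column names and data are hypothetical, for illustration only.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "time_to_recurrence": [6, 12, 18, 24, 9, 30, 15, 27],  # months (assumed)
    "recurred":           [1,  0,  1,  0, 1,  0,  1,  0],  # 1 = recurrence observed
    "method":             [0,  0,  0,  0, 1,  1,  1,  1],  # surgical method A/B
})

# Kaplan-Meier curve per surgical method
kmf = KaplanMeierFitter()
for method, grp in df.groupby("method"):
    kmf.fit(grp["time_to_recurrence"], grp["recurred"], label=f"method {method}")
    print(kmf.median_survival_time_)

# Cox proportional hazards model: hazard ratio for surgical method
cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_recurrence", event_col="recurred")
cph.print_summary()
```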
I don't think I insist on using the p-value; on the contrary, I think the p-value is a nuisance, as is looking only at whether or not the 95% CI contains 1.
I insist on the necessity of a quantitative rather than a qualitative interpretation of the results.
Thanks for your answer. Unfortunately, I don't have the data required for a survival analysis, because the included studies did not report follow-up times for each lesion. They only reported overall follow-up time for all lesions and did not report at what time each lesion recurred.
In that case the best you can do, I think, is to compare the recurrence proportions of the two surgical methods.
You may use the Z test for proportions or the chi-square test with one degree of freedom; they are equivalent.
You cannot use the OR to analyse these data, because you have a retrospective clinical trial and not a case-control study, even though you have a 2-by-2 table. A sketch of the proportions comparison is shown below.
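A minimal sketch of that comparison, with hypothetical recurrence counts for the two methods (statsmodels' two-sample proportions z-test; the chi-square on the 2×2 table without continuity correction gives the same p-value):

```python
# Compare recurrence proportions between two surgical methods.
# Counts are hypothetical, for illustration only.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

recurrences = np.array([13, 5])   # recurrences with method A, method B (assumed)
totals      = np.array([25, 20])  # lesions treated with each method (assumed)

# Two-sample z-test for proportions
z_stat, p_z = proportions_ztest(recurrences, totals)
print(f"z = {z_stat:.2f}, p = {p_z:.3f}")

# Equivalent chi-square test (1 df) on the 2x2 table, without Yates correction
table = np.array([recurrences, totals - recurrences]).T
chi2, p_chi2, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f} (df = {dof}), p = {p_chi2:.3f}")
```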
Essam, at this stage I suspect you are hopelessly confused by the apparently conflicting advice from many different directions on this site. Can you sit down with a biostatistician and systematically go through the research question, the design, the type of data, the power analysis, confounders, and so on?
This result is at the limit of significance for the classic 5% threshold (probably a lack of power). The results should be presented as they are and not manipulated; it is up to readers to judge, based on their knowledge of the problem.
Based on the 95% CI and the p-value, the result is not significant, and that conclusion is correct; but your problem relates to sample size. The counts in some cells were probably small, so the power of the analysis is low.
The answer is that your results are NOT significant, because the confidence interval includes 1. Why? Because 1 is the null value or, as it is often called, the "referent category" for the odds ratio: the value to which all groups or categories of the OR are compared. I hope this is helpful.
As next steps, might I suggest that you talk through your results with your supervisor or committee. In some cases you can change the confidence interval from 95% to 90%, in other words making the CI less stringent; this would be permissible only if your hypotheses clearly allow that approach. Another way would be to use a one-sided p-value, only permissible when your study design clearly indicates which direction your results are supposed to go (a sketch of that calculation is shown below). REMEMBER, BOTH THESE (OR OTHER) ALTERNATIVES are allowable only under strict and clear conditions of directionality or context. Please check with experts on your committee or your school to assess what you may do next.
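A minimal sketch of the one-sided calculation, using the standard error back-calculated from the reported 95% limits (the same approach as the 90% CI sketch above); this is only legitimate if the direction of the effect was specified in advance:

```python
# One-sided p-value for H1: OR > 1, using the reported OR and its 95% CI.
import numpy as np
from scipy.stats import norm

or_point, ci95_low, ci95_high = 3.25, 0.975, 10.901
se_log_or = (np.log(ci95_high) - np.log(ci95_low)) / (2 * norm.ppf(0.975))

z = np.log(or_point) / se_log_or
p_one_sided = 1 - norm.cdf(z)   # half the two-sided Wald p-value
print(f"one-sided p = {p_one_sided:.3f}")
```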
Dear Esam Halboub: If the clinical facts are so obvious that, in the issue under discussion, they prevail over a "tiny" lack of significance, then don't use statistics. Experience shows that this "tiny" lack of significance may become much greater (and hence the results and conclusions invalid) when the sample size is large enough to provide a real test of significance. Remember that not all clinical readers are skilled in probabilities; their clinical decisions will rely on Essam's conclusions. So do what is necessary to make your results reliable, following the rules of statistical inference.
The most important point to remember is that "statistical significance" is not the only parameter of importance in evaluating the scientific significance of a study or analysis. Other parameters, such as clinical significance, the scientific importance of the research question under examination, and the rarity of and/or the need for information about the disease, clinical condition or scientific issue, are further reasons your results may be of interest to the scientific community, and they should be part of your decision matrix regarding whether or not to report your results to the larger scientific community.
Studies with so-called positive results are not the only ones journal editors may desire to publish. Increasingly, the parameters I mentioned above, as well as many others, are part of the decision tree underlying publication.
As before, my suggestion is to consult with your advisors and faculty and assess whether there may be a rationale to reframe your study findings, fine-tune the results further, or suggest the use of additional criteria such as one-sided tests (only under specific conditions) or other strategies. I would suggest thoughtfully discussing these points with your committee or other consultants before finally closing the book on your work. I hope this is helpful. Best of luck!
It is not statistically significant in light of your results. Clinical significance is a different issue. If a research result is significant both statistically and clinically, it is a very encouraging result. However, a lack of statistical significance does not necessarily deny the clinical value of a research result.
According to these results there is no statistical significance, but you would do better to focus on clinical significance, because it may be more important than statistical significance. So if the clinical answer looks good, it would be better to run a Bayesian analysis and report it; a rough sketch follows.
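One very rough Bayesian sketch, treating the reported log OR and its standard error (back-calculated from the 95% CI) as the likelihood and combining it with a vague normal prior; this is only an approximation on the log-OR scale, not a full Bayesian model, and the prior is an assumption:

```python
# Approximate Bayesian analysis of the reported OR using normal distributions
# on the log-OR scale; the prior below is an assumption (vague, centred on OR = 1).
import numpy as np
from scipy.stats import norm

or_point, ci95_low, ci95_high = 3.25, 0.975, 10.901
like_mean = np.log(or_point)
like_sd = (np.log(ci95_high) - np.log(ci95_low)) / (2 * norm.ppf(0.975))

prior_mean, prior_sd = 0.0, 2.0    # vague prior on log OR (assumed)

# Conjugate normal-normal update for the posterior of log OR
post_var = 1 / (1 / prior_sd**2 + 1 / like_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + like_mean / like_sd**2)

p_or_gt_1 = 1 - norm.cdf(0, loc=post_mean, scale=np.sqrt(post_var))
print(f"Posterior probability that OR > 1: {p_or_gt_1:.2f}")
```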
An odds ratio of 3.25 with a confidence interval of 0.975 to 10.901 and p = 0.055 does not give us significance, since the CI includes 1, indicating no demonstrable association between exposure and outcome.
The CI (0.975 to 10.901) around the OR of 3.25 is very wide; the estimate is therefore subject to considerable error and is not statistically significant, which is in keeping with the p-value of 0.055.