Dear Fahmi, In my opinion, your question cannot be answered without knowing more about your research question and study design. When interpreting your results, it is important to take into account the sample size, the accuracy of your measurement instruments, the clinical context, and the analyses used. For example, an OR of 4.2 with a p-value of 0.01 in a small sample with a narrow CI could suggest a strong association. However, the same result in a large sample (say >10,000) with a wide CI, obtained after having run many tests, could also be a type I error (multiple testing), in which case no association may exist at all. I hope this helps. Cheers.
Univariate analysis involves one variable at a time, while multivariate analysis involves more than one variable at a time, including confounding effects and/or interactions. An OR calculated from a cross-tabulation is always unadjusted (univariable), even if you run separate cross-tabs for several independent variables. However, if you put more than one variable into a logistic model, the analysis is multivariable and the ORs are adjusted for the other terms in the model (a minimal sketch of the difference is below).
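To make that distinction concrete, here is a minimal sketch with simulated data (all variable names and numbers are hypothetical, just for illustration), showing an unadjusted OR from a 2x2 table next to adjusted ORs from a multivariable logistic model fitted with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical simulated data: one row per subject
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "exposure": rng.integers(0, 2, n),
    "age": rng.normal(50, 10, n),
    "sex": rng.integers(0, 2, n),
})
logit = -2 + 1.0 * df["exposure"] + 0.02 * (df["age"] - 50)
df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Unadjusted ("univariable") OR from a 2x2 cross-tabulation
tab = pd.crosstab(df["exposure"], df["outcome"])
crude_or = (tab.loc[1, 1] * tab.loc[0, 0]) / (tab.loc[1, 0] * tab.loc[0, 1])

# Adjusted ORs from a multivariable logistic regression
model = smf.logit("outcome ~ exposure + age + sex", data=df).fit(disp=0)
adjusted_or = np.exp(model.params)   # exponentiated coefficients = ORs

print(crude_or, adjusted_or["exposure"])
```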
Regarding your second question, the relative importance of the OR and the P value depends on the sample size and the randomness of sampling. The P value is more important in the sense that even a high OR is non-significant in the absence of a significant P value, although there are strong arguments in favour of reporting the OR even without a significant P value if your sample is adequate and the OR is in line with established scientific knowledge.
OR=4.2, P=0.01 means you found that the odds of the outcome are 4.2 times higher (a harm or a benefit, depending on the outcome) for your independent variable, after adjustment for confounding variables (if any), and that this association is statistically significant.
From the statistical point of view, the p-value is more important; it is the first thing we check. But from the clinical point of view, the OR also matters. For example, an OR of 1.1 means very little in practice (say, for smoking and lung cancer): with a very, very large sample size an OR of 1.1 can be statistically significant, but it is not that clinically meaningful (the small simulation below illustrates this).
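A minimal simulation sketch of that point, assuming a 50/50 binary exposure, roughly 10% baseline risk, and a true OR of 1.1 (all numbers are made up for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def fit_or(n, true_or=1.1, baseline_p=0.10):
    """Simulate a binary exposure/outcome with a given true OR and fit a logistic model."""
    x = rng.integers(0, 2, size=n)                       # 50/50 exposure
    logit = np.log(baseline_p / (1 - baseline_p)) + np.log(true_or) * x
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    res = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    return np.exp(res.params[1]), res.pvalues[1]         # estimated OR and p for exposure

print(fit_or(500))        # OR near 1.1 but usually p >> 0.05
print(fit_or(1_000_000))  # same OR, now p << 0.05, yet clinically still a tiny effect
```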
Agree with the above... Remember, though, that the OR quantifies the magnitude of the relationship between a particular variable and your outcome (dependent variable), while the P-value simply quantifies the statistical significance (or not) of the relationship you are analyzing (basically indicating that the result was unlikely to have occurred by chance alone). Really two different perspectives.
An additional consideration worth mentioning is the confidence interval (CI). For example, consider a situation with a significant P-value but a wide CI. Although this falls into the realm of "statistical significance", in reality the clinical significance and applicability may be lacking when the CI is wide, regardless of the P-value.
There's a false dichotomy here. Each OR has a p-value associated with it. When the p-value is not significant at, say, the 0.05 level, the 95% confidence interval of the OR will include 1.0. And when an OR's 95% CI includes 1.0, the p-value will be >0.05. Either way, the message is the same (the worked example below shows why).
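To see that correspondence numerically, here is a small sketch (the 2x2 counts are made up for illustration) that computes the OR, its Wald 95% CI, and the matching p-value from the same z statistic:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 2x2 table: rows = exposed/unexposed, columns = cases/non-cases
a, b = 30, 70     # exposed: cases, non-cases
c, d = 10, 90     # unexposed: cases, non-cases

log_or = np.log((a * d) / (b * c))
se = np.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf standard error of log(OR)

or_ = np.exp(log_or)
ci_low, ci_high = np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se)
p = 2 * norm.sf(abs(log_or / se))            # Wald test of log(OR) = 0

# The CI excludes 1.0 exactly when p < 0.05, because both come from the same z statistic
print(f"OR={or_:.2f}, 95% CI=({ci_low:.2f}, {ci_high:.2f}), p={p:.4f}")
```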
The odds can be thought of in terms of one suit of cards: drawing a heart is 1-to-3 odds (odds = 0.33), while the risk is 1 out of 4 (0.25). Odds and risk approach each other when the outcome is rare; if the outcome is not rare, the odds overestimate the risk (a quick numerical check is below).
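A tiny sketch to make those numbers explicit (the probabilities are just illustrative):

```python
def odds(p):
    """Convert a probability (risk) to odds."""
    return p / (1 - p)

print(odds(13 / 52))   # one suit out of a deck: risk 0.25 -> odds 0.333
print(odds(0.01))      # rare outcome: odds ~= risk (0.0101)
print(odds(0.50))      # common outcome: odds (1.0) far exceed the risk (0.5)
```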
Odds are the more appropriate measure in logistic regression, where the maximum likelihood describes the model's overall fit. If you are using linear regression, the coefficient and its p-value are the relevant quantities; there shouldn't be an odds ratio at all, and if you did have one it would have come from a separate analysis.
I agree with Jason. Sometimes you may get a rather significant p-value, but the CI for the OR is quite wide (small sample size, perhaps), and your conclusion will be too broad. So maybe p=0.01, OR=4.2, but the 95% CI is (1.01-28). I'm not sure that's such a conclusive finding.
There is a significant relationship (p=.01) between variables X and Y; if both are binary, an odds ratio of 4.2 says that the odds of Y=1 are 4.2 times higher when X=1 than when X=0.
You originally stated your question in terms of reliability, and I don't really know how to interpret that. Although I don't want to leap in and say, as others have, that the p-value is somehow more important, I will say that for many people the p-value determines whether they will look at the result at all. For the most famous discussion of the use of p-values, see Jacob Cohen's essay "The Earth Is Round (p < .05)".
Since this is a multivariable analysis, the odds ratio is adjusted for the other variables. The strength of the finding is another way of interpreting the p value. The odds ratio can be thought of as the odds associated with one variable while holding all the other variables constant (i.e., adjusting for them).
There is a one-to-one correspondence between the 95% CI for the odds ratio and the p-value: if the 95% CI includes 1.0 (even odds), then the p value will be greater than 0.05.
Your results indicate evidence of a treatment effect, in that your odds ratio of 6.3 is >1, and this is confirmed statistically in that the lower confidence limit is >1.0. You can be 95% confident that, in the first example, the odds are not 50/50. However, the interval is wide, indicating that your sample size was not adequate to precisely quantify the magnitude of the treatment effect. The true odds ratio may be only slightly greater than 1.0, or as large as 35. The practical implications of either extreme are likely to be important to any decision you might make with these data.
The second example also indicates a treatment effect (this time negative), but don't be fooled into thinking this OR is more precisely estimated. Note that by reversing the treatment coding, the OR and CI are simply expressed as reciprocals, so you have a similar situation: OR=7.6 (control vs treatment) with CI (1.78-33.3) (see the reciprocal arithmetic sketched below).
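A minimal sketch of that reciprocal relationship, using the numbers quoted above (the only point is that flipping the reference group inverts the OR and swaps the CI limits):

```python
# Reversing the coding of a binary exposure turns the OR and CI limits into reciprocals
or_rev, ci_rev = 7.6, (1.78, 33.3)          # control vs treatment

or_orig = 1 / or_rev                         # treatment vs control ~= 0.13
ci_orig = (1 / ci_rev[1], 1 / ci_rev[0])     # ~= (0.03, 0.56); note the limits swap

print(f"OR={or_orig:.2f}, 95% CI=({ci_orig[0]:.2f}, {ci_orig[1]:.2f})")
```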
Both variables are statistically significant, but the precision of the estimated effects is poor. You need a larger sample size, or additional information that could constrain the estimates. Are there other enhancers/confounders that you could include in the model?
The concepts are different. The OR is a measure of association, classically used in case-control studies. In the smoking and lung cancer scenario, an OR of 6 means that the odds of developing lung cancer among smokers are 6 times those among non-smokers. The p value is related to the confidence interval: roughly speaking, p < 0.05 means that, if there were truly no association, a result at least this extreme would be expected fewer than 5 times in 100 repetitions of the experiment. A last example: if you have an OR = 3 with a p value of 0.000 in one case and a p value of 0.2 in another, only the first is statistically significant.
Contributing to the answers of Robert Brenan and others: in interpreting the results of regression or other statistics by whatever means - OR, RR, p-values or confidence intervals - you need to account for what it means in the clinical sense (i.e., it goes back to your research question, design, topic area, sample size, etc., as covered by other colleagues). Good discussions of the clinical significance of p-values and CIs are available in the literature.
All in all, in interpretation - don't only look at the statistics - it is the clinical significance and applicability in the real world that also matters.
I agree with Po-Lin and others. First of all, the p value of 0.01 corresponds to the 99% confidence level, so the result is significant at the conventional 95% level as well; although a confidence interval is always nice, we already know his 95% interval will not include 1.0. Additionally, an OR of 4.2 means the odds in one group are 4.2 times those in the other, i.e. a 320% increase in the odds.
As stated by many others, p value versus OR/RR/IRR/etc. is not an either-or proposition. You need a significant p value, but you would not want to report an effect that is statistically significant yet trivially small. Aside from needing to know the scale of your variables, the sample size of your study population, and the biological underpinning of your research question, you want both values to coincide (a strong statistical and predictive relationship), rather than having to choose between reporting effects that are clinically interesting but non-significant, and statistically significant relationships with little clinical merit.
Also, as a spatial epidemiologist I should note that an RR or OR does not always describe a clinical quantity; we can have prescriptive environmental or spatial odds ratios, because an odds ratio is just that: the odds of something occurring versus not occurring. Many facets of epidemiology are applicable to public health but not necessarily to a medical setting. I think we need to keep in mind that "clinical" should really mean epidemiologically proactive significance. Physicians and clinicians in general deserve great credit, but there are other areas where health interventions take place that do not require a clinician, but rather a planner, engineer, agriculturalist, or community group's action.
A test associated with a p value is of interest mainly alongside the effect size; the effect size can be modelled using the beta coefficient, so I think the effect size is what matters. If you have count data you can use the odds of exposure, which can also be retrieved from a logistic regression. If the odds ratio is of importance, you should perhaps also be concerned with the etiological fraction (the attributable risk) (a rough sketch follows).
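As a rough sketch of that last point: the attributable (etiological) fraction among the exposed is (RR - 1)/RR, and when the outcome is rare the OR can stand in for the RR. Both the approximation and the numbers below are assumptions for illustration:

```python
def attributable_fraction_exposed(rr):
    """Attributable (etiological) fraction among the exposed, (RR - 1) / RR."""
    return (rr - 1) / rr

# With a rare outcome the OR approximates the RR, so an OR of 4.2 suggests
# roughly (4.2 - 1) / 4.2 ~= 76% of cases among the exposed are attributable
# to the exposure (assuming the association is causal and unconfounded).
print(attributable_fraction_exposed(4.2))
```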
The OR and CI are very important. However, both they and the P-value can be statistically significant yet practically unimportant. In multivariable models, you can use the explained variance to compute an effect size and power, to check whether your results are practically important (a short sketch follows). This is particularly necessary when you are doing a policy-related study in which you will be making recommendations for change.
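One conventional way to turn explained variance into an effect size is Cohen's f² = R² / (1 - R²); a minimal sketch (the R² value is made up):

```python
def cohens_f2(r_squared):
    """Cohen's f-squared effect size from a model's explained variance."""
    return r_squared / (1 - r_squared)

# Cohen's conventional benchmarks: ~0.02 small, ~0.15 medium, ~0.35 large
print(cohens_f2(0.13))   # ~0.15, a "medium" effect
```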
If you are comparing two different kinds of model, linear regression and logistic regression, the natural measures differ: linear regression reports coefficients with p values, while the logistic model reports odds ratios. If you are comparing different linear regression models, you would use the R square to choose between them; with logistic models you would compare the overall (maximum) likelihoods (one common way to do this is sketched below).
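For nested logistic models, a common way to compare overall likelihoods is a likelihood-ratio test; here is a minimal sketch with statsmodels on simulated data (all variable names and numbers are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Hypothetical simulated data (stand-in for a real dataset)
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({"exposure": rng.integers(0, 2, n),
                   "age": rng.normal(50, 10, n),
                   "sex": rng.integers(0, 2, n)})
p = 1 / (1 + np.exp(-(-2 + 1.0 * df["exposure"] + 0.02 * (df["age"] - 50))))
df["outcome"] = rng.binomial(1, p)

# Nested logistic models: the reduced model drops the exposure term
full = smf.logit("outcome ~ exposure + age + sex", data=df).fit(disp=0)
reduced = smf.logit("outcome ~ age + sex", data=df).fit(disp=0)

# Likelihood-ratio test (1 df: one parameter dropped); AIC is another option
lr_stat = 2 * (full.llf - reduced.llf)
print(lr_stat, chi2.sf(lr_stat, df=1), full.aic, reduced.aic)
```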
The OR tells you the magnitude of the association, while the p-value simply lets you know whether or not your result is statistically significant. You might interpret it as: the odds of having the outcome in group A are 4.2 times the odds of having the outcome in group B (OR=4.2, 95% CI x-x, p-value = 0.01).
The odds ratio gives a more contextual expression of the data at hand, in terms of the sample size and the relationship between the variables. The p-value, on the other hand, usually plays a complementary role, validating the analysed relationship by indicating statistical significance.
Vu Dien, you may be misleading many people here who make the all-too-common mistake of thinking that ORs are the same as RRs. They are, of course, not the same. While an OR does approximate the RR as the incidence of the predicted event becomes increasingly small, they are never identical, and an OR always gives an overestimate (however small) of the RR. I am sure that you already know this; my comment is addressed to those who do not distinguish between risk and odds.
They are both important concepts. Moore's answer extends the discussion to confidence intervals; I ask you to consider effect size as well. There are 7-8 different effect size equations. For instance, I would never use d-biserial and would always use Cohen's d, though d-Cox and d-probit also perform really well (a minimal Cohen's d sketch follows).
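For reference, a minimal sketch of Cohen's d (pooled-SD version; the two samples below are simulated and purely hypothetical):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 50)   # hypothetical group A
b = rng.normal(0.5, 1.0, 50)   # hypothetical group B, shifted by 0.5 SD
print(cohens_d(a, b))          # roughly -0.5
```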
I agree with Patrice - there is also another issue, one of hindsight and foresight. P values can be used in modelling to predict as well as to describe the effectiveness of the experiment you have just run (only 1 out of 20 times would you get this result by chance alone if you repeated the same experiment over and over), versus the odds of an event that has already occurred being projected onto a future bet. With regard to effect size, there are two situations: one concerns the number of persons in the region of interest under the curve, the other concerns numerical characteristics such as cholesterol, diastolic blood pressure, or the weight of a subject. The odds are usually used for the number of persons affected, and the p value (though not in all instances) may be used for the numerical quantity.