We are conducting a cross-sectional study. Here are some tables showing substance abuse among different groups of participants according to educational status (e.g. two groups: < primary education and > primary education level).
A relative risk is a ratio between two incidence rates from a longitudinal study.
A ratio between two prevalences, used to analyse an association in a cross-sectional study, is also a rate ratio, but it is not a "relative risk": a cross-sectional study is a point study in which you observe all the elements at the same time, so it is not possible to talk about causality. We cannot talk about exposure and outcome.
Yes, you can use the odds ratio for cross-sectional studies. Form a 2x2 table with education level (the risk factor) in the rows and substance abuse (yes/no) in the columns.
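For illustration (the counts below are purely made up), the odds ratio, and for comparison the prevalence ratio, can be read straight off such a table:

```python
# Purely illustrative counts:
#                          abuse: yes   abuse: no
# < primary (exposed)         a = 30      b = 70
# > primary (unexposed)       c = 20      d = 180
a, b = 30, 70
c, d = 20, 180

odds_ratio = (a * d) / (b * c)                     # (a/b) / (c/d)
prev_exposed = a / (a + b)                         # prevalence of abuse, < primary
prev_unexposed = c / (c + d)                       # prevalence of abuse, > primary
prevalence_ratio = prev_exposed / prev_unexposed   # ratio of the two prevalences

print(f"OR = {odds_ratio:.2f}, PR = {prevalence_ratio:.2f}")   # OR = 3.86, PR = 3.00
```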
Yes, the odds ratio is commonly used in cross-sectional studies. As with all such measures of effect size, a confidence interval should also be reported.
BUT - bear in mind that the odds ratio is a pretty unintuitive measure. It is *a* measure of relative risk, but it is not exactly the same as *the* relative risk, i.e. a simple ratio of two proportions. If we try to interpret it in the way that we would interpret a relative risk, strange things can happen (a small numerical illustration follows the references below). See the attached paper:
Newcombe RG. A deficiency of the odds ratio as a measure of effect size. Statistics in Medicine 2006, 25, 4235-4240.
This and a second paradox relating to odds ratios can be found in section 11.2 of my book
Newcombe RG. Confidence Intervals for Proportions and Related Measures of Effect Size. Chapman & Hall/CRC Biostatistics Series, Taylor & Francis, Boca Raton, FL, 2012. ISBN 978-1-4398-1278-5.
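A small numerical illustration of the problem (the proportions are invented): when the outcome is common, the OR sits a long way from the simple ratio of proportions.

```python
# Invented proportions, for illustration only: a common outcome.
p_exposed, p_unexposed = 0.80, 0.40

rr = p_exposed / p_unexposed                                                      # ratio of proportions = 2.0
odds_ratio = (p_exposed / (1 - p_exposed)) / (p_unexposed / (1 - p_unexposed))    # = 6.0
print(rr, odds_ratio)   # reading 6.0 as "six times the risk" would be quite wrong
```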
Yes, definitely, see the attached paper. It's the most commonly used statistical measure when reporting cross-sectional study results. You can't use the RR here, as you can't measure the risk!
The only problem with the odds ratio is the difficulty people have in interpreting it. The alternative in a cross-sectional study is the prevalence ratio, which has a very simple interpretation. However, modelling prevalence ratios, like modelling relative risks, can be difficult in multivariable analyses, because predicted probabilities can exceed 1. It's worth a try, though. Stata's binreg command models prevalence/risk ratios, but you will find that it doesn't always converge. I've been quite lucky with it (about 80% of models), but some people have seen the reverse.
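For what it is worth, a rough Python/statsmodels sketch of the same kind of model: a log-binomial GLM, roughly what binreg fits. The data and variable names are invented, and recent statsmodels versions spell the link `links.Log` where older ones use `links.log`.

```python
# Rough analogue of Stata's binreg: a log-binomial GLM whose exponentiated
# coefficients are prevalence ratios. Data and variable names are invented.
import numpy as np
import statsmodels.api as sm

abuse   = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1])
low_edu = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
male    = np.array([1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0])

X = sm.add_constant(np.column_stack([low_edu, male]))
model = sm.GLM(abuse, X, family=sm.families.Binomial(link=sm.families.links.Log()))
try:
    res = model.fit()
    print(np.exp(res.params))    # baseline prevalence, PR for low_edu, PR for male
except Exception as err:         # non-convergence / domain problems, just as with binreg
    print("Model failed:", err)
```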
Some excellent answers here, but the original question calls for a simple answer. The odds ratio functions as an estimate of the "true" relative risk, which requires the true incidence data for each group, something which is impossible in a single-measurement cross-sectional study. So the O.R. is the one to use. BUT BEWARE of an O.R. generated from small numbers, and don't forget that the appropriate confidence limits around the O.R. can be useful in deciding if the O.R. is statistically valid (the CL should not include 1.0).
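For completeness, the usual large-sample confidence limits around an O.R. (Woolf's log method) look like this; the counts below are hypothetical:

```python
import math

# Hypothetical 2x2 counts: a, b = exposed with/without outcome; c, d = unexposed.
a, b, c, d = 30, 70, 20, 180

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)       # Woolf's SE on the log-odds-ratio scale
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")   # interval excluding 1.0 ~ p < 0.05
```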
As previously mentioned, the rate ratio (a ratio between two prevalences) is the best choice. However, some software, such as SPSS, provides odds ratios with confidence intervals but does not provide the rate ratio with a CI. That is why so many articles report ORs with CIs instead of rate ratios.
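If your software does not report it, the prevalence (rate) ratio and its CI are straightforward to compute by hand with the standard log method; a rough sketch with invented counts:

```python
import math

# Invented counts: a of n1 exposed have the outcome, c of n2 unexposed do.
a, n1 = 30, 100
c, n2 = 20, 200

pr = (a / n1) / (c / n2)                            # prevalence (rate) ratio
se_log_pr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)      # log-method SE for a ratio of proportions
lo = math.exp(math.log(pr) - 1.96 * se_log_pr)
hi = math.exp(math.log(pr) + 1.96 * se_log_pr)
print(f"PR = {pr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```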
Depending on what you would like to measure, SMRs (standardised mortality or standardised morbidity ratios between two groups) can also be a good option.
Richard, I agree that the relative risk (incidence among the exposed / incidence among the non-exposed) is the best, but it requires incidence rates, and these are not usually available in a cross-sectional study. In a cross-sectional study you are further vulnerable to assumptions about chronological sequence. Mohan is trying to establish a relationship between educational status and substance abuse. This MAY be based on the hypothesis that educational status is the independent variable, but it is also possible that early-onset substance abuse determines educational status in later years, so we cannot fall into that trap. Without a longitudinal basis, we cannot determine the incidence rates of the exposed vs non-exposed groups, and therefore the relative risk (the ratio between the incidence rates) is unavailable. Under cross-sectional conditions, and usually in a case-control study (where 100% of the studied population is not available), the odds ratio is the only safe option.
Tim Sly identifies 'statistically valid' with 'CL should not include 1.0'. This is totally incorrect. If the confidence interval for a ratio measure such as the odds ratio or relative risk excludes the null hypothesis value, 1, that corresponds to saying that the study provided statistically significant evidence, at or beyond the conventional 2-sided alpha level, to reject the null hypothesis (in this case OR = 1) in favour of the alternative (OR not equal to 1). The narrower the CI (as measured by upper limit divided by lower limit), the more informative the study was.
Robert, other than my somewhat loose term "statistically valid", we are saying the same thing. If I calculate (using Miettinen's method) an OR of, for example, 3.46 with confidence limits of 1.61 and 7.43, I am concluding, with 95% confidence, that the population OR is somewhere between 1.61 and 7.43. As the interval does not include 1.0 (as here), I can conclude that in the population the cases are more likely than non-cases to have been 'exposed'. The OR and RR are, strictly speaking, descriptive measures, but a CL that includes 1.0 would not allow me to reject the null hypothesis even if such a test had been carried out.
I think Richard makes a great point here. When is this confusion between prevalence rate ratios and prevalence odds ratios going to stop? I remember reading Zocchetti's 1997 paper on this issue, and it was a good paper. You may all want to refer to this classic again for more insight: http://ije.oxfordjournals.org/content/26/1/220.full.pdf
Personally, I would not use odds ratios for cross-sectional studies; I would just report prevalence rates, since this design can only help us generate hypotheses, which we then test with a longitudinal (e.g. cohort) approach to estimate causality. Moreover, ORs can suggest confounding even when there is none, especially when compared with prevalence rates.
As soon as you start adjusting for confounders, logistic regression is the obvious choice of model, and the results of this analysis are naturally expressed on the scale of either the odds ratio or its log. There is no getting around this. There is no way to convert ORs into RRs that does not depend on some kind of value for either the whole-population risk or the risk for a case with all risk factors negative - either the value based on the same dataset or some hypothesised value.
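As a concrete sketch (invented data and variable names, not any particular study), the exponentiated logistic coefficients are the adjusted odds ratios the model naturally produces:

```python
# Sketch only: invented data and variable names; exponentiated logistic
# coefficients are the (adjusted) odds ratios.
import numpy as np
import statsmodels.api as sm

abuse   = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1])
low_edu = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
male    = np.array([1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0])

X = sm.add_constant(np.column_stack([low_edu, male]))
res = sm.Logit(abuse, X).fit(disp=False)

print(np.exp(res.params))       # adjusted ORs (the constant exponentiates to the baseline odds)
print(np.exp(res.conf_int()))   # 95% confidence intervals on the OR scale
```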
Yes, you can! See practical use in the enclosed articles. You can also control for other relevant variables in a logistic regression analysis. Of course, you have to talk about associations, not causation, given your design.
Article Assessing alcohol use and smoking among patients admitted to...
Article Attitudes towards 12-step groups and referral practices in a...
Don't forget that to use the relative risk you must have the true incidence data for the two groups. If this is NOT available (as in a case-control setting), you use the odds ratio, OR the exposure ratio (the exposure rate for the cases over the exposure rate for the controls).