Looking for information on quantitative reporting standards for mixed methods articles when bivariate tests are statistically significant, but their confidence intervals are so broad that they suggest the model has failed. Thanks in advance for your input!
I recommend reporting the effect size for the bivariate comparisons (r-squared or Cohen's d). I find these are much more easily understood by most readers and reviewers, and they are easily calculated.
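For instance, with two groups of scores, a quick sketch of those calculations in Python might look like this (the numbers are made up purely for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical example data: scores for two groups (not from the study above)
men = np.array([12, 15, 9, 20, 14, 11, 17, 13])
women = np.array([18, 22, 16, 25, 19, 21, 24, 17])

# Cohen's d with a pooled standard deviation
n1, n2 = len(men), len(women)
pooled_sd = np.sqrt(((n1 - 1) * men.std(ddof=1) ** 2 +
                     (n2 - 1) * women.std(ddof=1) ** 2) / (n1 + n2 - 2))
cohens_d = (women.mean() - men.mean()) / pooled_sd

# r-squared from the point-biserial correlation (group membership vs. score)
group = np.concatenate([np.zeros(n1), np.ones(n2)])
scores = np.concatenate([men, women])
r, _ = stats.pearsonr(group, scores)
r_squared = r ** 2

print(f"Cohen's d = {cohens_d:.2f}, r^2 = {r_squared:.2f}")
```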
In most mixed methods studies, the quantitative results are reported in the same fashion that they would be reported in a standard quantitative article. So, if you are working in a field that routinely reports similar analyses, then you should follow the typical formats used there.
One exception might be if the broad confidence intervals have particular relevance for the rest of your mixed methods study. For example, if they are related to how you use your qualitative methods, then you would want to examine those quantitative results in more detail.
Thank you David and Peter. It is standard practice in public health epidemiology to report odds ratios (and preferably risk ratios) if they are statistically significant. My concern is the reliability and therefore utility of the bivariate test results. While I'm under the impression that an odds ratio has a more meaningful interpretation than a side-by-side comparison (e.g., 48% of men experienced depression compared to 67% of women), I don't know if this advantage is lost when the confidence intervals are too broad (e.g., 95% CI: 1.7 - 9.3).
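For concreteness, this is the kind of calculation I mean, assuming a simple Wald interval on the log odds ratio (the counts below are invented to roughly match the percentages above, not my actual data):

```python
import numpy as np

# Hypothetical 2x2 table (exposure x outcome); counts invented for illustration
#              depressed   not depressed
a, b = 20, 10  # women
c, d = 14, 15  # men

# Odds ratio and Wald 95% CI on the log scale
or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
lo = np.exp(np.log(or_hat) - 1.96 * se_log_or)
hi = np.exp(np.log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.2f}, 95% CI: {lo:.2f} - {hi:.2f}")

# Side-by-side comparison of proportions for the same table
print(f"Women: {a/(a+b):.0%} depressed; Men: {c/(c+d):.0%} depressed")
```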
I don't think "reliability" is quite the right term in this case, because it sounds like you are more interested in "replicability" -- i.e., would you find the same results if you repeated your analyses in another sample from the same population? In general, p < .05 is your guideline there, because a small p-value tells you that results at least this extreme would be unlikely under chance alone, and hence less likely to be a fluke that fails to replicate.
I'm not an expert in logistic analyses, but my understanding is that you must have a relatively large effect in order for it to be significant despite the broad confidence intervals, so that the estimate for men (with its CI) does not overlap with the estimate for women (with its CI).
Also, if you are doing strictly bivariate analyses (no control variables), then consider the equivalent approach of running a t-test, where most of us have been taught to rely simply on the level of statistical significance and be "happy" if the means (in this case, proportions) are different.
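If it helps, a rough sketch of that kind of bivariate comparison of proportions (here using a chi-square test as the proportions analogue of the t-test, with hypothetical counts) might look like:

```python
from scipy import stats

# Hypothetical 2x2 table: rows = exposure group, columns = outcome (depressed / not)
table = [[20, 10],   # women
         [14, 15]]   # men

chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```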
I think we may have disciplinary language differences - in epidemiology, "reliability" is tied to the "precision" of an effect estimate. Wide confidence intervals indicate poor precision (in my case due to low sample size and therefore high random variation) and therefore low reliability. Confidence intervals indicate precision/reliability, while p-values only tell us the degree of confidence in the qualitative result (positive/negative/no correlation). In epi, we push for reporting confidence intervals rather than p-values, because we also want to know the reliability of the effect estimates.
As a few others are following the question, I'll share what an epi/biostats post-doc advised, which I think is similar to David's recommendation in his last paragraph:
Due to the poor precision of the results (i.e. OR = 12.1; 95% CI: 2.4 - 21.8), the effect estimate isn't reliable enough to be useful, despite being more interpretable in epi. A side-by-side comparison of the n(%) experiencing the outcome (Y) in each exposure group (X), accompanied by the p-value of a bivariate test, suffices for a descriptive table in a mixed methods article. So, to combine David and Taylor's advice, report the p-value without the effect estimate or CI but with side-by-side comparisons of counts/percentages instead, and be "happy" that it's significant.
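For anyone who wants to see it spelled out, a rough sketch of assembling that kind of descriptive comparison (placeholder data only, not from my study) could look like:

```python
import pandas as pd
from scipy import stats

# Placeholder data frame; 'sex' is the exposure (X) and 'depressed' the outcome (Y)
df = pd.DataFrame({
    "sex": ["woman"] * 30 + ["man"] * 29,
    "depressed": [1] * 20 + [0] * 10 + [1] * 14 + [0] * 15,
})

# Counts and row percentages for the side-by-side comparison
counts = pd.crosstab(df["sex"], df["depressed"])
percents = pd.crosstab(df["sex"], df["depressed"], normalize="index") * 100

# p-value from the bivariate test (chi-square), reported without the OR or its CI
chi2, p_value, dof, expected = stats.chi2_contingency(counts)

print(counts)
print(percents.round(1))
print(f"p = {p_value:.3f}")
```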