When doing a meta-analysis for a categorical outcome, is there any guidance as to when it is better to use the odds ratio or relative risk, and when it is better to use the risk difference?
But my question is how to interpret results from the same review in which the relative risk (RR) is not significant (the CI includes 1) while the risk difference is significant (the CI excludes 0).
You only have to be sure that you interpret whichever measure you choose correctly; this is the advice given by Schmidt & Kohlmann (see below). Given that the relative risk is more intuitive, it may be more advisable to use it. At least, this is the advice given by Deeks (see below).
Schmidt, C. O., & Kohlmann, T. (2008). When to use the odds ratio or the relative risk? International Journal of Public Health, 53(3), 165–167. https://doi.org/10.1007/s00038-008-7068-3
Deeks, J. (1998). When can odds ratios mislead? Odds ratios should be used only in case-control studies and logistic regression analyses. BMJ, 317(7166), 1155–1156. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/9784470
I haven't looked at the advice given by Schmidt & Kohlmann and by Deeks: they are probably useful texts for making sure that you interpret either measure correctly, as stressed by Taha E.
However, as regards practical statistical inference, your problem falls under the heading: what to do when one has made two statistical tests of essentially the same hypothesis and obtained (quasi-)conflicting answers? Here theory cannot help you, only subject-matter insight. And one thing more: remember that a statistical test is a STATISTICAL test, not a sharp knife but one as dull as the random error (the width of the CI!) suggests.
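To make the two measures under discussion concrete, here is a minimal Python sketch (stdlib only, hypothetical counts) computing the relative risk and the risk difference from a single 2x2 table, with the usual large-sample Wald-type 95% confidence intervals. Note that within one table the two intervals usually agree about significance; the conflict described above typically arises at the pooling stage of a meta-analysis, where RR and RD weight studies with different baseline risks differently.

```python
import math

def rr_rd_ci(a, n1, c, n2, z=1.96):
    """Relative risk and risk difference with Wald 95% CIs from a
    2x2 table: a events out of n1 (group 1), c out of n2 (group 2)."""
    p1, p2 = a / n1, c / n2
    # Risk difference: CI built directly on the difference of proportions
    rd = p1 - p2
    se_rd = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    # Relative risk: CI built on the log scale, then back-transformed
    rr = p1 / p2
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    return {
        "RD": (rd, rd - z * se_rd, rd + z * se_rd),
        "RR": (rr,
               math.exp(math.log(rr) - z * se_log_rr),
               math.exp(math.log(rr) + z * se_log_rr)),
    }

# Hypothetical counts: 30/100 events in group 1 vs 20/100 in group 2
res = rr_rd_ci(30, 100, 20, 100)
print("RD (estimate, lower, upper):", res["RD"])
print("RR (estimate, lower, upper):", res["RR"])
```

Whether a CI "includes 1" (RR) or "includes 0" (RD) is then read directly off the returned tuples.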