When you perform multiple statistical tests, the p-values can be adjusted* to control the inflated risk of making a type-I error (false positive). For a very brief overview, see http://rcompanion.org/rcompanion/f_01.html .
Thinking about the simple case where a hypothesis test has a "correct" and an "incorrect" conclusion, it is important to realize that there is always a tradeoff between the risk of making a type-I error (false positive) and the risk of making a type-II error (false negative), for a given data set with a given power, say. The more conservative you are against accepting a false positive, the greater the risk of failing to acknowledge a true positive. There's no universal advice here: there are cases when it is better to accept false positives (an initial screening trial with many treatments, for example) and times when it is better to be more conservative (medical trials where lives and money are on the line).
* Or, equivalently, the alpha value (usually 0.05) could be adjusted.
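As a sketch of what such an adjustment does in practice, here is a minimal pure-Python implementation of the Holm-Bonferroni step-down adjustment (the function name and example p-values are mine, chosen only for illustration; in real work you would use an existing routine such as `p.adjust` in R or `statsmodels.stats.multitest.multipletests` in Python):

```python
def holm_adjust(pvals):
    """Holm-Bonferroni step-down adjustment of a list of p-values."""
    m = len(pvals)
    # indices of the p-values from smallest to largest
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        # the k-th smallest p-value is multiplied by (m - k + 1) ...
        adj = min(1.0, (m - rank) * pvals[idx])
        # ... and forced to be monotone non-decreasing in rank
        running_max = max(running_max, adj)
        adjusted[idx] = running_max
    return adjusted

# hypothetical p-values from four tests
print(holm_adjust([0.01, 0.04, 0.03, 0.005]))
```

You then compare the adjusted p-values to your original alpha (say 0.05); Holm's method controls the family-wise error rate while being uniformly less conservative than plain Bonferroni.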
Unless you are expressly required to use the Mantel-Haenszel odds ratio (M-H OR), I suggest using logistic regression. That way, you'll be able to control for more than one variable at a time and obtain more precise results.
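To illustrate the logistic-regression route, here is a minimal sketch that fits a model by gradient ascent on the log-likelihood and reads off an adjusted odds ratio for the exposure. The data and the variable roles (exposure, confounder) are entirely hypothetical, and the hand-rolled fitting is only for illustration; in practice you would use a statistics package (e.g. `glm` in R or `statsmodels` in Python), which also gives you standard errors and confidence intervals:

```python
import math

def fit_logistic(X, y, lr=0.5, iters=3000):
    """Fit a logistic regression by gradient ascent on the log-likelihood."""
    n = len(X)
    p = len(X[0])
    beta = [0.0] * (p + 1)               # beta[0] is the intercept
    for _ in range(iters):
        grads = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            prob = 1.0 / (1.0 + math.exp(-z))
            err = yi - prob              # gradient term of the log-likelihood
            grads[0] += err
            for j, x in enumerate(xi):
                grads[j + 1] += err * x
        beta = [b + lr * g / n for b, g in zip(beta, grads)]
    return beta

# hypothetical data: columns are (exposure, confounder), y is the binary outcome
X = [(0, 0), (0, 1), (0, 0), (0, 1), (0, 1),
     (1, 0), (1, 1), (1, 0), (1, 1), (1, 0)]
y = [0, 0, 0, 1, 0, 1, 1, 1, 1, 0]

beta = fit_logistic(X, y)
adjusted_or = math.exp(beta[1])  # odds ratio for exposure, adjusted for the confounder
```

The exponential of each coefficient is the odds ratio for that variable holding the others fixed, which is exactly the "controlling for more than one variable" advantage over a single stratified M-H estimate.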
To help you better, please provide more information on the variables you have and what you need to find out.