Explain the circumstances: Sometimes a variable is statistically significantly related to your outcome of interest yet exerts little impact in a clinical or practical sense. For example, you might find that each additional milligram per liter of C-reactive protein in your blood is significantly associated with a decrease in your overall level of happiness, but that the decrease is tiny (say, 0.01%). If C-reactive protein is left out of your model, little bias is introduced, because its concrete impact was small and probably negligible. By contrast, if you saw a 10% average decrease in happiness per 1 mg/L increase in C-reactive protein, omitting that variable from your models would definitely introduce substantial bias.
Assessing the direction: This is easy. Run two sets of models, one with the variable of interest included and one with it excluded, and compare the results. The direction of the bias is straightforward to determine when you compare the two sets directly: look at how the parameter estimates for your other explanatory variables shift between the full model and the reduced model that excludes the variable of interest. The full model provides the context for judging how much of the variability would remain unaccounted for, and in which direction the estimates would move, if you left that measure out.
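The full-versus-reduced comparison can be sketched in code. This is a minimal, purely illustrative example: the data are simulated, and all names and effect sizes (exercise, CRP, a true exercise effect of +2, a true CRP effect of -10 per mg/L) are hypothetical choices, not values from the text. It fits ordinary least squares by hand via the normal equations, then shows how the exercise coefficient changes when CRP is omitted.

```python
# Hypothetical simulation: happiness depends on exercise (+2) and CRP
# (-10 per mg/L), and exercise is negatively correlated with CRP, so
# omitting CRP biases the exercise estimate.
import random

random.seed(42)
n = 500
crp = [random.gauss(0, 1) for _ in range(n)]
exercise = [-0.5 * c + random.gauss(0, 1) for c in crp]
happiness = [70 + 2 * ex - 10 * c + random.gauss(0, 1)
             for ex, c in zip(exercise, crp)]

def ols(y, *xs):
    """Least-squares fit of y on an intercept plus the given predictors,
    solving the normal equations by Gaussian elimination."""
    cols = [[1.0] * len(y)] + [list(x) for x in xs]
    k = len(cols)
    # Build the augmented system [X'X | X'y].
    a = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(k)]
         + [sum(ci * yi for ci, yi in zip(cols[i], y))] for i in range(k)]
    # Forward elimination with partial pivoting.
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(i + 1, k):
            f = a[r][i] / a[i][i]
            for c in range(i, k + 1):
                a[r][c] -= f * a[i][c]
    # Back substitution.
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (a[i][k]
                   - sum(a[i][j] * beta[j] for j in range(i + 1, k))) / a[i][i]
    return beta  # [intercept, slope_1, slope_2, ...]

_, b_full, _ = ols(happiness, exercise, crp)   # full model: CRP included
_, b_reduced = ols(happiness, exercise)        # reduced model: CRP omitted

print(f"exercise coefficient, full model:    {b_full:.2f}")
print(f"exercise coefficient, reduced model: {b_reduced:.2f}")
# The gap between the two estimates is the size and direction of the
# bias introduced by leaving CRP out of the model.
```

Here the reduced-model coefficient is pulled upward, because CRP's large negative effect and its negative correlation with exercise combine into a positive bias; comparing the two fits makes both the magnitude and the direction of that bias visible.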