First of all, I tend to lump unclear risk of bias in with low risk of bias. Why penalize a study for what is often a reporting shortcoming rather than a shortcoming in how the study was conducted? For example, the Cochrane risk-of-bias assessment tool asks how the randomization procedure was done, and hardly anyone reports that nowadays.
I see two approaches to dealing with studies at high risk of bias. One, you could calculate the overall ES with and without those studies and then discuss the difference, if any. Two, and my preferred way, you could run a subgroup analysis comparing the subgroup ESs for the studies with and without a high risk of bias. If there is no statistical difference in ES between the two subgroups, you can say that the studies with a high risk of bias are not skewing your analysis.
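A minimal sketch of the first approach, assuming made-up effect sizes, variances, and risk-of-bias flags (all values hypothetical) and a standard inverse-variance fixed-effect pool; the subgroup comparison is sketched further down the thread:

```python
import numpy as np

# Hypothetical data: study effect sizes, their variances, and a
# high-risk-of-bias flag for each study (all values made up).
yi = np.array([0.42, 0.51, 0.30, 0.85, 0.78])    # effect sizes
vi = np.array([0.04, 0.06, 0.05, 0.10, 0.09])    # within-study variances
high_rob = np.array([False, False, False, True, True])

def pooled_fixed(yi, vi):
    """Inverse-variance (fixed-effect) pooled estimate and its SE."""
    w = 1.0 / vi
    return np.sum(w * yi) / np.sum(w), np.sqrt(1.0 / np.sum(w))

est_all, se_all = pooled_fixed(yi, vi)
est_low, se_low = pooled_fixed(yi[~high_rob], vi[~high_rob])
print(f"All studies:   ES = {est_all:.3f} (SE {se_all:.3f})")
print(f"Low-risk only: ES = {est_low:.3f} (SE {se_low:.3f})")
# If the two pooled estimates are similar, the high-risk studies
# are not driving the overall result.
```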
I agree with Gordon L Warren. Be very careful that you are rating the study's actual risk of bias, not the clarity or writing style of its risk-of-bias reporting. Both of Gordon's approaches are excellent and worth doing. A third approach people use is to exclude studies that clearly have a high risk of bias from the meta-analysis. Running the analyses under these different approaches can help you arrive at a robust, defensible result.
If studies with a high risk of bias are included in your pooled analysis, you will likely see substantial heterogeneity (i.e., a high I-squared value).
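To make the I-squared point concrete, here is a short sketch (same hypothetical data as above) computing Cochran's Q and I² = max(0, (Q − df)/Q) × 100 from inverse-variance weights:

```python
import numpy as np

def heterogeneity(yi, vi):
    """Cochran's Q and I-squared from inverse-variance weights."""
    w = 1.0 / vi
    pooled = np.sum(w * yi) / np.sum(w)
    q = np.sum(w * (yi - pooled) ** 2)   # Cochran's Q
    df = len(yi) - 1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Hypothetical effect sizes and variances
yi = np.array([0.42, 0.51, 0.30, 0.85, 0.78])
vi = np.array([0.04, 0.06, 0.05, 0.10, 0.09])
q, i2 = heterogeneity(yi, vi)
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```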
I also agree with Janni, Gordon, and Ahmed; it is better to exclude these studies.
If you want to retain them, may I suggest performing "subgroup analyses" (e.g., one analysis including only the studies at low risk of bias and another including only those at high risk of bias).
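One simple way to test whether the two subgroup estimates differ, under a common-effect assumption and with purely hypothetical numbers, is a z-test on the difference between the pooled subgroup estimates (standard meta-analysis packages report an equivalent Q-between test):

```python
import numpy as np
from scipy import stats

# Hypothetical data with a high-risk-of-bias flag per study
yi = np.array([0.42, 0.51, 0.30, 0.85, 0.78])
vi = np.array([0.04, 0.06, 0.05, 0.10, 0.09])
high_rob = np.array([False, False, False, True, True])

def pooled_fixed(yi, vi):
    """Inverse-variance pooled estimate and its SE."""
    w = 1.0 / vi
    return np.sum(w * yi) / np.sum(w), np.sqrt(1.0 / np.sum(w))

est_lo, se_lo = pooled_fixed(yi[~high_rob], vi[~high_rob])
est_hi, se_hi = pooled_fixed(yi[high_rob], vi[high_rob])

# z-test for the difference between the two subgroup estimates
z = (est_lo - est_hi) / np.sqrt(se_lo**2 + se_hi**2)
p = 2 * stats.norm.sf(abs(z))
print(f"Low-risk ES = {est_lo:.3f}, high-risk ES = {est_hi:.3f}")
print(f"Difference: z = {z:.2f}, p = {p:.3f}")
```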
Another suggestion:
Please perform a "leave-one-out" sensitivity analysis, excluding individual studies from the meta-analysis one at a time.
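A minimal leave-one-out sketch (hypothetical data; fixed-effect pooling for brevity):

```python
import numpy as np

# Hypothetical effect sizes and variances
yi = np.array([0.42, 0.51, 0.30, 0.85, 0.78])
vi = np.array([0.04, 0.06, 0.05, 0.10, 0.09])

def pooled_fixed(yi, vi):
    """Inverse-variance pooled estimate."""
    w = 1.0 / vi
    return np.sum(w * yi) / np.sum(w)

# Re-pool the remaining studies after dropping each study in turn;
# a large swing in the estimate flags an influential study.
for i in range(len(yi)):
    keep = np.arange(len(yi)) != i
    print(f"Without study {i + 1}: pooled ES = "
          f"{pooled_fixed(yi[keep], vi[keep]):.3f}")
```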
I am not sure that exclusion is the best approach, because it amounts to applying a dichotomous quality score of 0 or 1. There has been a move away from univariate quality scores (e.g., Juni et al., 1999, JAMA), but such studies have used the flawed approach of stratifying by quality and then showing conflicting results across strata, which they attribute to quality. The problem is that quality strata contain a different mix of study precisions and effect magnitudes, so quality ultimately has little to do with the effects within strata. The next proposal was to use quality components without a score (Greenland, Biostatistics, 2001), but the problem is that there is no way to determine the effect of any component deficit on the direction or magnitude of a study's effect. Thompson et al. (Int J Epidemiol 2011;40(3):765-777) have proposed such a method, but in my view it fails substantially, as they propose imputing new ESs and SEs that are simply not imputable.

I have proposed that we use the quality effects model instead, since it does not require associating quality scores with the direction or magnitude of effect, and the subjectivity of quality scores does not matter so long as they carry some information value; this is not bias quantification at all. You can try it out: it is built into the meta-analysis software MetaXL (www.epigear.com), and the paper is attached.
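To illustrate only the core intuition of letting quality modulate study weights, here is a deliberately naive sketch with hypothetical quality scores. Note this is NOT the actual quality effects model: that estimator also redistributes the weight subtracted from lower-quality studies, which this sketch omits (see the MetaXL documentation and the attached paper for the real method):

```python
import numpy as np

# Hypothetical effect sizes, variances, and quality scores in [0, 1]
yi = np.array([0.42, 0.51, 0.30, 0.85, 0.78])
vi = np.array([0.04, 0.06, 0.05, 0.10, 0.09])
qual = np.array([0.9, 0.8, 0.9, 0.4, 0.5])

# Naive quality weighting: scale each inverse-variance weight by the
# study's quality score. The real quality effects model additionally
# redistributes the weight removed here, which this sketch omits.
w = qual / vi
est = np.sum(w * yi) / np.sum(w)
print(f"Naive quality-weighted ES = {est:.3f}")
```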