In your experience, what are the main sources of flexibility in meta-analyses? That is, at what stage do the analysts' decisions have the greatest consequences for the outcome?
On my end, I think of two areas: (1) risk-of-bias/quality assessment and (2) handling missing data/values and imputation.
Assessing the quality of included studies is arguably variable and subjective. There is no doubt that an analyst's experience and expertise on the topic matter here. Since reviewers working on the same systematic review and meta-analysis (SRMA) are likely to have unequal levels of experience, expertise, and understanding of SRMA principles, disagreements often occur at this stage. I think this stage is consequential for the end product because being stringent, based on quality assessment, about which studies enter the final analyses is a more conscientious option (at least in certain cases) than simply "lumping" all studies together regardless of quality and then performing subgroup and sensitivity analyses. At the end of the day, SRMA readers tend to base decisions on the overall pooled analyses rather than the smaller, more nuanced ones, so the composition of the overall pooled analysis really should be contemplated.
Dealing with missing data/values, and the strategies implemented in response such as imputation, is also consequential in my view, again because of its arguably subjective nature. For instance, one group may simply discard studies with missing data/values (effect sizes and/or dispersion measures) for a particular outcome, while another may attempt imputation "to make the most of the collected literature." Such a decision may be enough to change the reported statistical and clinical/practical significance relative to the alternative, so it must be detailed and defended in the paper.
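To make that drop-vs-impute point concrete, here is a minimal sketch with entirely hypothetical numbers (not from any real review), contrasting complete-case analysis with one simple imputation rule (median of the observed standard errors) under inverse-variance fixed-effect pooling:

```python
import math

# Hypothetical (effect size, standard error) pairs; None = SE not reported
studies = [(0.30, 0.10), (0.45, 0.15), (0.20, None), (0.60, 0.20), (0.10, None)]

def pool_fixed(pairs):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for _, se in pairs]
    est = sum(w * y for (y, _), w in zip(pairs, weights)) / sum(weights)
    return est, math.sqrt(1.0 / sum(weights))

# Strategy (a): complete-case analysis -- drop studies with missing SEs
complete = [(y, se) for y, se in studies if se is not None]
est_drop, se_drop = pool_fixed(complete)

# Strategy (b): impute missing SEs with the median observed SE
# (one simple choice among many; whichever is used should be defended)
observed = sorted(se for _, se in complete)
median_se = observed[len(observed) // 2]
imputed = [(y, se if se is not None else median_se) for y, se in studies]
est_imp, se_imp = pool_fixed(imputed)

print(f"drop studies: {est_drop:.3f} (SE {se_drop:.3f})")
print(f"impute SEs:   {est_imp:.3f} (SE {se_imp:.3f})")
```

With these toy numbers the two strategies give noticeably different pooled estimates, which is exactly the kind of shift that can flip a significance claim.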
It depends on what level you are asking about. Considering the same dataset, I would say that a lot of inconsistency comes from how you choose to calculate the meta-analytic effect size (fixed-effect vs. random-effects) and how you deal with effect sizes that share dependencies (subgroup analyses, meta-regression, or multilevel meta-analysis). One of these options may be the obvious choice for some fields or meta-analytic questions, but it isn't always, and people make a lot of mistakes (see doi.org/10.1186/s40359-016-0126-3).
With that in mind, I personally think the major source of flexibility is how you build your dataset, i.e., the systematic review you conduct before the meta-analysis. There are a lot of processes there that can produce widely different results between researchers, from the subjectivity of inclusion criteria to the multiple ways of handling missing/incomplete data.
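To illustrate the first point above (fixed-effect vs. random-effects), here is a toy sketch with made-up data, using the DerSimonian-Laird estimator of between-study variance; when the most precise studies happen to have the smallest effects, the two models can disagree substantially:

```python
# Toy (effect size, within-study variance) pairs: the most precise studies
# have small effects, so fixed- and random-effects pooling diverge.
studies = [(0.10, 0.005), (0.15, 0.01), (0.80, 0.05), (0.90, 0.08), (0.70, 0.04)]

def fixed_effect(data):
    """Inverse-variance fixed-effect pooled estimate."""
    w = [1.0 / v for _, v in data]
    return sum(wi * y for (y, _), wi in zip(data, w)) / sum(w)

def dl_tau2(data):
    """DerSimonian-Laird estimate of between-study variance tau^2."""
    w = [1.0 / v for _, v in data]
    mu = sum(wi * y for (y, _), wi in zip(data, w)) / sum(w)
    q = sum(wi * (y - mu) ** 2 for (y, _), wi in zip(data, w))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(data) - 1)) / c)  # truncate at zero

def random_effects(data):
    """Random-effects pooling: tau^2 is added to every study's variance."""
    t2 = dl_tau2(data)
    w = [1.0 / (v + t2) for _, v in data]
    return sum(wi * y for (y, _), wi in zip(data, w)) / sum(w)

print(f"fixed-effect:   {fixed_effect(studies):.3f}")
print(f"random-effects: {random_effects(studies):.3f}")
```

Adding tau^2 to every weight flattens the weighting, pulling the random-effects estimate toward the unweighted mean of the study effects, so which model you choose (and justify) genuinely changes the headline number.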