The hardest part of conducting a systematic review/meta-analysis, at least for novices, is coming up with a good research question. In the healthcare fields, that usually means framing the question in PICO (Population, Intervention, Comparison, Outcome) format. Often, you have a question for which you can't find any (or many) studies. Other times, your question yields too many studies. It can take time to delineate your study eligibility criteria so that your systematic review/meta-analysis is reasonable in scope. Another hard part is understanding the entire systematic review/meta-analysis process. I am attaching a flowchart that summarizes the process.
I will add that the first and most important step is determining the utility and potential public health implications of your intended evidence synthesis. Has this been conducted before? If so, what is the incremental benefit of conducting this piece of work? Ultimately, the ability to link what could be a cumbersome activity with implications for practice is, to me, what makes it worthwhile. To do this, you need to challenge your own thoughts by speaking with experts in your chosen field of study.
Closely linked with that is the need to continue to engage such experts. Again, this ensures that your research question, search strategy and eventual interpretation of your findings are of clinical relevance.
In short, the ability to link needs and impact to practice with methodological rigour is what separates a methodologist from an evidence-based expert.
An additional comment: conducting one's first systematic review--with or without quantitative analysis--can be daunting. Even a targeted search conducted by a research librarian can yield thousands of citations to review and manage!
For me, a critical part of the process is identifying (or creating) a system for tracking and managing search results. It can be something as simple as an Excel spreadsheet or as complex as a dedicated piece of software (though most of the latter require a subscription). Taking time at the start of a project to establish the flow of information from search result to title/abstract review to final decision will pay dividends in the end.
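To make the flow of information concrete, here is a minimal sketch of such a tracking system in Python. All names here (`Citation`, `STAGES`, `advance`) are illustrative inventions, not the API of any particular reference-management tool; a real project would more likely use a spreadsheet or dedicated software, as noted above.

```python
from dataclasses import dataclass

# Screening stages a citation moves through, from search result to final decision.
STAGES = ["retrieved", "title_abstract_review", "full_text_review",
          "included", "excluded"]

@dataclass
class Citation:
    title: str
    source_db: str              # e.g. "PubMed", "Embase"
    stage: str = "retrieved"
    exclusion_reason: str = ""  # recorded so exclusions can be reported later

def advance(cit: Citation, new_stage: str, reason: str = "") -> None:
    """Move a citation to a new screening stage, recording any exclusion reason."""
    if new_stage not in STAGES:
        raise ValueError(f"unknown stage: {new_stage}")
    cit.stage = new_stage
    cit.exclusion_reason = reason

def stage_counts(citations: list[Citation]) -> dict[str, int]:
    """Tally citations per stage -- the numbers needed for a flow diagram."""
    counts = {s: 0 for s in STAGES}
    for c in citations:
        counts[c.stage] += 1
    return counts

# Example: two records from a search; one included, one excluded at screening.
records = [Citation("Trial A", "PubMed"), Citation("Review B", "Embase")]
advance(records[0], "included")
advance(records[1], "excluded", reason="wrong population")
print(stage_counts(records))
```

The design point is simply that every citation carries its current stage and, when excluded, the reason why; keeping those two fields from the very first search result is what makes the final flow diagram painless to assemble.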
Finding a mentor with experience in systematic review methods can make the difference between a positive experience and a negative one. I wish you luck!
In my experience, one of the hardest aspects of the systematic review journey is formulating a question that is actually answerable. The suggestions above, such as using the PICO format (Population, Intervention, Comparison, Outcome), can be very helpful, although the structure needs to be adapted for some types of questions and, conversely, can be rendered meaningless unless applied wisely. If you cannot get the question right, all else is very difficult: specifying structured searches, selecting and critiquing relevant literature, and so on. While there are some fine examples of systematic reviews on challenging topics (hard-to-define or hard-to-specify search terms, diffuse methods, very broad scope), there are also many bad examples, and I would urge a beginner to practice on something simple. For many years I ran a course where students were encouraged to do this, and in most cases simple questions of effectiveness, where both the intervention and the outcome are easily defined, yielded by far the best learning experience. Many students went on to get their work published because they focussed on 'simple' but clinically useful topics. So my suggestion is that if you have a broad topic that is a given, start by undertaking a 'simple' review of a relatively straightforward aspect.
Having completed 50+ Cochrane reviews over the last 10 years, from my experience the most useful advice is to check what has already been done on the topic once you have formulated the clinical question. Replication and duplication of effort is a major waste of resources. This may be helpful.