I have undertaken a database search and identified the articles. I would like to evaluate the effectiveness of this search. Does the number of included studies identified by both databases count as a measure?
This is not an easy question to answer. Probably the first step would be to compare your two database outputs to see how much overlap there is in the findings, or how many non-duplicated studies each one contributes. If you find a substantial difference, you know that either the search parameters need adjusting or the databases are not comprehensive enough. Perhaps the next step would be to run searches in other, smaller databases to see whether they come up with similar outcomes. Where relevant publications appear, and which journals attract studies of the type you are interested in, depends on the subject of interest.
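As a rough illustration, here is a minimal sketch of quantifying that overlap by DOI. It assumes each database export is a CSV file with a "doi" column; the file names and the column name are placeholders, so adapt them to whatever your reference manager exports.

```python
import csv

def load_dois(path):
    """Read a database export and return the set of normalized DOIs."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["doi"].strip().lower()
                for row in csv.DictReader(f) if row.get("doi")}

# Placeholder file names; substitute your own exports.
db_a = load_dois("database_a_export.csv")
db_b = load_dois("database_b_export.csv")

print(f"Overlap:     {len(db_a & db_b)}")
print(f"Unique to A: {len(db_a - db_b)}")
print(f"Unique to B: {len(db_b - db_a)}")
```

A large number of records unique to one database suggests either that the two search strategies are not equivalent or that the databases differ substantially in coverage.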
Unfortunately, the adequacy of a systematic search cannot be established just by looking at the number of studies in the search yield. There will be cases where a good search yields zero studies and a bad search yields hundreds of studies.
The assessment is a bit more qualitative and takes into account 1) the number and type of electronic databases searched, 2) the search terms used, 3) language and date restrictions (if any), 4) hand searches (where appropriate), and 5) attempts to identify unpublished studies.
Chapter 6 of the Cochrane Handbook for Systematic Reviews of Interventions provides a thorough discussion of the many factors to take into consideration: http://handbook-5-1.cochrane.org/
That said, the number of studies identified in your search yield can sometimes give clues as to whether or not your search strategy was adequate. For example, if you know for a fact that there are 3 landmark trials relevant to your SR but none of them were in your search yield, then you know there is something wrong with the search strategy.
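As a minimal sketch of that sanity check (the PMIDs below are placeholders, not real identifiers, and the yield file is an assumed one-PMID-per-line export), you could verify programmatically that the landmark trials appear in the yield:

```python
# Placeholder PMIDs standing in for the known landmark trials.
landmark_pmids = {"10000001", "10000002", "10000003"}

# Assumed format: one PMID per line, exported from your search yield.
with open("search_yield_pmids.txt", encoding="utf-8") as f:
    yield_pmids = {line.strip() for line in f if line.strip()}

missing = landmark_pmids - yield_pmids
if missing:
    print(f"Search strategy misses {len(missing)} landmark trial(s): {sorted(missing)}")
else:
    print("All landmark trials were retrieved.")
```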
Chapter 9 of the book Painless EBM provides a discussion on how to optimize searches: https://www.wiley.com/en-us/Painless+Evidence+Based+Medicine%2C+2nd+Edition-p-9781119196242
The effectiveness of a search strategy is hard to define. However, a search strategy should be clear, reasonable, and reproducible. Typically, two or more authors screen the records independently against the same inclusion criteria, and any discrepancy can be quantified with an interrater reliability statistic such as Cohen's kappa.
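As an illustration, here is a minimal sketch of Cohen's kappa for two screeners' include/exclude decisions (the decision lists are made-up examples):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' binary include/exclude decisions."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    p1_yes = sum(rater1) / n
    p2_yes = sum(rater2) / n
    # Chance agreement: both say "include" or both say "exclude".
    expected = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (observed - expected) / (1 - expected)  # undefined if expected == 1

# Example: 10 records screened by two reviewers (True = include).
r1 = [True, True, False, False, True, False, True, True, False, False]
r2 = [True, False, False, False, True, False, True, True, True, False]
print(f"Cohen's kappa: {cohens_kappa(r1, r2):.2f}")  # 0.60
```

A kappa near 1 indicates strong agreement; low values suggest the inclusion criteria should be clarified before screening continues.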
Unfortunately, many systematic reviews are available in which the content has been analysed neatly, but the basis for the review, the literature search, is hardly documented.
According to "Cochrane", for a systematic review, articles should also be included that fit less well. But nowadays, this can hardly be done, because thousands of articles can be found quickly.
That's why I think it's important that the following minimum principles should be adhered to:
1) Define the right search terms and note alternative terms, different spellings, singular and plural forms, etc. If you are not certain about the terms, consult a dictionary (or a taxonomy/ontology).
2) Use a minimal set of search operators to limit the hit list, especially the Boolean operators AND, OR, and NOT (see the query sketch after this list).
3) Continuously check the search algorithm for correctness - this is very easy with keyword highlighting in the abstracts in Web of Science - and continuously track the ratio of precision to recall. A systematic review search should aim for high recall.
4) Use multiple literature databases, for example Web of Science, ScienceDirect, PubMed, Wiley, IEEE Xplore, ACM Digital Library, SpringerLink, Emerald, etc. For a comprehensive analysis I recommend paid databases such as Embase, Chemical Abstracts, etc., which are available via STN.
5) The number of hits can be reduced by prioritization, e.g. by restricting the list to frequently cited articles.
6) If there are still too many papers in your document pool, I recommend the use of artificial-intelligence methods; we recently published a paper about this (Open Access), which is available here: https://ieeexplore.ieee.org/document/8718286
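To illustrate points 1 and 2, here is a minimal sketch of assembling a Boolean query from grouped synonym lists. The concepts and terms are placeholders for your own topic, and the exact quoting and truncation syntax varies by database, so treat the output as a starting point:

```python
def fmt(term):
    # Quote multiword phrases; leave single tokens (including wildcards) unquoted.
    return f'"{term}"' if " " in term else term

# Placeholder concepts: synonyms are OR-ed within a concept,
# and the concepts are AND-ed together.
concepts = [
    ["diabetes mellitus", "diabetic*"],
    ["exercise", "physical activity", "training"],
]

query = " AND ".join(
    "(" + " OR ".join(fmt(t) for t in terms) + ")" for terms in concepts
)
print(query)
# ("diabetes mellitus" OR diabetic*) AND (exercise OR "physical activity" OR training)
```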
As others have pointed out, evaluating the effectiveness of a literature search is not easy, because you do not know how many studies you should have been able to identify (i.e., the number of relevant studies in the database). You can evaluate the precision, the percentage of hits from the literature search that are relevant to your research question, and the recall, defined as the number of relevant studies identified by the search divided by the total number of eligible studies. Estimating recall directly is not possible, since you do not know how many relevant studies the database contains. Therefore, you need other ways to identify relevant studies: reference checking (studies cited by and citing the included studies), checking previous systematic reviews for relevant studies, and listing relevant studies you already know. You can then check whether those studies are indexed in the database and why your search may have missed them, and thereby estimate how many relevant indexed studies the search failed to retrieve. Lastly, be aware that not even the search strategies in Cochrane reviews identify every relevant study in a database; if only one known study was not identified by your literature search, you have done a good job searching the database.
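Here is a minimal sketch of these two measures, using a "gold set" of known-relevant studies assembled from reference checking and previous reviews as the denominator for relative recall (the record IDs are placeholders):

```python
def precision(retrieved, relevant):
    """Fraction of retrieved records that are relevant."""
    return len(retrieved & relevant) / len(retrieved)

def relative_recall(retrieved, relevant):
    """Fraction of known-relevant records that the search retrieved."""
    return len(retrieved & relevant) / len(relevant)

retrieved = {"s1", "s2", "s3", "s4", "s5", "s6"}  # hypothetical IDs from the search
gold_set  = {"s1", "s2", "s7"}                    # known-relevant studies

print(f"Precision:       {precision(retrieved, gold_set):.2f}")        # 2/6 = 0.33
print(f"Relative recall: {relative_recall(retrieved, gold_set):.2f}")  # 2/3 = 0.67
```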
According to the study by Bramer WM et al. ("Optimal database combinations for literature searches in systematic reviews"), which investigates the actual retrieval of the original searches for systematic reviews, a combination of Embase, MEDLINE, Web of Science Core Collection, and Google Scholar performed best (overall recall of 98.3%, and 100% recall in 72% of the systematic reviews). These findings suggest that searches for systematic reviews should cover at least Embase, MEDLINE, Web of Science, and Google Scholar as a minimum requirement to guarantee adequate and efficient coverage.
Some additional databases, apart from Embase, MEDLINE, Web of Science Core Collection, and Google Scholar, are:
Cochrane Library (includes the Cochrane Central Register of Controlled Trials)
Cochrane Central (contains MEDLINE trials plus many trials from other, non-indexed sources; limited to randomized and randomized controlled trials)
ClinicalTrials.gov (registers trials that are recruiting and reports those that have been completed; since a majority of the trials in this registry are never published, you will need to search here if you are looking for clinical trial data)
Having said that, a combination of databases seems to bring the best results.