I am observing that different journal indexing services use different criteria for evaluating a scientific journal, e.g. the Impact Factor (Thomson Reuters, now Clarivate), CiteScore, etc.
The Impact Factor (IF) from JCR/Web of Science and the CiteScore from Scopus are based on the same principle: the citations that a journal's recent articles receive, divided by the number of those articles. They differ mainly in the citation window (the IF counts citations to articles from the previous 2 years, while the CiteScore originally used a 3-year window, later extended to 4). Moreover, because Scopus indexes more journals in its database, the very same journal (if indexed in both) may receive a higher score as a CiteScore in Scopus than as an Impact Factor in the Web of Science.
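To make the shared principle concrete, here is a minimal sketch of the citations-per-article ratio behind both metrics. The journal name, article counts, and citation counts are invented for illustration; real IF and CiteScore calculations also differ in which document types and citing years they count, which this sketch ignores.

```python
# Hypothetical counts for one journal, keyed by publication year
# (all numbers are made up for illustration).
articles_published = {2019: 120, 2020: 130, 2021: 140}
# Citations received in 2022 to the articles published in each year.
citations_in_2022 = {2019: 310, 2020: 290, 2021: 260}

def windowed_ratio(citations, articles, years):
    """Citations to articles published in `years`, per article in `years`."""
    total_citations = sum(citations[y] for y in years)
    total_articles = sum(articles[y] for y in years)
    return total_citations / total_articles

# Impact-Factor-style metric: 2-year window.
if_like = windowed_ratio(citations_in_2022, articles_published, [2020, 2021])

# CiteScore-style metric: wider (here 3-year) window.
cs_like = windowed_ratio(citations_in_2022, articles_published,
                         [2019, 2020, 2021])
```

Note how the two numbers differ for the same journal purely because of the window choice; on top of that, the databases count citations from different sets of indexed source journals, which is the other reason the scores diverge.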
These are quite complex issues, because there are many evaluation factors, and individual indexing institutions create incompatible evaluation systems for scientific journals. The underlying problem is the lack of full standardization: some databases that index scientific publications and journals are run by commercial companies which, as competitors, deliberately design different rating systems as an element of competition. As a result, the same journal, and even the same publication, may be assessed differently by different indexing databases.
There are many metrics like the Impact Factor, the source-normalized impact per paper, the H index, ...but in the end these parameters matter little when a journal is well established and prestigious in its field. People typically know the prestigious journals in their fields.
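Of the metrics listed above, the H index has a simple algorithmic definition worth spelling out: it is the largest h such that the journal (or author) has h papers with at least h citations each. A small sketch, with made-up citation counts:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Example: five papers with these citation counts have an h-index of 4,
# since 4 papers have at least 4 citations each, but not 5 with 5.
h_index([10, 8, 5, 4, 3])
```

This also illustrates why such metrics are database-dependent: the citation counts fed into the formula differ between Scopus, Web of Science, and Google Scholar, so the same journal gets different H indices from each.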
The citations received by the papers published in the journal, the diversity of topics, the diversity of contributing scientists from various countries, and the timely publication of issues are some of the points for evaluating the quality of a scientific journal.