The journal impact factor (JIF) is often used by funding agencies and academic departments to assess the quality of research when awarding grants or academic promotions. This practice has been widely criticized; many argue that the JIF should not be used as the sole criterion for evaluating research quality.
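For context, the standard two-year JIF (as published by Clarivate) is a simple ratio; restated here in plain arithmetic:

JIF of journal J in year Y = (citations received in year Y by items J published in years Y-1 and Y-2) / (number of citable items J published in years Y-1 and Y-2)

The criticisms discussed below largely turn on what this single fraction does and does not capture.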
The role of a specific piece of research in the development of science and/or the solution of a societal problem is very important; an individual researcher's contribution in these respects has to be evaluated scientifically.
Whether it is a joint publication in a high-IF journal or a single-authored publication in a low-IF journal, the contribution has to be measured through in-depth analysis, not by the JIF alone. Dr. F. O. Farid has indicated the difference.
Whether the IF is used or misused in a promotion process depends on how it is applied. If the promotion committee members know the limitations of this measure, they will be careful in using it as a measure of excellence. In any case, the IF should be only one evaluative measure for promotion among others, such as the quality of teaching, participation in academic activities, letters of recommendation from peers, etc.
Promotion committees do not rely only on the impact factor of journals in their decision making; the scientific depth and practical value of the work are also evaluated. In addition, high-impact-factor journals usually accept high-quality articles after considering many issues. On the other hand, I partially agree with you: promotion committees tend to give the impact factor a high weight because it indirectly contributes to enhancing the rank of their universities by encouraging researchers to publish in journals with high impact factors.
"....evaluation of the scientific depth and practical values are also considered (comment of Mohamed EL-Shimy) - Yes, in addition to impact factor, these are significant parameters. Regards
The impact factor is known to present several distortions. First, it is not at all an accurate measure of the quality of a given work; it simply measures whether a given work has found a community interested in that data at that moment. History has shown many articles that lay abandoned until circumstances changed and they became obligatory references. So, in biomedical research, anything related to genetics, cancer, or diabetes will attract many more citations than basic research on, say, the virology of invertebrates. Second, the impact factor can be manipulated: a) the editorial board is encouraged to cite the journal in their own publications in other journals; b) reviewers can be directed to suggest articles from the journal when refereeing a new manuscript; c) an almost unwritten code of conduct says that a researcher should cite at least one article from the journal to which they are submitting their manuscript; d) and worst of all, the editor-in-chief applies not strictly quality criteria but 'commercial' criteria, favoring articles on trending topics and rejecting those that seem unlikely to be cited. Etc., etc. So a main complaint among researchers is that when they get a negative result, even after hard, good work, they face excessive obstacles to seeing the manuscript published. I think we are feeding a monster - the IF.
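To make the manipulation point concrete with hypothetical numbers: the two-year JIF is citations divided by citable items, so a journal with 200 citable items from 2021-2022 that drew 400 citations in 2023 has a 2023 JIF of 400/200 = 2.0. If editorial pressure adds just one extra citation to those items from each of 100 citing manuscripts, the numerator becomes 500 and the JIF rises to 2.5 - a 25% increase with no change in the underlying science.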
I actually recommend that journal metrics be used in the promotion process. Many panels are not using this data. Universities should evolve and use this information, which is freely available. Many universities still cling to antiquated arrangements of local (within-university) and external assessment committees to do work that can be done at the click of a button. I would recommend that they start using platforms such as ResearchGate, Google Scholar, Scopus, etc. when considering promotions.
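As one illustration of how freely this information can be retrieved, below is a minimal Python sketch (my own, not taken from any of the platforms named above) that pulls basic author metrics from the free OpenAlex API, which requires no account or API key. The author ID shown is hypothetical, and the field names follow OpenAlex's documented response schema:

import requests

def author_metrics(openalex_id: str) -> dict:
    """Fetch basic citation metrics for one author from OpenAlex."""
    url = f"https://api.openalex.org/authors/{openalex_id}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()  # fail loudly on HTTP errors
    data = resp.json()
    stats = data.get("summary_stats", {})
    return {
        "name": data.get("display_name"),
        "works": data.get("works_count"),
        "citations": data.get("cited_by_count"),
        "h_index": stats.get("h_index"),
        "i10_index": stats.get("i10_index"),
    }

# Hypothetical OpenAlex author ID for illustration; real IDs can be
# looked up at https://openalex.org.
print(author_metrics("A5023888391"))

A promotion panel could collect the same numbers for an entire candidate list in seconds, which is exactly the "click of a button" point above.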