You can take the academic high ground and say that no metrics matter. Or you can take the practical one: the Thomson Reuters JCR Impact Factor, and the h-index calculated from journals indexed in that registry, are what is actually used in academic evaluations, e.g. for awarding tenure and promotions.
That is a choice to make, but there is little doubt that the other impact factors are a waste of time. They are generally run by predatory publishers whose journals cannot get indexed in SciVerse Scopus or Web of Science.
In my experience, the Thomson Reuters JCR Impact Factor report is the most reliable and authentic one. For the h-index, SciVerse Scopus and Web of Science should remain the priority.
The Thomson Reuters JCR Impact Factor is calculated within a few short years of a paper's appearance, so it usually reflects the fashions of contemporary science. Real evaluation of scientific truth needs time; in particular, findings that run counter to the contemporary consensus are often only properly appreciated more than ten years after they appear.
The answers above outline the Impact Factor issue fairly well. I wasn't aware of the Innospace services, but from what I gathered it is an attempt to rank open-access journals, which may or may not have an Impact Factor. For an emerging researcher the value of this service is very low.
For the moment I would not bother with these rankings; I would go with the ISI metrics and, above all, with a journal's contribution to your h-index. Not all IF journals are the same: open-access publications TEND (at least for the moment) to contribute less to our h-index, because they are not yet well distributed and accepted within the scientific community.
This is a really worrisome development. SJIF and others seem to have jumped on the predatory-publishing train, issuing impact factors to journals that would otherwise have either no impact factor or one so small that it attracts no attention. Because the scale of these metrics appears to be different, with SJIF values much higher than typical IFs, this is misleading and in fact false advertising. It is like mixing metric and non-metric systems without stating the units. The journals that pay for the evaluation profit from pretending to a "higher" impact factor than the ISI metrics would assign, simply by using a different scale. Most of all, of course, the company that issues these metrics profits. A single universal standard is necessary to prevent false pretenses and fraudulent metrics from taking over.
We can only speculate about the originality and quality of the different impact factors, because we have no direct knowledge of how they are produced. I usually check the papers published in a given journal to find out whether they are of good quality: I examine their content and methodology, and only then do I submit my own paper. Indexing-service logos can be copied and placed on a journal's website, but in the end the one who determines quality is you.