I did not elaborate in my earlier comment because I thought most people are aware of the h-index and the impact factor. But it seems it is necessary to say a few things.
By and large, the impact factor quantifies the quality of a journal, whereas the h-index does the same for an author. A journal has a higher impact factor if it has more readership and is thus popular. Now, we know that all over the world the scientific community is much larger in the biological and medical sciences. In physics it is not that big; in chemistry there is still a large community. Therefore the impact factors of journals in physics remain low compared with those in the medical and biological fields. Nowadays there is a tendency to launch broad-based journals like "biomedical and materials" with high impact factors, because the audience they cater to is very large. This has become a technique to increase the impact factor of a journal. The classical, well-rated journals in physics, like Physical Review, do not get their impact factors much above 3. Still, they have their place in the society of physicists.
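For concreteness, here is a minimal sketch of how the standard two-year impact factor is computed; the figures in the example are invented purely for illustration:

```python
# Minimal sketch of the standard two-year journal impact factor.
# IF for year Y = citations received in Y to items published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

# Example (invented numbers): 600 citations in 2013 to papers from
# 2011-2012, and 200 citable items published in 2011-2012.
print(impact_factor(600, 200))  # 3.0
```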
As regards the h-index, it does not have much to do with the impact factor of a journal directly. It is only an indicator of how many of your papers are cited, and how many times. It does not depend on the quality of who cites them or which journal cites them. In big groups this h-index tends to rise enormously, because all collaborators invariably cite the group's papers, and this continues endlessly.
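For readers who have not seen the definition spelled out, a minimal sketch of the h-index computation (the citation counts are invented):

```python
# Minimal sketch of the h-index: the largest h such that the author has
# h papers, each cited at least h times.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example (invented counts): five papers cited [10, 8, 5, 4, 3] times.
print(h_index([10, 8, 5, 4, 3]))  # 4
```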
Of course there is a problem for a relatively new entrant to establish himself or herself and find people who can cite his/her work. It seems one also has to be smart in addition to being a good worker.
Nowadays there are so many journals, and the multidisciplinary ones tend to maximize their impact factors because of their increased readership. It is like going to a mall to buy vegetables or mattresses, and thus making malls popular. But specialized journals, however good, may remain low in impact factor. Something has to change.
In my opinion scientometry does give a kind of quantitative information, which is more than qualitative information, but it should not become a fetish, and we should not pretend that it is the "real" or "only" measure of scientific activity.
Quality of work will definitely be reflected in a paper... but all this h-index and impact factor business is about RELEVANCE to current tech/science trends. Maybe your work would not receive any praise now, but what if it was done way before anyone else did it? Initially you would not find readership for most novel works. The typical mass mentality works here too.
The impact factor is a measure (not ideal, but acceptable) of journal quality, connected with a concrete topic segment. Various scientific areas have different numbers of researchers and published papers, so the level of this measure differs greatly and is difficult to compare across fields. But for evaluating young scientists within one field this scientometric parameter is applicable and gives an idea about a published paper.
The h-factor is an attempt to give a picture of how often a scientist's most-cited papers are cited, and is more applicable for scientists with a longer period of activity. It also depends on the scientific field, and the number of co-authors is ignored.
Scientometric parameters are in any case only supporting elements in evaluating the scientific output of a researcher. More important are the opinions of experts in the same field!
I think the impact factor can be misleading in some cases, where some journals encourage or force authors to cite more of that journal's own publications. There are also self-citations.
The impact factor is an attempt to assess the quality of journals. However, it has advantages and disadvantages as well. The h-index is likewise an index to assess the activity of a researcher, even though it does not necessarily reflect that activity for researchers working in very specific fields, who will therefore not have a high number of citations. The same applies to journals and their impact factors.
I confirm the arguments in the answers of V.K. Jindal! But we need some scientometric factors when estimating the scientific output of a scientist applying for a new position. As I mentioned, the opinions of experts in the same field are more important. BUT IN REALITY, THEIR OPINION MUST THEN BE TAKEN INTO ACCOUNT BY LESS AND LESS EDUCATED RESEARCHERS FROM OTHER FIELDS who occupy the chairs in decision-making boards or commissions. The pyramid of scientific boards is wrong: the decisions are taken by people from other scientific areas, while the researchers whose activity is closest are usually (for important scientific positions) far from the real choice of the most suitable applicant.
In this context of the h-factor, I would also like to express a concern raised by authors from the so-called third world, who have a feeling that the same paper, when written sitting in their own country, firstly finds it difficult to get published in good journals, and secondly struggles to find citations. The same paper, when submitted while on a visit abroad, gets attention quicker and easier. This is somewhat the factual state of affairs.
The h-factor, f-factor and other similar factors are attempts to evaluate the scientific output of an author by the citation frequency of his or her most-cited papers. This measure is applicable for scientists with a long period of activity (and bad for young scientists). This scientometric number is not free from disadvantages: it does not take into account differences between various fields of science, the number of co-authors, citations coming from the papers of your own group's scientists, etc.
All in all, my feeling is that these attempts to find a quantitative index for journals and researchers are too misleading and unfair, even if they prove useful in some cases. I won't elaborate on the topics already covered (mainly by Georgi Mladenov and V. K. Jindal), but will add two cents of my own... Mainly, this "scientometric" (loved the word!) approach boosts the "mainstream" (or "fashionable") scientific topics while severely impairing topics that aren't so popular at a given moment; as a consequence, several journals stop publishing (or at least greatly reduce the chance of publishing) in some subfields (for instance, try to publish nuclear data; I'm sure there are plenty of other examples around, too). In the long run, this will have the effect of reducing the research done "off the beaten track", which is where true scientific breakthroughs usually happen...
Like everything else, h-factors need to be used with sense. Their purpose is to compare journals within a field and not between different fields. Obviously a journal in computer science will have a higher h-factor than one in nuclear science, and that implies nothing about the quality of the journals.
I am an associate editor of a CFD journal. We use h-factors for two purposes. One is to compare with other similar journals. The other is to compare between years and see whether the journal is improving or declining. Again, short-term statistics are not very meaningful, but over a few years they give a good indication.
For me, these are just tools or instruments to measure the performance of someone or of a publisher in a global perspective. How they are used and interpreted depends largely on the purpose of the administration.
Let me summarize: the h-index for authors and the impact factor for journals are means of quantifying 'quality', which in fact is not quantifiable. Nevertheless, when we deal with huge amounts of data we need some index to broadly place a scientist, and likewise a journal, at some level, and for such a measure these indexes do make sense. It is somewhat similar to how the grades you score do not necessarily describe the caliber of a student, yet for the majority they do, and we need this. Society demands set procedures, and these indexes lend some transparency, especially in the third world, where the objectivity of evaluators is always questioned. I would say keep these indexes in mind, but do not evaluate on this basis alone, since once you define a measure, people discover means to achieve high ratings. There are silent and true workers who produce excellent quality and devote their lifetimes without bothering about these defined parameters. Let this discussion therefore be dedicated to that small number.
Quality has to be quantified: it's a must in the ISO standards.
I spent years doing that, and I found, for almost every task or job, an index or parameter to quantify activities such as non-conformity management, lab analysis, material review boards, and articles. As far as scientific journals are concerned, two issues could be taken into account:
1) for the h- and f-factors, these figures should be weighted, as they depend on the base (the number of people working on a specific subject) and on the "extroversion" (the number of direct and indirect links, aftermaths and outcomes on other topics) of the subject/topic;
2) the second is long-term references: the weight of a citation should depend on the distance in years (directly proportional to the years elapsed could be a good starting point: if somebody refers to an article issued 10 years ago, that was a milestone), on geographical distance (e.g. researcher working in the same group = 0.5; same university = 2; same state = 3; same continent = 4, etc.), and on the topic's "distance from others" / isolation (measured by the average number of inter-topic or cross references); see the sketch below.
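A rough sketch of how the two weights in point 2 might combine. The geographic weights are the example values above; the fallback weight for farther citers and the multiplicative combination are assumptions of this sketch, not an established metric:

```python
# Sketch of the citation-weighting proposal above. The geographic weights
# are the commenter's example values; the fallback of 5 for still-farther
# citers and the multiplicative combination are assumed here.
GEO_WEIGHT = {
    "same_group": 0.5,      # in-group citations count least
    "same_university": 2,
    "same_state": 3,
    "same_continent": 4,
}

def citation_weight(years_elapsed, geo_relation):
    # Weight grows with the age of the cited article (a 10-year-old
    # citation marks a milestone) and with the citer's distance.
    return years_elapsed * GEO_WEIGHT.get(geo_relation, 5)

# A citation of a 10-year-old paper from another continent:
print(citation_weight(10, "same_continent"))  # 40
# An in-group citation of last year's paper:
print(citation_weight(1, "same_group"))       # 0.5
```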
I'm not sure general measurement criteria are related to the "quality" of a journal's content: as stated in other comments, it strongly depends on the nature of the journal's approach, interdisciplinary or generalist versus specific. Nevertheless, cross-disciplinary journals have a tremendous impact on the possibility of improving the exchange and growth of knowledge; a positive score for that should result mainly in an increase in extroversion (citation by authors working in different topics should be worthwhile, until a plethora of them changes the "extroversion" of the topic itself).
Alberto Dossi, you may try to quantify quality. I am not against it, but the pitfall was pointed out: judging totally on the basis of this quantification. Nature is not that quantified. It is not the tallest tree that is the most qualitative.
My point was just this: quality has a meaning only if you can measure it. That appears quite absurd, but it rests on the etymology and on ancient definitional problems (qualis = the property of being in relation with; Aristotle in the Categories, etc.). Following that, applying an ISO definition results in "fit the requirements" (a bad shortening), and, finally, the definition of the purpose of a publication is the core of the matter: quality depends on what you are looking for in a journal, but if a standard for the requirements is agreed, then a measure exists. (I'm sorry, I've been looking for some objective measurement for years to avoid arbitrary judgment; I'm getting worse and worse...) I'll use a measure to select what to read, I agree; a different and personal judgment is based on other factors (e.g. the definition of beauty by Racine or any other).
We must grant that h is a good idea, but it is too young and therefore unprotected against scams. A dark side of it is that, in order to increase h, people develop fraudulent techniques which lead to the absurd: young scientists demonstrating hundreds of publications and thousands of citations.
As a young scientist, I was encouraged to aim for the journals with high impact factors and to increase my h-factor. When I figured out how they were calculated, I was a bit disappointed that the scientific community attaches so much value to these numbers (e.g. in grant proposals, job applications, and overall scientific performance), while they are really quite one-dimensional.
As quantity is favoured over quality, writing review papers is encouraged, as they may yield a lot of citations. But review papers don't really advance science.
Salami science (cutting your work into several publications rather than writing one comprehensive one, just to have a larger number of publications) is another excess caused by this way of evaluating scientists.
But reading your reactions, many of you are aware of the flaws. I may not have the answer to quantifying quality, but I think the RG score is a step in the right direction, since it includes scientists' activities in areas other than journal publications. Moreover, counting publication views rather than citations could help. Currently I'm doing literature research to orient myself for a new research proposal. While many of the papers I read won't make it into the publications that will follow, I'm still glad that they exist, as they are valuable for the orientation, and publication views would acknowledge that to some extent.
In my view, authentic indexing services (like Scopus and Thomson) are proper indexes to evaluate a journal. Although they are not accurate, you can use their data alongside other data (like the editorial board members, previous authors, and their affiliations) to evaluate a journal. The h-index presented by an authentic indexing service shows the contribution rate of an author and is a good index for evaluating an author.
With respect to the journal impact factor, it can be a good metric for selecting where to publish, but it is also important to consider the self-citation ratio of highly rated journals (WOK provides this information). A high self-citation ratio can be an indication of an artificially inflated IF.
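A minimal sketch of the self-citation ratio check; the counts are hypothetical:

```python
# Minimal sketch: the share of a journal's incoming citations that the
# journal generates itself (counts are hypothetical).
def self_citation_ratio(self_citations, total_citations):
    return self_citations / total_citations

# 150 of 500 incoming citations come from the journal itself -> 0.30,
# high enough to look more closely before trusting the IF.
print(self_citation_ratio(150, 500))  # 0.3
```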
The h-index is a good metric for the impact of a given author's research. But it is also highly influenced by the strength of the research group where you are (and by its publishing politics). A metric measuring the capacity of a researcher by considering only the original contributions of authors (as first author or last author) would complete the information about the researcher's real capacity; a sketch of such a variant follows.
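A minimal sketch of that proposal, assuming a simple paper record with an ordered author list (all names and the record layout are hypothetical):

```python
# Sketch of the proposed variant: an h-index computed only over papers on
# which the researcher is first or last author (record layout assumed).
def leading_author_h_index(papers, author):
    counts = sorted(
        (p["citations"] for p in papers
         if p["authors"][0] == author or p["authors"][-1] == author),
        reverse=True,
    )
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

papers = [
    {"authors": ["A. Smith", "B. Jones"], "citations": 12},        # first author
    {"authors": ["C. Lee", "A. Smith"], "citations": 7},           # last author
    {"authors": ["C. Lee", "A. Smith", "D. Wu"], "citations": 9},  # middle: excluded
]
print(leading_author_h_index(papers, "A. Smith"))  # 2
```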