Which is more productive and effective in Engineering: the Google Scholar h-index or the ISI Journal Impact Factor?
Hi, all! In a similar discussion I said that these indexes are needed more by our bosses and employers than by us, the researchers. We can easily spot a paper that has merit and a paper that is worth nothing, while our employers need a "better" measure. In fact, I am asking another question: how long are we, THE researchers, going to let the publishers of our papers make money from us, while we get nothing for our work?
These two characteristics measure different things. The IF is a journal characteristic, while the h-index is an attempt to measure the productivity of a researcher through the citation distribution of his or her publications over the whole period of activity.
The h-index summarizes the distribution of citations received by a researcher's papers: it is the largest number h such that h of the papers have each received at least h citations. Citations are accumulated over the whole period of activity.
The impact factor is measured over one, two, or five years, and refers to one average article of the journal under study.
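To make the h-index definition above concrete, here is a minimal sketch in Python; the citation counts are invented for illustration:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts in descending order and advance the rank
    # while the count at that rank still meets or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical career: citation counts for one researcher's papers.
papers = [25, 8, 5, 4, 3, 1, 0]
print(h_index(papers))  # -> 4: four papers have at least 4 citations each
```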
The ISI IF is a measure of a journal, but it is better known to administrators at all levels.
The h-index (Hirsch number) is an attempt to measure individual scientific output over the whole period of activity; unfortunately, it is bad for young scientists and for evaluating recent activity.
I think the effectiveness depends on the subject you want to evaluate, i.e., the author or the journal. The h-index, I think, is used for evaluating one author and his/her best publications and citations, while the IF is for evaluating one journal.
Many academic institutions rate applicants for a new position by h-index, as it better describes the total volume of work, not just the most cited paper. Yet you usually need to provide whichever figure of merit is asked for...
Hi Adeola,
from my point of view the Google Scholar h-index is more productive because it encompasses all kinds of journals. Many research studies are not published in the ISI journal database but deal with very interesting and meaningful research results.
ISI and the Web of Science are very restrictive. I am not denying the value of the papers published there, but... researchers publish very valuable papers in non-ISI journals. Also, ISI Web of Knowledge is too expensive for a researcher from a low- or medium-income country!
Boris, I like your answer, and especially the question of its applicability to administrative decisions about granting an applicant a scientific position. In the case of the IF, it is really a measure of the quality of a journal; during review, two people recommend publishing your paper and in this way join you to the circle of authors of that journal, with their reputations. The number of papers required for academic positions, for example Assoc. Prof. and Professor, is 20-25 or 40-45 respectively. In this way, mistakes and friendly or hostile reviewers are not fatal.
The h-index is also suitable, if we apply it to all candidates for the position. It is better for longer careers and not so good for young scientists. Another disadvantage is that the measured activity may not fall within the last 5-6 years. It is also disputable whether a scientist with more publications (for example, books and book chapters) should be rated lower than a competitor with only journal publications and equal citations.
I agree with Boris's opinion, since these measures matter mostly to bosses and employers. I also agree with Luminita's opinion that the Google Scholar h-index is more productive because it encompasses all kinds of journals. From my experience of publishing in ISI IF journals, it is too difficult and takes too long to get a paper published. On top of that, people from non-English-speaking countries find it especially difficult to get published.
Something I would like to share as an academic: recently I scored an h-index of 1 when only papers from Web of Science / Web of Knowledge were considered, but in Google Scholar I scored an h-index of 4 (there, papers from Web of Science and Scopus are both used in the calculation). Here you can see the difference between the two types of h-index. Please give your comments; I appreciate them.
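The observation above is easy to reproduce: the h-index depends entirely on which citation counts a database reports for the same papers. A minimal sketch, with all counts invented for illustration:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return max((r for r, c in enumerate(ranked, 1) if c >= r), default=0)

# The same five papers, with the citation counts each database reports.
wos_counts     = [1, 1, 0, 0, 0]  # narrower coverage: only indexed journals
scholar_counts = [6, 5, 4, 4, 2]  # broader coverage: books, theses, conferences

print(h_index(wos_counts))      # -> 1
print(h_index(scholar_counts))  # -> 4
```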
Of course the Google Scholar index is better because it makes us look better than the ISI WoS h-index. Right :)?
Google Scholar is a better measure of individual scientific research. I agree with Mr. Boris.
I think the ISI Journal Impact Factor shows the potential of a new paper during its first two years. Later the citations are much more important, as they show approximately the interest of other researchers in your paper. If they can build on it they will normally cite it (except in some special cases that are too ugly to discuss here). Lots of citations will result in a large h-index. It can happen that somebody publishes poor, unfounded, uninteresting results in a journal with a high ISI Impact Factor, and it will be seldom cited. The opposite is also possible: a low impact factor, but a single paper is well cited. So both numbers should be used. The ISI Journal Impact Factor should be used for papers younger than two years to show their potential; later the impact factor is useless, and instead the citation count, and thus the h-index, gives you a better impression of the importance of a given researcher.
But I also notice that it can happen that revolutionary papers are not much cited for 10-20 or more years, as researchers do not realize their importance or simply do not understand the message. For example, the EPR paradox about quantum entanglement was quite difficult, and KS theory from 1964 only picked up in 1986-88 and became productive from 1993-96, resulting in a Nobel Prize.
I guess that, more than these numbers, original and useful research contributions are what matter at the end of the day. The numbers will keep on changing with awareness and application of the problem at any given point in history.
The IF and the h-index are two different measures, and asking which is more effective is not the right question. One parameter tries to summarize the citations of a researcher's most cited papers (h-index), while the other tries to evaluate the quality of the journals in which an author has published, thereby joining him to the circle of those journals' authors and their reputations (IF). The latter is better known and more widely distributed: in RG profiles, the IF is given as a scientific characteristic of each user.
Hi!
My opinion about the h-index is the same as Boris Kompare's. We are scientists; first of all, we need a lot of time to work in the laboratory. When I started my work at the university 14 years ago, I needed 5% of my time for bureaucracy and 95% for investigation. Now it is the other way around. I don't know what will come next.
Oops! Bureaucracy!?!? Dear Jaroslaw, the older we get, the more we have to manage. That said, it means that for my research work I have to employ junior scientists; the time (5%, as you say) left for research is when I brainstorm with my research group. All other time mysteriously vanishes into lecturing (this is OK) and paperwork (this is definitely not OK). One has to realize that a day has only 24 hours (but then there is still the whole night left for work!!!)...
Hi Adeola,
I don't understand your meaning; the h-index and impact factor are different things from Google Scholar, and it can be argued that Google Scholar is more general than the other scientific indexes. On the other hand, some indexes such as Engineering & Technology Library, CABI, DOAJ, JCR, ISI, .....
are more reliable and established.
No, I'm thinking only about everything we have to do "around" our basic professional duties. The h-index is one of these things. If you want to have a lot of citations, the most important thing is the title of the publication. It doesn't matter what is inside; the title should be the most general one you are able to come up with.
Dear all,
what do you think about the expression PUBLISH OR PERISH? Do you feel stressed about it? My colleagues and I are really stressed about this!
Hi Luminita,
Could you say more about the expression and your worry? When you have experimental data, if you don't publish it promptly, it will soon perish. So whenever you have results, look up some similar references, discuss your results against them, and then get published in a suitable journal.
Hi Yangchao, Hi Jacek,
Indeed the word 'perish' refers to the necessity to publish 'at any cost'. The main expectation of the university management is that we publish papers in highly respected academic peer-reviewed journals. We live in a "ranking atmosphere". In my country, research performance is not explicitly required and paid for, but the management pays special attention to research outcomes and the development of an academic professional career. In other words, academics are paid for teaching but are expected to deliver research outcomes. I largely agree that the quality indicators in research are the quality of the journals where articles are published and the citations-per-publication rate. Co-operation in international research projects is equally important. This is the way to gain visibility, or notoriety, in the national and/or international environment. Unfortunately, high performance in teaching and research activities is expected without any financial support, though.
http://zmdm.teset.sumdu.edu.ua
The h-index is an index that attempts to measure both the productivity and impact of the published work of a scientist or scholar. The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other publications. The index can also be applied to the productivity and impact of a group of scientists, such as a department or university or country, as well as a scholarly journal.
The impact factor of an academic journal is a measure reflecting the average number of citations to recent articles published in the journal. It is frequently used as a proxy for the relative importance of a journal within its field, with journals with higher impact factors deemed to be more important than those with lower ones.
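For concreteness: the standard two-year impact factor for year Y is the number of citations received in year Y by items the journal published in Y-1 and Y-2, divided by the number of citable items published in those two years. A minimal sketch in Python, with invented numbers:

```python
def impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
    """Two-year impact factor: citations in year Y to items from Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: in 2012 it received 600 citations to the
# 150 + 130 citable articles it published in 2010 and 2011.
print(round(impact_factor(600, 150 + 130), 3))  # -> 2.143
```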
The Impact factor is a bibliometric figure, serving librarians when buying journals.
Of course, good journals are preferred for good papers, but there is no rule.
In the ocean of citations, who's citing what is more relevant than the number of citations itself. That global number of citations is useful for librarians when buying journals for the library.
Because it is free, the Google Scholar indexes are also useful in classifying universities: webometrics.info and the Shanghai ranking give similar results, with the advantage of simplicity. Quality matters and is visible regardless.
There is the additional problem of promoting good-quality research, where various barriers appear for young and foreign researchers, beyond the question of financial support.
To Luminitza: The words "Publish or Perish" should not be taken literally. They are an idiom in English because both start with a P, but what it means is that if you don't publish you don't survive in academia. And in my view this is the right thing to do, and I have done it myself. This actually reflects a situation which is sine qua non in a university environment: if you do not do research (and publish the results) it means that you stagnate. If you stagnate you cannot survive in a real university environment. You need to be up-to-date with your field, and I don't see a way of being up-to-date unless you participate in continuing research.
Yes, Publish or Perish is stressful, but those who enter the world of research do not go there because it's easy. I fully agree with the idea that in order to get good students university staff has to be up-to-date, involved in publishable research.
The research has to be genuine, and of course not plagiarism; unfortunately, I hear that in Romania plagiarism is a 'negligible' crime, and even the Romanian President was accused of plagiarism...
Hi all researchers,
In my view, the differences between the Google Scholar h-index and ISI journal impact factors are as follows:
1. The Google Scholar h-index represents the impact of the author of particular articles, while the ISI impact factor represents the impact of all the articles in a journal.
2. The Google Scholar h-index grows with the individual citations of the researcher, while the ISI impact factor grows with the citations of the journal.
3. With the help of the Google Scholar h-index a researcher can become recognized at the international level, while publication in ISI journals improves only the journal's impact factor.
4. The Google Scholar h-index represents only individual growth, while the ISI impact factor represents the growth of the area covered by the journal.
Dr Agarwal: In principle you are right, but the reality is that many papers do not appear in Google Scholar for various reasons. From my publication list, about half are missing. I have no idea why, and I also don't know how to correct it. Moreover, I care very little about the h-factor.
ISI journals have official impact factors and it is good to have a few of those in the resume. Nevertheless, some professional journals only cater to small communities and these would have small impact factors. But what is important really is the citation record. If your work is cited it means it is relevant to others, and this is important.
Hello,
to Rafael: it was about our Prime Minister, not our President! And yes, plagiarism is a real issue, but not only in Romania. I have solid information about famous cases of plagiarism elsewhere in Europe.
Now, coming back to our topic, I would like to inform you that ResearcherID from Web of Science helped found the ORCID platform. ORCID states that it "is an open, non-profit, community-based effort to provide a registry of unique researcher identifiers and a transparent method of linking research activities and outputs to these identifiers. ORCID is unique in its ability to reach across disciplines, research sectors, and national boundaries and its cooperation with other identifier systems." (please see http://about.orcid.org/about)
Dear Drs. Manory & Agarwal,
I agree that Google Scholar is not complete; some publications or journals are not indexed by it. But I think the citation counts in Google Scholar are more accurate than those from other sources, such as ISI, ScienceDirect, or ACS. Those databases normally only include citations from their own holdings. That is to say, if your paper published in ScienceDirect is cited by a paper from ACS, ScienceDirect will not count it. But Google Scholar is more complete, covering various databases. You can register an account in Google Scholar and add your publications manually; then, whenever your paper is cited and indexed by Google Scholar, you will receive an email from Google telling you that your paper has been cited. This is a good way to track citations of your work.
Hi Jacek,
Disqualifying the 20% lowest-evaluated proposals is really ruthless. This rule is actually inhumane and does not respect researchers. We all know how hard it is to write a proposal and submit it to a funder. If you don't get funded this year, it doesn't mean you are bad, and you may get funding next year. I think everyone should have an equal opportunity to submit grant applications freely. This is basic respect for researchers and scientists.
Hi Luminita,
Yes, you are right. In the USA, I think the situation is similar. Researchers or professors are paid by the university only for teaching, not research. However, the department cares more about research outcomes, such as publications, books, and patents. In the US, the fact is that if a professor gets funding from any source, the university and department will apply an overhead charge to the funding. The overhead can sometimes be as high as 50%, which means that if you get 1 million in funding, the department and university will get 0.5 million of it. So the actual money you can spend on your research is about half of your funding. That explains why the department and university care more about research outcomes: they get money from them. And this money occupies a large portion of university budgets. So many professors pay the least attention to teaching and instead spend a lot of time writing proposals and applying for grants all the time.
In the US, if you don't have funding for some years, the university cannot get money from you and you will eventually be fired, no matter how well you teach and how much students like you. So I would say I will publish my data "at any cost"; only when your work is recognized nationally and internationally will you have a better chance of getting funding support!
To all those who answered on how to update my publication list I am grateful, and I apologize that I don't mention all the names. I will try to follow up on this, although it means a lot of work... To Luminitza I apologize for confusing the President with the Prime Minister, but the general idea is that in my eyes plagiarism is a very cruel crime, and someone committing this crime should be in jail and not running the country. But this is a side issue to the discussion at hand.
I disagree with those saying that professors are paid only for teaching. A normal teaching load for professors in many countries is about 4-6 hours a week. (Not in Australia). The teaching load is such because the professors are expected to be involved in research and publications. That activity also often pays for additional employment during the non-teaching periods. In science and engineering professors can employ themselves during the summer on their own grants.
Btw, Australia, despite being considered a very developed country, has hard conditions for its professors, and many have a high teaching load despite research activity. Also, the ARC grants are distributed only once a year and the peer-review process is flawed. Not like what I hear about Poland, but still, most grants go to those who have already won grants, and the chances of getting one are about 10%...
No penalty though for not being successful.
@ Yangchao -- just a possible correction. I am not sure about UMD, but many (I should say most) universities in the US charge overhead of 50%+ (51.2% for UofA) on "indirect costs" only, which involve salaries of personnel, daily running expenditure, etc., but exclude capital equipment. Capital equipment is whatever costs more than $5K (a movable yardstick, different in many places). So unless your $1M is all for indirect costs, I don't see how UMD takes away half of it. Depending on the size of the budget and the duration, universities do not get more than $5-10k/month in overhead from each project (which is a very gross ballpark estimate). In any decent city in the US that kind of money will not even get you the space you need to do the research, let alone pay for insurance, safety, water, electricity, and the other overhead-related facilities that we get. I don't agree that this is why researchers are spending less time teaching and more time writing proposals.
Sorry for this off-topic post, but I had to correct this one.
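To make the arithmetic of this correction concrete, here is a minimal sketch assuming, as described above, that the overhead rate applies only to a base that excludes capital equipment; the rate and dollar figures are hypothetical:

```python
def split_award(total_award, overhead_rate, capital_equipment):
    """Split a total award into direct costs and overhead, assuming the
    overhead rate applies only to the base that excludes capital equipment:
    total = direct + overhead_rate * (direct - capital_equipment)."""
    direct = (total_award + overhead_rate * capital_equipment) / (1 + overhead_rate)
    overhead = total_award - direct
    return direct, overhead

# Hypothetical $1M award at a 51.2% rate with $100k of capital equipment.
direct, overhead = split_award(1_000_000, 0.512, 100_000)
print(f"direct: ${direct:,.0f}, overhead: ${overhead:,.0f}")
# -> direct: $695,238, overhead: $304,762 (about 30% of the total, not half)
```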
Hi all, I am addressing Rafael's statement on teaching load.
Normally, a professor should have his own method to teach, so the research/teaching barrier should be fluid.
Otherwise, in the absence of a specific method of the professor's own to teach, it is no longer about higher education, peer review, citations, and such, but rather a public-school approach.
I maintain my previous view on the bibliometric value of the "impact factor" and "influence factor".
@ Palash ---- Thanks for your correction. Your description is more accurate than mine. Yes, it is the overhead charged on the indirect-cost base. The actual money professors get also depends on how many co-PIs are associated with the grant. In my case, a professor from our department told me that he got less than half of the grant due to overhead and other costs; I think what he meant might be that some other co-PIs shared the grant funding with him! I also agree that the duration counts for the cost. But I did see one professor at UMD fired for having no grants, and he was the best professor I know; we all enjoyed his class so much! At his farewell party we all complained about the university rules. At UMD the teaching load is normally 50%, and the other half is research. But if an assistant professor does a decent job on teaching and not a good job on research, he will be fired. Instead, if the assistant professor gets several grants for his/her research, he/she will be safe for tenure, even if his/her teaching is bad.... The competition for grants in the US is really fierce, and the university counts research grants the most, especially for assistant professors.
@ Rafael ---- In the US, we have similar situations. Professors can pay themselves from their research grants during the summer, but the salary they pay themselves cannot exceed their average salary during the non-summer semesters. More precisely, it cannot exceed 1/3 of their annual salary, and the summer period in the US is three months, so the amount per month during the summer is roughly equal to their salary during the semesters. And the salary for professors in academia is far less than salaries in industry. For a professor, I think the best benefits are job security (after tenure, of course) and respect from others, as well as self-accomplishment.
@ Palash, please correct me if I am not accurate or am wrong.
@Yangchao, this is a whole different can of worms that you have opened. I am worried this discussion will become argumentative and drag the focus away from the original question.
UofA, like many other universities in the USA, is not a state university and survives solely on the revenue it generates through education, research grants, and sports. It is an industry. Within UofA, the College of Optical Sciences (where I work) is unique: tenured faculty get 6 months' salary (not 9 months as you mentioned); for the other 6 months they need to get funding to be paid, that is, 0.5 FTE of their salary. But then the upper limit of the salary is negotiable, obviously! This is not the case in other departments at UofA, and it was a conscious choice the faculty here made. We, the scientists, have to get funding for 1.0 FTE, but we do not have to teach. We get our contracts reviewed and renewed every 6 months / 1 year. That keeps me on my toes, and not just me but the entire workforce in the College of Optical Sciences. Most of our grants come from industry and the DoD, some from the NSF as well. Would I get paid more (and be more secure) in industry? Probably yes, but here I have enough. Would I take another research job if I were paid more? Probably yes! Could I get fired tomorrow? Maybe. But I would not like being paid full-time while working half-time. The sooner we realize that the university is a place of business and profit, and that research is part of it, the better.
Yes, competition for grants in the US (and I don't think it's easy in other countries either) is really fierce! But that is what I like most about this country. I am sorry you lost a good teacher, but would he have stayed if he were paid 50% (since there is only a 50% teaching load)? Could he have found another teaching position with a 100% load? I don't know, but I see nothing wrong in universities' push to get more funding.
To Decebal
I can't say that I can follow your argument. Universities are large organizations, and they have general rules. Of course any department can modify the rules if necessary or if they see fit. The method of teaching has very little to do with the hours; some professors can send the entire class to research a topic on the internet and this is still teaching. My point was about the statement that professors are not paid to do research. That is theoretically true, but not really, because in decent universities (and I consider Babes-Bolyai among these) staff members perform research in order to remain in the system. Research is not 'volunteer work'. Because there are many more PhD holders than staff positions, those who don't like the situation can be quickly replaced.
That's what 'publish or perish' really means.
Professors are paid to teach, to do and supervise research, and to contribute towards shaping future direction. This last factor (future direction) may be influenced by funding sources and by impact on industry and technology. The first two parts should be as independent of financial tags as possible. The quality measurement has nothing to do with money attracted through funding. A good business proposition is not necessarily a great breakthrough in science, and vice versa.
I believe it is healthy to discuss any related issue surrounding the question. This is a very serious issue and I quite agree that it should be debated and discussed on its strongest terms. In my university, for instance, there are lots of internal rumblings about where and when I may air my views, and not only about where I publish my works. Currently, though, the two approaches, as the unfolding discussion shows, stand far apart.
@Rafael. Thanks for reading my post.
I meant that professors should be the authors of at least one original method to offer (teach) to students, in research of course, so that students are motivated to choose them as mentors, to learn something original, inventive, and useful. That is why researchers may be good mentors to motivated students, apart from their teaching duties.
However, the ISI numbers remain bibliometric, not scientometric. As with any error of judgement, using these bibliometric parameters for scientometric purposes will yield paradoxical results, mostly in small countries, where publishing only or mostly in foreign journals, on foreign topics, doesn't help the local economy. Although this is good for the personal tenure of the professor himself, he will be disconnected from the local environment, with huge social costs (mostly for him). This doesn't apply to developed countries where, on the contrary, new ideas are welcome and may be implemented quickly in the R&D environment. The money issue is a key factor, of course.
Learning from my country's experience as a developing country, the solution should be based on focused long-term national research programs. "Focused" involves publications, but in a narrow field which fits the local environment and financing. Poor countries cannot produce high citation scores across wide scientific areas. Open international cooperation between these countries could be helpful, but I see that it is not a functional solution, for various reasons, mostly human.
@Decebal-Radu, I am confused now by your comment that "the ISI numbers remain bibliometric, not scientometric". But ISI, aka the Institute for Scientific Information, was founded by Eugene Garfield and Derek J. de Solla Price, who started modern scientometrics; it is based on their work that the scientometric approach to analysis started. Plus, scientometrics is mostly based on bibliometrics. Or am I mistaken?
@ Palash: Garfield's company originally compiled the SCI (Science Citation Index), including the references. So their business was, and remains, compiling citations. I don't believe a scientometry based on ISI figures exists. Science contains many breakthroughs which shape the future (Linné in botany; Newton and Einstein in physics, etc.) and cannot be reduced to citations, which are basically an elegant habit.
I explained in a previous post that impact factors and numbers of citations serve librarians when buying journals. It is more complicated with humans.
The notoriety of the scientists, like in most liberal professions, is a key professional factor. It matters who is citing what, if the paper is good and original. The income should be proportional to the notoriety (as for musicians, actors, lawyers etc).
But global figures from ISI or Google may be misleading. That is why the only accepted procedure, both professionally and legally, is the competition for an open position. The referees are supposed to read and evaluate the papers of the candidates, not to count them.
Relying on the count of papers only will bring prolific writers to the front and leave original, hard-working researchers in the back; it will also screen students from new ideas and elaborate studies and give them the easy solution instead of the correct one ("publish or perish" instead of the correct solution to the research theme).
Moreover, many of a lab's papers are written by students and doctoral and post-doctoral fellows, so counting papers distorts the scientometric figures. It is obvious that publication in a good journal doesn't mean the paper is good itself; citation habits differ from field to field, and many citations come from bulk introductory considerations.
This answer is actually a general reply and slightly off-topic. As everybody knows, there are inherent differences between universities. They all try to hire good staff, but there are richer and poorer countries, and richer and poorer universities. And when cost-cutting starts, there is no money to buy state-of-the-art equipment or even to subscribe to journals. This leads to a lower ranking in all research-related matters, and this... brings even less money in research funding. So, as in every other field in life, the rich get richer and the poor get poorer...
That's why government funding for poor universities and for groups with good potential but no money should be a priority, in particular in developing countries. And this is why any penalties for applicants, like those Dr Pietraszek was talking about, seem to me outrageous... Regarding the link between research and business (Dr Zaman), I disagree with a general statement that the quality of research has nothing to do with business. This is sometimes so for theoretical studies, but in applied science and engineering, if the research is practical and useful it is already 'good' in my view.
To Decebal-Radu
What you are saying is correct and interesting, but do you propose a different solution? If a paper stirs interest, it will be cited or (as happened in my personal history) plagiarized. But even plagiarizing actually means that the original idea was good and worth copying... It is true that research students contribute to the research, but that's a win-win situation: they wouldn't be doing this particular work without this particular supervisor, and the publication makes sure that all are mentioned, although sometimes some people are given credit without a real contribution. However, coming back to the last point, upmarket journals nowadays request a statement on the contribution of each author, so that people with no contribution are not included.
Coming back to metrics however, someone in his early career would not have many publications, and then other factors come in, such as real references.
I think that the entire publication business is based on the peer-review process. If a journal uses high-quality reviewers it will remain a high-quality journal, and it will publish mostly good papers. And high-ranking journals are really very selective, even though sometimes their reviewers make mistakes. The most common mistake I encountered was when I tried to publish something that went against prior art. It didn't really, but the reviewers thought it did and my paper was rejected. After arguing with the editor and proving him wrong... he made me a reviewer... (sic!), but the paper itself was finally not published (even though I was asked to resubmit) because the coauthors had moved on... So yes, all academics make mistakes, but the metric offered by counting citations has no competition in my view. And of course, this requires academic integrity, giving credit where it is due, etc...
The impact factor is one measure of a journal, as was definitely discussed above. But the reviewers, who sometimes make mistakes, by accepting a manuscript for publication join its authors to the circle of authors of that more or less respectable journal. In this way, in an application for a scientific position, the referee could use the summed IF of all the published papers as one argument. But one also needs to look at published books and book chapters, at activity in practical applications and in new methods of research and education, as well as at activity as a lecturer.
Hi Rafael, thanks for reading my post. Here in Romania we have eight years of experience in implementing the "ISI only" concept. The results are disastrous. As I told you, the ISI numbers are bibliometric, not scientometric (if such numbers even exist).
I suggest that, before trusting the proud significance of citations, we take a look at the basics of data processing:
- absence of systematic errors
- homogeneity of variables and the error-propagation law (factors of influence),
neither of which is well defined yet for these indicators.
Dear Decebal-Radu
Incidentally, I could answer in your language but I am using English for everybody else..:-)
When I said that citations are important I did not refer specifically to publications screened by ISI. I didn't know that Romania is moving towards an 'ISI only' system.
I am not arguing that this is the best system, and like every system it has errors. It is easy to err when people use only a surname and an initial. Take a common Japanese name like Yamada or Suzuki, for example: if only the first initial is used, it is impossible to find all the citations of a particular Yamada or Suzuki. Moreover, if someone with such a name changes universities, this affects the whole statistics. I definitely don't think the system is foolproof, in particular for common names. But if citations are counted for a particular paper this changes, and it can be more accurate. However, there are languages other than English and I have no idea how ISI deals with them. I am not at all implying that ISI is foolproof, or that only journals covered by it should be counted. I think that small journals in the local language that cater to a professional community should be as valid as others. I am also not talking about using this criterion for academic promotions; even before there was an Impact Factor, people knew which journals were 'good'. A "good journal" is one in which it is hard to get a paper published, such as "Nature" or "Science". The difficulty is how to find the metrics that define what everyone knows: that journal A is ranked higher than journal B. If a selection committee is faced with one candidate who presents only one publication in Science, and another with 10 papers in other journals, the candidate with one paper in Science should be selected. Of course, this is my personal opinion.
Dear all!
Following the message of Jacek Pietraszek above, I recommend reading the paper "Nefarious Numbers", where manipulations of impact factors are described in detail:
http://www.ams.org/notices/201103/rtx110300434p.pdf
The question after reading this paper is: do we have any better scientometric number?
My answer is: No!
So we must use the IF despite all the problems related to the manipulation of this number by some editors, with the understanding that numbers alone are not enough to measure the scientific output of a scientist!
Georgi, thanks for this article; I wasn't aware of manipulations of the impact factor.
Hi Georgi, hi Rafael, I believe I have already pointed out the alternative to "ISI numbers": reading the papers instead of counting articles. Additionally, and not secondarily, you should check whether the candidate has his own method to offer, in order to establish his scientific personality.
Being in a field close to yours, I could provide examples of many nonsense articles yielding Hirsch indices over 9 for people who are unable to deal with undergraduate issues.
High citation records may be collected for obsolete or old subjects, such as common magnetic compounds, while everybody is moving towards other materials, such as optical or other nanostructured films (DVDs vs. cassette recorders, as an example).
Because higher education and research are good for the notoriety of politicians, we have a whole bunch of war profiteers, so a personal examination of the candidate's personality and contribution cannot be avoided.
Hi Decebal-Radu, hi Rafael! From your messages above I understand your feelings, and I agree with your arguments. But from the time the summarized ISI IF became an instrument for evaluating projects, universities, and nations as performers of scientific work, and in this way became decisive for the distribution of funds, I am afraid these numbers help the richest to increase the amount of money they get, while the developing and poorest become even poorer.
Dear Jacek!
Excellent idea - the radar plot! Why should we not have a vector of numbers, instead of a single scalar (which the IF, h-index, and similar indices are...)? I highly support this!
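A minimal sketch of what such a vector of indicators could look like as a radar plot, assuming matplotlib is available; the indicator names, values, and normalization maxima are all invented for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical indicator vector for one researcher (illustrative values only).
labels = ["h-index", "summed IF", "cites/paper", "books", "patents", "invited talks"]
values = [12, 35, 8, 2, 1, 5]

# Normalize each indicator to [0, 1] so very different scales fit on one plot.
maxima = [40, 100, 20, 10, 10, 20]  # assumed field-dependent reference maxima
norm = [v / m for v, m in zip(values, maxima)]

# Close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
norm += norm[:1]
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, norm, linewidth=1)
ax.fill(angles, norm, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_yticklabels([])  # hide radial tick labels; the shape is what matters
plt.show()
```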
Hi all,
Good idea to have a vector representation instead of a scalar. However, the bad news is that unreliable bulk criteria are used by the referees (and decision makers).
So, we should fight the use of unreliable indicators for science quantitation.
When used for comparison of universities, the Shanghai list and webometrics.info give comparable results, although webometrics.info uses Google Scholar for the citation part. That means that many indicators are redundant or irrelevant in these classifications. The good news is that common sense is preserved globally.
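One quick way to test quantitatively whether two rankings "give comparable results" is a rank correlation; a minimal sketch assuming SciPy is available, with invented positions:

```python
from scipy.stats import spearmanr

# Positions of the same six (hypothetical) universities in two rankings.
shanghai_rank    = [1, 2, 3, 4, 5, 6]
webometrics_rank = [2, 1, 3, 5, 4, 6]

rho, p_value = spearmanr(shanghai_rank, webometrics_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# rho near +1 means the two lists order the universities almost identically.
```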
I also agree that one number is not enough to evaluate scientific quality! The same can be said even of a set of numbers (otherwise a computer could produce a ranked list of the scientific quality of scientists!). So we still need reviewers!!!
The bad news is that reviewing the many papers of an applicant for a scientific position, such as a professorship, requires a lot of time and knowledge and is a really difficult task.
How can such a complicated multi-criteria evaluation be done?
Also: in the scientific community there are friends and enemies???
I thought the multivector variable for evaluating a researcher was called a CV... ;)
Hi Pablo, Hi Ignatio!
Remaining on topic, the Google Scholar-based webometrics.info and the Shanghai ranking give comparable results, at least in the 1-300 range.
Since the criteria (vector components) are simply different or incompatible, the conclusion is that it is the quality of the professors that matters, not the impact factor itself, nor the number of papers.
For many countries, evaluation of scientists by IF and h-index changes the direction of scientists' effort: instead of the first priority being the development of the country's industry or spiritual life, the main direction becomes giving new ideas to highly developed countries, which are open to new ideas and scientific results and, as a result, become richer by implementing them in their own industry and spiritual life.
Hi Jacek, a rather old reference (from 2007) but still very relevant!
While both the h-index (http://en.wikipedia.org/wiki/H-index) and the Journal Impact Factor (http://en.wikipedia.org/wiki/Impact_factor) are related to numbers of citations, they are different measures: the h-index refers to an author, while the Impact Factor refers to a journal. In addition, for each author you will find one h-index in Google Scholar and another, usually lower, on ISI (Web of Knowledge). Hope this helps!
Good, Costin, the definition is clear. But really it is the effects that these factors have on our academic colleagues that is bothering everybody. You know it is not easy to come by a single publishable article/paper which will go out into the world and influence DECISION MAKERS. The big question still remains: which one is effective, perhaps convincing (not both, anyway), in the academic arena: the impact factor of the journal, or that of the individual article/author? As you have said, "for each author you will find an h-index in Google Scholar and another h-index, usually lower, on ISI (Web of Knowledge)". Which way to go, then?
I would like to add that anything - "score or impact factor", or whatever name we call it - used to determine "superiority", so to say, is a matter of mind and strong will for those involved. A good example is the RG score, and of course, as you would expect, academics, in their usual direct approach to issues, have started raising "eyebrows" at the platform and the criteria on which the scores are based. These are normal reactions, because not all scores and factors are based on uniform criteria. Academic papers, in many institutions and regions, are not respected much if their publication is localised. Or what do you say?
All these numbers (IF, h-index, and its variations such as the g-index) are based on citations of a paper. There we implicitly accept that the scientific impact of a paper or scientist is citation: namely, whether other scientists read the paper and agree with, or criticize, the obtained results. But the general goal of science (especially of applied science, engineering, and technological research) is to improve the material and spiritual life of the community. Therefore, if a paper directly provides knowledge on the fabrication of tools, on methodology, or on some important steps towards creating new materials and goods, will we not value these contributions???
Now we have a new situation with RG scores!!! We will hopefully see how they are accepted as a measure of users' research output, because the component from peer-reviewed papers has now become bigger!
Hi Georgi,
Most peer-reviewed journals are merchandise of global publishers, and publishing in them involves a transfer of copyright. Some professional associations (American Physical Society, ASME, IEEE) also publish high-quality journals with excellent peer review. Publishing in any of these means integration into the global community, led by the developed countries in the directions fashionable there. This is important for individuals.
For developing or small countries, with smaller budgets, the progress obtained is zero, because the local economy does not benefit from that research. Actually, the small countries are financing research for the larger countries by swallowing this bait of the impact factor.
This was already observed by the Romanian Ministry of Education, after the 2005-2012 blind implementation of a system of counting papers in the ISI database.
The Ministry is trying to change the criteria, but has not been successful yet.
Dear Decebal-Radu, you are 100% right (please see my answer before the last one). There I only welcomed the change in how the RG score is computed, with a bigger weight on research output, because before that the RG score was more a measure of our forum activity than of our scientific input. I agree that papers published in regional journals are often more important for the authors' country, but none of the known bibliometric numbers captures this kind of result. Sorry!
To Decebal-Radu and Jacek
Actually, small and not-so-rich countries should be concentrating on research and on using it. Research doesn't have to be esoteric; it can be practical and useful, and a small good invention can generate good income for a country. There was a very famous case in Australia (a country that, despite its economic situation, does not properly fund research and in particular does not pursue implementation). A PhD student originally from China invented a method of making foldable solar cell sheets. He did not find interested parties in Australia to market his invention, so he went back to China, where he became one of the richest people in the country. His former university is still collecting royalties from this invention. Good applied research can generate wealth, but of course there needs to be the right attitude for this to happen. The business community has to be geared to develop local inventions, and the government has to promote local R&D programs. I can give as an example the program called SBIR (Small Business Innovation Research) in the US.
This program devotes by law 2% of the national budget to research in small business, and each government department has to select the programs it is interested in and pay 2% of its budget for this. This not only creates jobs for people with PhDs in small businesses; it also has a stage two in which the pilot stage goes to implementation, and that stage attracts much larger funds.
I also don't agree that the stakeholders described by Jacek have competing interests. What I do agree with is that the impact factor of the journal is not that important for successful research. You can publish a discovery in a very upmarket journal, but it might be of scientific/academic interest only. That's true. But overall, good research increases the chances of successful applicable inventions that can lead to marketable products, provided of course that there is an entrepreneur willing to invest in these ideas.
If everyone in the system is in pursuit of excellence, eventually excellence will be delivered. That's my credo. I have been involved in research in a number of countries, and unfortunately I found in Australia the same attitude you mention about Romania and Poland: the universities are good, the academics are high-quality, but there is a lack of research funds, a lack of implementation, a lack of follow-up. Because the politicians do whatever the public wants, this means that the public does not see the value in research and in excellence in general. So the question is how to educate the public about the benefits of using the knowledge of their academics. Singapore has learned that, Taiwan has learned that, Japan did it early on, and now South Korea and China are doing it... Despite its economic strength, until recently China was dependent on know-how from the West, but they have learned to import the best scientists and engineers to teach their workforce... Eastern Europe had a very strong education system under the Communists, unlike China, where all the good academics were "eliminated" during the Cultural Revolution. It would be a pity if this tradition of excellence were to disappear with more freedom...
I apologize for the long "speech".
Dear Rafael! I agree with you; only your disagreement with Jacek about stakeholders' opposing interests I do not accept. I can also note that our ministers follow the dominant western (old western European countries and USA) preferences, and the IF is one of them. The IF satisfies scientists active in the purer scientific fields, and they in turn endorse this measure. Researchers in applied scientific fields such as engineering and technology are assessed only by publication activity, and technological applications are often underestimated.
Dear Georgi
At the end of the day (as they say in English), everything matters: I think that the publication record is important, where you publish is also important, and the individual citations are important too. However, these factors can only count if the system allows people to perform at their best. If you have high-quality instrumentation you can come up with results that are publishable in highly respected journals. But if a researcher needs, for example, ultramicrohardness testing, an AFM, or an Auger machine, and the university cannot buy such apparatus, the system cannot expect the articles to be published in highly respected journals.
But really, before there were IFs, everyone in the field knew which journal was the highest ranked; IFs are only numbers and should not be used arbitrarily. The standing of a particular scientist in the community cannot be measured exactly by numbers, except by counting citations (not self-citations) in any journal and conference. Conference participation, invited lectures, and keynote presentations are all factors that should be taken into account for promotion, and I am sure in most cases they are. Patents are very important as well, in particular if they have been commercialized, but of course it depends who sits on the committee; if it is someone from social studies or history, these things would not tell them much...
Anyway, I have a feeling that the discussion has strayed away from the original question. I think that there is no accurate measure of standing in the community, and I pity those who have to put up with systems that are run by bureaucrats with little understanding of the journal publication process.
Thank you, Rafael! I agree with all your arguments, and I hope they will be read by young scientists too. If I try to come back to the original question:
for a PhD application and similar early-career cases, referring to the IF is more applicable (the h-index is for scientists with longer activity). And we should not forget, from all this discussion, that numbers alone are not enough to measure the scientific output of a researcher.
Hi all, thanks for reading my posts.
Hi Rafael, thanks for your contribution.
Indeed, good-quality research (and papers) yields new industries. However, for developing countries, the process of citations followed by industrial applications is rather theoretical. In the West, after a PhD or postdoc, scientists enter industry and QA, which can use them (as in the example you provided). This does not happen in developing countries, and it may be an indicator of the integration of research into the local environment. Of course this applies to the pure and applied sciences, not to theology.
I agree that, globally, citations mark the interest in a given piece of research and yield satisfaction and notoriety for the author. However, for the employers, collecting IFs yields figures that may not be significant, as I pointed out for the classification of universities (Shanghai vs. webometrics.info).
Moreover, there is a salary and infrastructure discrepancy that makes these figures look odd.
I have news. The new ISI criteria in Romania for a professorship in Chemistry (1000 dollars sharp) are:
Number of ISI papers = 40, of which 26 abroad;
Total influence factor = 45;
Number of citations (excluding self-citations) = 100.
What do you think?
Thanks, Decebal-Radu. I think that only people in administration, who have never really worked in research, can set such criteria. Are these criteria set by the university or by the government? The numbers look ludicrous, and setting the impact factor as a criterion is in itself very short-sighted. What about total research grants? Patents? Contracts with industry, etc.? These criteria also eliminate any bright young person from the competition. There needs to be weight given to how well a person's name is known in the international community, but to actually put numbers on these requirements is wrong; it sounds too 'mechanical'. I don't think it is hard to have 100 citations overall, but it is wrong to set a particular number as a requirement. Btw, by the sound of it, it is no wonder that Romania suffers from brain drain and that I so often find Romanian authors publishing from other countries...
Dear Jacek
I said a few things in this discussion. I am talking in particular about how the system should work, as I understand that it is not working properly in Poland or Romania. These governments, however, don't read the postings on ResearchGate. Perhaps they should..:-(
In my case, I choose the ISI Journal Impact Factor for all my research. In academic research, Google doesn't seem the best choice for me when looking for knowledge. Anyway, I don't think there is an absolute truth here; it is mainly a matter of what best fits your case and your research.
Hi Jacek, Hi all.
It is no wonder that countries from the former Soviet bloc have similar problems in adapting to the western system, which dominates progress in science and technology.
However, research and higher education are based on talented people, on hard work, and on breakthroughs, not on the citation business (which can be gamed).
The above criteria describe an old professor who did not chase new ideas but a long record of publications. The three criteria are actually redundant.
The functional (western) system is aimed at young researchers, up to 35, who have proved their talent and may in the future develop a research direction that can bring students, funding, and fame to the employer. Such a candidate should have some ISI papers (say 4-10), but is too young to have collected so many citations.
Now, if the candidate comes from a different university (where the citations were collected) and is too old to begin a new research direction (which takes up to 30 years), what is the benefit for the university offering that open position?
Hi Rafael, thanks for reading my post.
Actually, these criteria are politically motivated. After 1990 a lot of local universities were licensed with improper structure and research but, of course, many bosses. Being short of money, the politicians (who meanwhile became professors) are trying to direct the research funds towards their clients. This is not working properly, so they oscillate between criteria for barring newcomers. These criteria are discussed mainly for the sciences, where financing is more important and additional revenues may be obtained. Also, citation counting is mostly practiced in the sciences and less in theology, physical education, or business.
In my view, the Thomson and Scopus indexing services are more effective than Google Scholar, because Google Scholar indexes many non-authentic journals, some of which even publish papers without reviewing!!! These journals rank very low in ISI and Scopus indexing data. Therefore, the Google Scholar h-index is not a good index for evaluating an author. However, as mentioned in previous comments, Google Scholar indexes many journals and conferences, and it can be a good tool to view all the publications of an author.
I would not rely on the Google h-index, as it is only as good as its database of journals, which is far from exhaustive (for example, some of the Elsevier journals are not included). On the other hand, it often cannot properly discriminate between citations in peer-reviewed content and citations in non-peer-reviewed content and theses.
Impact factors are not perfect scores either as many journals know how to "play" the system, while many reputable journals, like some in the US, have not even bothered to register their impact factors.
The primary interest of scientists is to publish their ideas and (eventually) get citations.
It is more psychological than technical.
There is a difference between the approaches of 1) the employer, seeking international support, and 2) the author himself, alone against the elements.
I have experienced communist rule and I must share the conviction that the author's recognition (citation) comes first.
Relying on ISI for citation counts is an obvious limitation, of interest for libraries but making no sense for authors.
By indexing non-ISI publications (such as theses), Google is filling a gap.
Hi, thanks for the link!
When using a bibliometric figure instead of a 'scientometric' figure, a systematic error is made.
Comparing one systematic error to another may be amusing. As an example, please compare the university rankings from webometrics.info with the Shanghai ones (http://www.shanghairanking.com/ARWU2012.html).
Positions of universities near the front, around 100 (Ghent University) and around 200 (Université Libre de Bruxelles), DO FIT. But the criteria are totally different.
So the most scientific attitude is to reject both. Instead, employers are trying to select personnel according to some of those criteria.
Something reliable should be found. For the moment, public competition for professor positions, CVs, and interviews are accepted as legal means of hiring.
Until then, the publication of new ideas and breakthroughs should be pursued. Too narrow a selection might discourage emerging talents or reject new ideas.
Our distinguished colleague Rafael Manory has already provided the example of a Nobel laureate in Chemistry whose paper was initially rejected, but who saw no alternative to publishing new ideas.
To Prof. Ciurchea: Incidentally, I had not seen this message until now, although it was posted four days ago. I think that each ranking serves a purpose, and any ranking is only true within itself. The Shanghai ranking is quite popular with the press, and while I have not checked each of its criteria, I am sure it is consistent within itself. In the US there is the ranking of the magazine US News and World Report; it is very good for comparing US institutions but useless for overseas institutions. But in general I don't believe that any single criterion can be used for overall quality. For example, one cannot compare papers in materials science (with which I am more familiar) with papers in particle physics. These days the Higgs boson has been confirmed, and I am sure there will be many authors on the paper, because many scientists take part in any experiment in particle physics. So if someone looks at the number of authors per paper and takes this as a criterion for selection, there will be no hiring in particle physics...:-)
If we are talking about criteria for hiring there cannot be a set formula across departments, but there should be a desire to achieve excellence with the hiring. I know that a committee can be made up of people who are mediocre themselves and who therefore seek to hire other mediocre people, so that they will not look bad. Systems should have mechanisms against such cases, but not by setting too rigid criteria (such as counting the number of citations irrespective of field).
So the bottom line is that the judgement of an academic committee should be trusted by the system, so that questions such as the one we are addressing in this thread will not matter.
Academic hiring should always strive for excellence, and if all systems followed this simple principle they would always hire the best applicant. The problem, however, is that because this principle was not applied in the past, there are places where mediocrities have power, and they will oppose hiring based on excellence... The strength of management would be to overcome these forces... But this is only possible if the administrators are themselves selected based on excellence... So it is, unfortunately, a vicious circle: you need a system that continuously strives for excellence, and this is difficult to obtain when there are many mediocrities in key positions; in my view the actual ranking method is less important. The will needs to be there for excellence to prevail. Apologies for the lengthy answer.