A higher h-index demands greater uniformity in the citation pattern: by definition, a researcher needs at least h papers with at least h citations each. A researcher with fewer total citations can therefore end up with a higher h-index than a more-cited colleague, simply because the citations are spread more evenly across papers rather than concentrated in a few.
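A minimal Python sketch makes this concrete (the citation counts below are invented purely for illustration): the h-index is the largest h such that h papers each have at least h citations, so an even spread can beat a concentrated one.

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Researcher A: 100 total citations, concentrated in two papers.
a = [60, 30, 5, 4, 1]
# Researcher B: only 49 total citations, spread uniformly.
b = [7, 7, 7, 7, 7, 7, 7]

print(h_index(a), sum(a))  # 4 100
print(h_index(b), sum(b))  # 7 49

Despite having fewer than half the citations, B's uniform distribution yields the higher h-index.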
Pragnan - I believe that there is no such thing as a complete metric at the moment. The h-index offers one of the more acceptable and accepted metric formats, but it is still flawed and open to much criticism and debate. This is especially so depending on where and how you access such a metric. My h-index varies quite widely across different bibliometric databases (e.g. Web of Science, Scopus, Google Scholar, ResearchGate), depending on how up-to-date those databases are and the range of published sources that they draw upon (e.g. ranked journals only, book chapters, reports, dissertations).
I agree with all that you have said; I have replaced the word "complete" with the word "good" in the question to clarify it better. The h-index usually follows a common format across the standard indexing databases; however, citations are not weighted according to the quality of their sources. There are several other issues too. Can we not devise a good scoring algorithm that takes into account the all-round performance of a research article and then adds that to the respective researcher's performance metric?
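To make the idea concrete, here is a purely illustrative Python sketch of the kind of source-weighted score the question asks about. The source categories and weights are invented assumptions, not an established metric, and choosing them is exactly where such a scheme would be contested.

# Hypothetical quality weights per citing-source type (assumed values).
SOURCE_WEIGHTS = {
    "ranked_journal": 1.0,
    "conference": 0.8,
    "book_chapter": 0.6,
    "report": 0.4,
    "dissertation": 0.3,
}

def weighted_citation_score(citations):
    # `citations` is a list of (source_type, count) pairs for one article.
    # Unknown source types fall back to a neutral weight of 0.5.
    return sum(SOURCE_WEIGHTS.get(src, 0.5) * n for src, n in citations)

# One article cited 10 times from ranked journals and 5 times from reports.
article = [("ranked_journal", 10), ("report", 5)]
print(weighted_citation_score(article))  # 10*1.0 + 5*0.4 = 12.0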
Dear Pragnan, it depends on what your goal is. For example, trying to measure "influence" with the h-index is problematic (even God has an h-index of only 1). Also, scientific quality is hardly measurable, since someone can boost his or her h-index by, for example, publishing something incorrect, which someone else will point out in a comment, citing the original researcher. Or someone might make the data fit his or her hypothesis to produce a bold piece of research, and will initially be cited many times before someone else finds out about the problem, if that happens at all. It may also occur that someone publishes work so advanced that his or her community will not be able to understand it at first, and citations will not be forthcoming.

With respect to ResearchGate, I can also boost my RG score by asking many forum questions (whose answers I might already know or might in fact not be interested in).

These are only a few of the issues with citation metrics. If anything, researchers should be assessed by a variety of metrics, each of which captures a different aspect of scientific work, instead of cramming disparate types of information into a single number. Ideally, however, they should be assessed on the basis of what they produce, not by where it is published or how good they are at marketing their research.
Highly informative. I think that working to the best of one's ability to provide sincere and needed solutions to immediate and distant problems should be more satisfying than trying to build a reputation with all these present and future scoring algorithms. Their analyses and scores will always be contested, with varying thresholds of dissatisfaction.
Solution-centered research should be the focus; the impact and influence will be rewarding in themselves.