That would be an interesting proposal. However, as the ResearchGate team itself would say, their citation tracking is not comprehensive; it is based only on citations in their own database. Ranking researchers by citations in that database would therefore be unfair to some researchers. They could use citation data from another source, but then, of course, it would no longer be their own ranking. So I do not believe RG is the right platform for that.
Moreover, if any other platform were to do it, there would be a major debate about what such a ranking really means. Does having the most citations mean being the most influential? People would argue forever about the value of such a ranking. Therefore, I do not believe it is a good idea, and it would bring more debate than recognition for the ranked researchers themselves.
Ranking RG researchers by these metrics is questionable for many reasons. Citations as counted by RG are not reliable; Google Scholar provides better citation information.
The RG score is not a scientific measure, but a measure of a researcher's activity.
TRI is another sort of metric, and it too is not a scientific one.
Another fact is that millions of researchers are not members of ResearchGate, so they could not be listed in any such ranking.
Among RG members, many excellent researchers do not bring their contributions to ResearchGate (articles, conference papers, books, answers, questions, projects, ...), so their low RG score and questionable TRI are not valid information about the quality of a researcher, but a measure of their inactivity on the RG portal.
I agree that RG is distinctly different from the purportedly leading scientific rankings, such as WoS and Scopus, and to a lesser extent, Google Scholar, among others.
Every ranking metric has its strengths and limitations.
The Citations and h-index on any ranking website are based on the papers that are uploaded thereon.
Most of my research output that is available on other websites has been reproduced on RG.
Much more importantly, I am thankful to RG because we would otherwise not be able to have such a fruitful exchange of ideas on other websites.
I do not understand the reasoning behind this assertion by Ljubomir:
"Among RG members, many excellent researchers do not bring their contributions to ResearchGate (articles, conference papers, books, answers, questions, projects, ...), so their low RG score and questionable TRI are not valid information about the quality of a researcher, but a measure of their inactivity on the RG portal."
I may say yes regarding their RG score, but I would disagree on TRI!
Nowadays, Facebook recognition can translate into greater remuneration, even election wins; we may be dragging our feet, but some people convert this RG recognition into income while we argue about what is and is not.
RG to me is a pleasant place because you get to interact with many scholars. In fact, three of my current coauthors resulted from interactions on RG. People, sometimes good is better than perfect. "Don't ask what RG can do for you, ask yourself what you can do for RG."
It would be interesting to know who could serve as benchmarks, regardless of how scientific RG is or is not... maybe across disciplines? I would very much like to get to know the best ones in the area of marketing.
Unfortunately, RG does not seem to provide Citations data along the lines of the Web of Science, so rankings of journals and individuals according to RG metrics would not seem to be possible.
Mike, you are correct, but that is understood between us. What we think is often unacceptable to the jury. They see us as crying about every minor discrepancy. Have you not wondered why, on average, academicians never win elections?
You are correct again! I did not know he ran for anything, but I know I came across his name in econometrics, as one of those raising questions about modeling. Two successful ones are associated with Texas: Phil Gramm and Dick Armey. They are old now!
Brains (or less colloquially, intelligence) can be useful when you are discussing intellectual matters with intelligent people, but I am not sure that is either necessary or sufficient for most things in life.
As some academics are not calculating TRI correctly, the steps are as follows (a worked sketch follows the list):
Citations are marked clearly under "Stats overview", with a weight of 0.5.
Recommendations refer only to Publications, with a weight of 0.25, and differ from Recommendations reported under "Stats overview", which also include Recommendations for Q&A.
Reads refer to Full-text Reads (with a weight of 0.15) and Other Reads (with a weight of 0.05) by RG members, and differ from Reads reported under "Stats overview", which also include Reads for Q&A.
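To make the arithmetic concrete, here is a minimal sketch in Python of the weighted sum described above. The function and variable names are illustrative, and the weights (0.5, 0.25, 0.15, 0.05) are those quoted in this thread, not an official RG implementation.

```python
def total_research_interest(citations, recommendations, full_text_reads, other_reads):
    """Weighted sum for TRI, using the weights quoted in this thread.

    All four inputs are the figures listed under "Research Interest"
    in the RG Weekly Report (not the "Stats overview" totals).
    """
    return (0.5 * citations
            + 0.25 * recommendations
            + 0.15 * full_text_reads
            + 0.05 * other_reads)

# Hypothetical example: 100 Citations, 20 Recommendations,
# 300 Full-text reads, 400 Other reads.
print(total_research_interest(100, 20, 300, 400))  # 120.0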
I am also sure that Chia-Lin Chang is right in her previous response concerning the calculation of your TRI with respect to the weekly RG Report.
You are, of course, correct in stating that the simple linear equation for calculating TRI, as presented by Chia-Lin Chang, is right and entirely consistent with the explanation of TRI provided on the RG website.
The academics who have stated that the equation does not work are choosing the data from different headings rather than from a single source, namely "Research Interest", as explained patiently and repeatedly.
It is hard to fathom how such simple instructions can be so easily misunderstood.
I would hardly expect a mathematician not to understand a simple linear equation for calculating TRI, especially as it is based on the explanation of TRI on the RG website.
Using a single data location, rather than multiple data sources, seems to be confounding some academics.
I believe Chia-Lin and Mike have made understanding TRI much easier. If I remember correctly, you both pointed to exactly where the numbers come from, so thank you.
It is gratifying to see acknowledgement of the accuracy of the equation for calculating TRI presented by Chia-Lin Chang, based on the explanation given on the RG web page, especially as virtually all previous responses have been in the context of the equation being wrong!
As stated previously by Chia-Lin Chang:
If incorrect data are substituted into the equation for TRI, the calculations will also be incorrect.
Dear Chuck A. Arize, Romeo Meštrović and Michael John McAleer:
We seem to be making progress in convincing academics who are interested in calculating TRI to use the equation, as presented, and the appropriate data from the correct location.
Dear friends, I do doubt the input data that were used for the calculation of TRI. For example, see the attached PDF files about my statistics, dated March 25th, 2014. Note the number of reads and other relevant data; at that time, my RG score was 56.74! I have provided the RG mail alert for this date. It seems that those data are very inaccurate!
Are you saying the equation for TRI given by Chia-Lin Chang, which is consistent with the explanation given by RG on the TRI web page, is incorrect, or that the data provided by RG are incorrect?
Your current RG score is well above 56.74, so this must have been a long time ago.
The h-index does not directly affect TRI but, as Citations and the h-index are positively correlated, the h-index is likely to be positively correlated with TRI.
I am unsure what data you might be referring to.
The directions given in this thread for calculating TRI are clear, although where to locate the data is not stated explicitly on the RG web page.
As stated 4 days ago as an Answer to this Question:
The appropriate data for calculating TRI are available only under "Research Interest" in the Weekly Report (and nowhere else):
Citations
Recommendations
Reads by RG members:
  Full-text reads
  Other reads
In short, you need 4 numbers to calculate TRI, and they are all available under "Research Interest" in the Weekly Report.
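As a purely illustrative example with made-up numbers: suppose the "Research Interest" section of a Weekly Report showed Citations = 60, Recommendations = 8, Full-text reads = 200 and Other reads = 100. Then

$$\mathrm{TRI} = 0.5(60) + 0.25(8) + 0.15(200) + 0.05(100) = 30 + 2 + 30 + 5 = 67.$$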
I agree that it is difficult to analyze the accuracy of old data files, but the current data provided by RG seems to be accurate.
In particular, it is easy to calculate TRI using the data provided by RG and the equation you presented, which is consistent with the explanation given on the RG website.
ResearchGate management has helped more people understand how they came up with their numbers. Chia-Lin and Mike have, through better communication, aided in accomplishing this goal. RG wrote it, but some people took the time to sell what was written.
Trying to clarify the algorithm for calculating the RG score has been difficult because many academics, especially in the Humanities and some of the Social Sciences, do not seem to understand the meaning of an algorithm.
What is truly mystifying is the lack of awareness by some academics of the equation for calculating TRI, as presented by Chia-Lin Chang.
The equation for calculating TRI is consistent with the explanation given by RG on the web page for TRI.
This is a simple linear equation that can be understood using high school algebra.
As Michael John McAleer mentioned above, the RG score is not an easy sell as not every academic is aware of the meaning of an algorithm in respect of:
"a computer-generated empirical weighted sum of published articles, unpublished research, projects, Reads, Recommendations, and Q&A, based on the RG scores of all other academics in exchange and interactions."
However, I am amazed at how anyone could fail to understand the equation for calculating TRI, and where to access the appropriate data, when it has been explained simply and clearly.
Repeating the same incorrect calculations using incorrect data will lead to the same incorrect outcomes.
I agree with you that Google is good, but I do not believe you can prove that it is more reliable. Furthermore, it seems odd that you jumped into this discussion with a one-liner. We are excited that some people explained what had seemed difficult for our friends, and then this confusion was thrown in.
I am writing to you regarding your several previous answers and discussions related to the calculation of your TRI, and following the instructions you recently gave me (via RG message) regarding the reading and reporting of the RG score for the current weekend, as well as previous weekends. Thus, according to the formula from RG management, which Prof. Dr. Chia-Lin Chang explicitly repeated, I am 100% sure that your TRI is calculated exactly after each weekend, as is the case for each of the other more than 15,000,000 RG members.
I also agree with Ljubomir, particularly about the input data. It seems that we already have too much rubbish directly or indirectly related to metrics, ratings and rankings. Nevertheless, I may be wrong. Who knows whether, absurd as it may seem, the fields of partial differential equations, numerical integration or numerical analysis could be explored to obtain suitable indexes?
(A) For promotion and tenure, we rely on SSCI (Thomson Reuters), ABS, Harzing, ABDC, or a journal list that combines all of those plus other rankings, especially Scopus SJR. Each of these tells whether a journal is A+, A, B or C. All the journals referred to here are peer-reviewed and mostly blind-refereed. Grants alone count just like one publication.
(B) Other general assessments, such as awards, look at (A) plus the h-index from Google and RG, and collegiality.
You cannot prove it, because statistical analyses require the assumption of unbiasedness, but you are already biased towards Google. Secondly, you would need significant data and resources to establish reliability; even the ordinary Cronbach's alpha would not answer your question. Third, you cannot prove the null hypothesis concerning Google and RG in terms of reliability.
No ranking is needed, dear Michael John McAleer. I do respect the fact that they present a researcher's h-index, even if it is not updated, alongside the RG score and TRI.
Part A - Independently of the merit or demerit of any metric, input data can always be manipulated and are frequently inconsistent.
Part B - No Comments
https://link.springer.com/journal/11192
“Scientometrics
An International Journal for all Quantitative Aspects of the Science of Science, Communication in Science and Science Policy”
https://link.springer.com/journal/10732
“Journal of Heuristics
The Journal of Heuristics provides a forum for advancing the state-of-the-art in the theory and practical application of techniques for solving problems approximately that cannot be solved exactly.”
I agree with your view that Citations and TRI can be compared directly, especially as the weight for Citations in calculating TRI is 0.5, or 52.63% ( = 0.5 / 0.95) of the total weight.
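For clarity, the 52.63% figure follows from the sum of the four TRI weights quoted earlier in this thread:

$$\frac{0.5}{0.5 + 0.25 + 0.15 + 0.05} = \frac{0.5}{0.95} \approx 0.5263 = 52.63\%.$$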
RG is growing and will eventually reach the Web of Science level, if that is in its business plan. Most schools do not pay attention to Web of Science rankings of academics. They do pay attention to journal rankings, in some cases, for the promotion of teachers. With respect to institutions, it is largely sports, scholarship and proximity.
As stated on other threads, TRI has an internal inconsistency in that Citations include self-citations, whereas it is not possible to record Reads and Recommendations of your own publications.
Just as the h-index is reported with and without self-citations, TRI should also be reported with and without self-citations.
An associated TRI-index that matrix balances Publications against TRI should be reported with and without self-citations.
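As a sketch only, not an official RG metric: writing $C$ for Citations, of which $C_s$ are self-citations, $R$ for Recommendations, $F$ for Full-text reads and $O$ for Other reads, the self-citation-free variant proposed above would be

$$\mathrm{TRI}_{-s} = 0.5\,(C - C_s) + 0.25\,R + 0.15\,F + 0.05\,O.$$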
TRI is an interesting and novel RG extension of Citations by adding Publication Recommendations and Reads by RG members.
Citations and TRI, which purport to measure academic research quality, impact and influence for individual academics and research journals, can be vastly different across disciplines, so they should be restricted to disciplines, and occasionally to sub-disciplines.
The h-index is a measure of academic research productivity that is based on balancing Publications against Citations.
Citations and TRI are scalar metrics, so can be compared directly.
The h-index is a matrix measure, so it cannot be compared with Citations or TRI.
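For readers unfamiliar with the distinction, here is a minimal Python sketch of the standard h-index definition (the general bibliometric definition, not an RG-specific calculation). It shows why the h-index is a "matrix" rather than a scalar measure: it depends on the joint pattern of Publications and Citations, not on a single total.

```python
def h_index(citation_counts):
    """h-index: the largest h such that at least h publications
    have at least h citations each (balancing Publications
    against Citations, as described above)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Two authors with identical total Citations can have very different h-indexes, which is why the h-index cannot be compared directly with scalar metrics such as Citations or TRI.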