Should RG provide rankings of the 1000 highest-scoring academics by TRI, Citations and h-index (including and excluding self-citations)?
It is not possible at present, but maybe this will be something for them to explore in the future.
Hello
That would be an interesting proposal. However, as the ResearchGate team would say, their citation tracking is not comprehensive and is based only on citations in their own database. So, ranking researchers based on citations in their database would be unjust to some researchers. They could use citations from another source, but of course it would then not be their ranking. So, I do not believe RG is the right platform for that.
Plus, if any other platform does it, there will be a major debate over what that really means. Does having the most citations mean being the most influential? People will debate forever about the value of such a ranking. Therefore, I do not believe it is a very good idea, and it will bring more debate than recognition for the ranked researchers themselves.
Many thanks for your detailed and informative response.
All rankings are taken from a ranker's database.
For example, Publons (WoS) and Scopus are different because the listed journals are different, even though there is some overlap.
Different platforms lead to different interpretations.
Even on the same platform, such as RG, there is no universal agreement as to which of the RG score or TRI is preferable, however defined.
Rankings are just numbers.
What is important is how they are interpreted and used, rather than abused and misused.
Every metric in RG that relates to the quality of academic research, including TRI, Citations and h-index, should be reported.
Ranking metrics of RG researchers is questionable for many reasons. Citation counts on RG are not valid; Google Scholar provides better citation information.
The RG score is not a scientific measure, but a measure of a researcher's activity.
TRI is a sort of metric which is also not a scientific metric.
Another fact is that millions of researchers are not members of ResearchGate, so they could not be listed in any ranking.
Among RG members, many excellent researchers do not bring their contributions to ResearchGate (articles, conference papers, books, answers, questions, projects, ...), so their low RG scores and questionable TRI are not valid information about the quality of a researcher, but a measure of their inactivity on the RG portal.
To Ljubomir:
Many thanks for your insightful comments.
I agree that RG is distinctly different from the purportedly leading scientific rankings, such as WoS and Scopus, and to a lesser extent, Google Scholar, among others.
Every ranking metric has its strengths and limitations.
The Citations and h-index on any ranking website are based on the papers that are uploaded thereon.
Most of my research output that is available on other websites has been reproduced on RG.
Much more importantly, I am thankful to RG because we would otherwise not be able to have a fruitful exchange of ideas on the other internet websites.
This is one of the strengths of RG.
Vive la différence.
I fully agree with the statements given by Ljubomir Jacić. Great answer, sir!
I do not understand the why of this assertion by Ljubomir:
"Among RG members, many of excellent researchers do not bring their contribution to Researchgate (articles, conference papers, books, answers, questions, projects....), so their low RG score and questionable TRI are not valid info for the quality of researcher, but measure of their inactivity at RG portal."
I may say yes on their RG score, but would disagree on TRI!
To Chuck:
As Chia-Lin Chang has asked in:
RG = Facebook for academics?
RG is a multimedia internet site for academics, so it uploads everything that is available on the web.
Nowadays, Facebook recognition can translate into greater remuneration, even election wins; we may be dragging our feet, but some people convert this RG recognition into income while we argue about what it is and is not.
Dear Chuck:
Treating RG metrics as commercial property sounds like an interesting and novel development.
My friend Roland asked a fine question a long time ago:
https://www.researchgate.net/post/Did_you_ever_had_the_feeling_that_RG_is_just_a_scientific_Facebook
There are fine contributions there.
Another fine resource was this article:
Is ResearchGate Facebook for science? It is a must-read article!
https://www.sciencemag.org/careers/2014/08/researchgate-facebook-science
Many thanks for the helpful update.
The knowledge base has changed a lot in the past 5 years, including the addition of TRI, which is not like Facebook or Google Scholar.
RG to me is a pleasant place because you get to interact with many scholars. In fact, 3 of my current coauthors resulted from interactions on RG. People, sometimes good is better than perfect. "Don't ask what RG can do for you, ask yourself what you can do for RG."
RG Strong!! Bed time WoW!!
It would be interesting to know who could serve as benchmarks, regardless of how scientific RG is or is not... maybe across disciplines? I would very much like to get to know the best ones in the area of marketing.
To Mirna and Chuck:
Unfortunately, RG does not seem to provide Citations data along the lines of the Web of Science, so rankings of journals and individuals according to RG metrics would not seem to be possible.
Mike, you are correct, but that is understood between us. What we think is often unacceptable to the jury. They see us as crying about every minor discrepancy. Have you not wondered why, on average, academics never win an election?
To Chuck:
Perfectly understood, and agreed.
I know Ed Leamer, who ran as an independent candidate for VP, rather well.
He is supremely clever, but we know who won the election!
Who said you need brains to think?
Mike?
You are correct again! I did not know he ran for anything, but I know I came across his name in econometrics, as one of those raising questions about modeling. Two successful ones are associated with TX: Phil Gramm and Dick Armey. They are old now!
To Chuck:
Brains (or less colloquially, intelligence) can be useful when you are discussing intellectual matters with intelligent people, but I am not sure that is either necessary or sufficient for most things in life.
Excellent point! It is always a fight between the majority (non-thinkers) and the minority (thinkers). The non-thinkers want a binary answer to everything.
To Segun Michael Abegunde :
Many thanks for your well-considered, clear and precise Answer about the grading system.
Chuck A Arize has given a number of cogent and heuristic arguments regarding implicit rankings.
With ResearchGate management and future developments in digitization, anything is possible. This new group is creative and listens.
To Chuck:
This is very interesting and useful.
It cannot hurt to listen to the clients who participate in RG activities.
Evidently they are listening! I have written to them twice, and much of what I asked for has eventually become a reality.
To Chuck:
Well done.
To quote a well-known American proverb, the squeaky wheel gets the grease (and/or oil).
I would like to thank all the academics who have provided interesting and stimulating Answers to the Question.
The equation to calculate TRI is:
TRI = (0.5 x Citations) + (0.25 x Recommendations) + (0.15 x Full-text Reads) + (0.05 x Other Reads)
Reads are by RG members.
As the h-index is not used in calculating TRI, the sign and size of the correlation coefficient between TRI and the h-index is entirely empirical.
The algebraic equation given above is for calculating TRI, which holds on a weekly basis as well as in aggregate for all previous weeks.
This formula is used to calculate TRI for every academic on RG:
https://www.researchgate.net/application.researchInterest.ResearchInterestHelp.html
As some academics are not calculating TRI correctly, the steps are as follows:
Citations are marked clearly under "Stats overview", with a weight of 0.5.
Recommendations refer only to Publications, with a weight of 0.25, and differ from Recommendations reported under "Stats overview", which also include Recommendations for Q&A.
Reads refer to Full-text Reads (with a weight of 0.15) and Other Reads (with a weight of 0.05) by RG members, and differ from Reads reported under "Stats overview", which also include Reads for Q&A.
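For concreteness, here is a minimal sketch in Python of the calculation described above. The weights are taken from the RG explanation; the function name and structure are merely illustrative, not an official RG implementation:

```python
# Minimal sketch of the TRI calculation described above.
# The weights come from the RG explanation of Total Research Interest;
# the function itself is illustrative, not an official RG implementation.
def total_research_interest(citations, recommendations,
                            full_text_reads, other_reads):
    """TRI from the four inputs listed under 'Research Interest'."""
    return (0.5 * citations
            + 0.25 * recommendations
            + 0.15 * full_text_reads
            + 0.05 * other_reads)
```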
It is not valid, dear Chia-Lin Chang. See my data! My last TRI was 899.90. My total numbers are listed here.
Ljubomir Jacić
Dear Ljubomir Jacić :
The equation is precisely as described on the RG website.
I cannot access the required data as they are only available to you.
The data for "Reads by RG members" and "Publication recommendations" are available under "Research interest" in "Stats overview".
If incorrect data are substituted into the equation for TRI, the calculations will also be incorrect.
Dear Chia-Lin Chang, here are my latest data. Please try to check the equation.
Citations: 64
Full text reads: 1104
Reads: 3357
Publication recommendation: 2166
Reads by RG members: 4333
Other reads: 3229
Research Interest: 900.60
Thanks a lot,
Ljubomir
To Ljubomir Jacić and Chia-Lin Chang :
I hope you will not mind my participating in your interesting Q&A, but I feel strongly inclined to do so as this is related to my Question.
The appropriate data for calculating TRI are available only under "Research Interest" in the Weekly Report (and nowhere else):
Citations
Recommendations
Reads by RG members
Dear Ljubomir Jacić and Michael John McAleer :
Thank you for the interesting and informative exchange.
The data obtained under "Research Interest" will definitely lead to the correct calculation of TRI.
Dear Ljubomir Jacić :
I trust that the calculation of your TRI was accurate using the equation.
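As a worked check with the numbers you posted above (just the arithmetic of the equation, not an official RG calculation):

```python
# Worked check using the numbers posted above in this thread.
tri = (0.5 * 64        # Citations
       + 0.25 * 2166   # Publication recommendations
       + 0.15 * 1104   # Full-text reads
       + 0.05 * 3229)  # Other reads
print(round(tri, 2))   # 900.55, very close to the reported 900.60
```

The small residual difference presumably reflects rounding or the timing of the weekly snapshots.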
Dear Ljubomir, dear Chia-Lin, dear Michael John,
I am also sure that Chia-Lin Chang is right in her previous response concerning the calculation of your TRI with respect to the weekly RG Report.
Best regards
To Romeo:
You are, of course, correct in stating that the simple linear equation for calculating TRI, as presented by Chia-Lin Chang, is right and is entirely consistent with the explanation about TRI, as provided on the RG website.
The academics who have stated that the equation does not work are choosing the data from different headings rather than from a single source, namely "Research Interest", as explained patiently and repeatedly.
It is hard to fathom how such simple instructions can be so easily misunderstood.
Dear Romeo:
I would hardly expect a mathematician not to understand a simple linear equation for calculating TRI, especially as it is based on the explanation of TRI on the RG website.
Using a single data location, rather than multiple data sources, seems to be confounding some academics.
"They're digging in the wrong place."
Indiana Jones
"Raiders of the Lost Ark"
I believe Chia-Lin and Mike have made understanding TRI much easier. If I remember correctly, you all pointed to exactly where the numbers come from, so thank you.
To Chuck and Romeo:
It is gratifying to see acknowledgement of the accuracy of the equation for calculating TRI presented by Chia-Lin Chang, based on the explanation given on the RG web page, especially as virtually all previous responses have been in the context of the equation being wrong!
As stated previously by Chia-Lin Chang:
If incorrect data are substituted into the equation for TRI, the calculations will also be incorrect.
Enough said.
Dear Chuck A. Arize, Romeo Meštrović and Michael John McAleer:
We seem to be making progress in convincing academics who are interested in calculating TRI to use the equation, as presented, and the appropriate data from the correct location.
Dear friends, I doubt the input data that were used for the calculation of TRI. For example, see the attached PDF files with my statistics, dated March 25th, 2014. See the number of reads and other relevant data. At that time, my RG score was 56.74! I have attached the RG mail alert for this date. It seems that those data are very inaccurate!
I agree with Ljubomir Jacić. Many researchers are not yet on ResearchGate. I was unaware of ResearchGate until three weeks ago.
RG needs to do more to improve its visibility and to ensure better citation tracking before it can get there. I don't think the time is now.
To Ljubomir Jacić :
Are you saying the equation for TRI given by Chia-Lin Chang, which is consistent with the explanation given by RG on the TRI web page, is incorrect, or that the data provided by RG are incorrect?
Your current RG score is well above 56.74, so this must have been a long time ago.
To Abhijit Mitra :
Many thanks for your informative Answer.
The h-index does not directly affect TRI but, as Citations and the h-index are positively correlated, the h-index is likely to be positively correlated with TRI.
To Madhav Nepal :
Many thanks for your supportive Answer for Ljubomir Jacić.
With more than 15 million academic members, RG is continually increasing its academic impact worldwide.
To Dickson Adom :
Many thanks for your interesting Answer.
I agree that Citations, which affect TRI heavily, need to be tracked accurately.
Dear Michael John McAleer, the input data are bad! See the attached documents and compare the number of reads and the other data.
To Ljubomir Jacić :
I am unsure to which data you are referring.
The directions given in this thread for calculating TRI are clear, although the explanation of where to locate the data is not stated explicitly on the RG web page.
As stated 4 days ago as an Answer to this Question:
The appropriate data for calculating TRI are available only under "Research Interest" in the Weekly Report (and nowhere else):
Citations
Recommendations
Reads by RG members
In short, you need 4 numbers to calculate TRI, and they are all available under "Research Interest" in the Weekly Report.
To Ljubomir Jacić and Michael John McAleer:
The "vintage" data from old files may be suspect, but the latest data files for calculating TRI should be spotless.
To Chia-Lin Chang:
I agree that it is difficult to analyze the accuracy of old data files, but the current data provided by RG seem to be accurate.
In particular, it is easy to calculate TRI using the data provided by RG and the equation you presented, which is consistent with the explanation given on the RG website.
To Adel Badri:
Although Google Scholar was not one of the options given in the Question, does GS provide an equivalent and reliable metric to TRI in RG?
Many thanks.
ResearchGate management has helped more people understand how they came up with their numbers. Chia-Lin and Mike, through better communication, have aided in the accomplishment of this goal. RG wrote it, but some people took the time to sell what is written.
To Chuck A. Arize:
You are too kind, as always.
Trying to clarify the algorithm used to calculate the RG score has been difficult because many academics, especially in the Humanities and some of the Social Sciences, do not seem to understand the meaning of an algorithm.
What is truly mystifying is the lack of awareness by some academics of the equation for calculating TRI, as presented by Chia-Lin Chang.
The equation for calculating TRI is consistent with the explanation given by RG on the web page for TRI.
This is a simple linear equation that can be understood using high school algebra.
Dear Chuck,
You are very gracious, which is much appreciated.
As Michael John McAleer mentioned above, the RG score is not an easy sell as not every academic is aware of the meaning of an algorithm in respect of:
"a computer-generated empirical weighted sum of published articles, unpublished research, projects, Reads, Recommendations, and Q&A, based on the RG scores of all other academics in exchange and interactions."
However, I am amazed at how anyone could fail to understand the equation for calculating TRI, and where to access the appropriate data, when it has been explained simply and clearly.
Repeating the same incorrect calculations using incorrect data will lead to the same incorrect outcomes.
Adel Badri:
I agree with you that Google is good, but I do not believe you can prove that it is more reliable. Furthermore, it seems weird that you jumped into this discussion with a one-liner type statement. We are excited that some people explained what seemed difficult for our friends, and then you threw in this confusion.
Dear Professor Jacić,
I am writing to you regarding several of your previous answers and discussions about the calculation of your TRI, and following the instructions you recently gave me (via an RG message) regarding the reading/reporting of the RG score for the current week ending, as well as previous ones. So, according to the formula from RG management, which Prof. Dr. Chia-Lin Chang has explicitly restated, I am 100% sure that your TRI is calculated exactly after each week, as is the case for each of the other more than 15,000,000 RG members.
Kind regards,
Romeo
I also agree with Ljubomir, particularly about the input data. It seems that we already have too much rubbish directly or indirectly related to metrics, ratings and rankings. Nevertheless, I may be wrong. Who knows whether, however absurd it may seem, the fields of partial differential equations, numerical integration or numerical analysis could be explored to obtain suitable indexes?
To António Manuel Abreu Freire Diogo :
Many thanks for your detailed Answer, especially about the perceived inaccuracy regarding the input data.
Might you have any specific concerns about the input data, as well as the metrics relating to ratings and rankings?
Thank you.
Dear Chuck:
I agree with your assessment about Google Scholar and its perceived reliability compared with RG.
To Kofi Agyekum :
Many thanks for your agreeable Answer.
Might I ask what particular concerns you might have regarding the input data?
Chuck A Arize, what do you use to evaluate or assess the performance of a researcher (grants, promotion, etc.)? RG?
How did you conclude that I would not be able to justify (prove) it? Is that a prejudice?
Adel Badri
(A) For promotion and tenure, we rely on SSCI, Thomson Reuters, ABS, Harzing, ABDC, or a journal list that combines all of those plus other rankings, especially Scopus SJR. Each of these tells whether a journal is A+, A, B or C. All the journals referred to here are peer-reviewed and mostly blind-refereed. A grant alone counts just like one publication.
(B) Other general assessments, such as awards, look at (A) plus the h-index from Google and RG, and collegiality.
You cannot prove it because statistical analyses require the assumption of unbiasedness, but you are already biased towards Google; secondly, you would need significant data and resources to establish reliability, and even the ordinary Cronbach's alpha would not answer your question. Third, you cannot prove the null hypothesis concerning Google and RG in terms of reliability.
To Adel Badri:
No one is arguing against the usefulness of Google Scholar, but in what sense do you feel that GS is more "reliable" than RG?
Are you referring to coverage or accuracy, or both?
In any event, GS does not have metrics like the RG score or TRI, regardless of whether they are meaningful or useful.
Many thanks.
Excellence versus metrics and mediocrity. See:
https://www.researchgate.net/post/How_can_you_differentiate_Science_from_Pseudoscience_in_your_social_and_academic_life
https://www.researchgate.net/publication/271847229_Optimizacao_Tridimensional_de_Sistemas_Urbanos_de_Drenagem
https://www.researchgate.net/publication/332447065_Peak_Flows_and_Stormwater_Networks_Design-Current_and_Future_Management_of_Urban_Surface_Watersheds
https://link.springer.com/article/10.1007/s11192-017-2396-9
https://www.researchgate.net/post/How_we_can_improve_our_h_index
Thanks
To António Manuel Abreu Freire Diogo :
Many thanks for the interesting publications and post.
Both TRI and the h-index depend on Citations, but TRI is not "standardized" according to Publications.
As Citations have a weight of 0.5 in calculating TRI, the two metrics have the same unit of measurement and would be highly correlated.
The h-index is a matrix combination of Citations against Publications.
No ranking is needed, dear Michael John McAleer. I do respect the fact that they provide the h-index of a researcher, even if it is not updated, alongside the RG score and TRI.
Part A - Independently of the merit or demerit of any metric, input data can always be manipulated and are frequently inconsistent.
Part B - No Comments
https://link.springer.com/journal/11192
“Scientometrics
An International Journal for all Quantitative Aspects of the Science of Science, Communication in Science and Science Policy”
https://link.springer.com/journal/10732
“Journal of Heuristics
The Journal of Heuristics provides a forum for advancing the state-of-the-art in the theory and practical application of techniques for solving problems approximately that cannot be solved exactly.”
To Chia-Lin Chang:
I agree with your view that Citations and TRI can be compared directly, especially as the weight for Citations in calculating TRI is 0.5, or 52.63% ( = 0.5 / 0.95) of the total weight.
To Ljubomir Jacić :
Rankings are frequently given because they are easily accessible based on data availability, and not because they are needed.
I agree that the informative h-index (with and without self-cites) is given under "Scores", but the h-index is not used in calculating TRI.
To António Manuel Abreu Freire Diogo :
Many thanks for your detailed explanation.
I would invite you to expand on how, and which, metrics can be manipulated.
Michael John McAleer: " I would invited you to expand on how and which metrics can be manipulated."
I am not an expert in this field. Perhaps you should ask an inspector.
To António Manuel Abreu Freire Diogo :
As you made the statement without any supporting evidence, I was inviting you to provide some.
I know many experts in the field, and have asked several of them the same question.
The statement is yours, dear Michael John McAleer: "I would invite you to expand on how, and which, metrics can be manipulated."
I am not an expert in this field. Perhaps you should ask an inspector.
You have repeated an earlier Answer after I explained why I was asking if you wanted to elaborate.
Apparently not.
To Kofi Agyekum :
Thank you for your interest in the Question and the helpful Answers.
The Web of Science / Clarivate Analytics provides rankings of journals and academics by Citations and the h-index.
All the more reason for RG to provide rankings of academics by TRI, Citations and the h-index.
Many thanks for an interesting and convincing comparative Answer about academic research quality.
RG is growing and will eventually get to the Web of Science level, if it is in their business plan. Most schools do not pay attention to Web of Science rankings of academics. They do pay attention to journal rankings, in some cases, for the promotion of teachers. With respect to institutions, it is largely sports, scholarship and proximity.
To Chuck:
Many thanks for the clear explanation regarding the importance of sports, scholarship and proximity in attracting students.
RG is progressively looking stronger.
As stated on other threads, TRI has an internal inconsistency in that Citations include self-citations, whereas it is not possible to record Reads and Recommendations of your own publications.
Just as the h-index is reported with and without self-citations, TRI should also be reported with and without self-citations.
An associated TRI-index that matrix balances Publications against TRI should also be reported with and without self-citations.
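By analogy with the h-index, one possible definition of such a TRI-index (my own illustration, not an official RG metric) would be the largest k such that k publications each have a per-publication TRI of at least k:

```python
# Purely illustrative sketch of a hypothetical TRI-index, defined by
# analogy with the h-index: the largest k such that k publications
# each have a per-publication TRI of at least k. This is not an
# official RG metric.
def tri_index(per_publication_tri):
    scores = sorted(per_publication_tri, reverse=True)
    k = 0
    for rank, score in enumerate(scores, start=1):
        if score >= rank:
            k = rank
        else:
            break
    return k

print(tri_index([10.2, 7.5, 4.0, 3.1, 0.5]))  # 3
```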
It is not possible at present, but maybe this will be something for them to explore in the future.
This is possible depending on the algorithm applied. Not impossible.
To Chuck A Arize and Taher Alkhalaf:
Many thanks for your helpful responses.
TRI is an interesting and novel RG extension of Citations by adding Publication Recommendations and Reads by RG members.
Citations and TRI, which purport to measure academic research quality, impact and influence for individual academics and research journals, can be vastly different across disciplines, so they should be restricted to disciplines, and occasionally to sub-disciplines.
The h-index is a measure of academic research productivity that is based on balancing Publications against Citations.
Citations and TRI are scalar metrics, so can be compared directly.
The h-index is a matrix measure, so it cannot be compared with Citations or TRI.
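For reference, here is a minimal sketch of the standard h-index computation, using illustrative citation counts:

```python
# Minimal sketch of the standard h-index: the largest h such that
# h publications each have at least h citations. The citation counts
# below are illustrative.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # 4: four papers with at least 4 citations
```

Two academics with the same total Citations can thus have very different h-indexes, which is why the h-index cannot be compared directly with scalar metrics such as Citations or TRI.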
To Chia-Lin Chang :
Many thanks for your detailed, informative and constructive comparison of TRI, Citations and the h-index.