What are your thoughts on the new Research Interest metric added by RG recently on publication pages? Does it really represent anything? It combines reads, recommendations and citations, but does that have any additive value to these usual metrics?
If an article is read a thousand times on the publisher's website, it will do nothing to the score. In the same way, RG does not always count the correct number of times an article has been cited. I wouldn't put too much stock in the Research Interest score. Don't misunderstand me: RG is a great way to interact with researchers, access material and encourage the exchange of ideas. The score, though (just like the original RG score), says little about the quality of a researcher or their research in their specific field. RG is social media. Great social media - but just social media, nevertheless.
I think Research Interest (RI) is better than the RG score. The RG score is somewhat diffuse: it is focused on researchers, on growing a community of specialists, and on their reputations. RI, on the other hand, is based more directly on the impact of individual research papers. Its weighting system is:
A read has a weighting of 0.05.
A full-text read has a weighting of 0.15.
A recommendation has a weighting of 0.25.
A citation has a weighting of 0.5.
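The weighting list above amounts to a simple weighted sum per publication. Here is a minimal sketch of that calculation; the weights are taken from the post, while the interaction counts are hypothetical example numbers, not real RG data, and this is not RG's actual implementation:

```python
# Weights for the Research Interest score, as described in the post above.
WEIGHTS = {
    "reads": 0.05,
    "full_text_reads": 0.15,
    "recommendations": 0.25,
    "citations": 0.5,
}

def research_interest(stats):
    """Weighted sum of a publication's interaction counts.

    `stats` maps interaction type to count; missing types count as 0.
    """
    return sum(WEIGHTS[key] * stats.get(key, 0) for key in WEIGHTS)

# Hypothetical publication: 100 reads, 20 full-text reads,
# 4 recommendations, 6 citations.
example = {"reads": 100, "full_text_reads": 20, "recommendations": 4, "citations": 6}
print(round(research_interest(example), 2))  # 12.0
```

This also makes the relative weights easy to see: one citation is worth ten ordinary reads, and a full-text read counts three times as much as a plain read.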
So RI is more precise, quantified, non-competitive and, ultimately, more encouraging to researchers. But that doesn't mean I'm in favour of abolishing the RG score! Thanks.
My main purpose on RG is to exchange perspectives with colleagues all over the world, to learn from them, and to contribute something that can teach my colleagues in turn. I leave it to the RG team to decide how they determine our RG score.
I am not sure what types of algorithms they use to calculate the RG or RI scores, but both scores fluctuate. I agree with Immo Weichert: it's just social media. And social media is a "good servant but a bad master".
I like both, but they are both somewhat meaningless. All I know is that above an RG score of roughly 25 you are probably dealing with someone who has received some grants, reviewed a few data sets, and is seen as an expert in their field. My profile says my RG score is higher than that of 92% of people on RG, but my RI is only higher than 82%. Both seem really high for someone with only 50 or so publications working in a very niche field. So I am very pleased about both, but equally confused about what they mean and what I would need to do to increase them.
My main objective when I use RG is to find people who are working on the same subject and to exchange points of view with them - to help people, and to ask questions in case I need help. As for the score, I rely on the RG team to calculate it.
Both metrics are intriguing, but it is difficult to determine what meaning to attribute to them, since the comparison appears to be made against everyone on RG. I have noted that there is an option to compare your RI score with others who share a similar professional domain; however, even these domains are quite broad and cover a wide variety of interests. So, while both scores are intriguing, neither appears to provide a measure that is truly meaningful.
The new RI is very glitchy. In the last 8 weeks it has not changed once, despite 10 to 40 full-text reads every week (which should add at least 10 × 0.15 = +1.5 per week). And it is now impossible to find the page about the RI score in the help centre...
I have to say that I prefer this new measure to the RG score. The TRI components are actually related to our own research, it is more transparent, and it lets us see each component's value for each publication.
Though perhaps I would prefer an equivalent measure that did not include citations. That way we could keep the existing measure of confirmed usefulness for subsequent research (i.e. citations) and have a new measure - one that could add value to the former without diluting it - reflecting popularity and direct or indirect present and future interest (i.e. recommendations, reads and full-text reads).
As stated in other threads, TRI has an internal inconsistency: its citations include self-citations, whereas it is not possible to record reads and recommendations of your own publications.
Just as the h-index, a metric that balances citations against publications, is reported both with and without self-citations, TRI should also be reported with and without self-citations.
An associated TRI-index that balances publications against TRI should likewise be reported with and without self-citations.
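To make the h-index comparison concrete, here is a minimal sketch of computing an h-index with and without self-citations. The per-paper citation counts are hypothetical example data, and this is just the standard h-index definition, not RG's or anyone's official implementation:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Hypothetical per-paper counts: (total citations, of which self-citations).
papers = [(10, 3), (8, 1), (6, 4), (4, 0), (1, 1)]

with_self = h_index([total for total, _ in papers])
without_self = h_index([total - self_c for total, self_c in papers])
print(with_self, without_self)  # 4 3
```

In this toy data set, excluding self-citations drops the h-index from 4 to 3, which illustrates why reporting both variants, as the post suggests for TRI, can matter.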