Web-based statistics such as Facebook "likes" and visits to websites as registered by different counters are manipulated every day. I run a website for a large research project and I placed a so-called flag counter on it to see how many visitors there were and where they came from. One day the visits to the site skyrocketed from tens to thousands a day. It turned out that some well-meaning soul had directed a robot to my website, and I had to install a filter to stop these automated visits.
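As an aside, the kind of filter I mean can be quite simple. Here is a minimal sketch of screening automated hits out of a visit log by user-agent string; the function names, log format, and robot markers are all illustrative assumptions, not the actual filter I installed:

```python
# Hypothetical sketch: filtering automated hits out of a visit log.
# Each log entry is assumed to be a (user_agent, ip) tuple.

BOT_MARKERS = ("bot", "crawler", "spider")  # common robot user-agent substrings

def is_automated(user_agent: str) -> bool:
    """Flag a hit as automated if its user-agent contains a robot marker."""
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

def count_human_visits(log):
    """Count only hits whose user-agent does not look like a robot."""
    return sum(1 for user_agent, _ip in log if not is_automated(user_agent))

log = [
    ("Mozilla/5.0 (Windows NT 10.0)", "203.0.113.5"),
    ("Googlebot/2.1 (+http://www.google.com/bot.html)", "66.249.66.1"),
    ("Mozilla/5.0 (X11; Linux x86_64)", "198.51.100.7"),
]
print(count_human_visits(log))  # 2
```

Of course, a determined manipulator can fake the user-agent string, which is exactly why such counters are so easy to inflate.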
If download statistics from OA journals were given any real weight in determining career and grant applications, not just the desperate but the unscrupulous would be pretty quick to start using similar means of obtaining inflated statistics.
Dean mentioned self-citation in a way which suggests that there's something manipulative about self-citations. Not necessarily so. In fact, I'd be surprised to see a paper without a number of self-citations. Almost all papers by active researchers describe a continuation of previously published research, which usually needs to be referred to. Of course, I could try to squeeze half of my publication list into the reference list, but a competent reviewer/editor should spot that and ask me to remove the superfluous references.
Therefore, in my mind, citation frequency is not as easily manipulated as web-based statistics.
They certainly will be for some time, but would it not be logical to change that system? We all agree that citations are an indicator of interest in a paper. But what about the number of downloads? If a paper has been downloaded 200 times, for example, I think that means it is interesting. What do you think?
A paper downloaded 200 times may indeed be interesting, but it may also simply have an interesting title (and have been discarded after some examination).
If the number of downloads came to matter, then some authors would download their own paper many times (and from lots of computers, if the IP address were taken into account), and that certainly would not be a sign of the paper's quality.
Yes - you could download your own manuscript many times but, to me, you would have to be desperate to do that. Then, like self-citation, I would imagine that the 'system' will work out a way to detect it and cancel it out.
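The simplest version of such a detection system is just to count each paper's downloads by unique IP address, so repeated downloads from one machine count once. A minimal sketch (function names and record format are illustrative assumptions; real systems such as COUNTER use more elaborate filtering):

```python
# Hypothetical sketch: download counts that collapse repeats from one IP.
# Each event is assumed to be a (paper_id, ip) record.
from collections import defaultdict

def deduplicated_counts(events):
    """Return paper_id -> number of distinct IPs that downloaded it."""
    seen = defaultdict(set)
    for paper_id, ip in events:
        seen[paper_id].add(ip)  # a repeated (paper, ip) pair adds nothing
    return {paper: len(ips) for paper, ips in seen.items()}

events = [
    ("paper-A", "10.0.0.1"),
    ("paper-A", "10.0.0.1"),  # self-download from the same machine: ignored
    ("paper-A", "10.0.0.2"),
    ("paper-B", "10.0.0.1"),
]
print(deduplicated_counts(events))  # {'paper-A': 2, 'paper-B': 1}
```

This only raises the bar, of course - someone downloading from many computers would still get through, as noted above.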
Good call as usual, Bjorn - and another good answer. Yes - I agree: like internet fraud etc., the publication system is always trying to keep ahead and 'milk it' for what it is worth. I don't disapprove of self-citation at all. Take a wee peek at an article or two of mine and you will see what I mean. I was more referring to, say, my h-index score minus my self-citations on ISI; they found a system to compensate for that.
Hi, There are a number of issues here. First of all, downloads or usage are only of value if you apply standards to provide consistent measurement practices. Of course, publishers and the publishing industry have thought of this; see the COUNTER Usage Factor Project [ http://www.projectcounter.org/usage_factor.html ]. The problem with basing a measure on downloads rather than citations is that there is no real "process" in downloading a paper. Citing involves a thought process: reading the article, understanding the content and applying it to your own writing project. My guess is that the figure of significance is going to be higher with downloads. Five citations in WOS might be good; five downloads could just be random chance. We might have to recalibrate our expectations.
In the end it comes down to two questions: does the source of the data apply common and transparent standards, and is it from a credible source? I would suggest that the much-derided citation indexes meet both these criteria, which might in part account for their success. Suppose I have a paper that has been cited twice according to WOS but fifteen times according to Google. Much as I am flattered by the higher Google figure, I would suggest that the lower WOS figure has more credibility.
Yes, Matt. I agree broadly with your comments. However, a few 'variations' on your themes from me. I get cited quite a lot, and sometimes in some 'weird and wonderful' journals. With many of those types of citations, the authors have clearly either not read my work (perhaps only the title and/or abstract) or not understood its context - and I find myself being 'mis-quoted' etc. And this isn't always just in the more 'obscure' journals citing me. Don't get me wrong - 'any publicity is good publicity' in my mind.
I, like you, am always interested to see the differences in my citation scores between databases such as ISI, Scopus and Google and, of course, Google is always the most generous. Google Scholar, however, I find a little more objective - and it's good to see how my work has been integrated into books, theses, national reports etc., beyond just journal citations. In fact, how my work is reported in these formats can be better than in journal articles.
Hi Dean, I think many comments about Citation Metrics seem predicated on the idea that there is a perfect metric out there. Of course there isn't, and all metrics have their challenges. My observation is that citation is only an indicator or signifier that a process has taken place and is no guarantee of the quality or thoroughness of the process. You can be cited for good, mad, bad and indifferent reasons, as you point out. BW Matt