I think most of us have heard about Thomson Reuters suspending the Impact Factor of 51 journals due to "anomalous citation patterns". I'm wondering if you have ever experienced, or heard about, something similar between authors. Like: "Hey, I'm writing an article, your paper X is relevant, I will cite it IF you cite one of my previous articles in one of your future papers (if the topic is appropriate)." What are your opinions about this kind of arrangement?
Tight citation circles (cartels) diminish science by excluding the many worthy publications of those outside the circle. And if editors and grant reviewers are within the circle, there are no checks and balances on the practice. This leads to the "emperor's new clothes" problem; the top scientists all believe the same things, talk about the field in the same way, and perpetuate the same types of thinking in their publications and grants. Those who say, "yes, but!" are silenced, killing innovation.
I have the impression that people like to stay within their community with their citations. This is just a random example from my reference collection to illustrate this: all the cited articles are about complex oxides <http://dx.doi.org/10.1016/j.proche.2009.07.209>. No citation is from a different field.
Now I am not so sure what that means. Do people only read articles that come from their community? In that case it may happen that author, journal editor, reviewers and readers all come from a rather closed circle. That would be such a citation cartel. There may be no intent behind it, just a lack of interest in looking around. People are perhaps happy in their little pond and don't care to swim out to the big ocean where the sharks are lurking. In an extreme case that may evolve into cargo-cult science.
In my opinion such cartels inhibit the progress of a field, because people solve the same problems again and again, unaware of existing solutions. This has already happened on a rather large scale. Semiconductor physics and electrochemistry are quite similar, for example. The Shockley diode equation <http://en.wikipedia.org/wiki/Diode#Shockley_diode_equation> is essentially the Butler-Volmer equation <http://en.wikipedia.org/wiki/Butler-Volmer_equation>! Both describe exactly the same process: electrons crossing a barrier. And people perform C-V measurements (aka cyclic voltammetry) in both semiconductor physics and electrochemistry. "Fermi level pinning" corresponds to a redox buffer, etc. etc. Yet they are still taught separately by different departments at university!
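To make the analogy concrete, here are the two equations in their standard textbook forms (a sketch, with symbols as usually defined, not taken from any particular paper):

    I = I_S [ exp( qV / (n k T) ) - 1 ]                                      (Shockley diode equation)
    j = j_0 [ exp( alpha_a F eta / (R T) ) - exp( -alpha_c F eta / (R T) ) ]  (Butler-Volmer equation)

In both, the current depends exponentially on a driving voltage (the bias V or the overpotential eta) divided by a thermal energy scale (kT/q per carrier, RT/F per mole of electrons).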
For the result it does not matter whether the trade is conscious or not. No harm is done if people ALSO cite their friends. I cite friends and colleagues without any mutual agreement between us. I just look at what they are doing, and if it fits my subject and I think it's good work, I cite them. I also let them know of my publications if I think they may interest them. In my opinion it's a problem when people ONLY cite their friends and ONLY get cited by their friends. That may evolve into a parallel-universe situation. If such citation patterns emerge, action should be taken by the publishers. Again, for the damage it does not matter whether there was an agreement or not. Maybe it's even worse when people are not aware that they have separated themselves from the rest of science.
This kind of practice is not confined to friend circles: publishers themselves ask authors to cite relevant work from their own journal, as this increases the impact factor.
@Lucas,
I like your market value argument! The "market value" of a citation, for a journal and for a researcher, is directly a result of the fact that money and positions are assigned based on these metrics. As soon as there is a metric, there is a way to game it. And as soon as the metric has some sort of influence, there is an incentive to game it. And people will do it wherever they perceive a possible advantage.
I am coming to the conclusion that metrics based on journal impact factors or citation indices are harmful. Journals and authors have an incentive to publish "troll" articles, that is, low-quality work that makes grandiose claims. Such articles attract a lot of attention, and as far as I know there is no such thing as a "negative citation" or "non-endorsing citation" for poor-quality work. The arsenic-based bacteria from Mono Lake would be such an example. And authors have an incentive to publish a lot of low-quality articles on fashionable topics instead of solving hard problems or tackling new projects that do not get so much attention or require a lot of work. And of course the metrics create an incentive for citation cartels.
In software development there have been attempts to quantify developer productivity in terms of lines of code written (LOC). All these attempts failed, because such systems are easy to game and give an incentive not to write good code, but to write overly complicated code. Small code is better! Bibliometrics should in my opinion not be used at all. Instead the quality of the work should be judged on an individual basis, i.e. articles should be read and reviewed for quality and creativity instead of just counting citations.
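As a toy illustration of how easily a LOC metric is gamed (a hypothetical example, not from any real codebase): both functions below do exactly the same job, but the second scores several times higher on a naive line count.

    def total(xs):
        # Concise, idiomatic version: one line of logic
        return sum(xs)

    def total_padded(xs):
        # Same behavior, deliberately inflated to game a LOC metric
        result = 0
        for x in xs:
            temp = x
            result = result + temp
        return result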
Thank you for all your comments. My non-explicit assumption behind this question was the market value of citations. Since in every topic the amount of relevant literature is growing, one could think: "What is my interest in citing a specific work, when I could equally cite others to indicate that this idea/result/method showed up elsewhere?" Should I cite an author from my country, increasing his scientific capital (independent citation count), knowing that we are competing for pretty much the same grants and job opportunities? Would it not be wiser to cite someone from (a faraway country)? Etc.
Anyway, what I see is that citation HAS a market value. It counts. And it can easily be marketized as a form of scientific capital (which is, as we know, to a certain extent exchangeable for jobs, grants, invitations, fellowships etc., which means money, power and reputation in the long run).
Also, I see from time to time little groups of scholars in my country citing each other's work regularly, while not dealing with relevant works of other researchers who are not members of their "inner circle". And that bothers me a lot.
There are two issues in your question to deal with. The first is about scientific ethics: if a paper is relevant for your work, you should cite it (even if you do not like the author(s), or you do not agree with its contents). The second issue is about corruption: even if science is thought to be objective and strictly rigorous, scientists are human, and so sometimes a few get into bad habits, such as the one you mention. Punishing the publication might be a way to encourage the editors or referees of other journals to be more critical about this situation.
OK, but what if you could cite, e.g., 20 relevant papers after a statement, but you cite only 19, intentionally omitting the disliked author? Or you cite all relevant papers from a friend, but only the most significant ones of other authors? Or (the best one), you do not even read anything from a disliked colleague, so you won't have to cite anything from that pesky little b*stard!?
It is not my inner paranoia telling me this; I have seen it happen.
Well, that is a personal choice. And yes, I see that happening now and then. But above all that, there exists the ethics of scientific research. When a scientist chooses not to read someone's paper for a non-scientific reason, he becomes a weaker scientist, because he is missing an important piece of knowledge that might be a keystone for an efficient follow-up of his own research.
Then, depending on the field of research: if there are many different people working on a broad area, you might be able to avoid citing a reference by citing others; however, if you work in a highly specific area, or on a particular question with few groups working on it, it is more difficult to avoid citing competitors' papers when they are relevant. In many journals the number of references is constrained; thus the author(s) should carefully choose which are the relevant papers.
Formal or informal citation cartels are fairly easy to spot using Web of Knowledge. Search all papers of the author, get the citation report and then list all citing articles. Use "Analyze Responses" to get a listing of the top citing authors. If most citations stem from only a few authors, I'd take this as a sign of a "citation cartel", or of work that is mainly of interest to a limited group of people. In either case, I'd be cautious about using citation counts as a measure of the person's scientific achievement.
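For anyone who wants to make this check more systematic, here is a minimal sketch (Python, with made-up data; the list of citing authors is assumed to have been exported by hand from such a citation report):

    from collections import Counter

    def top_author_share(citing_authors, top_n=5):
        # Fraction of all citations contributed by the top_n citing authors
        counts = Counter(citing_authors)
        total = sum(counts.values())
        top = sum(n for _, n in counts.most_common(top_n))
        return top / total if total else 0.0

    # Hypothetical citing-author list for one researcher's papers
    citing = ["A. Smith", "A. Smith", "B. Jones", "A. Smith",
              "C. Lee", "B. Jones", "D. Patel", "A. Smith"]

    share = top_author_share(citing, top_n=2)
    print(f"Top-2 citing authors account for {share:.0%} of all citations")

A share well above one half coming from a handful of authors would be the warning sign described above.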
I agree with the statement of N.S. Magesh (Manonmaniam Sundaranar University): "This kind of practice is not confined to friend circles: publishers themselves ask authors to cite relevant work from their own journal, as this increases the impact factor."
Also, I agree with several comments of Janos Toth; all the others are discussions on the meaning and ethical aspects of the "authors' citation cartel", which we all already know.
If two scientists work in the same field and know each other's research, they will cross-cite their articles. I am not sure this is weird in any way. Say a group works at the same university: they should know their own colleagues' research, so they may and should cite it if they use it. Say a PhD coordinator has a team of PhD students working on different parts of the same project: they should cite both the professor's initial papers and their colleagues' papers that lay the grounds for their own research. That's not wrong, and should not be seen as wrong.
If measurement does or does not take this into account, that should not be the author's concern. We do not work for citations, we work to create science.
I find citation metrics wrong. They do not distinguish positive from negative citations, and they ignore groundbreaking work that requires years of promotion before it is noticed by the scientific mainstream. What they do measure is the fame of the researcher, which may be related less to quality than to position and luck.
There is such a thing as "fashionable" research and such a thing as "outside the box" research. Citations are more a measure of fashion in science.
Some truly cutting-edge research may be both written and understood by only a tiny community. Should they be dismissed for citing each other?
I do agree with Bradut Bolas
Unless the work is related it can't be cited.
I recently sent a paper to a journal for publication, but it was sent back with the editor saying that papers from within the journal should be cited in my work. I only understood the implication after reading this question. And this is a Springer journal. Unfortunately, I could not find any work relating to what I was doing in the journal's archive! The paper is still pending with them.
It took a long process of explaining to the editor (using my limited internet access) for my publication to be accepted by the editorial board.
Is this the norm?
@Samuel: No, that is not the norm! That is completely unethical every which way. (1) You should only cite papers which you quote, paraphrase or summarise. You know which they are. (2) Referees might have picked up that you overlooked an earlier work, in which case they should give you the full reference for you to fix it. (3) How can you ever announce a new discovery that has no precedent to the field?
Please blow the whistle on the editor.
@Ian, thanks. The journal has not published a paper in that area, but others have. The only one close to it is in press, and I was forbidden from citing it. I had to add it with the DOI.
@Samuel: Perhaps you should consider whether a journal that has to resort to such practices to improve its impact factor is worthy of publishing your results...
@Janos
I, like Ashwin, agree with Teresa's observation that "tight citation circles (cartels) diminish science by excluding the many worthy publications of those outside the circle". It not only diminishes science but also produces scientists who cannot think of anything original. It has also given birth to a phenomenon known as cut-and-paste research, made possible by the internet and the computer, whereby a senior's help is taken to edit the text and put the research in an organised form without his knowledge. Later, a language expert is hired to change the language drastically so that the plagiarism is not detected.
As far as requests for CITATION are concerned, it is reminiscent of the old days when progressive writers, poets, educators and others were not only promoted and propagated by the stalwarts in their circles, but also inducted into the faculty without any care for posterity.
That is a conflict of interest built into the scholarly community by design: it tries to uphold the principle of honesty while serving particular interests, but then faces a dilemma with the principle of justice in general.
This situation happens even more in collaborative projects. I think the team is an inappropriate model for research; the individual work pattern is better. Many projects produce fakes that damage human happiness; we propose abandoning the model!
FYI, http://scholarlykitchen.sspnet.org/2012/06/29/citation-cartel-journals-denied-2011-impact-factor/ links to the story
Interesting conversation, thanks to all the contributors. This is of timely value for us in India, as there's a serious discussion on impact factor and number of citations for humanities research.
Dear Maxim, it also bothers me that in the humanities the idea of a template that follows a typical citation method is of comparatively recent origin, and many journals have individual or customized citation methods. For example, in the field of law, the Journal of the Indian Law Institute has been around for half a century and follows its own method of citation. We really do not know anything about the IF of this journal, yet since it has been around for long and has consistently brought out good articles, it is seen as a good-value publication. But then, if I go by that yardstick, I would be making a subjective analysis. And where the IF is not clearly known, citation cartels could be more problematic: university research groups and their career development could be left to the mercy of vaguely calculated citation numbers based on some search-engine statistics. Could it be possible to scientifically track the citations, apart from the method that you have already suggested? In India the university body brought in a points-based system awarding points per publication (separately for national and international), irrespective of the IF. I don't know if that is a fool-proof method, but it seems like a model that overcomes the problem of citation cartels.
Best
Dear Sai, could you please briefly describe this points-based system for national/international publications? I'm an editor of a small OA journal, and since our first issue I have experienced that:
1.) 30-40% of the submitted manuscripts had authors from Indian institutions.
2.) to date I have not been able to forward a single one of these articles to peer review because of excessive plagiarism (e.g. half-page blocks copied verbatim from other publications; I really hope this does not strike you as an attempt to play some kind of race card, but I have observed this phenomenon mainly in manuscripts by Indian authors).
3.) These manuscripts were loaded with citations of other Indian-authored papers with little to no relevance to the topic examined in the paper. Many of the cited papers were published in well-known predatory journals (journals aiming to collect author fees and publishing manuscripts regardless of their scientific quality) and also contained plagiarized text.
I cannot help but assume that this phenomenon is somehow related to some unique academic metric system which rewards international publications, and citations from international publications, to such an extent that authors risk everything to publish in any international journal. At least I do not think I can explain this within the frame of a general "publish-or-perish" argument. I have heard that, e.g., in China universities pay a concrete sum if a researcher manages to publish in an international journal. Do you have a similar policy? Could you please comment on these experiences, as I really want to understand the possible motives behind this behavior? Thank you.
Dear Janos,
Impact factor for Indian journals is a comparatively recent phenomenon, especially in the humanities. We published in journals that had been around for a long time and were acknowledged by senior researchers as the standard, and our work was left to the subjective evaluation of the people on these journals' boards.
To encourage quality research, the university regulatory body in India introduced a compulsory publication model, allotting a certain number of points to each publication in a refereed journal, different for national and international journals. Every faculty member has to gain a minimum number of points in the publications category by the end of the year. While the high point is that we are no longer left to the arbitrary evaluation of an editorial board, as was possible before, the downside is that there is a certain clamour to publish in order to meet the requirements, and hence quality suffers.
Predatory publishing has found a place now because there has been a mushrooming of journal publication houses, where there were very few quality ones earlier. It would have been better if the established journals had been taken over by reputed publishing houses, or had moved to a pay-for-download model.
The situation, though, is better with the few journals that are listed in the open-access database; the few I have accessed have their journal policy specified in detail.
Dear colleagues, I am sorry to raise some criticism. There is discrimination against papers coming from developing countries. Citation cartels also work against the publications of authors from developing countries: their papers may deserve citation, but they are not cited because they are outside the cartel. I mean, there are micro, mini and macro cartels. I have noticed that there are international citation cartels.
When you submit a paper for publication, you always get the comment that you failed to cite so-and-so's papers.
I agree with the colleagues that citation is not a measure of the quality of research, though it may measure research activity. Activity does not always lead to quality.
As great research centers and institutions multiply all over the world, the situation will get much better.
Whoever has the power has the say!
Thank you
I think that this is a challenging issue. Our challenge is to be part of the circle by studying that circle. Let us keep on trying.
Dear Maxim and Janos,
Found an interesting link in the Chronicle, with a few suggestions too.
One of the issues pointed out by the authors of the Chronicle article is known here as chain publishing. A research supervisor can present his group's findings, with minor differences and carrying all his students' names, to different journals - one result per journal - changing the order of authors, with his own name appearing first. He can then cite the finding from one published paper in another proposed publication, and thus increase the number of citations for any given paper.
http://chronicle.com/article/We-Must-Stop-the-Avalanche-of/65890/
Dear Maxim,
I am amazed. This is ridiculously hilarious. 3051 could possibly be the number of people working in this specific domain area all around the world. This can only be true if they are all collaborating.
Most of the time, and we have seen this in India, the supervisor gets authorship even though his contribution to the specific finding being reported is negligible and has already been reported in previously cited publications. There are many who are there by default. Students have no choice here because they want the publication, so they give up the first authorship. I wonder if we can suggest a model where a student's work gets due recognition with first authorship. We could also see whether all the authors should get the same value in their portfolios.
Maxim,
I thought this problem of large, by-default authorship belonged to industry-sponsored research. For instance, one paper of my husband's in medicinal chemistry letters had even the director and the president of every department they interacted with as authors. In industry this does improve a person's visibility, but in academia it is ridiculous to have the same group members quoting or citing each other's previously published work. This is like a mutual admiration society.
For one, this practice could be discouraged if journals started encouraging single-author and student publications, at least in their online versions. There could be journals dedicated to student publications, with seniors on the editorial board. The supervisor could be cited and acknowledged, but not given authorship. Since the incentive for the supervisor here is smaller, there would not be too many authors on a paper, and young researchers would get a chance to present their work for review. Could it also be possible to suggest a dry run/academic discussion of the findings before publication? The dry run is often done for case studies in the management sciences, where the participants and authorship are subjected to scrutiny.
Maxim, is it possible to have a method whereby the databases/abstracting and indexing sources put restrictions on the number of authors? For instance, all the support staff of a particular research group could be named in the acknowledgments instead of getting authorship. Or it could be arranged that the authors of paper PPP in journal AAA can be cited in paper QQQ in journal BBB, but not in the same journal. That would stop many people from publishing in the same journal and repeatedly citing previous work there, a practice which also works well for the journal, because it assures the journal's visibility.
In India we find that this chain publishing is largely done by supervisors to improve their chances, and the group itself is left with little say in the decision. I was thinking that this problem of chain publishing would be significantly reduced if the supervisor did not remain the corresponding author. Then there would be no incentive left to indulge in chain publishing. The corresponding authorship is a major issue, as a certain importance is attached to it in the metrics system of academic career progression. Remove that, and the supervisor will still get a citation but not the major value chunk, because he becomes just one of the authors, so he/she will not be scouting around for chain publishing. Often I have seen situations where the supervisor insists on being the first and corresponding author, and journals publish without batting an eyelid if they see a well-known personality as an author. When issues came up about such a paper, they just shied away from taking responsibility by pleading that they had only been added for the name value. If this can be discouraged, and the regulatory mechanisms for academic institutions bring out uniform norms on publishing methodology, then the situation could turn for the good.
Hi Maxim, the citation cartel also survives because of career metrics. I was wondering if it would be possible to suggest that a paper with more authors have less value in the metrics, unless it is interdisciplinary and collaborative work. For example, a book chapter or a monograph could have more value than a research paper with authors from A to Z. (One existing scheme along these lines is sketched below.)
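One standard bibliometric option in this spirit is fractional counting (mentioned here only as an illustration, not as a claim about any particular evaluation system): each of the n authors of a paper receives credit

    credit per author = 1 / n

A paper with 26 authors then contributes only about 0.038 to each author's score, while a single-author monograph chapter contributes the full 1.0, which automatically devalues padded author lists.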
The question is really not about multi-authorship, Maxim's favorite obsession, but about citation cartels.
First, in answer to Janos' follow-up question: NO, during my 30 or so years of scientific publishing I have luckily never, ever been approached by a colleague with a request that I cite his/her papers in exchange for similar "services".
Thinking about it, neither have I been asked by a journal editor to cite more papers from that particular journal to affect its IF. The only external influence ever put on me as an author is when manuscript referees sometimes point out a paper which would be appropriate to refer to. However, in most if not all cases these suggestions have been genuinely scientific, not a form of self-promotion.
Maybe I'm naive, but I honestly don't think that citation cartels exist in my sphere of research.
However, I think Bradut Bolos points out a much more subtle mechanism which is certainly a real problem: the biased selection of references cited by individual authors. As Bradut points out, in most cases this is probably unintentional - you cite the references you're most familiar with, and often these are studies by your collaborators. However, I've also seen cases, although rare, of research groups which avoid citing papers by competing researchers in such a way that it must be intentional. Here, the manuscript referees actually can have a correcting influence, by pointing out that the citations are biased.
Bjorn, you are very lucky that you have not been approached for citations. My experience is much shorter than yours, and I have already received one request for a citation on RG. So it is clear that we cannot leave our research careers to destiny and luck.
Some of the journals with a high IF have on their editorial board people who are not from the main discipline of the journal but largely work in functional areas, so we just cannot expect the journal referees alone to handle this issue. Maxim is right when he says we need a cogent citation system to be drawn up; it can be domain-specific, but we should not just leave the problem to the citation style-sheet.
While I do agree that this discussion is about citation cartels, we cannot ignore the link between the problem of huge author lists and the citation cartel, because the cartel could be the result of arrangements between the authors. Often we find instances where a senior professor agrees to be an author on a paper only if his work is cited, because for that senior person the citation is an important factor, and the junior researcher has no choice but to accept such a request, because it may gain him favourable access to the journal's referees. This is all happening around us.
To Maxim Kotsemir:
I do not think that punishing a journal for multiauthor papers is a good idea. Look e.g. at particle-physics papers, resulting from CERN collaborations! There are plenty of coauthors in these papers, but I can assure you that all these people worked hard to get the results that were published there.
The argument that researchers do not have to be put on the coauthor list because they have been paid for their work is silly, sorry to say. Are you paid for your work? Or do you work for free? If you are paid, then, according to your own argumentation, you should not be an author of papers based on the work you did for your institution, because the institution has already paid you. Going further with this argumentation: every research institution should employ paid and unpaid staff (well, the word "employ" is not completely adequate in the second case), and only the unpaid staff writes research papers, while the paid staff does all the dirty work. Otherwise no papers could be produced at all (unless you find a journal accepting papers with zero authors). :-)
Returning to the CERN example - I presume that in this case it is extremely difficult/impossible to divide all this intertwined research/technical effort into hundreds of separate papers. There can be (and surely are) some internal tech reports, but they are not interesting without a final result, which is one for all contributors.
We have drifted from your original question about "citation cartels".
It's funny to read in this thread about people who "worked hardly" and are "hardly working." I realize that on the surface this seems to mean they have put a lot of effort and time into their tasks, but to a native speaker of English "hardly" usually means "barely", that is, investing something close to the minimum possible effort. When I worked in construction as a young man, co-workers would joke as they walked by, "Working hard, or hardly working?"
Well, as an author I was forced to cite paper(s) by presumed reviewer(s), or any paper from journal XXX - several times. So what? Should I retract my paper from XXX?
@Linas: The presumed reviewer might have written a paper that you overlooked, and you should cite it if relevant. But you say "forced to cite", so it was not relevant. In both the cases you gave, you should have refused (and blown the whistle). However, I understand that in a specific research area there may not be any alternative journals to send your work to; so should you retract your papers and not publish at all?
@Ian, the situation was different - a small dataset and a high IF, so there really were no choices.
One option is this: I organize a conference in my city, I invite a small set of authors, I pay for their hotels, restaurants etc., and at dinner I kindly ask them to cite my paper. A beautiful cosmos: clean, scientifically and ethically built, and of course not 'dirty'.
Another worse example: I am the Director of a PhD program, I invite Authors all around the world, I pay their expenses (from citizens taxes, not from my wallet) and of course:
1)they invite me back to their home institution as a lecturer
2)they cite my paper
Beautiful academic world indeed...
If two researchers work in the same field, they can exchange knowledge and information in order to solve research problems only.
I see no problem with scientists and scholars who know each other and who collaborate on projects; that often raises the standard of the work accomplished. But when the word "if" is inserted (as in: I will scratch your back IF you scratch mine), the contrived collaboration is a different matter indeed, and not an honorable one. Keeping out bias, politics and favoritism advances science and keeps it ethically pure and scientifically more sound.
Boycott the offending journal - don't subscribe, don't submit articles, and tell your colleagues about the offensive practices.
A similar situation occurred with a friend of mine when he went to submit his synopsis. The director of research (who was not related to his field but had somehow co-authored a paper in it) rejected it, saying, "Why don't you cite my paper? First include that, then I will accept it."
Later the matter was sorted out with the intervention of his supervisor.
PS- The problem does exist and has many faces.
I have witnessed such cartels form and thrive! I was even invited to be part of such a cartel. I find it beneath me and, I agree, it diminishes Science in general.
The vast general public is all-too-aware of the incestuous nature of our present-day Science! They at least intuitively understand “something ain't right.” The merry-go-round reports: today this is bad, tomorrow it's just fine... The public is losing faith in Science and for good reason!
As for us scientists, we are well aware of the incestuous and insidious behavior that goes on. Cartels in publishing are only the tip of the iceberg! Let's mention research funding cartels, the revolving door from government employment to the contracting world, the insidious and corrupt relationships between contracting companies and the awarding of those contracts, and the seemingly endless parade of old familiar faces returning to the trough to feed once again. The corruption is so rife, so pervasive, that society may be referred to as a corpse, a rotting, putrid corpse.
The real question is: "To what extent should anyone place their faith or trust in the scientific journals of our day?" If it is admitted that science has been compromised by moneyed influence, by cronyism, nepotism and still more abominations, then what, pray tell, should we believe in?
I agree with much of what Luisiana said, except that it applies to some academics and scientists, not to the abstract figure of "Science". Our work should be judged on a case by case basis to avoid excessive generalization, thus being fair to those scientists who are maintaining a high ethical standard. What the latter phrase means is another matter for discussion, but there seems to be a rough consensus out there.
Even such things happen! Citations are managed in this way too! This is too much. My God!
It is a hopeful sign that the best-kept secrets of publishing are coming out of the darkness of the small community which always knew them. Who knows? Perhaps this enlightenment will give rise to a better science than today's.
Dear Demetris,
Till a few years ago, I did not know that people invite others to get invited in return. Actually, here in our country, in the Universities there is a custom followed. For the Practical Examinations in the Masters Programs, External Examiners are invited from other Universities. The traveling costs are paid by the University. The Heads of the Departments concerned usually decide who exactly would be the External Examiners. I observed that in some cases, those who are invited are Heads of Departments in other Universities. They in turn invite those people as External Examiners to their Universities!
Thereafter I observed that such people arrange conferences, and invite those people, of course as you have said, using money not from their pockets. In return, they too get invited to conferences! This nonsense in the name of reading articles is a regular feature in our country.
I never thought deeply about it. Now from your letter, it seems, it is a phenomenon the world over! Well!
In fact, in the Indian universities, to examine PhD dissertations the supervisor has to submit a list of examiners, which is used by the executive heads of the universities, the Vice-Chancellors, to select the two names who are then requested to examine the thesis. A few years back, I observed that in some cases the list of examiners is actually a list of friends and relatives! Of course, this has not been happening in all cases. But there are some insincere people who have been in this trade.
I have seen this happening a bit too often; cartels do operate in academia too, but it surely is unethical. Sometimes it is inevitable, as a very select group is working on a subject, so mutual citations have to happen; but when it becomes a regular practice, then of course it hurts and should be avoided. I have also witnessed many authors use your work but forget to cite the source, which again is not a desirable trait. Some journals are not so particular about all these practices either. But I would strongly advocate that the "you scratch my back, I'll scratch yours" type of behavior should always be condemned.
@Janos, I have started following you because of this nice thread. Actually, I got this thread from some of my colleagues, and I am very grateful.
I may say that I have met such practices in my country and the Balkan vicinity, but I did not know that such malpractices were so widespread; it just scares someone who is ethical and honest in science!
We should use our ResearchGate pages and discussions to fight against such anomalies!
@Ljubomir, you will be surprised to see that the 'civilized' nations are worse than the 'Balkan vicinity', and this will become apparent now that Serbia has taken the 'ticket' as an incoming country to the EU...
Dear all,
Your student/colleague has prepared the first draft of a paper.
He/she asks you, as a coauthor, to read/edit the text.
Do you add some references to it?
Do you offer your published papers to be included in the paper?
Do you remove some of references from the references list?
Is your purpose just to improve the scientific content of the paper? Or.....?
Certainly the REFEREES also fall into the same situation.
Now is the right time to avoid the undesirable problems/consequences mentioned.
LET'S DO THE RIGHT THING
Jones, you said it all.
I feel a few undeserving works might be tactically promoted by these citation cartels, but truly worthy works can never be suppressed. They WILL certainly catch the attention of the larger scientific community.
Such practices cannot be blamed if they are done within limits. Like a coin with two faces, good and bad things always occur at the same time. Short-cuts cannot be blamed either - they help you achieve targets while saving time; the only condition is that short-cuts must be safe. Self-citation is not a crime when the references are genuinely related to the theme of the paper; stacking, however, is not good. Many journals silently maintain the same impact points for years; without a few short-cuts or some management of citations that cannot happen. I have seen journals maintaining an IF of, say, 1.2 for four consecutive years - practically, how can the IF stay the same for four years without any noticeable difference? It is all science and some game too; some are exposed and some are not.
As far as I know, self-citation is not a crime. Likewise, cross-citing is not weird. Such practices are acceptable in the scientific arena within limits.
Such practice must at once be genuine in its relevance to the text of one's article and help shine greater light on a given construct, theory, or fact. To cite for any other reason is egregious and irresponsible citing.
I think that the number of independent citations is still the best criterion for measuring scientific output. A phenomenon of friends citing friends is first of all evidence of good networks within the field. I think it is much worse when the evaluation is performed by commissions that are incompetent in the field, using non-transparent procedures. Then the "good networks" are required within the institutions, a practice which can be far more corrupt.
The issue is still very interesting, and I think that one unfortunately cannot avoid such "malpractices", which have an impact on the "impact factor". When the number of publications and the number of citations (as countable, numerical data) are "must" criteria for academic/research evaluation, and when the obstacles of language are so serious, etc. etc., then there will be problems. In this discussion, expressions like "citation cartel", "independent citations" and "good networks" have come up; I would like to know whether any research has been done on this exact issue. How can we define "independent citations" or a "good network"? And can we say that ISI, which produced and published the Science Citation Index based on a particular set of scientific journals, is responsible for the creation of "citation cartels"?
Such practices should not be encouraged by Scientists' community.
Dr.S.Ravindran
And here is this week's largest citation cartel:
http://www.washingtonpost.com/news/morning-mix/wp/2014/07/10/scholarly-journal-retracts-60-articles-smashes-peer-review-ring/
Wow, Ian. As a peer reviewer this is frightening. Of course, it appears the underlying motive for this was to misinform the scientific community and by extension the public on automobile engineering testing results. Thanks for sharing this.
Thank you so much, Ian. So, can we still talk about independence? I think this is an issue that has been discussed for years and decades, but it still works like that. It's a scandal!
Dear @Ian, it is a criminal act! Peter Chen should not only resign, he should also be prosecuted in a court of justice! The same applies to Professor Nayfeh's retirement!