I agree with Andres. I think it is up to the editor to intercept papers that are not a good fit for the journal, or that have other critical errors, before sending them to multiple reviewers who would have to spend unpaid time reviewing a full paper that is a bad fit from the start. It does not mean that the paper is bad; it may simply not fit within the aims of the journal.
Yes, I have received rejections, and I also reject poorly shaped MSs when I am acting as an Editor.
Two main reasons to do so:
1) being out of the scope/aims
2) poor scientific quality and/or writing
This practice is appropriate, in my opinion, under these two circumstances (others could be added as well). Besides, I think that finding adequate reviewers is becoming increasingly difficult, and this is a way not to overload them with 'irritating' MSs.
This depends on what you mean by "rejection". An Editor of a quality journal may decide to reject a submission for a variety of reasons, none of which have to do with the quality of the submission and everything to do with the readership of the journal, the importance of the material to a given area covered by the journal, and the placement of the matter presented within that area. Realistically, this is a rejection nevertheless. We should also consider the competition between submissions, the esteem in which the senior (corresponding) author is held, etc.
Today, it is not unusual for a quality journal to reject 80 percent or more of submissions. Of course, what is meant by "quality journal" may vary, but consensus tends to equate this quality with the impact factor of the journal.
Because the Editor and the editorial review staff themselves constitute a form of "peer review", it is a serious question whether this faithfully represents what is understood to be "peer review". Circular reasoning? Maybe.
It seems quite appropriate to me. Why would an editor waste the valuable time of reviewers on a manuscript they know is not suitable, for whatever reason, for that journal? I think peer review is needed in order to publish, but peer review is not needed in order to reject.
Thank you very much to all of you. My question was posed simply because I wanted to hear opinions from other colleagues. I have also been a section editor of some Microbiology and Plant Pathology journals, and I currently review papers for several of them.
Also, I would like to use the conclusions of this forum for a course I am giving on "Scientific communication and exploitation of results" in a Masters course on "Food Biotechnology" at my University. This question was asked by an advanced student!
I obviously agree with most participants' opinions. However, it is obvious that some journals make a first selection or decision, especially when the evaluation is not clearly in favour of rejection/acceptance, taking into account the expected impact of the paper and its contribution to increasing the impact factor of the journal. A kind of good marketing.
I am surprised by this question because, as an Editor-in-Chief, I reject around 50% of all submissions without involving reviewers. Peer reviewing does not mean that everyone around the world is entitled to submit a manuscript and demand peer reviews.
Around 30% of the submissions miss the scope of my journal. Should I waste the time of my colleagues, who do honest work, for nothing? In addition, reviewing such submissions also wastes the time of the authors.
About 10 to 20% of the submissions are likely not to survive the first round of reviewing for various reasons.
Many authors send me manuscripts dealing with topics on which we have published a plethora of articles, but fail to cite any of them in the references. I send them a long list of abstracts of related papers and ask them to convince me why I should publish their work.
Altogether, I receive around 600 manuscripts each year. Since we ask at least four reviewers per manuscript, I would need to recruit 2,400 reviewers each year. This is why I reject 50% of the submissions. Of the rest, a maximum of 20% survive the peer reviews.
Continuing on my question of September 2013, there is an article by Kate Yandell in The Scientist entitled "Riding Out Rejection" that I recommend to my question followers.
The address is: http://www.the-scientist.com/?articles.view/articleNo/42261/title/Riding-Out-Rejection/
I have indeed rejected several such papers as an editor or a reviewer on quite a few occasions, before a thorough review by external referees. There is every reason to do so: when the scope is wrong for the journal it was submitted to, when the theory is shallow, when the problem studied is not motivated well enough, or is even non-existent or old news, when the numerical tests are horrendously badly done, when the language usage is so poor that it would take far too much effort to cure it, and so on. This is a growing problem, too. The world of science is too big, I fear.
I have had about a hundred rejections without peer review, despite having had my various works professionally edited on many occasions. My belief is that they are due to bias: my conclusion is controversial, though it is also correct. A compilation of them can be found here: http://www.baur-research.com/Physics/rejections.txt
I can understand and agree with the previously mentioned reasons for such rejections ("desk rejections"). However, is it justified to reject a manuscript because, as an editor, you have many submissions at the moment and cannot review it?
I have experienced this, without any suggestion as to when an appropriate time to resubmit would be.
Yes, I have received desk rejections. In retrospect some were justified, and I also took the rejection to mean the papers needed polishing, which I did before submitting to another journal. On one occasion I did not feel it was justified, as the editor held a different definition of and approach to consciousness than the one I was defending (I did not know this before submitting). The paper went straight through in another journal. But I feel the first journal may have missed out by not at least exploring the possibility that there are different ways of looking at consciousness (even though the published work of the editor himself was centred on one aspect only). This must also apply in other areas of science. From this experience I learnt that many journals are very specific in their approach, perhaps influenced by an editor who does not want to go beyond his comfort zone. It also means a reader will have to read articles from many journals to get an overview of the scope of the topic they are interested in. Maybe one day we will have journals which are a bit more eclectic.
Yes, I have received such rejections many times from editors. Long ago I sent two papers to two different journals; after two months I received adverse comments on both MSs regarding the subject and presentation. I then sent those papers to the British Journal of Phycology (now the European Journal of Phycology). The reviewers and editor appreciated the work and asked me to merge the two papers into a single one. Sometimes paper rejection depends on the expertise of the reviewer.
It is bound to happen, because not every paper can be sent for peer review. If the paper is out of scope with regard to the objectives of the journal, then obviously peer review has no meaning. Secondly, if the paper lacks the desired quality, be it in language or structure, it is the chief editor's rightful decision to reject the paper and save the precious time of the reviewers. Good journals mostly receive far more submissions than they can actually publish, so the criteria have to be strict, and this is quite justified.
If a paper is out of scope or has language or structure issues, then it is usually rejected by a junior editor performing initial quality checks, and the reasons are specified clearly in the rejection letter to the author. This is not a desk rejection; it is a valid quality rejection. A desk rejection is when an editor-in-chief decides, after a paper has passed the initial quality checks, that he is not interested in publishing it. A standard form rejection letter is then sent to the author, which provides no valid information as to the reasoning. This is where there is a clear and obvious bias loophole in the system.
As per the publication norms of the journal, the editor has the full right to reject an article if he feels it will not contribute anything new to that particular area.
It is strange that the revised paper was not sent to reviewers but was rejected by the editor. The only plausible explanation for this is that one of the reviewers of your paper was also your editor. The other reviewers' comments (if any) may not have been strong. Hence, the editor was not satisfied and rejected your paper.
More information can be derived from reading the rejection letter.
1. I guess you mean reviewed, when you in fact wrote revised. :-)
2. I have on several occasions rejected papers that were absolutely horrendously bad. I see no problem with that: in order to salvage such a mess of a manuscript, the editors themselves would have to find a way through that mess that would make the paper publishable. That is NOT the editorial board's main duty. If ever a rule is set in place where the editors have to fix the mess that some authors send to journals, I for one will resign from the editorial board.
Yes, I have received such rejections more than three times.
The first rejection was because the journal had too many manuscripts in the queue, so on that basis the editor rejected our submission.
In the second case, the editor rejected the manuscript without sending it to reviewers, on the grounds that we compared our algorithms with the classical algorithms in that field.
In the third rejection, the reason was that the Chief Editor was unable to convince the editorial board to accept the manuscript for peer review in the journal.
Yes, it is quite possible, and I have experienced it myself due to my paper being out of the journal's scope.
It is regular practice, owing to reasons such as:
- lack of fit with the journal's scope
- lack of high-quality research or writing
- sometimes the manuscript is not framed as per the journal guidelines, or the journal is already overwhelmed with high-quality research publications in the pipeline
I have not experienced this personally, and I have yet to meet anyone with such an experience.
However, I strongly agree with a prompt review process and prompt notification of the peer-review outcome from the editor.
Review reports, as commonly written, highlight and point out the grey areas in the manuscript.
In my opinion, the language used in communicating a rejection is the most important thing. It should not be derogatory but corrective and encouraging, as the paper will usually have areas of strength despite its many weaknesses. E.g., it may say:
"Good experimental work and data collection, but lacks good presentation, citation and explanatory power."
I am just having an ugly experience with a "poor" Editor's job: he rejected my paper based on the similarity index (36%), but without analysing it. The Editor did not provide me with the entire similarity report, only the similarity index (%), the number of matching sources, and a list of the 5 sources with the highest similarity. The Editor stated that the references and bibliography section had been excluded from the similarity check.

However, I performed the similarity check using the paid iThenticate software and obtained a similarity index of 32% with 163 sources, close to that reported by the Editor (36%). The complete submitted manuscript was checked, including the bibliography section, acknowledgments, and authors' affiliations, in order to reproduce his result. Of the 163 sources, 156 had less than 1% similarity, and more than 50% of the sources had similarity ONLY with the bibliography section. So the bibliography section was not properly excluded from the similarity check performed by the Editor. I reported this fact to the Editorial Board, along with a complete analysis of the similarity report (the Editor's job), and the Editorial Board suggested that I resubmit my manuscript.

Before resubmission, I rewrote some parts of the manuscript to reduce the similarity index, and I performed (again) the automated plagiarism check using the paid iThenticate. The minimum match length was set to 10 words, as 10-11 words is the recommended minimum match length for automated plagiarism checks. Reducing this length to 8 words (the default in iThenticate) results in too many small similarities, which are mainly "common terms/phrases" that appear in many different sources. Even with the value of 10 words, I obtained many similarities of this type.
I obtained a similarity index of 28% (12,970 words, 185 matches, 139 sources) for the entire manuscript. However, excluding the reference section (bibliography), the authors' affiliations (organization names and addresses), and the acknowledgments ("common phrases", organization names), the result of the similarity check was 9% (9,883 words, 65 matches, 39 sources). The highest similarity obtained was 1%, for 2 sources out of 39; the other 37 sources had a similarity of less than 1%. Of the 39 sources, 4 had similarities ONLY in references cited in the text.
The manuscript contains many geographic names, names of faults, geologic formations, climatic phenomena, organizations, software, methods, processing steps, algorithm names, satellite names, public database names, terminology commonly used in different areas, etc., which have to be cited continuously. They cannot be altered, and usually no specific citation should be added for any of them. However, the manuscript was rejected again, based on a "high similarity index" (36%), without a complete report being provided. The Editor stated again that he had excluded the reference and bibliography section from the similarity check, but this cannot be true, because the first two sources (of the 5 that he provided to me) have similarity only in the bibliography section. I found that iThenticate does not recognize every reference format, so the Editor must check the entire report to verify that the bibliography was excluded properly; but he did not do this, basing his decision (twice) on the raw similarity index value. Additionally, the small minimum match length used by the Editor meant that my manuscript about landslides showed high similarity with a paper about diabetic disease (source 4)! Summarizing the above, I am very disappointed with this poor editorial job.
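The effect described above can be illustrated with a toy sketch (hypothetical numbers and a deliberately simplified match model, not iThenticate's actual algorithm): a raw similarity index that counts bibliography overlap and very short "common phrase" matches can be several times higher than an index computed after excluding the bibliography and enforcing a minimum match length.

```python
# Toy illustration of why a raw similarity index can be misleading.
# All matches and word counts below are hypothetical.

def similarity_index(matches, total_words, min_match_words=10,
                     exclude_sections=()):
    """Percent of the manuscript covered by matches, after filtering out
    matches that are too short or fall in excluded sections."""
    kept = [m for m in matches
            if m["words"] >= min_match_words
            and m["section"] not in exclude_sections]
    return round(100 * sum(m["words"] for m in kept) / total_words, 1)

# Hypothetical match list: words matched, and the section they occur in.
matches = [
    {"words": 300, "section": "bibliography"},  # reference-list overlap
    {"words": 150, "section": "bibliography"},
    {"words": 9,   "section": "body"},          # short "common phrase"
    {"words": 40,  "section": "body"},          # genuine textual overlap
]

# Raw index: everything counts, including bibliography and short phrases.
raw = similarity_index(matches, total_words=1000, min_match_words=1)

# Filtered index: bibliography excluded, minimum match length of 10 words.
filtered = similarity_index(matches, total_words=1000, min_match_words=10,
                            exclude_sections={"bibliography"})

print(raw, filtered)  # 49.9 4.0 -- the raw index is over 10x higher
```

The point of the sketch is only that the same manuscript yields very different numbers depending on the filter settings, which is why an editor basing a decision on the raw index alone, without inspecting the full report, can be badly misled.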
The following is an example. I just received the rejection recently.
I have read your manuscript, AMOP-D-20-00339 "Two adaptive derivative-free conjugate residual algorithms for constrained nonlinear monotone equations"
With regret, I must inform you that I have decided that your paper cannot be accepted for publication in Applied Mathematics and Optimization.
Unfortunately, your manuscript, as valuable as it may be, does not conform to the standards of mathematical proofs required for publication in AMOP in support of illustrative numerical computations. Thus, it cannot be accepted for publication in AMOP. You may wish to submit your manuscript to a suitable engineering journal.
I would like to thank you very much for forwarding your manuscript to us for consideration and wish you every success in finding an alternative place of publication.
It is apparently perfectly acceptable for a journal editor to reject any submission based solely upon his personal opinion as to whether it will pass peer review or not. This was even defended, I believe, in a scientific article from the seventies, as a way to save time. It is idiotic, because anybody can tell you that it saves time to ignore the things that are your job to do.
Here are plenty of examples: www.baur-research.com/Physics/rejections.txt
Yes: and it is a totally shameful, unethical act by the editor, especially when looking at the names and the source of the paper (if it is from a third-world country, the rejection rate is high without even looking at the content of the paper).
Many researchers from my university and other regional universities have had the same experience of an immediate "one-day rejection". If they add one author from a Western university (without any major contribution being added), the acceptance rate becomes very high.
What a shame! We work hard on the research for more than a year, and we get a rejection in one day without anyone looking at the content.
Examples of journals with "one-day rejections":
MDPI "Applied sciences"
IMechE "Journal of Systems and Control Engineering"
Yes... several times... My latest encounter was early in April 2020. My 28-page (single-column), sole-authored manuscript was desk-rejected by the EiC within less than 24 hours. I emailed the EiC directly and inquired about this rather weird decision. He ignored my first email; three days later, I sent him a reminder and specifically asked him to reply. He immediately responded and provided a rather shallow, "by the book" response with fictitious and superficial reasons for the decision. For instance: there weren't enough references (although there were 56); numerous grammatical/typographical errors (without pointing out any); problems with organization (where exactly?); unclear contribution (although the list of contributions was given early in the paper); lack of comparative results (as if 8 figures were not enough?); and others. It was clear to me that he had copied and pasted the reasons directly from some other document, as most of them seemed totally irrelevant to my manuscript. I emailed him again and asked him to be specific and provide a more detailed review. He simply replied that he was too busy to do that, would not be responding to any other emails, and wished me the best of luck (how lame!).
I decided to take this up with the Journal Manager (not for the first time), who---unsurprisingly---sided with the EiC (to save face?). She was like, "He is a well-known guy in the field and he knows what he is doing." My response was: "OK, so let me tell you what happened. He simply didn't see big names on the manuscript, and it came from Kuwait University, so he decided to reject it right away." She responded with "We treat all papers equally" and bla bla bla, and even suggested several other potential journals for my manuscript (as if that wasn't humiliating enough!). I was like, "Thank you, I have been publishing for years."
To prove my point, I re-submitted my manuscript (as is) to another journal of the same publisher literally 30 minutes after receiving the rejection decision, as I was not expecting much from the EiC or the Journal Manager (from past experience). The good news is that it was accepted in early August 2020 (i.e., within 4 months of submission). Maybe I should send the EiC and the Journal Manager a copy of the acceptance decision...
In general, I have found that publishing with some researchers from European, UK, or US institutions is much easier... Otherwise, the AEs or the EiC will find a way to reject the paper: if not a desk rejection, then after receiving the reviews, and irrespective of their significance...
Apparent weaknesses of customary peer reviewing (for journals):
- reviewing is anonymous
- rejection of a paper, if it happens, comes without explanation beyond the notorious disjunction 'not in scope of the journal, by topic or by quality' (thanks for telling)
- thus, zero transparency
and
- reviewing takes, people say, up to 2 years of response time, which is really poor
While scientific reviewing is an important part of scientific activity, it seems that the customary way of doing reviews described here suffers a lot from the immense number of papers scheduled for reviewing.
I have sent lots of rejections as Editor-in-Chief. I have also allowed my editors to write rejections based on their own expertise, but under my control. The justification is that all editors had been selected by the Editorial Board as knowing the scope of my journal sufficiently well. Any of us was eligible as a reviewer; thus, two of us would suffice as reviewers.
However, I would never reject anything, or allow others to reject a submission, on the basis of content which we personally disliked. Rejections were sent regularly if the method of a study was not appropriate to support the results. A common example of this is selecting the wrong population of participants, e.g. students who simulate top managers. My opinion is that results not obtained on the basis of sound methodology should not be discussed at all. The authors would be allowed to write an opinion paper, but not a full paper presenting a study.
Certain topics, e.g. TAM (the technology acceptance model), were almost fully excluded from peer reviewing after we had published a number of studies, because enough is enough.
On the other hand, we would try to treat the authors fairly by allowing them to name a list of reviewers if we suspected that our reviewers would tend to reject a submission for certain reasons. E.g., all my reviewers are likely to believe that the participation of users in the development of a work system is beneficial. A paper suggesting the opposite would have almost no chance. The authors of such submissions were asked to name five potential reviewers, out of which we selected one. If the reports of the two reviewers differed substantially, we assigned a third or even a fourth reviewer.
In a few cases, we published papers without an external review if we assumed that a review would cost time but not change the message of the paper.
I have found some voices in this thread suggesting that rejecting a submission without review is not acceptable. During my last year, I had to process about 800 manuscripts. Should I have gone for 2,400 reviews and evaluated them all?
True, I have named some reasons, but there are others. Anyone in the world has access to any editorial system and can submit whatever s/he wishes. Some people are smart enough to submit the same stuff to various journals with a different title and abstract. Does the editor need to ask peers to review that nonsense?
Another reason, in my case, had been caused by the marketing of our publisher. A funny person had tried to attract submissions and put a variety of terms in their ads that did not apply to my journal. All of a sudden, I received manuscripts from India that were all out of scope. But only from India. Given that authors from India tend to speak better English than those in many other countries, I was highly surprised. After about two years, I found out that the terms from the marketing had been tagged as keywords.
If somebody were to look at our statistics, 98% of submissions from India were rejected because they had nothing to do with our scope.
Yes, it happens many times! Some editors reject manuscripts, even without reading them, for various reasons. Sometimes they refer you to open-access journals instead.
It happened to me several times. In two cases, my papers moved from journals with low IFs of 2 or 3 to Q1 journals with IFs of 6.5 and 7.2. Do your work honestly and do not worry so much about where it is published!
Sometimes the editor-in-chief, or even an editor, can reject the paper without peer review, as they are experts in your field. For example, maybe your paper is out of scope.
This is called a desk rejection. The editors of a journal make a judgement call on whether a manuscript is worth sending for review or not. Having few reviewers to tap into, they reserve their review requests for articles that seem to have good potential for publication. The rejected article may have a major defect, or may not fit the area covered by the journal.
There are no universal "rules of a peer-review process". You can find the rules a given journal applies on its website, and those rules apply when the makers of that journal decide to initiate a peer-review process.
As editor-in-chief, I have decided to reject a manuscript under certain conditions without asking anybody else. The first condition was missing the scope of my journal.
If an input passed this, I checked whether or not the content was sexist, nationalistic or racist. The examination includes the potential of the results, figures and tables to be misused for the purposes named above. In the latter case, the author was asked to modify them. E.g., authors from South Africa had found that the black population made less use of the Internet. Their goal was not racist, but this result could be used for racist purposes. Thus, I asked the authors to re-evaluate their data with another grouping of the subjects. They did, and the result was that the main factor was social status. This was acceptable. Similar procedures apply to papers dealing with gender as a variable.
The next examination was the adequacy of the methods used in relation to the results. If the method is apparently faulty, I would dismiss even the most interesting results. If the method is questionable, I would ask a colleague who is an expert in methodology. Any results we publish shall be based on sound methodology.
The last condition is the most difficult part of the review process, because a meticulous application of all the rules of sound methodology may end in the rejection of all manuscripts. In fact, one of my associate editors accepted only one paper in three years. Well, that is definitely not wise for a journal.
The real challenge was the rejection rate. We had to accept no more than 10% of the overall input. Is it helpful for authors if we initiate a peer review process that takes months but would be unable to publish their work for years?