Most reviewers of academic papers are volunteers who give up valuable research time to do their best to improve our papers and evaluate their suitability for publication. Usually, authors welcome constructive criticism and applicable advice on all aspects of their papers offered by reviewers, even if they do not follow every suggestion and comment.
As a reviewer I try my best to finish reviews as soon as possible (in most cases I am asked to finalize the review within four weeks). Very often I think I can help improve a paper, be it the overall organization, the language, or the description of the theory and methods used to arrive at the stated results. Sometimes I decline the request because I do not have the knowledge needed to review a publication seriously.
As authors, we want to get suggestions that are concrete and practical enough to act on while meeting internal or external deadlines. What are your experiences with the review process as practiced in the current academic publishing setting? Some ideas:
- constructive vs. destructive criticism
- timeliness of review result
- appropriateness of language used by reviewers
- level of understanding of the paper under review
While I'm inclined to sympathize with Afaq's point that the debate may not be conclusive, I believe the topic is worth discussing even if it only affects the behavior of individual reviewers and submitters as they see points of view they may not have previously appreciated.
With regard to the sub-points of the original question:
- I have gotten benefit, in one case great benefit, even from destructive and occasionally insulting reviews. If the reviewer becomes emotional, I can learn what triggered it, and perhaps I may learn more about the underlying beliefs and assumptions that are driving thought in a field. If the reviewer sarcastically dismisses an approach, then I had better either justify the approach more carefully or find a new one. In one case I did find a new one and an important paper resulted. It is important not to overreact. At least the journal editor called for a review. That's first base, and not easy to get on! And at least the reviewer took the time to say something. The worst case is getting told "we have many good submissions and can't get to yours." Then you know you didn't make the cut but get no information. A few journals, even IEEE journals, do not give review feedback because the community is small and there is no way to keep the reviews anonymous. In that case it is essential to write the editor or someone else "in the know" and find out what the objection was.
- Timeliness seems to be getting better in the last few years. A delay means either the editor is having trouble finding a reviewer, or has lost track of your manuscript. A follow-up note is advisable after 3 months to make sure it is not the latter. That happened to me once. I had waited nearly a year! In another case, an editor asked me to remind him in about 3 months, and I had to do so 3 times (9 months) as he went through 3 reviewers before he found one who was able to complete the review.
- Most reviewers I've encountered are at least civilized in language, but if they don't like your work of course it comes through between the lines.
- The level of understanding is the most difficult area when new ideas or methods are used in a paper. Papers that make incremental improvements to accepted theory or methods are easier to review and get published more quickly. There is nothing to do here except keep trying to find ways to explain the transition from established thinking to your new methods.
Hope this is helpful to somebody. In addition to publishing in my original field, I have published cross-discipline in two other fields and do not have a PhD in any of them. So believe me, I've seen just about every possible response from a reviewer (including good ones). But it is possible to get through if you keep polishing your work and are willing to change it when you realize a better idea.
It's great to make comments that you think will improve the paper, but you should be careful, as these are typically a bit more subjective than other comments. In my mind, it's vitally important to make a clear distinction in your review between issues which you think need to be fixed for the paper to be publishable, and suggestions which you think would help the paper but which aren't, in your mind, absolutely necessary. Otherwise, useful feedback and helpful suggestions can look like onerous requirements for further work that isn't really necessary.
In my experience, many suggestions for improvements are followed, and some aren't because the authors prefer their approach, so I think it works very well for everyone (as long as you don't then go back and insist on a change that you called a 'suggestion' the first time around).
Peer review (PR) is not just about negative criticism. I think the reviewer should look at both the bright and the dark sides, mentioning the strong points of the work, suggesting improvements, and considering weak points in order to strengthen the work.
Reviewers' comments might be about deleting, modifying, or adding some information, but the reviewer is not in a position to decide on the final editing and publication of a paper; the editors decide such issues.
Although there is doubt that PR leads to high-quality work, part of the scientific community still thinks it is the only available filter for screening quality.
Also, I think that PR cannot be done only by experts in the field, and sometimes there is a need for statistical, methodological or even technical PR. Since PR is subjective (and sometimes biased as a result) and cannot cover every aspect of the work, it is unclear whether the PR checklists provided by some journals help the quality of PR.
PR is not just about journals. We experience PR process in different levels:
1. Proposal or grant application
2. Final report of the research
3. Self-PR before submitting
4. PR of our invited colleagues before submitting
5. PR by journals
6. Commentaries published in the journal after publication of our paper
7. Commentaries we receive through email or other channels from readers
8. And maybe quality control studies after publication of the paper
Each level of PR has its own characteristics and is worth researching.
Considering falsifiability in science, we cannot cleanly separate right science from wrong. I think every paper is capable of being published, or at least shared. We need to let researchers and readers acquire critical appraisal skills and use them while studying papers, since PR cannot guarantee quality.
Doing PR involves legal and ethical aspects: being an expert in at least one aspect of the work; considering the ethics of the paper (not only obvious plagiarism, which the journal can check, but also data fabrication/falsification, etc.); being polite in comments; replying on time; keeping the work confidential before publication and not sharing it; reporting reviewers' possible conflicts of interest; and so on. These are some of the issues I have faced while contributing to journals in different roles.
I agree that peer review is a huge responsibility and there are individual variations on how this is dealt with.
I found this 'how I do it' article very helpful in understanding the review process and how the responsibility can be managed.
regards, raza
Article How I Review an Original Scientific Article
I review for over 30 international journals. Most of them encourage reviewers to be 'positively constructive' - and I agree that this should be the aim of all reviewers. However, I would say that with a good percentage of submitted manuscripts that I review - this is not always easy. If authors submit 'careless' or poor quality work - or have not carefully considered the scope or house-style of the journal - then negative feedback becomes a given. Even with good quality manuscripts - there will usually be feedback to improve - which may be taken as a slight by the authors. That's natural and normal. As an author, when I receive this 'inevitable' feedback, I evaluate it - take on board what seems to be reasonable - and defend against what I think is unreasonable with a clear rationale.
@Farhad, your comments on the review process are very welcome and broaden the introductory statements of this question. Thank you for your time explaining your views.
@Justyna, yes, I'm interested in experiences with the feedback to your papers submitted for review, and your approach to 'inappropriate' feedback is rather helpful. Never do professional work when you're angry.
Just got feedback from a journal this morning. The two reviewers made similar comments and now, looking back, I can see exactly what they mean - and will get started on the revisions shortly. In this instance I can easily accept their comments and feel that incorporating them will make the paper more robust - so I am grateful to them for their time and effort. But I have had comments back on other papers where reviewers seem to have missed the point of the paper and made some rather odd requests and where one reviewer said the opposite of another. So, my experience with the 'quality' of review is that it varies. Despite having had some 'interesting' feedback, I have generally experienced constructive support. So, as an author I feel that peer-review has specific problems but overall the system works. Finally, getting feedback helps me develop as a peer-reviewer and (I hope) increases the quality of my feedback to others.
Correct Eric - a balanced response here. As you suggest - I think that, overall, the review process works and, generally, there is a correlation between better feedback and the history and quality of the journal you are submitting to. There are, of course, always going to be exceptions - but that tends to lie with individual reviewers every now and then - rather than say all reviewers of your paper. Personally, I welcome detailed feedback on my manuscripts. There are often nuggets of useful information that help to refine and better the final product. I'm not naive enough to think that I've ever submitted the 'perfect' draft manuscript. If anything, I am more wary of reviewers that offer very little feedback - either positive or negative.
It depends on the quality of the journals. In percentage terms, my experience has been more than 90% satisfaction. Many of my recent papers were in new journals, and it was great to see reviews of good quality.
You might be interested in reading my paper on reviewing of statistical papers, entitled "How to reject a statistical paper", downloadable from RG ( https://www.researchgate.net/publication/237012528_How_to_reject_a_statistical_paper._European_Science_Editing).
While reading, please remember - it's ironic!
Article How to reject a statistical paper. European Science Editing
Ramanan - how do you gauge 90% satisfaction? Reviewers don't usually give a mark - or is this how you've interpreted the feedback?
As an author, I have received very helpful PR comments, especially when the PR process has been open. This means that we ask reviewers to agree to their signed report being passed on to the authors. If the manuscript is accepted for publication, their signed report will be posted on the website along with the article and the other reviewers' reports. But this method has some limitations. Maybe some reviewers don't want to be identified by the authors.
PR should be constructive and done with a positive attitude, but some journals, as I have experienced, do not show any cause for rejection, whereas the same unaltered paper will be accepted by others. This practice, i.e. rejection without showing cause, should change. At least showing the grounds for rejection gives the authors a chance for improvement.
@Saumendu, thank you for this response showing a critical point in PR by some players in science publication. I think it is not unethical to give the names of journals that have rejected your submissions without further feedback, so you might want to state more details.
Moreover, it is rarely the case that a paper is welcomed for publication without the authors being asked to revise minor or major problems. It would also be interesting to know those journals.
I have experienced the range of PR outcomes: accepted; minor changes; major changes; rejected; unsuitable for the journal. I have only had one paper accepted as it was, and that had come from a chapter in my PhD thesis - so it had been reworked many, many times and scrutinised by two eagle-eyed professors. More commonly I have been asked to make changes (minor & major), and the feedback from this peer review has been where I have really learned most; I feel that this has helped me develop my analytical ability as well as my writing style, voice and structure. Being rejected outright is tough and, as Saumendu says, rather frustrating, as the feedback tends to be rather brief. The last option, 'not suitable for this journal', was part of my 'apprenticeship' where I had to learn to direct papers at specific journals, and I hope that I have learned from this.
Regarding work that I have had rejected - I rework the articles and send them off somewhere else, as I still believe in their worth. Then once they are published, I delete all my old files and email records so that I don't have a record of who rejected me and who didn't. I decided to do this so that I wouldn't be prejudiced against journals who have rejected me and therefore start each new article afresh. So, I am still happy to send an article to a journal who has rejected previous work, if I think it is the right journal for that article. The only reason I don't consider some journals is where it has taken them a really long time to review the work and the whole process seemed unnecessarily drawn out.
What I have taken from all this is that:
1. PR review can be scary
2. PR generally improves your work
3. Be selective about who you send your work to (and don't hold grudges against journals just because they have rejected you)
Sage advice Erik and, yes, rejection is always disappointing. For me - it is mostly so because reworking and further resubmission delays the process further still. However, I don't usually view it too negatively because I know that, at some point, it will be accepted somewhere. I'm the same as you - I don't hold grudges against journals and editors - but do avoid resubmitting to journals where the reviewing and decision-making is a somewhat protracted process. Another thing that I look out for, where journals offer an 'early release' or 'in press' facility (and many do now), is the amount of back copy. One of my target journals is currently the highest ranked in its field on ISI - and, on average, has up to 100 articles in press. If I do submit to it - I have to weigh it up against the potential for the time from submission to publication being up to 2 years, and how that would affect the 'currency' of my findings. Not so much a problem with conceptual manuscripts - but potentially an issue with data-based articles.
To name a few: Fitoterapia, Bangladesh Journal of Pharmacology, Oriental Pharmacy and Experimental Medicine.
The quality of a review, whether by a reviewer or an editor, drives the improvement of a paper and thereby ensures that a quality paper is published.
Recently I submitted a paper to American journal of Infectious diseases and microbiology; topic: invasive fungal infections.
The reviewer's comments have improved my paper a lot.
I was asked to discuss a new subtopic (immunology of IFIs).
One must make sure that a paper goes to an expert in the broad area of research, which helps in getting a good review.
I have had mixed experiences: some reviews have been constructive and helpful (suggesting clearly what to do to improve the paper) while some have been too vague: e.g. what can I do after receiving a comment that most of the important literature has been cited although some articles have been missed (without suggesting which I should cite) or that some issues still need clarification (without explaining which)? Also, sometimes reviewers have very different expectations: e.g. add more of this / remove this from the paper, so it is not possible to make both happy. Nowadays I try to write long replies to such comments: explain what I did (e.g. I do not just tell that I rewrote the abstract, but I also paste the abstract to the reply) and if I could not do it, then why, moreover, I also explain all the main points in my reply to the editor.
Tiia - you're doing all that you should do. All reviewers will have their 'subjective' and individual view on submissions - so there will always be a degree of 'variability' between them - unless a submitted manuscript is either very good or very bad usually. The trick is to take on board the suggestions that make sense to you and carefully feedback how you have incorporated them into the revision. For those that don't make so much sense - then offer a clear rationale as to why it might not be possible. Reviewers don't want to be 'happy' - nor expect you to change everything that they suggest. They do, however, want to see that you have 'thought it through carefully' - and worked out what will work and what will not. You can't please everyone all of the time - and, often, reviewers don't expect you to.
I can happily report that I have been satisfied with every peer-review that I have received. I treated all their words as being gold.
Generally agreed about the recent posts. The vast majority of feedback that I have received from reviewers for submitted manuscripts has been helpful - if not always appreciated at the time. I can also say that, when I'm reviewing myself for journals that anonymously feedback all the reviewer reports - back to the reviewers, then mostly the feedback is in 'synch'. When all the reviewers are stressing a certain point or two - then you know that it is a true omission and has to be addressed in detail.
One recent anecdote though - as a point of difference. I recently co-authored a study, as the supervisor of a capable Master's thesis student, and we submitted it to an established and reputable journal in our field. After just over 3 months we got the feedback. It was two sentences from 1 reviewer - saying exactly 'I am not interested in this article. It says nothing new'. In the journal's defence though, I did get back to the editors (who appreciated my bringing it to their attention - and are putting in new systems to ensure that it doesn't happen again); one editor reviewed it themselves - and it is now 'in press' in that journal!!
@Dean, interesting point. It raises the question of quality assurance of review processes in academic publishing companies.
I wonder what would happen if we had something like Beall's list of predatory OA publishers, but for journals with poor review practice. It would be like controlling the controllers. The editors would get a chance to improve their processes (systems), and the submitting authors could be sure to get sound feedback.
@Michael, Interesting point back. In the case of the particular journal I am on about - they are usually very good. I've dealt with them for over a decade (in fact I'm on the Board which doesn't harm anything) - and so this was probably a 'one-off' - at least in my personal experience. For the larger 'corporate' journals - they are using online tracking systems, such as ScholarOne, so I would imagine that it can be tricky to pick out the 'rogue' reviewers - unless editors are very vigilant. I'm not sure that it can be left to lower-paid administrators to pick these up - so some will slip through the system. But, I agree, even with good journals - an 'incentive' like you suggest to 'never' get it wrong might be useful. Had that thesis student of mine submitted on their own as a first-time author - that might have been the end of their publishing career.
In practice, the review process should work in such a way that a paper is held for more than two months only if its merit warrants it.
E.g., for my recent submission to a journal (American journal of medical and biological sciences) I received a very detailed review within 12 days; the paper was asked for major revision before being reconsidered.
What I mean to convey here is that it is the duty of the editor to make sure a reviewer is assigned, and that the reviewer is informed of the time frame and of any reservations.
A reviewer cannot keep the paper and then not review it on its merits; this is what has happened when your paper is rejected with a very short review report.
Hi Ramana - probably best not to identify specific journals - and it may be that they just had quick reviewers. Personally, I have a 'rule of thumb' that, when asked to review, I will respond within two weeks - even if I'm allowed 2 months or more.
Not much to add to the conversation. From time to time I have the opportunity to act as a reviewer for national and international journals. I strongly believe that this activity should be more professional, as it takes a large amount of time to do a good review.
Constructive comments help a lot, but sometimes you may see comments IN QUITE ANOTHER TONE..
I would like to draw your attention to the recent fake cancer study organized by Science magazine. Obviously, in this case peer reviewing didn't work well, no matter whether the journals involved were open access or not, or had high impact factors or not ...
http://news.nationalgeographic.com/news/2013/10/131003-bohannon-science-spoof-open-access-peer-review-cancer/
Interesting, Peter. The truth is we must change the way peer reviewing is organized. I honestly think, as I said before, that the process should be more professional. Reviewers should get paid for their work and in exchange take full responsibility for their decisions.
Alejandro, I agree with your suggestion. However, personally I believe that peer reviewing also has intrinsic limitations and can maybe lead to a sort of epistemic conformism in scientific communication (?).
Some time ago I was inclined to favour a pay-per-review model of peer reviewing, as @Alejandro said to raise professionalism of the underlying process. Comments on another thread related to the topic discussed here led me to slightly different view. The question is: who is paying the reviewer for what? There are many scenarios available, and some of them bear the problem of increased bias by the reviewers.
Others favour the disclosure of the reviewers' names (on the title page) to acknowledge their work and to concede the responsibility that they share with the editor to them; the editor has the final vote, of course.
@Peter, I agree with you on noticing some sort of conformity in scientific communication, which may lead to traditional thinking in areas that are not at the cutting edge or highly competitive. I think this is not so much due to streamlined reviews but to narrow-minded editors, who, on the other hand, are responsible for watching the standing of their publication in terms of impact.
The progressing commercialization of science does not help us.
Here is one of the journals in which I have published a paper. Look at the review history and you will certainly believe that rapid publication is possible even with the fast review process this publisher follows.
See for yourself via this link:
http://www.sciencedomain.org/abstract.php?iid=205&id=12&aid=1548#.UlUNO9JHIoE
Michael, I agree with you and your remark concerning narrow-minded editors who (have to) consider impact as their priority. Peter
I also agree with Michael and Peter that some editors are not ready to spread positive vibes in the manner in which they convey reviews to others. Moreover, Farhad gave an excellent exposition of the matter.
Thanks to all.
Thanks, @Ramana k v, for sharing this interesting piece of current change in peer reviewing. In the attached files I can spot one reviewer's name from Nigeria and some anons. Also, the person, who made the final decision to accept is xxx'ed out due to anonymity. It's a step forward, though, I admit freely. Thanks again.
Journals ask authors to submit their papers double-spaced. This stems from the era of paper manuscripts. Since there are some peer reviewers participating in this discussion, I would like to ask them: do they really still prefer this format? It renders the manuscript more obscure; it seems to me its organisation is less obvious. In addition, it may seem discouragingly long. Why not have it sent by the editor in Word format instead of as a PDF, so that reviewers can alternate between single-spaced and double-spaced at will?
As some people still prefer to print out all papers they get and make notes with their pen/pencil, they prefer the double-space format as then they have more space for their comments. I would prefer the single-spaced format, too, as I do not print such papers out. A pdf file is better than a Word file if you have lots of figures, tables, math symbols etc as then they will look the same in all computers.
The author could also submit two versions: one for a first quick reading, single-spaced, with tables and figures in the text, and one as usual.
I will add the question here
https://www.researchgate.net/post/Are_Peer_Reviewers_in_Journal_publications_politicized
Another question
https://www.researchgate.net/post/Is_there_Research_Publishing_Divide
@Matts, there are encouraging signs of open peer reviewing.
http://med.stanford.edu/ism/2013/october/pubmed.html
Most of the time, peer reviewers are NOT volunteers, in the sense that they are INVITED. They do reviews since it looks good on their record, but they have to be invited by the chief editor. I am talking about very reputable publications.
The review process is not perfect. Sometimes you get a reviewer who didn't understand 30% of the content and who gives the submission a low score based on some small thing (s)he does understand. These are the ones you look at with your jaw on the floor! Happens all the time.
My experience is that the broader a conference's (or journal's) topic range is, the more difficult a time the Chief Editor has in finding qualified reviewers. This is where "playing it safe" is a double-edged sword when you are submitting a paper. If you play it too safe, it will be perceived as boring, but it has a much higher chance of being UNDERSTOOD by every reviewer ... but it's boring! Boring (i.e., incremental) things have a lower chance of acceptance! If you play it too aggressively and have an out-of-the-box idea, there will be some (or a lot of) reviewers who do not understand the material that well. They will base their reviews on what they DO understand. In that case, you had better make sure that the English is perfect, there is not a single typo, and there are no technical flaws, etc.
So, I would summarize and say, IT DEPENDS ON YOUR SUBMISSION. If you are introducing a brand new idea that is technically solid, the peer review process could be very unpredictable. I have seen this happen even in extremely reputable journals. If you are introducing an incremental idea which is very well understood in that field, the review process will be pretty predictable. The question, then, is: is the idea good enough to get accepted ?
@Tolga, if '"it happens all the time" that referees base their final score on partial understanding of the paper under review, what can editors and the referees themselves do about it? Being a referee myself I know that most publications ask me, how much I am familiar with the topic of the paper. Here the responsible editor has a chance to weigh the final scores of the referees, given their self-assessment is correct.
I worry about the last part of Tolga's commentary, since it is true, but then I also worry about the integrity of the reviewer, especially if he/she rejects the original idea because he/she thinks he understood it one way or another and MAYBE has a sudden urge to test it him/herself! (I have witnessed this several years back with very influential editors!)
As for Michael's response: I am happy he is a person of integrity, but can we simply assume the same about others?
I believe in good intentions, but the harsh reality is beyond that.
Dear Michael Brückner, quality peer reviews are very helpful not only for improving publications but also, at times, for providing orientation towards good research.
@Michael, @Hussin,
For smaller and much more reputable symposiums, where only 40 or 50 papers get accepted and the impact is extremely high (example: ISCA), there are two additional concepts that significantly help with this issue. These are impossible to implement in big conferences like ISCAS.
I will list them below.
*** SHEPHERDING: This is when a very senior person in the field overrides all comments and accepts the paper. In this case, this senior person is putting his weight on the table and saying: I know this subject so well that this paper has merit. Accept it ... PERIOD.
*** PC DISCUSSIONS: Program committee (PC) discussions are very valuable for getting the opinions of the reviewers. WEAK ACCEPT and WEAK REJECT mean these reviewers won't FIGHT for it. However, STRONG REJECT fights for rejection and STRONG ACCEPT fights for acceptance in the PC meeting. These review results come from reviewers who are so sure about their decision that they will explain their reasoning to the PC and try to convince the others. Of course, although unlikely, when you have a STRONG ACCEPT and a STRONG REJECT among the same reviews, heavy discussions start :)
@Tolga, strong accept/strong reject, OK - but strong words, too. While agreeing overall with the content, I wouldn't use "a war breaks out", nor call it cool. I understand its metaphorical character, but I do not sense wars as laid-back.
And, Tolga, your shepherding metaphor (which I do like) points to an important problem relating to science that is being discussed in Vitaly's question
https://www.researchgate.net/post/Are_science_and_democracy_compatible_And_if_yes_or_not_why
Folks, the question was about MY papers, yes? Well, some reviews proved that the reviewer did not understand what the paper was about. Some reviews were negative because some high authorities were "touched". In several cases the reviewer suggested changes that are not possible (wrong statistics, for example).
But by far more reviews had something positive; the changes made the manuscript better. In a few cases the reviewer worked hard and rewrote parts of the manuscript, improving the language.
Ah, yes - when the quality of the manuscript's language is doubted by a reviewer who makes mistakes himself, then, sorry....
I think it all depends on which side of the bed people got out of in the morning. There are so many issues that impact the way reviewers view a paper - after all, we are all human.
Yes Alan - I can relate to at least some degree of subjectivity in that area. Same with student assignments - what mood I'm in can certainly influence outcome!!
Mood and the quality of the papers in the batch, strange that it all seems to work though
Agreed again Alan. At my university - we mostly all mark to a 'probability distribution' and, in general, overall marks tend to cluster and behave as they should do. For those journals that publish their yearly stats (may just be for the Editorial Board) - my experiences are that, overall, manuscripts do the same; at least with 'conventional' journals that is
@Michael, Yes. I exaggerated the PC meeting a little :) In the end, it is a very good and cordial discussion and these kind of extreme differences are the exception rather than the rule !!!
===================
This group has talked a lot about how SOME submissions might not be reviewed fairly. Unfortunately (ironically, sometimes fortunately?) the opposite is true too. A conference paper in which you discovered some technical flaws gets accepted!!! In that case, if the flaw is fixable, you fix it before the camera-ready deadline and submit the fixed version. But what if it is NOT fixable? This time the onus is on you! The right thing to do is to RETRACT the paper. But this is an exception too ... it ideally shouldn't happen.
====================
Although these small unfair circumstances might be a fact of life, isn't it true that there is such a thing as the LLN (Law of Large Numbers)? A capable scholar might have to endure these inconsistencies in the short term, but probability theory tells us that the scholar will eventually converge to his/her deserved value in the long run. This is especially true since there are so many different ways to disseminate a paper, with so many diverse reviewers and points of view. If a genuinely great idea was rejected because of reviewer inconsistencies, in the long run a great idea will find its place ...
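As a toy illustration of that LLN argument (purely my own sketch, with made-up numbers): if each review is the paper's "true" merit plus independent reviewer noise, the running average settles near the true value as the number of reviews grows.

```python
# Toy simulation of the Law-of-Large-Numbers point above (made-up numbers):
# each review = true merit + independent reviewer noise; the average of many
# reviews converges to the true merit even if single reviews are way off.
import random

random.seed(1)
true_merit = 4.0      # hypothetical "deserved" score on a 1-5 scale
noise_spread = 1.5    # how erratic individual reviewers are assumed to be

def one_review():
    return true_merit + random.uniform(-noise_spread, noise_spread)

for n in (3, 30, 300, 3000):
    avg = sum(one_review() for _ in range(n)) / n
    print(f"{n:5d} reviews -> average score {avg:.2f}")
# The average drifts toward 4.0 as n grows, even though a single review
# can land anywhere between 2.5 and 5.5.
```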
Vitaly, I wonder, have you been in Santiago de Compostela, at the Euro-American conference on mammals?
@Tolga, thank you for appreciating the friendly tone of this discussion. After all, many of us seem to be in two boats at the same time. Today a researcher submitting, tomorrow a referee reviewing and rendering a judgment.
Dear Michael Brückner, we cannot devise any mechanism that can make a conclusive decision on the quality of peer reviews of papers. Individual instances and comments about peer reviews will lead us to an inconclusive debate.
Overall, each of us knows that the peer review of scientific manuscripts is a main pillar of further research in modern science and medicine, and this process relies on expert and knowledgeable researchers to provide objective reviews that ensure the quality of the papers. However, the review of manuscripts for peer-reviewed journals raises many ethical issues and problems, and those are the sole responsibility of the editors and chief editor of the journal. But the ability of editors and chief editors, and their ethical approach, varies from one journal to another. Sometimes biased review processes are adopted, which leads to unreliable publications.
You can hover your mouse near the top right of your comment to edit it.
My humbling experience is that most of the time reviewers don't help the authors to improve their article but are trying to find reasons to reject it. Perhaps I do not have much experience, but as a new author that is what "I have felt around".
@Vitaly,
then you have a "twin", same name and at least similar surname. Sorry, I thought it was you.
One more thing I forgot to mention. Careful experimental results can help a lot. Just hard to get in fields like macroeconomics and gravitational physics.
Good input Robert. I have also published across research fields and encountered the two extremes of editors. The important point is to seek learning from the experiences.
Cheers to all
My experience with the quality of peer review of the manuscripts I have published has been very fruitful in 90% of submissions. I have always felt that the review process and the reviewers' comments have helped me a lot in the process of doing and writing up research.
What frustrates me about the peer-review process in the humanities is that scholars come from so many different backgrounds and schools of doing research that sometimes their comments and suggestions are not very constructive. Besides, there is a whole continuum of methods to use, from relatively quantitative to purely interpretive, that produce various results. I always try to use the reviewer's perspective to re-evaluate my study's assumptions. It helps me to spell out my implicit ideas and clarify my points. As regards revisions, the review process sensitizes me to the points I need to justify more and alerts me to other positions in the literature (which is so vast that few people are able to grasp it all). I resent reviews that would like me to reconsider the composition of my paper (which in my field is not strictly predetermined), because usually I have worked hard to ensure this was the optimal way to develop my line of argumentation.
@Katarzyna, have you ever tried requesting particular reviewers, or reviewers from a particular approach? Just curious. Some people believe editors will avoid requested reviewers, though many journals ask for them. I have noticed that on a difficult paper sometimes an editor will choose an author from one of the references.
@Robert, according to your peer-review statistics, how many times does an author have to submit his article to different journals in order to get it published? I.e., what is the value of the probability of success p in the relevant geometric distribution P(X=x) = p(1-p)^(x-1), x = 1, 2, ...?
Proposed rule of thumb: truncate the series at x = 2. Of course, if you keep lowering your sights, p should increase with each attempt.
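For readers who want the arithmetic behind that question, here is a minimal sketch under the (strong) assumption of independent submissions with a constant acceptance probability p; the values of p are purely illustrative, not estimates from real data.

```python
# Sketch of the geometric-distribution view of repeated submissions, assuming
# (unrealistically) independent attempts with constant acceptance probability p.

def expected_submissions(p):
    """Mean of the geometric distribution: E[X] = 1/p."""
    return 1.0 / p

def accepted_within(p, k):
    """P(X <= k) = 1 - (1 - p)**k, i.e. acceptance within the first k attempts."""
    return 1.0 - (1.0 - p) ** k

for p in (0.1, 0.25, 0.5):
    print(f"p={p}: E[submissions]={expected_submissions(p):.1f}, "
          f"P(accepted within 2 tries)={accepted_within(p, 2):.2f}")
# e.g. p=0.25 gives an expected 4.0 submissions and a 0.44 chance of
# acceptance within the first two tries, matching the truncation at x = 2.
```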
@Demetris, I think the gist of my comments was that this function depends heavily on (a) the type of material, whether it is incremental improvements and whether it has experimental confirmation of a theory (either experiment or theory alone is more difficult), and (b) the affiliation of the author, i.e. prior association with the community of reviewers, which at least will get one a serious review on the first try.
My very first paper on radiation effects on some new flip flop designs was accepted into a major conference and an IEEE journal, somewhat to my surprise. But it met all the criteria, an incremental improvement to an interesting new design, and a bunch of expensive test results showing confirmation. This paper attracted well known collaborators on future projects with first-try acceptance (often with many reviewer comments that had to be addressed and changes made).
By contrast, similar work without good experimental confirmation (budgets have shrunk, in case anyone didn't notice) and with less well known collaborators was turned down, submitted to and accepted by a more informal conference, and so I am working on an improved technique to address the concerns and will try again for the big conference & journal. But plans may have to be abridged due to funding constraints as travel and conference fees have basically been canceled in the last two years, more so since the sequester went into effect. I see no change for several years, and the conference is overseas next year making it expensive to go on my own.
The third case is radically new work published cross-discipline, the most difficult case. In the case of the physics paper, the various submission tries weren't of the same paper. After a first (rather over ambitious) try with PRD, one of the reviewer comments prompted me to spend a year going in a different direction. Now here is where the review politics come in. PRD saw enough similarity they would not send it to a reviewer again. I had used my one shot with them and I had to send it somewhere else. It took two or three tries, and a lowering of my sights, and after publishing a second paper I expect this battle will go on long beyond the end of my life as it requires a consensus on a major theory change. But I am looking for an angle on an experimental prediction on a new observation which might move things along.
The fourth case is again cross-discipline, but this time in economics & safety & systems engineering. In this case it seems to be a matter of finding the right venue. A purely economics journal did not recognize it was new, as risk compensation is familiar to them. After revising to clarify what was new, a second economics journal gave one of those "we're too busy to look at it" replies which one assumes means they didn't recognize the author name or affiliation. (Some journals, you know, publish mostly or only invited papers.) Finally I tried a system engineering journal, and the editor said it didn't fit there but he thought it was new and interesting and referred me to two other journals, cc'ing the editor of one of them who invited submission of the paper. He is currently having it reviewed, so this story is not over yet.
That's not the simple numerical answer you were asking for, but I hope I've explained why it's just not that simple.
I was usually invited by the editors for peer reviewing; I have never been a volunteer. Some editors do ask for volunteers; in that case one should have sufficient time to split between one's own work and multiple reviewing processes. As I am accustomed to good journals, reviewing is usually fair. I have not met biased comments yet, and papers can go through several revisions, i.e. rejection is not immediate. My last experience with publication was not so fair: my papers were rejected without sufficient reviewing, and twice without any reviewing at all. In the first case I was not able to respond to the reviewers' comments, as I was not asked to resubmit a new version following the revision (the papers were rejected at first submission along with reviewers' questions and comments, and I did not understand how to proceed). Add to this the long response time (approximately a year in one case, and 7 or 8 months in the second). This has made me doubtful about this research, while at the same time I have understood from my peers that it could hold an interesting approach ... Anyway, simultaneously the research was stopped by my institute, which had not yet been informed of this experience!!! To this day I have not understood the temporal coincidence between the paper rejections and the research project decision by my institute's executive.
Based on the above entries, please consider answering a question I have submitted to Research Gate: Are Peer Reviewers in Journal publications politicized?
Many thanks
I appreciate the shared experiences among the participants of this forum.
@Hussin, the link to your question is
https://www.researchgate.net/post/Are_Peer_Reviewers_in_Journal_publications_politicized :)
Though I am Indian, from the experiences I have had with both Indian and foreign publishers I have developed the understanding that when I sent a paper to a new foreign journal I got a better review (mostly for improvements) than when my paper was simply accepted by emerging Indian journals. I have great regard for the many Indian journals that do extensive peer reviewing before acceptance.
@Vitaly, is there any confirmation that this case of self-reviewing has actually happened? What an embarrassment for the journal - and for the co-author, who is discredited for having followed this way of self-promotion.
Also, I wonder whether dear colleagues can confirm that we get highly professional feedback from top journal referees.
Dear Professor Vitaly Voloshin, My own experience related to the quality of peer reviews for our papers is as below.
Whenever we got a paper rejected by a reputed international journal, in return we got many new research ideas from the reviewers' comments.
Michael: I have had all kinds of experiences with reviews, including very good and constructive criticism from reviewers. In general, I think I have had more positive than negative contributions from reviewers (including one rejection that made me learn a lot from the reviewer's arguments - a signed review, as a matter of fact) and more negative than positive responses from editors than I would have liked (something that suggests how important it is to understand a journal's editorial line). In any case, however, submitting a paper to a journal does not guarantee that we get useful feedback.
From the side of the editor: sometimes it is really hard to find a reviewer, especially if the manuscript is not "usual". Promised reviews do not come; time is wasted. That is my worst experience with reviews.
@Linas, I appreciate your view from the editor's office.
Should authors then be careful about being 'too unusual'? Also, if you push the reviewers too hard, they might be tempted to hurry through the process and deliver poor responses.
What do you think of open peer reviewing? Would reviewers possibly feel more responsible to deliver quality feedback?