I find that the time taken depends upon the article. If it's good, I take more time to offer improvements. If the paper is poorly constructed and simply inadequate, the review is swift. It is the papers in the middle, the ones that require major revision, that take the longest. One must be careful here to offer practical advice. Length is also an issue.
I'm unsure about your statistic of 5 hours of reviewer time. I'm often very busy when a request for a review comes in, am out of town, etc., and cannot make the review a top priority.
You would be amazed at how poorly written most submissions are. I suggest that, whenever possible, all authors (myself included) pay for a professional copy editor, use the comments to make a revision, and read the new version over quickly before submitting it.
Another problem is that many articles get sent to the wrong journal. A lot of effort goes into the editorial policy. It is there for a good reason.
I concur with your point that the time taken to review an article largely depends upon its quality.
Severin, Strinzel, Egger, Domingo, and Barros (2021) conducted research on the characteristics of scholars who review for predatory and legitimate journals. They found that about 13.7 million reviews take place every year, with 5 hours as the average time required to complete a review. There may well be many outliers, but 5 hours is the average (one of the drawbacks of measures of central tendency in statistics).
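To illustrate that point about central tendency, here is a minimal sketch in Python with entirely made-up review times; a single extreme value pulls the mean well above the median:

    import statistics

    # Made-up review times in hours for ten reviewers; 40 is one extreme outlier.
    review_hours = [2, 3, 3, 4, 4, 5, 5, 6, 7, 40]

    print(statistics.mean(review_hours))    # 7.9 -- pulled up by the outlier
    print(statistics.median(review_hours))  # 4.5 -- robust to the outlier

So an average of 5 hours can coexist with most reviews taking far less, or far more, time than that.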
Sir, more often than not, authors from English-speaking countries think that because they speak fluent English, they need not give their manuscripts to copy-editing experts. Conversely, when copy-editing experts are contacted by a professional, they may begin to doubt his or her domain knowledge. I think there is a need to create more awareness of these divergent views on the same continuum.
As for the choice of an appropriate journal and its editorial policy, that is where an important issue lies.
Sir, during the pandemic, review times for COVID-19-related articles were reduced, so they were published faster than non-COVID-19 articles. How can this trend be sustained in the post-pandemic period?
Reviewing the review process: Identifying sources of delay
J. Lotriet Cornelius
Australas Med J. 2012; 5(1): 26–29.
Published online 2012 Jan 31. doi: 10.4066/AMJ.2012.1165
"Abstract
Background
The process of manuscript review is a central part of scientific publishing, but has increasingly become the subject of criticism, particularly for being difficult to manage, slow, and time consuming – all of which contribute to delaying publication.
Aims
To identify potential sources of delays during manuscript review by examining the review process, and to identify and propose constructive strategies to reduce time spent on the review process without sacrificing journal quality.
Method
Sixty-seven manuscripts published in the Australasian Medical Journal (AMJ) were evaluated in terms of duration of peer review, number of times manuscripts were returned to authors, time authors spent on revision per review round, manuscripts containing grammatical errors reviewers deemed as major, papers where instructions to authors were not adhered to, and the number of reviews not submitted on time.
Results
The median duration of the review process was found to be 74 days, and papers were on average returned to authors 1.73 times for revision. In 35.8% of papers, instructions to authors were not adhered to, whilst 29.8% of papers contained major grammatical errors. In 70.1% of papers reviewers did not submit their reviews on time, whilst the median time spent on revision by authors per review round was found to be 22 days.
Conclusion
This study highlights the importance of communication before and during review. Reviewers should be thoroughly briefed on their role and what is expected of them, whilst the review process as well as the author’s role in preventing delays should be explained to contributors upon submission."
Dear Prof. Sundus F Hantoosh, thank you so much for sharing this important research, which implies that the problem has existed for a long time with seemingly no available model to tackle it. I have sent a request to the corresponding author; when I receive the paper, I will do the necessary. Thank you so much once again, Prof.
Any text must be clear, stating in the first line what its objectives are and where the work intends to go, and the conclusion must include a self-assessment of whether the researcher arrived where he or she proposed at the start. If the beginning and the end are in harmony, if the research finding is important, and if it is well presented, that is the norm, and it reduces the reviewers' workload. Good luck with your research, André
A framework for assessing the peer review duration of journals: case study in computer science
Besim Bilalli · Rana Faisal Munir · Alberto Abelló
The final authenticated version is available online at: http://dx.doi.org/10.1007/s11192-020-03742-9
"Abstract
In various fields, scientific article publication is a measure of productivity and in many occasions it is used as a critical factor for evaluating researchers. Therefore, a lot of time is dedicated to writing articles that are then submitted for publication in journals. Nevertheless, the publication process in general and the review process in particular tend to be rather slow. This is the case for instance of Computer Science (CS) journals. Moreover, the process typically lacks in transparency, where information about the duration of the review process is at best provided in an aggregated manner, if made available at all. In this paper, we develop a framework as a step towards bringing more reliable data with respect to review duration. Based on this framework, we implement a tool — Journal Response Time (JRT), that allows for automatically extracting the review process data and helps researchers to find the average response times of journals, which can be used to study the duration of CS journals’ peer review process. The information is extracted as metadata from the published articles, when available. This study reveals that the response times publicly provided by publishers differ from the actual values obtained by JRT (e.g., for ten selected journals the average duration reported by publishers deviates by more than 500% from the actual average value calculated from the data inside the articles), which we suspect could be from the fact that, when calculating the aggregated values, publishers consider the review time of rejected articles too (including quick desk rejections that do not require reviewers).
Conclusions
Based on our study of the peer review duration in journals, we claim that the process takes much longer than needed (as also supported from existing literature), but it specially takes much longer than reported by publishers. To this end, we proposed a framework, and based on that developed a tool (JRT), that extracts the ’hidden’ information for accepted articles. Our results showed that, (i) half of all the articles in JRT took roughly more than six months for their first revision, and a quarter of the articles (third quartile) took more than 10 months, (ii) there is no overall evidence neither for improvement, nor for worsening in terms of the review duration for the journals of CS. Furthermore, the values computed by JRT confirm the previous studies in that CS journals tend to have long revision times (e.g., the average response time for CS journals reported in [13] is 5.5 months and the average first revision time computed by JRT is 6.36 months), (iii) for the ten selected journals, there is a huge gap between the average first revision times reported by the publishers and the average first revision times computed by JRT (on average the deviation is greater than 500%, and this we suspect is the effect of considering the review times of rejected articles too). Hence, the values reported by the publishers tend to underestimate the overall review time. Finally, (iv) comparing the acceptance times for different publishers, we observed that, independently of the publisher, the process takes far too long for more than half of the articles.
In conclusion, due to the challenges faced during the collection of the data, we advocate that critical information about the peer review process (e.g., the dates when the article was received for review, when it was accepted, etc.), instead of being buried inside HTML and PDF documents, should be provided in a programmatically consumable way (e.g., through APIs), or machine readable format (i.e., semantically annotated), so that one can easily get the data to analyse the problems in the peer review process."
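As a rough illustration of the kind of extraction a JRT-like tool performs, here is a minimal sketch in Python; the "Received/Accepted" patterns and date format below are my own assumptions for illustration, not the actual JRT implementation:

    import re
    from datetime import datetime

    # Hypothetical front-matter text as it might appear inside an article.
    article_text = "Received: 12 January 2020; Accepted: 3 September 2020"

    def review_duration_days(text):
        # Pull the received and accepted dates out of free text; in practice
        # they are often buried in HTML or PDF, which is exactly the problem
        # the authors want machine-readable metadata to solve.
        received = re.search(r"Received:\s*(\d{1,2} \w+ \d{4})", text)
        accepted = re.search(r"Accepted:\s*(\d{1,2} \w+ \d{4})", text)
        if not (received and accepted):
            return None  # dates missing or not recoverable from the text
        fmt = "%d %B %Y"
        return (datetime.strptime(accepted.group(1), fmt)
                - datetime.strptime(received.group(1), fmt)).days

    print(review_duration_days(article_text))  # 235 days, roughly 7.7 months

Averaging such per-article durations over a journal's accepted papers is what lets the tool compare actual review times against the figures publishers report.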
I first look at the title and abstract, skim the contents, and then read the summary. This orients me to the paper and gives an initial assessment of the quality of the article. Then I bear down by carefully reading the paper and write a review. I put this aside for at least a day, then review my review, make changes, and send it to the editor. I always try to be both fair and kind.
You are right, Murtala Ismail Adakawa, to understand and state that continued exponential growth leads to instability of a system, e.g. the publishing and reviewing system of scientific literature. A paradigm shift towards a scientific sharing/caring publication agenda will require the disappearance of hierarchical bureaucracies in science itself; the current pace of ICT and AI will be the agent of this shift towards a more open research culture.
In this sense, science and journalism will merge more deeply, also with respect to methodology, creative writing, and the dissemination of new ideas (and practices).
_______
Knowledge has three degrees—opinion, science, illumination. The means or instrument of the first is sense; of the second dialectic; of the third intuition. To the last I subordinate reason. It is absolute knowledge founded on the identity of the mind knowing with the object known. Plotinus
Thank you so much for the remarkable insights into this question, which seems to have troubled the scholarly community for a long time. While AI has gone far in impacting our lives tremendously, critical analyses of strategies to ensure precise reviews by AI are inevitably necessary.
Yes, I agree with Profs Murtala Ismail Adakawa and Stephen I. Ternyik that it depends on the piece to be reviewed. Sometimes it has been written by an expert in the field and there are just a few points or queries to raise, while in other cases it is obvious that the author(s) have the necessary knowledge but are not used to 'putting it down on paper' to share with others in this format, and need a lot of assistance to turn a good idea into a good article/paper.
Reviewing time depends on various factors, such as the quality of the work, the journal's guidelines, and many more. Nobody can be forced to review; it is our choice to accept a review or refuse it. Usually this is prestigious work.
@ Prof. Mary C R Wilson I agree with your points. Interestingly, it can be argued that, while writing a piece of work, there is a complex relationship between tacit and explicit knowledge. What a person knows tacitly may not necessarily be expressible when required. In such a situation, a reviewer should serve as a guide, scaffolding the author to a level where they can articulate their points correctly.
@ Prof. Virendra Kumar Saxena I concur with your point, sir. This is especially the case if the manuscript lacks precision; it then takes longer than expected. Indeed, review is not by compulsion or coercion. It is an agreement between the reviewer and the journal or publisher, with the rewards being academic rather than financial.
For many universities, journals, and publishing houses in the countries of Eastern and Central Europe, the main factor slowing the review process is multitasking and the overloading of reviewers with obligatory lecturing (pedagogical) and publication (research) loads. For some, there is also a heavy mass-media or organizational and administrative load. You have to have certain priorities; otherwise it is impossible "to be everywhere".