It can be a bit difficult to judge whether a scientific manuscript is worthy of publication or not. Could everyone please state the points they look for when reviewing an article for publication?
I have done quite a few reviews. Dinesh has quite a good starting list but my concern is that reviewers can reject work because it doesn't fit with what they believe is the direction that research in an area should be taking.
1) I look to see whether the authors have provided a solid reason for their research question. How have they supported the research with literature (preferably fairly recent literature)?
2) Is the research method described and reasonable for the problem that they are trying to address?
3) Have they provided sufficient data to support their analysis and conclusions?
4) Does the whole paper develop a sound argument or is it fragmented and lacking flow?
I usually also comment on ease of reading, as I find some authors seem to have difficulty writing good sentences, or they use words that are poorly defined and possibly not known to their readers. I have had cases where an author has invented words; "it sounds scientific if I use big words" seems to be one line of argument. I agree with Einstein that "Things should be made as simple as possible, but not any simpler", and with the related sayings "If you can't explain something to a six-year-old, you really don't understand it yourself" and "If you can't explain something simply, you don't understand it well." So I am critical of people who force me to go hunting for information to understand the basics of their argument. Even if I am a novice in their field (which I am usually not for review assignments), I should be able to understand enough to know whether there is a valid argument in what they have written.
Short answer: do I understand the paper, and is the flow of ideas convincing?
Longer answer (in examples): It depends on what research method has been used: quantitative, qualitative, experimental, ethnographic, document-based, etc.
For papers using questionnaires I would like to know how the sample size was calculated (age and sex profiles, etc.). What questions were asked? Could they lead to biased results?
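For reviewers who want to sanity-check a reported sample size, here is a minimal sketch (in Python) of one common approach, Cochran's formula for estimating a proportion with an optional finite-population correction. The confidence level, expected proportion, margin of error and population size below are illustrative assumptions, not values from any particular study.

    # Minimal sketch: Cochran's formula for the sample size needed to
    # estimate a proportion, with a finite-population correction.
    from math import ceil

    def cochran_sample_size(z=1.96, p=0.5, e=0.05, population=None):
        """z: z-score for the confidence level (1.96 ~ 95%),
        p: expected proportion (0.5 is the most conservative choice),
        e: acceptable margin of error,
        population: total population size, if known."""
        n0 = (z ** 2) * p * (1 - p) / (e ** 2)
        if population is not None:
            n0 = n0 / (1 + (n0 - 1) / population)
        return ceil(n0)

    print(cochran_sample_size())                 # about 385 for a large population
    print(cochran_sample_size(population=2000))  # about 323 for a population of 2000

A study that reports a much smaller sample than such a calculation suggests should at least explain why that was acceptable.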
Algorithm: Is it described in a way that one of my skilled students could implement a program based on the description?
For all research: Are there any limitations of methods and experiments mentioned? Do I trust the claims? Are there any ethical considerations that I would raise?
A good source for answers to your questions is Briony J. Oates: Researching Information Systems and Computing. Sage Books 2005.
Thanks for your answer. I have been asked to review two papers for the first time. Both papers seem interesting, and I really would like to give constructive comments to help the authors improve their work.
Hello Pouya. Right now I am wondering about the same question. Maybe the instructions at the following link can help you; I find this site very useful in that respect. Best regards.
Check that you are the right person to review. If you think you are, then check your (time) availability. Then decide whether you want to go further.
The purpose of the review: to publish or not... and if it is to be published, how to improve its quality.
You've already got good responses here. Indeed checking that the topic is right for you based on title and abstract is the first important point.
Most journals provide an online form that partly dictates the way a review should be done. After reading the manuscript for the first time I prefer to start by writing a short summary of the essential message of the manuscript. Next I go through the paper and provide the comments per section.
The points to check depend on the type of manuscript (review, scientific, implementation).
I can recommend the following text for review insights:
A. J. Smith, The task of the referee. http://www.cis.nctu.edu.tw/~tzeng/taskoftheferee.pdf
G. Cormode, How NOT to review a paper: the tools and techniques of the adversarial reviewer. http://www.research.att.com/people/Cormode_Graham/library/publications/Cormode09.pdf
Thank you very much for all your comments and links. The papers given to me are completely relevant to my research area; my main concern is how to provide the authors with constructive comments, as some of my reviewers did for my own papers.
I am not an expert, but I have done some reviews for my professors.
I will just add another comment to the previous ones.
I do not have a specific methodology, but I always start by reading the manuscript as I would any new article, and I try to match its topic to a specific area in which I have prior experience, to decide whether I can do the review or not.
After reading the abstract and conclusion, then skimming the whole article, I try to identify its contribution and how it was achieved and presented. Sometimes I have heard of similar ideas, so I search for them to pin down the difference and check whether the article provides a good comparison.
Above all, it has to be well written so that it can be understood easily, presenting its goal in an orderly manner.
Writing your own scientific manuscript is much like reviewing someone else's. First of all, you have to be proficient in the field or area. Experience in writing and publishing is needed to review other people's manuscripts. These days most journals have a standard reviewing form that can help you. I agree that most of the above comments are helpful, but experience in writing and in your area of interest is the most helpful factor. I think we also have to ask: what are the conditions for choosing a referee for a journal?
From our "Guidelines for Research Students", which is concerned with helping students understand how reviewers look at their reports and papers (adapted from Oates, Researching Information Systems and Computing - great book btw) :
1. Evaluating survey-based research
Was the research aimed at a wide and inclusive coverage of people or events?
What method was used for generating data? Was it administered via the Internet? Was an Internet-based survey appropriate or feasible for this research topic?
What information is given about the sampling frame and sampling technique used? What additional information would you like to have?
What sample size was used? Do you think this was enough?
What information is given about the response rate and dealing with non-respondents? What additional information would you like to have?
Did the researchers make efforts to find out if there were significant differences between respondents and non-respondents?
Do the researchers use the survey to make generalizations about a larger population? Is this appropriate?
2. Evaluating design-and-create research
What kind of artefact did the researchers design and create? (construct, model, method, instantiation?)
What makes this piece of work research and not just normal design and development work?
What information is given about the researchers' development methodology, if any? Is this enough information? Is it an appropriate methodology?
Do the researchers discuss all stages of the systems development life cycle, or just some stages? Are enough stages discussed, or too many?
Do the researchers describe their use of any data generation method during their design and creation work? Is the description satisfactory?
What do the researchers tell you about how they evaluated their artefact?
Proof of concept, proof by demonstration, real-world evaluation?
What evaluation criteria do they use? Are these appropriate?
Do the researchers use their results to make generalizations about the use of their artefact in other situations? Is this appropriate?
3. Evaluating experimental research
Was a hypothesis or predicted outcome of the experiment clearly stated in the introduction to the research?
Was the research a true experiment, a quasi-experiment or an uncontrolled trial?
What information is given about the independent and dependent variables manipulated or measured in the research? What additional information would you like?
What information is given about any participants and how they were found? What additional information would you like?
What information is given about how representative the sample is of the wider population about which conclusions are drawn? Are you satisfied that the sample is representative?
What information is given about the apparatus and the process the researchers used to make measurements? What additional information would you like?
Assuming their statistical analysis is correct, have the researchers convinced you that they have demonstrated cause and effect?
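As a small aside on that last point, here is a minimal sketch (Python with SciPy) of the kind of two-group comparison a reviewer might reproduce to see whether a claimed difference is statistically supported. The group names and numbers are invented for illustration; they are not from any paper discussed here.

    # Minimal sketch: checking whether a reported difference between a
    # treatment and a control group is statistically supported, using
    # Welch's t-test. The data are made up for illustration only.
    from scipy import stats

    control   = [12.1, 11.8, 12.5, 11.9, 12.3, 12.0, 12.4]
    treatment = [13.0, 12.8, 13.4, 12.9, 13.1, 13.3, 12.7]

    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Even a convincing p-value only supports a difference between groups; demonstrating cause and effect still depends on the experimental design (randomisation, controls), not on the statistics alone.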
4. Evaluating case studies
Have the criteria for choosing the particular case(s) been described and justified?
What kind of case study strategy is used? For example, exploratory, multiple and longitudinal.
What data generation methods were used? Do you think enough methods were used and enough data collected?
How long did the researcher spend in the field? Do you think this was long enough?
Does the research look at relationships and processes and provide a holistic perspective?
What kind of generalizations are reported, if any?
How does the researcher link the theory to the case study?
5. Evaluating action research
Did the work involve an iterative cycle of plan – act – reflect? How many cycles are described? Do you think this is enough?
Do the researchers make explicit their framework of ideas (F), methodology (M) and area of application (A)?
What data generation methods were used? Do you think enough methods were used and enough data collected?
Do the researchers discuss the extent of participation achieved, and any limitations in their claimed outcomes caused by lack of full participation?
Do the researchers recognize the problems of self-delusion or group-think, and explain adequately how they addressed them?
What practical and research outcomes do the researchers claim from the action research?
How does the research measure up against the quality issues for new action research?
6. Evaluating research based on interviews
Were interviews an appropriate method for data generation for the stated research topic?
What kind of interviews were conducted for collecting the data of the research (structured, semi-structured, or unstructured)? Is this appropriate?
Were the interviews carried out in person or online? Is this appropriate?
What information is given about the interviewers and how they might have affected the interview? Is this sufficient?
What information is given about how the interview was recorded and how the record was checked? Is this sufficient?
Are sufficient quotations from the interviews used in the report of the research?
Do the researchers use the interview findings to make generalizations about a larger population? Is this appropriate?
7a. Evaluating systematic observation research
Was the observation schedule piloted?
Does the observer provide the observation schedule? If not, how does this affect your confidence in this research?
Do you think the items observed were easily observable, unambiguous and independent from each other? Did they occur regularly enough to provide sufficient data but without multiple simultaneous occurrences?
Do you think the items observed were the most appropriate for the research objectives?
How long did the researcher spend in the field? Do you think this was long enough?
If there was more than one observer, how did they ensure inter-observer reliability? (A short agreement-measure sketch follows this list.)
Was the sample large enough and representative?
Did the observation avoid disrupting the naturalness of the setting?
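On inter-observer reliability, here is a minimal sketch of Cohen's kappa, one common agreement measure for two observers coding the same events with the same observation schedule. The observers, codes and values are invented for illustration.

    # Minimal sketch: Cohen's kappa for agreement between two observers.
    from collections import Counter

    def cohens_kappa(codes_a, codes_b):
        assert len(codes_a) == len(codes_b)
        n = len(codes_a)
        # observed agreement: proportion of events coded identically
        observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        # expected agreement by chance, from each observer's code frequencies
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    observer_1 = ["talk", "talk", "idle", "talk", "idle", "write", "talk", "idle"]
    observer_2 = ["talk", "idle", "idle", "talk", "idle", "write", "talk", "talk"]
    print(round(cohens_kappa(observer_1, observer_2), 2))  # about 0.58 here

A paper that claims inter-observer reliability should report some such measure, or at least explain how disagreements between observers were resolved.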
7b. Evaluating participant observation research
What kind of participant observation was used?
How long did the researcher spend in the field? Do you think this was long enough?
Did the observation avoid disrupting the naturalness of the setting?
Has the researcher reflected on self-identity and how it affected access, perception of events and the reactions of others?
Has the researcher discussed the ethics of the fieldwork and any personal difficulties encountered?
What methods has the researcher used to convince you of the validity of the observations?
Has the participant observation led to insights that would not be possible using other methods?
8. Evaluating research based on questionnaires
Were questionnaires an appropriate data generation method for the research topic?
Was the questionnaire self-administered or researcher-administered? Was this appropriate for the research topic and setting?
Is a copy of the questionnaire provided? If yes, does it meet the guidelines given in this chapter for layout and structure? If no, how does this affect your confidence in the research?
What question types were used? Open, closed, or both? Was this appropriate?
Are the questions and possible responses clear, unambiguous, in the appropriate format and in the correct order?
Do the researchers say whether they pre-tested or piloted the questionnaire? If not, how does this affect your confidence in the research?
Do the researchers discuss content validity, construct validity and reliability of their questionnaire? If not, how does this affect your confidence in the research?
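On the reliability question, here is a minimal sketch of Cronbach's alpha, one common internal-consistency check for a multi-item questionnaire scale. The ratings below and the 0.7 rule of thumb are illustrative assumptions; appropriate thresholds and checks depend on the field.

    # Minimal sketch: Cronbach's alpha for a multi-item questionnaire scale.
    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array, rows = respondents, columns = items on one scale."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    ratings = [[4, 5, 4, 4],
               [3, 3, 4, 3],
               [5, 5, 5, 4],
               [2, 3, 2, 3],
               [4, 4, 5, 4]]
    print(round(cronbach_alpha(ratings), 2))  # values above ~0.7 are often taken as acceptable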
For all kinds of research, the final questions are:
- do the researchers mention any limitations of their work and could you think of any related to their research,
- can you identify any flaws or omissions related to the research reported,
- how well the project has been reported overall.
No research is complete until the report has been written. The most carefully designed and conducted study is of little importance unless the findings are transmitted to others. The first aim of scientific writing is to be understood; it is not 'to produce the greatest number of papers from the minimal amount of data using the maximum number of words'.
1. The problem (introduction) including general problem, literature review/significance of study, scope and problem statement.
2. Methodology (experimental section) including design, population and sample, data collection techniques, statistical hypothesis and tests.
3. Results, conclusion and interpretations (discussion section). This section is sometimes divided into two or three parts, depending on its length and the complexity of the data and the data analysis. The most important function of this section is to show how the data do or do not support the hypothesis, and how the results bear on the hypothesis.
However, the most prevalent weaknesses of research reports are: absence of a clear problem statement, lack of operational definitions adequate to permit replication of the study, inappropriate sampling, and conclusions that are not justified on the basis of the study.
I believe a reviewer should check whether the title of the paper matches the work done. The length of the paper is also significant. Data analysis according to accepted norms, only the necessary graphs and diagrams, an abstract in accordance with the text, and coherent observations, conclusions and discussion are all significant aspects of reviewing.