You are right about peer reviewers. I have found some who knew little about the work they were evaluating. That is a problem, because they have the power to decide.
They should have knowledge about what they comment on.
The same happens in statistics. We try to minimize errors, but it is only an attempt; we can never control any result 100%. Life is the same. The majority of reviewers are great, but there are always some who are not. This is what science tells us: we never know everything. Knowledge is a dynamic process, and there are always mistakes to minimize!
The majority of very intelligent people/scientists are great in human relationships, but some are not. We never control everything, but our experience seeks to minimize the negative effects.
Statistics, as a science, is normally objective. However, the human researchers and investigators are not, and may have hidden agendas, which is why peer review is important as quality assurance.
There are also statistical methods that are subjective, for example subjective logic, which allows for modelling subjective uncertainties, trust, etc. in the random parameters of a statistical model.
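As a rough illustration of that remark, here is a minimal sketch assuming the standard binomial-opinion mapping used in subjective logic (the function name and the default prior weight of 2 are my own illustrative choices): evidence counts are turned into belief, disbelief, and uncertainty components, plus an expected probability.

```python
# Minimal sketch of a subjective-logic binomial opinion built from evidence counts.
# The prior weight of 2 and the function name are illustrative assumptions.

def binomial_opinion(positive, negative, base_rate=0.5, prior_weight=2.0):
    """Map evidence counts to (belief, disbelief, uncertainty, expected probability)."""
    total = positive + negative + prior_weight
    belief = positive / total
    disbelief = negative / total
    uncertainty = prior_weight / total
    expected = belief + base_rate * uncertainty   # expected probability of the opinion
    return belief, disbelief, uncertainty, expected

# Example: 8 positive and 2 negative observations about a source we partly trust.
b, d, u, e = binomial_opinion(8, 2)
print(f"belief={b:.2f} disbelief={d:.2f} uncertainty={u:.2f} expected={e:.2f}")
```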
Science seeks to maximize the objective, since the variables and models we choose to analyze the data are already a choice we make and therefore already include some degree of judgment.
Thus, what we call objective is what we consider to include only a small degree of subjectivity.
Similarly, nothing is completely independent. What we treat as independent events are those that in fact have only a small degree of association with others.
We spend our lives trying to control the subjective, minimizing it, often using probabilities. The same holds when we have to decide. We seek to minimize the errors associated with any decision, but we work with randomness, so we can only guarantee a likely result, given the history of that reality.
I think that peer review by itself does not guarantee objectivity. It can assure the correctness of the reasoning, not of the hypotheses of a practical study.
A discussion of this question can be found in the Spanish forum of Catholic.net.
"... while a man is an insoluble puzzle, in the aggregate he becomes a mathematical certainty. You can, for example, never foretell what any one man will do, but you can say with precision what an average number will be up to. Individuals vary, but percentages remain constant. So says the statistician."
Arar, I do not agree with Sherlock Holmes, as if people were not free to change their lives, for example from sinners to just people or saints. With such a change the percentages can vary and do not remain constant, by the grace of God.
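As a side note, here is a small simulation (purely illustrative; the rates 0.30 and 0.10 are invented) of the statistical point behind the quotation and of the objection above: each individual outcome is unpredictable, the aggregate percentage stabilises as the sample grows, and if the underlying behaviour changes, the percentage changes with it rather than staying constant.

```python
# Illustrative simulation: unpredictable individuals, stable aggregate percentages.
import random

random.seed(1)

def sample_percentage(p, n):
    """Proportion of 'successes' in n independent Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, round(sample_percentage(0.30, n), 3))   # settles near 0.30

# If the underlying behaviour shifts (p drops to 0.10), the aggregate shifts too.
print(round(sample_percentage(0.10, 100_000), 3))
```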
"Statistical models are sometimes misunderstood in epidemiology. Statistical models for data are never true. The question whether a model is true is irrelevant. A more appropriate question is whether we obtain the correct scientific conclusion if we pretend that the process under study behaves according to a particular statistical model."
Scott Zeger, "Statistical reasoning in epidemiology" in the American Journal of Epidemiology, 1991
Darko, in some cases the falseness of the model is irrelevant, but in others it can distort the conclusion we need. It depends, for example, on the sample size used for the statistical inference.
The correctness and objectivity of a science depend on its internal logic (its reasoning), but also on its external logic (its adequacy for practical use).
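A small simulation sketch of this point (the exponential data and the normal-theory interval are illustrative assumptions, not anything taken from the discussion): with a small sample, the wrong distributional model visibly distorts the advertised 95% coverage, while with a large sample the misspecification becomes almost irrelevant.

```python
# Illustrative check of how sample size changes the cost of a false model:
# a normal-theory 95% interval applied to skewed (exponential) data.
import random
import statistics

random.seed(0)

def coverage_of_normal_interval(n, true_mean=1.0, reps=2000):
    """Fraction of simulations in which the usual mean +/- 1.96*SE interval covers the true mean."""
    hits = 0
    for _ in range(reps):
        x = [random.expovariate(1.0) for _ in range(n)]      # skewed data, true mean 1.0
        m = statistics.fmean(x)
        se = statistics.stdev(x) / n ** 0.5
        hits += (m - 1.96 * se <= true_mean <= m + 1.96 * se)
    return hits / reps

for n in (5, 30, 500):
    print(f"n={n:4d}  coverage={coverage_of_normal_interval(n):.3f}")   # nominal value is 0.95
```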
Yes, statistics can be an objective science, but people stand above the science, and a scientist cannot force another man or woman to collaborate in his or her research.
Then, if we accept the will of volunteers in participating in a survey or study, I think it is impossible to assure objectivity in the statistical inferences, because the sampling design in reality is not objective: probability alone does not select the sample, and the freedom of the people in the population can affect the sampling design and the inference unpredictably, making them subjective according to the will of the possible respondents.
But if the sample units can be sampled and observed without problems, then yes, an objective inference and an objective statistics are possible.
You are right: nothing in inference is completely objective... there are always errors... from the sample, the choice of variables, the choice of models, changes in the surroundings... so the most we can say is that we have high confidence in those results, but we should say why that is...
To make an objective statistical inference it is necessary to assure what I described. It is not necessary to have high precision, but it is necessary to report the statistical procedure truthfully and to state whether it is scientifically correct.
Statistics as a science is as objective as mathematics.
However, the application of statistical methods to real-world problems and the practical interpretation of the results are necessarily based on subjective judgements. There is nothing like an "unconditional probability": probability is always conditioned on ideas, models, and assumptions, and it relates to certain -selected- information. All these conditions are chosen based on considerations of plausibility and coherence, but this process is nevertheless genuinely subjective. Statistics provides "objective" procedures for approaching an inter-subjective state of knowledge based on the extraction and accumulation of information from data.
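A small sketch of that last sentence (the Beta-Binomial model, the two priors, and the counts are illustrative assumptions): two analysts who condition on different assumptions report different probabilities from the same small data set, but accumulating data pulls them toward an inter-subjective agreement.

```python
# Illustrative Beta-Binomial example: the same data, two different priors.
def posterior_mean(successes, failures, prior_a, prior_b):
    """Posterior mean of the success probability under a Beta(prior_a, prior_b) prior."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

small = (7, 3)        # 7 successes, 3 failures
large = (700, 300)    # the same rate, much more data

print("flat prior, small data:     ", round(posterior_mean(*small, 1, 1), 3))
print("sceptical prior, small data:", round(posterior_mean(*small, 10, 10), 3))
print("flat prior, large data:     ", round(posterior_mean(*large, 1, 1), 3))
print("sceptical prior, large data:", round(posterior_mean(*large, 10, 10), 3))
```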
I think that subjectivity arises from uncheckable suppositions, and objectivity from checkable conditions, not from data alone. Not every condition is subjective, but every uncheckable supposition is subjective.
In some areas and problems statistics can be objective. For other, more ambitious purposes, subjectivity in statistics is almost necessary, but the result is an inferior science, since the conclusions are not objective.
Implementing objective statistics is very difficult in medical practice, but it is possible. Most of today's statisticians could not do it, owing to their limited coherence and knowledge.
Statistics is as objective a tool as arithmetic. The tool is objective, because the tool has no agenda. However, how this tool is applied is another matter.
In other words, let's not blame statistics for the instances where it is misused, by people who are trying to push their agenda.
The use of statistics in medicine is unavoidable. How else to determine whether a medical procedure, or a drug, is better than the previous practice? Just guess? Just intuition? If medicine is to advance, you need to apply statistical methods to determine the efficacy. We can't make credible claims without evaluating the results. We can't evaluate the results without statistical analysis.
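A minimal sketch of the kind of comparison being described (the improvement scores are invented, and a real trial would be randomized and far larger): a two-sample t-test asking whether patients on a new treatment improved more than those on the old one.

```python
# Illustrative two-sample comparison of invented improvement scores.
from scipy import stats

old_treatment = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.3]
new_treatment = [5.4, 6.1, 5.0, 5.8, 6.3, 5.6, 5.9, 5.2]

t_stat, p_value = stats.ttest_ind(new_treatment, old_treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # a small p-value suggests a real difference
```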
If a police officer walks with a dog then, statistically, each of them has three legs - used that way, statistics can never be objective. If you use statistics skilfully and only where it is justified, it not only can be, but certainly is, an objective tool.
But Sylvester, there is nothing implied in "statistics" that mandates that the four appendages of a dog and the two legs of a human be lumped together, and the arms of the human be ignored. Just because you did that should not be taken to mean that "statistics says so." It doesn't.
Statistics will show that humans and dogs have four appendages each. Statistics may also be used to show that most of the time, dogs require four appendages to walk, whereas humans only require two. This can be demonstrated with a high confidence level.
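A tiny illustration of the "three legs" point (the numbers are the obvious ones, nothing more): averaging over two different populations produces a figure that describes neither, while summarising each group separately, as ordinary statistical practice does, is perfectly informative.

```python
# Pooled averages over heterogeneous groups mislead; per-group summaries do not.
from statistics import fmean

legs = {"human": [2, 2, 2, 2], "dog": [4, 4, 4, 4]}

print("pooled mean legs:", fmean(legs["human"] + legs["dog"]))   # 3.0 -- describes nobody
for group, values in legs.items():
    print(f"{group} mean legs:", fmean(values))                  # 2.0 and 4.0 -- meaningful
```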
In fact, people use statistics all the time, even without realizing it. You know that with a high confidence level, if you turn the steering wheel to the right, the car will turn to the right. You also know when this might not be valid, such as on a patch of ice.
Most of what we do on a routine basis is the result of experience, meaning statistics, rather than being the result of classical physics calculations.
Theories of statistical testing may be seen as attempts to provide systematic means for dealing with a very common problem in scientific inquiry: how to generate and analyze observational data to test a scientific claim. The problem arises because scientific inquiries involve theoretical concepts that do not exactly match up with things that can be directly observed; furthermore, the scientific hypothesis may itself concern probabilistic phenomena.
There are several kinds of models involved in statistical testing, such as statistical models of hypotheses, statistical models of experimental tests, and statistical models of data.
Statistical models describe the sources of data and can have different types of formulation corresponding to these sources and to the problem being studied. Such problems can be of various kinds:
Sampling from a finite population
Measuring observational error and refining procedures
Studying statistical relations
Statistical models, once specified, can be tested to see whether they provide useful inferences for new data sets.[4] Testing a hypothesis using the data that were used to specify the model is a fallacy, according to the natural science of Bacon and the scientific method of Peirce.
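A minimal sketch of that last point (the data, the polynomial degrees, and the split are illustrative assumptions): a model's fit to the data used to specify it is an optimistic measure, so the check is made on data the model has not seen.

```python
# Illustrative comparison: judging a model on the data used to fit it is optimistic.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 60)
y = 2.0 * x + rng.normal(0.0, 0.3, 60)          # the true relation is a noisy straight line

x_fit, y_fit = x[:30], y[:30]                   # data used to specify the model
x_new, y_new = x[30:], y[30:]                   # new data

for degree in (1, 6):
    coefs = np.polyfit(x_fit, y_fit, degree)
    err_fit = np.mean((np.polyval(coefs, x_fit) - y_fit) ** 2)
    err_new = np.mean((np.polyval(coefs, x_new) - y_new) ** 2)
    print(f"degree {degree}: error on fitting data {err_fit:.3f}, on new data {err_new:.3f}")
```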
Data collection
Statistical theory provides a guide to comparing methods of data collection, where the problem is to generate informative data using optimization and randomization while measuring and controlling for observational error.[5][6][7] Optimization of data collection reduces the cost of data while satisfying statistical goals,[8][9] while randomization allows reliable inferences. Statistical theory provides a basis for good data collection and the structuring of investigations in the following topics (a small randomization sketch follows the list):
Design of experiments to estimate treatment effects, to test hypotheses, and to optimize responses.[8][10][11]
Survey sampling to describe populations[12][13][14]
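A minimal sketch of the randomization step mentioned above (the subject labels and group sizes are illustrative): a simple random split into treatment and control groups.

```python
# Illustrative randomized assignment of subjects to treatment and control groups.
import random

random.seed(7)

subjects = [f"subject_{i:02d}" for i in range(1, 21)]
random.shuffle(subjects)                      # the randomization itself
treatment, control = subjects[:10], subjects[10:]

print("treatment:", treatment)
print("control:  ", control)
```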
Summarising data
The task of summarising statistical data in conventional forms (also known as descriptive statistics) is considered in theoretical statistics as a problem of defining what aspects of statistical samples need to be described and how well they can be described from a typically limited sample of data. Thus the problems theoretical statistics considers include the following (a short descriptive-statistics sketch follows the list):
Choosing summary statistics to describe a sample
Summarising probability distributions of sample data while making limited assumptions about the form of distribution that may be met
Summarising the relationships between different quantities measured on the same items with a sample
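A minimal sketch of the summarising task described above (the sample values are invented): a few conventional summary statistics for one measured quantity.

```python
# Illustrative descriptive statistics for a small invented sample.
import statistics

sample = [4.2, 3.9, 5.1, 4.8, 4.4, 6.0, 4.1, 4.7, 5.3, 4.5]

print("n        :", len(sample))
print("mean     :", round(statistics.fmean(sample), 2))
print("median   :", round(statistics.median(sample), 2))
print("std dev  :", round(statistics.stdev(sample), 2))
print("quartiles:", [round(q, 2) for q in statistics.quantiles(sample, n=4)])
```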
Interpreting data
Besides the philosophy underlying statistical inference, statistical theory has the task of considering the types of questions that data analysts might want to ask about the problems they are studying and of providing data analytic techniques for answering them. Some of these tasks are the following (a short regression sketch follows the list):
Summarising populations in the form of a fitted distribution or probability density function
Summarising the relationship between variables using some type of regression analysis
Providing ways of predicting the outcome of a random quantity given other related variables
Examining the possibility of reducing the number of variables being considered within a problem (the task of Dimension reduction)
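A minimal sketch of two of the tasks above, summarising the relationship between variables and predicting a new outcome (the dose/response data are simulated for illustration):

```python
# Illustrative straight-line regression on simulated dose/response data.
import numpy as np

rng = np.random.default_rng(1)
dose = rng.uniform(0.0, 10.0, 50)
response = 3.0 + 0.8 * dose + rng.normal(0.0, 1.0, 50)

slope, intercept = np.polyfit(dose, response, 1)
print(f"fitted model: response = {intercept:.2f} + {slope:.2f} * dose")

# Predicting the outcome for a new value of the related variable.
print("predicted response at dose 7:", round(intercept + slope * 7.0, 2))
```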
When a statistical procedure has been specified in the study protocol, then statistical theory provides well-defined probability statements for the method when applied to all populations that could have arisen from the randomization used to generate the data. This provides an objective way of estimating parameters, estimating confidence intervals, testing hypotheses, and selecting the best. Even for observational data, statistical theory provides a way of calculating a value that can be used to interpret a sample of data from a population, it can provide a means of indicating how well that value is determined by the sample, and thus a means of saying corresponding values derived for different populations are as different as they might seem; however, the reliability of inferences from post-hoc[15] observational data is often worse than for planned randomized generation of data.
Applied statistical inference
Interpreting data is an important objective of statistical research, and statistical theory provides the basis for a number of data analytic methods that are common across scientific and social research. Some of these are (a short inference sketch follows the list):
Estimating parameters
Testing statistical hypotheses
Providing a range of values instead of a point estimate
Regression analysis
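A minimal sketch covering three of the tasks in the list above (the measurements are simulated, and 95% and the null value of 100 are illustrative choices): estimating a parameter, giving a range of values rather than a point estimate, and testing a hypothesis.

```python
# Illustrative point estimate, confidence interval, and hypothesis test for a mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.normal(loc=102.0, scale=15.0, size=40)          # simulated measurements

estimate = sample.mean()                                     # point estimate of the mean
sem = stats.sem(sample)                                      # standard error of the mean
crit = stats.t.ppf(0.975, df=len(sample) - 1)                # critical value for a 95% interval
low, high = estimate - crit * sem, estimate + crit * sem

t_stat, p_value = stats.ttest_1samp(sample, popmean=100.0)   # H0: the true mean is 100

print(f"estimate: {estimate:.1f}")
print(f"95% interval: ({low:.1f}, {high:.1f})")
print(f"test of mean = 100: t = {t_stat:.2f}, p = {p_value:.3f}")
```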
Many of the standard methods for these tasks rely on certain statistical assumptions (made in the derivation of the methodology) actually holding in practice. Statistical theory studies the consequences of departures from these assumptions. In addition, it provides a range of robust statistical techniques that are less dependent on assumptions, and it provides methods for checking whether particular assumptions are reasonable for a given data set.
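A tiny illustration of the robustness point (the values and the single gross error are invented): the mean is pulled far away by one bad recording, while a robust summary such as the median barely moves.

```python
# Illustrative sensitivity of the mean versus the median to a single outlier.
import statistics

clean = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.9]
with_outlier = clean + [50.0]                  # one gross recording error

print("mean  :", round(statistics.fmean(clean), 2), "->", round(statistics.fmean(with_outlier), 2))
print("median:", round(statistics.median(clean), 2), "->", round(statistics.median(with_outlier), 2))
```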
Reviewers should be creative people, but usually they want research to stay as it is, and it is difficult to change their minds. Sometimes they do not even read the papers and reject them just because they do not like some of the co-authors.
This could not happen if the procedure were more transparent. But on average there are other outlets with good reviewers, and one day those journals could become the leaders. The higher they are, the more likely they are to fall, and that is life.
No, statistics cannot be objective, because the way statistics was constructed, historically, was not objective. Having said that, you can also say, the other way around, that statistics is a way to objectify reality, and in this sense it is objective (it creates a certain reality). All this is well explained by:
Desrosières, Alain (1998). The Politics of Large Numbers: A History of Statistical Reasoning. Cambridge, MA: Harvard University Press.
See also: Desrosières, Alain (2001). "How Real Are Statistics? Four Possible Attitudes", Social Research.
I really agree with you, dear Helena, on this statement: sometimes they do not read the papers and reject them just because they do not like some of the co-authors.
And the worst is that many of our papers are rejected immediately because of our affiliation and our country, or they write to us: sorry, the paper is not within the journal's scope, even though we have read many papers just like ours in their journal. It is not fair, and unfortunately some reviewers or editors treat some papers in an unprofessional and non-academic way.
Anyway, on the other hand, we also encounter much fairness, and that indicates that good manners and an academic approach still exist!
The problem is that the discussion has shifted from the original question. The question is "Can statistics be an objective science?", not whether the reviewing process in academic journals is objective...
As a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data, statistics proper is an objective science. However, misuse of statistics occurs too often, and this makes people think that statistics is not an objective science. As for the misuse of statistics, see the link below:
@Tabata sensei. Sorry, I can't agree with you. Statistics proper is not an objective science. If you read some books on the history of statistics, you will learn that all the concepts were intensely discussed and debated before people finally reached a compromise. Concepts and definitions are the product of a compromise, not an unbiased universal truth. You can read the book by Desrosières (at Harvard University Press) that I quoted earlier, and you will learn how these concepts were developed. The meanings of the different formulas, etc. were also heavily debated.