In principle, nonsignificant results imply either that the original basis for your hypotheses was misguided (a problem with theory), or that something about how you tested the hypotheses was not adequate (a problem with methods).
So, you might be able to publish those results if you can generate an interesting discussion of those two issues. If there was a strong theory behind your work, then why might your results contradict that? If you used what the rest of the field would accept as appropriate methods, then what might have gone wrong?
That approach should also lead to a clear statement about the implications for future research that could either support or refute your findings.
Something similar happened to me twice during my research journey: I discovered that my original hypotheses were wrong, so I started over from the beginning.
Dear Dr Bendaoud Nadif, kindly review what may be missing in the background and the details of your experiment, then repeat the work to obtain significant results.
I assume that the author is referring to statistical significance. A lack of significance, or an unexpected empirical result, can at times be more interesting than the expected significant finding and can constitute a contribution to the literature.
Data collection is one phase; data analysis is another. Depending on how the dataset is applied, a marginally significant result may offer some insight into the incompleteness of the data sample, which can sometimes lead to wrong conclusions.
Basically, if this happens, the researcher should examine the study design, the target group, and the validity of the tools and methods. Occasionally, non-significant results should still be reported: they may help other authors or investigators modify the hypothesis, and reporting them also avoids publication bias.
Dear @Bendaoud Nadif, I would like to refer to your question and statement: what if a researcher collects data and realizes that the results are not significant? That is, the data are not strongly significant, only slightly significant.
First, one should note the size of the original dataset, the number of variables, the number of replications, and so on. The number of degrees of freedom also affects whether the outcome is statistically significant (see the sketch below). Obtaining a non-significant result after all the necessary prerequisites have been fulfilled means there is no statistical difference among the given set of treatments. This should prompt repeating the experiment with a larger number of variables and treatments, using a design that minimises the uncontrollable (experimental) error to the greatest extent possible. If the outcome is even slightly significant (I mean significant at p = 0.05), it implies significant variation among the treatments.
PS: The reply is based on my personal experience in plant breeding experiments.
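To make the point about degrees of freedom concrete, here is a minimal sketch, not taken from any answer above; the effect size and sample sizes are arbitrary assumptions. With the same true difference between two treatments, adding observations, and hence degrees of freedom, shrinks the p-value of a two-sample t-test:

```python
# Minimal sketch (assumed effect size and sample sizes): the same true
# difference between two treatments is tested at several sample sizes.
# More observations -> more degrees of freedom -> smaller p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.4  # assumed standardized mean difference between treatments

for n in (10, 30, 100, 300):  # observations per treatment group
    control = rng.normal(loc=0.0, scale=1.0, size=n)
    treated = rng.normal(loc=true_effect, scale=1.0, size=n)
    t_stat, p_value = stats.ttest_ind(control, treated)
    df = 2 * n - 2  # degrees of freedom for a two-sample t-test
    print(f"n={n:4d}  df={df:4d}  t={t_stat:+.2f}  p={p_value:.4f}")
```

With a small n the test typically fails to reach p = 0.05 even though the effect is real; with a large n the same effect becomes clearly significant.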
Eliminating inputs that do not affect outputs is also important to report in the scientific literature. This could be a problem only if other literature has already confirmed the lack of effect. There are data repositories, with briefs that accompany the data to demonstrate results; I would find one appropriate to your data and scientific area of study. Then I would do an exploratory analysis of alternate research that could be conducted from the data. I am convinced that scientists too often 'give up' on information when the inputs do not explain the initial outputs X-->Y. Changing the type of statistical analysis, and/or using the lack of expected results as a basis for previously untested options, can be a great resource for revealing viable X-->Y relationships.
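As an illustration of how changing the analysis can reveal an X-->Y relationship, here is a minimal sketch on simulated data; the quadratic relationship and noise level are my own assumptions, not anything from the thread. A linear correlation misses the U-shaped effect, while testing a quadratic term finds it:

```python
# Minimal sketch (simulated data): a U-shaped X->Y relationship that a
# linear test misses but an alternate analysis reveals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = x**2 + rng.normal(scale=0.1, size=200)  # quadratic relation plus noise

# Linear analysis: Pearson correlation is near zero and non-significant.
r_lin, p_lin = stats.pearsonr(x, y)
print(f"linear:    r={r_lin:+.3f}  p={p_lin:.3f}")

# Alternate analysis: correlating y with x**2 reveals the relationship.
r_quad, p_quad = stats.pearsonr(x**2, y)
print(f"quadratic: r={r_quad:+.3f}  p={p_quad:.3f}")
```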
If the study was well designed, with adequate sampling, a lack of significance is itself a finding. However, if the hypothesis is well founded and relevant, the reproducibility of the investigation must also be considered.
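One way to make "adequate sampling" concrete before repeating an investigation is a power analysis. The sketch below uses statsmodels; the effect size is an assumption, and alpha and power are the conventional 0.05 and 0.80:

```python
# Minimal sketch (assumed effect size, conventional alpha and power):
# compute the sample size needed to detect a given effect, which makes
# "adequate sampling" a concrete number rather than a judgment call.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.4,   # assumed standardized effect (Cohen's d)
    alpha=0.05,        # conventional significance level
    power=0.80,        # conventional target power
    alternative="two-sided",
)
print(f"required sample size per group: {n_per_group:.0f}")
```

If the original study sampled far fewer subjects than such a calculation suggests, the non-significant result says more about power than about the hypothesis.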