ResearchGate encourages the posting of negative results/data. The idea is laudable insofar as traditional journals are highly biased toward positive findings. As a consequence, (1) important negative findings often do not get reported and (2) not knowing that someone else has already done the same experiment, the scientific community risks spending time and money replicating failure rather than replicating positive findings, which we already do too rarely.
That said, do negative findings require as much rigor and care in reporting as positive findings? A negative finding may be important because it argues against a specific hypothesis (these can be, and sometimes are, published). And a negative finding may support the null hypothesis. But a negative finding can just as easily arise because the experiment was not done correctly, because it was poorly designed, or because it was simply a bad idea in the first place. And the interpretation of negative findings is subject to the same problems as the interpretation of positive findings: they can be over-interpreted, over-generalized, etc.
For these reasons, it seems that for the publication of negative findings to be really useful, they should be structured in the same manner as positive findings (introduction, methods, results, discussion) and should be peer reviewed. If ResearchGate is going to carry the torch and champion the publication of negative findings, should they not take on the responsibility of ensuring that the reports are meaningful, valid, and actually add something to scientific knowledge? That is, should they not accept the responsibility of instituting a peer-review process? As every scientist knows, negative findings (particularly things that 'just didn't work') are far more common than positive findings. Are we simply going to vomit up an endless stream of unvetted, unreviewed, unevaluated negative data? Is this really a valuable service?
ResearchGate has an opportunity to add value to the field of scientific publishing by taking on this challenge, but with it, it seems to me, they need to invest resources that create value, i.e., a peer-review process; otherwise they risk simply creating a mountain of undigested, essentially useless data.
Certainly they should! As you say, negative findings can also add information to open questions. And categorically ignoring researchers' work that shows negative findings might also have been the reason for some of the unfortunate cases of scientific fraud we've seen recently.
The Journal of Unsolved Questions (http://junq.info/) is not specific to one discipline and publishes (peer reviewed) studies with null results. This is their admirable statement:
"Science is not always about success. Most research projects are unsuccessful stories producing ambiguous or ‘null’-results that don’t lead to unambiguous conclusion. Nevertheless this ‘failed’ research provides useful and valuable information for fellow scientists.Currently only research projects with positive results and clear conclusions have the chance to get published in scientific journals. Due to these publication practices a lot information is lost for the scientific community and additionally scientists find themselves in the dilemma of having to overinterpret data."
Hello Jeff,
I also started a related discussion on ResearchGate:
https://www.researchgate.net/post/How_about_establishing_a_database_for_failing_experiments
My proposal is to establish a database that collects negative data, including raw data, results, and anything else that can't be published in traditional journals. The database could share these data freely, like NCBI's databases, where people can search, view, and download DNA or RNA sequences for free.
When I posted the idea in the Bioinformatics topic, some researchers followed up on the question, and in summary these are the main problems that might arise:
1. The authenticity of the results. The fact is that not everyone is honest, and not everyone is punctilious, and with so many results collected it is unclear how they could all be verified.
2. Some fields and labs may not be willing to share their complete results, even small negative ones, because of competition and other reasons. And if the shared results help someone else arrive at a good hypothesis that is later confirmed, what does the supplier of the results gain? Authorship on the article, or nothing? That is still a problem.
I agree with the publication of negative results, for the simple reason that other researchers then don't spend time researching something that has already been researched.
OK, I totally agree with the publication of negative results. I think it could be of great value for the scientific community when done properly. Indeed, there is a free-access online Journal of Negative Results in Biomedicine (http://www.jnrbm.com/), which demonstrates that negative results can be published. However, I can imagine that not all researchers will spend their time properly verifying and publishing their negative results, considering the importance we currently confer on them. Just think about how negative results will be considered in the evaluation of grant or project applications. We will need to change our mindset a little to be able to extract more positive value from the negative.
Negative results provide more information on which future scientific research can be based, so that others don't repeat the same work and end up wasting resources.
Sometimes negative results are as interesting as positive ones, and for those the Journal of Negative Results in BioMedicine is there to spread the word.
Hello to all participants of the discussion. Firstly, I'm sorry for my perhaps bad English. I generally support the opinion that negative results are important. But from my point of view, the main interest is the idea, the scientist's idea: what problem he solves, and how originally and in good faith he solves it. The result is important, but secondary. I don't agree that knowledge of the negative results obtained by other researchers saves time and effort by allowing you to «not reinvent the wheel». In his time, the navigator James Cook said that it was impossible to penetrate farther south than 71°10' S and that the continent Terra Incognita did not exist (i.e., he reported a negative result). However, 45 years later, Antarctica was discovered. How can somebody's failure stop the research, if the idea of the experiment is interesting? The same applies to positive results: a scientific discovery only becomes a scientific fact when it is reproduced in other experiments (a posteriori). ☺
I believe they can be the starting point for new research. Jaconda, in her research on positive mental health, found that the background of the pathology constituted preventive elements to be reinforced, which gave rise to research on resilience and protective factors.
A negative result can only come from methodologically sound and well-conducted studies. The main point, however, is that these findings are under-reported in the indexed literature; that said, a number of journals devoted to publishing negative findings are now being founded.
A couple of examples can explain what I mean:
http://www.jnrbm.com/
http://www.pnrjournal.com/
It is however worth saying that most inconclusive studies are not well designed, nor properly conducted.
Very interesting discussion. It is a difficult issue. As several have pointed out, negative results cannot be taken at face value but need to be scrutinized (e.g., peer review); however, as also pointed out, in essence few want to be bothered with this, as there is generally little reward in negative findings. I think the journals are a step in the right direction. Another might be some form of database with negative results and a brief write-up, but one that includes, importantly, the opportunity for other scientists to comment, to generate discussion about negative results: a sort of ongoing peer review relevant only to those who are interested in that particular data. Thoughts?
Negative and positive are two sides of the same coin. But then we have negative results where the result simply fails to show up, and another set of negative results that completely contradicts the result we were supposed to obtain. It is the latter that needs to be recognised and peer reviewed; this might even open up a completely different approach to the problem. Identifying them is the primary goal, after which things might just get interesting on further analysis.
A negative result is part of the process of proving or disproving a hypothesis. It should be used as a platform to re-assess the criteria of your hypothesis and the methodology used to investigate it. Negative results can add to the current knowledge of any discipline, and this new knowledge can then be used either to further your investigation, perhaps in different directions than those originally planned, or to admit that at this time it is not possible to prove the hypothesis. If the result is published it may give someone else an insight into the problem that would not have occurred to the original investigating team. Remember James Cook and the Antarctic!
How could any researcher say that data are positive or negative? We can't be judgmental or prejudiced about the data obtained. Whatever data we obtain or observe needs to be discussed and interpreted accordingly; if some results are consistently observed to differ from earlier reports by others, then they need to be discussed and the cause and possible reasons behind them should be sought. If nothing new or different from earlier reports ever came to light, research would stop at some point and no new dimensions would be explored.
It is indeed important to report negative results, for the sake of science. However, the problem is to sufficiently prove the scientific soundness of the research. There is an opposite "asymmetry of knowledge": proving that something doesn't work using one particular test doesn't imply that it is never going to work, or that it is not true.
I agree with everyone above; results that do not support your hypothesis are still results. However, I do think it is much more difficult to publish negative results, which is a shame. It is just as important to share results from these studies, not only to inform the public in general, but to inform other scientists within your field. If negative results were more easily published, I think the quality of experiments being done would vastly improve.
I agree Chris, as I said, I do think it is vital to publish this. We have all experienced delays or cul-de-sac research that could have been avoided if we had had prior information on what had failed. But that also raises another question: given the competitive world we live in (not that I am in favor of it), I believe one of the reasons for not disclosing what "doesn't work" is to keep a competitive edge... which I believe is going to be hard to fight against.
Chris, Albert - I imagined, for example, Copernicus or Darwin hiding their scientific guesses "to have a competitive edge..."
Yes, I am convinced of this. The day researchers "bet" on the null hypothesis and not on the "alternative hypothesis", I think we will have less biased results; otherwise the momentum to justify our own results lies within us.
Chris, it is my fault that I was not understood. I was being ironic about Albert's remarks on competitive advantage in science. These things are foreign to science, it seems to me. Are we doing science for the sake of a career? Wanting to know: that is the motto of a scientist! Science is not in the impact factor or citation index; we have it in our heads. Sorry for the excessive pathos.
I absolutely agree. Negative results may sometimes be even more important than positive ones. It is critical to publish negative results, especially when they represent not only unexpected or undesired results but also adverse events or errors in trials or study designs. It is important to have more peer-reviewed journals dedicated to high-quality, sound studies with "undesired" results. Although I agree with Dmitry that science is not in the impact factor or citation index, I believe that studies with "undesired" results should get cited more frequently to increase the impact factors, which are still low (e.g., for jnrbm IF 1.1), so that they get more attention.
Thank you Jeff for this excellent Topic!
Any scientific finding should be methodologically sound, and the peer review process should safeguard this. I do not believe that null results are any exception to this.
When publishing null results, it is crucial to understand the limitations of classical (frequentist) statistics. If an effect does not differ significantly from 0, this does not mean that you have demonstrated that the effect is absent; it only means that you have failed to demonstrate that it is present (see http://www.nature.com/neuro/journal/v14/n9/full/nn.2886.html for an extension of this fallacy). The use of Bayesian statistics could really help to quantify evidence in favor of the null hypothesis (a rough illustrative sketch follows the links below). I have applied them in the papers below:
http://www.frontiersin.org/human_neuroscience/10.3389/neuro.09.057.2009/abstract (null effects found, as hypothesized)
http://www.frontiersin.org/Decision_Neuroscience/10.3389/fnins.2012.00126/abstract (null effect found, contrary to original hypothesis)
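To make the point concrete, here is a minimal, illustrative sketch (in Python, and not the exact method used in the papers linked above) of how a Bayes factor can express evidence for the null rather than merely a failure to reject it. It uses the simple BIC approximation BF01 ≈ exp((BIC_alt - BIC_null)/2); the data are simulated and purely hypothetical.

```python
# Sketch only: BIC-approximated Bayes factor for "mean = 0" vs. "mean is free".
# The data are simulated; a real analysis would use dedicated Bayesian tools.
import numpy as np
from scipy import stats

def bic_normal(x, mu, n_params):
    """BIC of a normal model with the given fixed mean and ML-estimated variance."""
    sigma2 = np.mean((x - mu) ** 2)                      # ML variance given mu
    loglik = np.sum(stats.norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)))
    return n_params * np.log(len(x)) - 2 * loglik

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=50)              # hypothetical data, true effect = 0

bic_null = bic_normal(x, mu=0.0, n_params=1)             # only sigma is free
bic_alt = bic_normal(x, mu=x.mean(), n_params=2)         # mean and sigma free
bf01 = np.exp((bic_alt - bic_null) / 2)                  # > 1 favours the null

t, p = stats.ttest_1samp(x, 0.0)
print(f"p = {p:.3f}  (only says we failed to reject the null)")
print(f"BF01 = {bf01:.2f}  (quantifies evidence in favour of the null)")
```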
I agree with Jasper. Bayesian statistics offer a way of testing the null hypothesis without the assumptions of traditional null hypothesis significance tests (NHST). It is also worth noting that NHST is very sensitive to the power of the design. Inadequate power leads to false retention of the null hypothesis, just as too much power leads to trivial findings being statistically significant. I'm constantly amazed at how few researchers in many areas actually consider the power of their designs, and this is certainly an important consideration in publishing null results with traditional methods. In psychology, I have seen average power estimated at only .3.
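As a rough illustration of how easily low power produces 'negative' results, here is a short sketch using statsmodels' power calculator for a two-sample t-test; the effect size and sample sizes are assumed numbers, not taken from any study mentioned here.

```python
# Sketch: sensitivity of null results to statistical power (assumed numbers).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a common but underpowered design: d = 0.3 with n = 30 per group.
power = analysis.power(effect_size=0.3, nobs1=30, alpha=0.05)
print(f"Power with n = 30 per group and d = 0.3: {power:.2f}")   # around 0.2

# Sample size per group needed to reach 80% power for the same effect.
n_needed = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05)
print(f"n per group needed for 80% power: {n_needed:.0f}")       # around 175
```

With power around .2 or .3, most 'failed' experiments are simply uninformative, which is exactly why raw null results need careful vetting before publication.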
David is correct: effect size has become undervalued, in favor of statistical significance.
Exactly because of the lack of power, failing to reject the null hypothesis should never be considered as confirming it.
Absence of evidence is not evidence of absence.
True, but evidence of absence is.
Encouraged by all the Bayesian talk. Reading this Yudkowsky article is worthwhile:
http://yudkowsky.net/rational/bayes
The discussion has taken a turn toward how statistical analyses are to be interpreted (or even conducted, i.e., Bayesian), a very interesting turn, especially applied to negative findings. The discussion above seems considerably ahead of much practice in the field. I often encounter reviewers who will insist that a p-value of .06 confirms the null hypothesis. The problem with this fixation on a p-value as the arbiter of truth has been pointed out repeatedly both by statisticians and by scientists in particular fields, but this outlook still seems to persist widely. Such rigid rules of inference seem particularly dangerous with regard to 'negative' findings precisely because of what several commenters above noted: insufficient power, particularly when there are unidentified factors that may obscure a true positive effect. At least with 'marginally significant' positive findings one can place them in the context of other findings; i.e., one marginally significant observation alongside several significant findings that all point to the same underlying hypothesis can provide some confidence. The effort to turn a negative result into a 'positive confirmation of the null hypothesis' may not involve the same effort to, so to speak, triangulate multiple pieces of evidence. The risk, then, is that we may be far too willing to accept the null hypothesis because it is easier to do so (i.e., we won't invest in doing multiple related experiments to firmly disprove a hypothesis). Thus, the simple idea of 'just publish negative data/results' is not really so simple and requires considerable thought about how to approach negative data, such as is being discussed here. How it is published, in turn, will depend on such discussions. If ResearchGate implements some platform to 'publish' negative results without conscientiously considering all these complexities, they may at best create a repository of useless data, at worst a repository of, essentially, misinformation. I have really enjoyed this discussion. Thank you to everyone who contributed.
If the study is done very precisely and honestly, and negative results still come, it means your null hypothesis is correct, and you can go into a whole new world of inquiry. The thing is, people get too attached to their idea to consider this possibility.
Does it mean that your null hypothesis is correct? Or just that you can't exclude it? As discussed earlier, it is just that there is not enough statistical confidence to assume the alternative hypothesis.
The null hypothesis is important, but the basic problem herein lies in the focus being on the main hypothesis. Einstein once said you ought not to take any theory at face value. Most researchers chase an idea but fail to properly study alternative ideas once they have formulated a certain hypothesis. This causes a knowledge and depth error in the study itself. When you look up the research literature, you start getting biased in a certain direction. This is essential for coherent research, but you mustn't fail to consider other possibilities as equally probable. This takes time, and in this era of zip-zap publication, researchers keep to their original line of thought. It can produce an Einstein, but there are very few Einsteins, one of the reasons being the exploration of all alternatives. Remember it took him 35 years to completely formulate general relativity, and even then it had, and has, errors.
And as a matter of fact, every theory or hypothesis is relative. You "frame" it, and thereby your theory works under constraints. This is evident in a large number of negative-result studies and the resulting rejections for publication; one crazy variable or outlier and your data goes into the dump.
Yes, a negative result is equally as important as a positive one, provided your procedure, analysis, methodology, etc. are correct. In statistics we test the null hypothesis; if your result is negative then the null hypothesis holds in your case, and hence it opens a new chapter of your research. It is encouraging, rewarding, and professionally satisfying. Thank you, with kind regards.
Yes, I have always looked forward to some kind of publication, or just a repository, of well-documented negative results. That way scientists will avoid the usually expensive mistake of going through the same steps many have discovered didn't work. And if the unsuccessful method must be used at all, access to documentation of the previous failed attempts will arm you with sufficient knowledge to judge why the previous attempts might have failed, and to hypothesize on how to prevent the failure in the present attempt. And for negative results to be useful in this regard, they have to be well structured, with nearly as much rigour as is associated with the publication of positive results. Peer review is a good idea; that way the reviewers will be able to ask questions and make sure important details of the failed experiment are well documented.
I like the idea.
Negative results are inspiring, positive results are encouraging and both hold the key to success.
Apologies if this appears simplistic....but it seems to me that if your methodology and statistics are correct, then you get a result and that is the result that you get. Results which represent the truth, the whole truth and nothing but the truth are what researchers should be looking for. We should attempt to support or not support a hypothesis, and not be biased by a desire for a positive result. If a study/experiment comes up with a result, and has good methodology, and gives knowledge, then it should be published - absolutely regardless of whether it is positive or negative. The positivity or negativity should not be the main focus. Truth is truth. Perhaps it is a problem with linguistics to refer to results as either positive or negative, as this leads them to sound good or bad?
Many projects start with the replication of previously published results. I think this is a very good practice, and in my experience it often fails. Specifically for young students this is an eye opener.
The reasons might be manifold. However, plain errors (in the previous or the present experiments and analysis) are rare. Most often, the discrepancies are caused by small technical differences that should be irrelevant but are not. This is obvious when you try to reproduce, in a natural setting, a lab result obtained with artificial stimuli and in an artificial context. Effectively, you demonstrate that the previous results do not generalize.
Therefore it is important to publish and document negative findings. Of course, the value of such a publication increases when the power of the tests is given (what size of effect would have had to be present to be detected as significant) and when possible causes for the differences are investigated.
Negative results are as important as (if not more important than) positive results; of course, only if you have the appropriate controls and can show that the negative result is not the consequence of a failed experiment. I would even treat them with more rigor, and it will be important for you to know that some experiment gives a negative result instead of stumbling against a wall trying to do experiments that others (in some cases many others) already know give negative results.
We should be able to plan our design and analysis ahead and have journals accept our protocols for publication independently of results. This might shift journals' selection criteria from positive results towards high quality methodology and outstanding paradigms.
Some medical journals, such as the Lancet, already offer such opportunities (guaranteed publication on acceptance of protocol prior to data collection). Maybe neuroscience journals might follow?
I teach my pupils that positive and negative results are both equally valid; considering only positive results is a prejudice.
Sometimes negative results are exactly what you want!! Blood tests that show you don't have cancer are just one example!! This may be slightly off course, but just to add some humour to the topic.
From reading the replies, it appears that most scientists recognise that a negative result can be confirmed as a valid result after rigorous investigation into the methodology, the experimental design and whether the current paradigms apply or not. Why then do publishers who use peer review, and also reviewers, not recognise this too? Could this be due to the environment of unrelenting pressure and monetary constraints that most scientists now have to work within, which designates a negative result as a failure rather than a step along the way? Although it is good to work within budgetary and time limits, surely it should be recognised that hypotheses are questions to be answered, and sometimes the answer is no!
Keep in mind that in most study design, failing to reject the null hypothesis is not a negative result: it is an inconclusive result.
This is an interesting and important question which I am grateful has been aired. I have had to deal with a similar issue when publishing some of my data. Reviewers were initially adamant that the study was underpowered, as the finding was "not quite significant". They initially failed to notice that the power calculation was not only made prior to the experiment, but was based on multiple previous studies. It is true that by increasing the number of subjects the data would more than likely have achieved significance; however, as the basis of the contention was that there might be insufficient time being given for recovery between the points of activity, it was important to reflect this accurately and proportionately.
As all opinions listed above, I still have a question: how should we publish negative results?
I would think the same as for positive results: giving an explanation of the meaning of these negative results and, if possible, doing experiments that confirm the explanation.
To the original question posed here: Negative results should be treated with even greater rigor than positive results. A false negative result is more dangerous than a false positive since the former is not likely to be repeated by other investigators, while a false positive may be expected to be built upon further. For this reason, any publication of negative results here or elsewhere requires appropriate rigor.
Inconclusive results are too often confused with negative results. An inconclusive result is one in which no interpretation can be made from the data; either the experiment "didn't work," or the proper controls were not done in order to determine if it did.
Negative results, on the other hand, are quite interpretable and provide important information. These are experiments in which appropriate negative and importantly, positive controls are performed in parallel with the experimental group. The statistics here should be no different from what is normally done and shouldn't be altered post hoc. What is important, however, is to compare the experimental result to a meaningful and robust positive control to determine: 1) if the experiment was performed properly (that you would see a result if it were biologically present), and 2) the importance of that result relative to a known response.
Dear Anthony Gotter,
I agree with your comment, but I would like to point out something you mention in your second paragraph. In particular, "the experiment did not work" is a phrase I've heard very often over the last 15 years (I never heard it in the previous 15 years). I still think that experiments neither "work" nor "do not work"; as you point out, with the proper controls you can know whether there was some technical problem (i.e., your enzyme did not synthesize DNA) or whether it gave a negative result.
To introduce some counterarguments to add to the general consensus view in science that negative results are equally valuable: I feel that Anthony Gotter's response "Negative results should be treated with even greater rigor than positive results" is not a useful one for two reasons. One is that the corollary of this statement is that we should treat positive results with less rigor - a bizarre conclusion, surely, when we should strive to apply the greatest rigor to all our work! Perhaps more importantly, the concept of "rigor" surely applies to experimental design and analysis of data rather than to the treatment of results - in other words, the rigor of scientific enquiry is what determines the quality of our results, and is therefore applied prior to our knowing whether we have so-called "positive" (= more readily publishable) or "negative" findings.
@Gavin: I agree with your general statement that there should not be any difference in terms of 'rigor' between 'positive' and 'negative' findings. However, this does not seem to be the main underlying concern in this discussion (as far as I understand it), because there still remains the question of how 'rigorously' the peer review will be conducted (and IF there will be a peer review at all). There are numerous studies rejected for publication in every field, often because of issues with the experimental design and statistics. Even more studies were performed in a 'crappy' way and got negative results.
If we don't have the same type of peer review for those papers, the same quality criteria, we create a way to introduce wrong findings into the scientific community, which would make all our lives harder rather than easier. However, I really would like to have a way to publish interesting negative findings, not only when disproving an old theory or an older finding or such.
The key to interpreting negative results is a positive control or better yet several. If one can show that several applications of the same methods generate positive results, but some select few do not, then the negative results can have the same weight as the positive ones. Arguments about power are without effect when the same sample sizes are sufficient to obtain effects for some independent variables but not others using the same tests.
I wish I had more time to write about this now but I don't. Still the principle should be clear.
In response to Lukas - I take your point and agree absolutely. Perhaps the best way to treat both positive and negative results equally at the review stage is not to review the results, but to review the experiment. In other words, my crazy idea would be for a journal (or other medium) to review the background, experimental design, statistical approach etc etc (as is done already) but before the results are obtained. After all, this is often done, in part, by grant and ethical approval committees anyway. A "pre-paper" approved for publication would be accepted post-experiment independent of the findings being positive or negative, given a check on the data being valid and the discussion being appropriate. A little more work perhaps, but it would certainly add to the quality of research output in the long run. In some contexts something similar is done anyway - certainly the Lancet assesses clinical trials and will "make a provisional commitment to publication" after putting the original protocol online. But this is just a dream...
@Gavin that is a really interesting idea, analogous to registering clinical trials. Even if the experiment/study is not registered in advance (at least in my work, experiments and studies evolve... I don't think I could sit down in advance, as in a grant, and write out the entire set of experiments and then subsequently provide a set of results; doing this with grants is merely a roadmap), it is still interesting to consider a pre-review of methods/experimental design only, though I am not sure (a) whether this would be practical in terms of labor and time and (b) whether it would significantly change anything. Even if the methods/experiment seem sound, negative results may still be of less interest and still difficult to interpret. Nonetheless, it is certainly an interesting idea and with some types of studies may work very well (e.g., large and/or longitudinal studies where it is very wasteful not to report negative findings).
Thanks Jeff, and you are probably right that this may not work ideally in practice, given the way experiments evolve. Key to the point about negative results is that you say they may "still be of less interest". The reviewing policy of PLoS ONE is particularly relevant here; they say that "Unlike many journals which attempt to use the peer review process to determine whether or not an article reaches the level of 'importance' required by a given journal, PLOS ONE uses peer review to determine whether a paper is technically sound and worthy of inclusion in the published scientific record." That would, in theory, address the issue we are discussing, although it may be more an aspiration than a rigorously applied principle, given that reviewers do not always do as they are told!
@Bogdan: This is in fact a serious question and shows the problematic interaction between some private companies and academia. However, in this case the problem is not a lack of peer review or another kind of quality criterion; the problem is that it doesn't even come to the decision of whether it is publishable or not; it is stopped in advance by the company. In this case the scientists would not be allowed to publish their negative results on ResearchGate either (otherwise they would probably get into trouble with their funding).
Bring back the trend of looking at effect sizes, and if we must continue with p-values and statistical significance, some attention has to be given to practical significance!
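To illustrate the difference between statistical and practical significance, here is a toy simulation (entirely hypothetical numbers): with a very large sample, a negligible effect yields a tiny p-value, so the effect size is the quantity worth reporting.

```python
# Toy simulation: statistically "significant" yet practically negligible.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.03, scale=1.0, size=n)   # true standardized effect of only 0.03

t, p = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p = {p:.2g}   (statistically 'significant')")
print(f"Cohen's d = {cohens_d:.3f}   (practically negligible)")
```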
Yes, in part, as this data is as vital to the research community as positive data, although often not popular with theorists attached to their own ideas. In science we must first have integrity and then endeavor to disprove our hypotheses and not try to 'prove' them. This simple practice of the scientific method adds strength to the hypotheses to the point of perhaps a close approximation of a proof or potentially even becoming a law. Unfortunately many lose sight of this basic principle.
I agree with Jessica. I've read all the other suggestions and thoughts and I found them interesting, some insightful. I think it could be useful to do a benchmark study on different bibliometric markers comparing different countries, research models and research areas. Not all national research models depend so dramatically on grants. In Italy, for example, research is on average done with rather few resources and with less opportunity to obtain grants compared to, for example, the USA, UK, NL or DE. With less environmental pressure and less opportunity to get grants, research might also be freer to publish negative results. But I am not sure that this is happening.
It would be nice if there were a repository for experimental methods, such that they could be "locked" in advance (similar to clinicaltrials.gov). Referencing this in a paper is nice. I have tried to do this for some observational research by posting a protocol document to Google Docs prior to collecting data (so it is in public view with a time stamp, and all changes are tracked). We will see how referencing this document works (although in this case the study turned out positive, I was convinced that its design would provide an interesting answer whether it was negative or positive).
https://docs.google.com/document/d/1SEYymlSH-LDvVmpggu21qXnEdLLlH6--I8cZWMFNzTs/edit
Assumptions create bias. Assumptions range from the choice of tests applied to the implicit 50/50 randomness in many control sets. Agent-based modeling (ABM), where agents follow local conditions and overall behavior emerges, can sometimes avoid many of these assumptions.
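For readers unfamiliar with the approach, here is a deliberately tiny, purely illustrative agent-based sketch (a voter-style model invented for this example, not anything from the post above): each agent only copies a random neighbour, yet consensus clusters emerge globally without imposing any distributional assumption.

```python
# Minimal illustrative ABM: agents on a ring copy a random neighbour's opinion.
import random

def voter_model(n_agents=100, steps=5000, seed=1):
    rng = random.Random(seed)
    states = [rng.choice([0, 1]) for _ in range(n_agents)]    # initial opinions
    for _ in range(steps):
        i = rng.randrange(n_agents)
        neighbour = (i + rng.choice([-1, 1])) % n_agents       # purely local interaction
        states[i] = states[neighbour]                          # copy the neighbour
    return states

final = voter_model()
print(f"Share holding opinion 1 after the run: {sum(final) / len(final):.2f}")
```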
I also agree with Xavier. Moreover, this is something that needs to be trained into students coming up through the ranks. It's relatively easy to design an experiment that does not have an impact if the null hypothesis is retained. However, careful design and proper methodology will result in an experiment that carries weight if the null hypothesis is rejected or retained. Many young scientists tend to follow the unidirectional clinical trials model (strongly represented in pharma trials) where it's all or nothing and a p>.05 is a failed experiment. This is dangerous for both the scientist and science because most experiments do not turn out exactly as planned, especially in social science research. Students should rather be trained to consider experimental design where information is gleaned if the null hypothesis is rejected or retained.
The problem is that there are three kinds of negative result based on the confidence interval (a rough sketch classifying these cases follows the list):
(1) The results are not significant, but the confidence interval suggests that they would have been significant with more samples (usually 0.05 < p < 0.15)
(2) The results are not significant, but the confidence interval is so tight around zero that even if they were significant they would be too small to care about
(3) The confidence interval straddles 0 but is so wide that you can't be sure what's going on: underpowered (as in 1) or nothing meaningful (as in 2)
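Here is a rough sketch of how one might read a non-significant comparison through its confidence interval rather than its p-value alone. The 'smallest effect of interest' threshold is an assumption the analyst has to supply, and for simplicity the code lumps kinds (1) and (3) together as 'inconclusive'.

```python
# Sketch: CI-based reading of a non-significant two-sample comparison.
import numpy as np
from scipy import stats

def classify_null_result(x, y, smallest_effect=0.5, alpha=0.05):
    """Crude classification of a comparison by its confidence interval."""
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
    df = len(x) + len(y) - 2                                   # simple (non-Welch) df
    half = stats.t.ppf(1 - alpha / 2, df) * se
    lo, hi = diff - half, diff + half

    if lo > 0 or hi < 0:
        verdict = "significant effect"
    elif -smallest_effect < lo and hi < smallest_effect:
        verdict = "negligible effect (kind 2)"                 # CI tight around zero
    else:
        verdict = "inconclusive or underpowered (kinds 1 / 3)"
    return f"{verdict}; {1 - alpha:.0%} CI [{lo:.2f}, {hi:.2f}]"

rng = np.random.default_rng(3)
print(classify_null_result(rng.normal(0, 1, 200), rng.normal(0, 1, 200)))
```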
I want to go back to the original question as it was posted: should negative results be published (in this case on the web) with no peer review or after peer review? I must say that at this moment I have so many inputs from the literature, and from the enormous amount of information available, that it is very difficult to separate what is really important from what is irrelevant. If ResearchGate or another website starts to publish negative results without some kind of "validation", this will be totally useless, and in a short time no one will get any good input from this material. If negative results are peer reviewed and accepted... at that point they can probably be published in "regular" journals, maybe, as suggested by Susan, simply by comparing them to some "positive" results.
If negative or null results (given that these are verifiable "null effect" results obtained either by frequentist analyses with proper power or by Bayesian methods) are to be taken into account by the scientific community, they must be peer-reviewed and published (or at least indexed)!
Increasingly, it appears that many scientific questions are being answered not by a single experiment that overturns a long-held hypothesis, but rather by meta-analyses which take into account all known research that has been conducted on a given scientific question. In my experience with these sorts of analyses, we often find "file-drawer effects" of likely gaps in the publishing record of negative or null results (assuming a normal distribution of effect sizes). These sorts of meta-analyses rely on the published literature, and if there are gaps due to a given result being "uninteresting" or "not novel", then this may be a shortfall of the publish-for-profit system to which most journals adhere. If journals are only looking for hot, new findings which will boost their impact factors, then they will be forced to ignore the findings which appear to make no real dent in old questions (either by providing results already in support of the majority, or by providing evidence of no effect).
If we want to have a useful "big-picture" outlook on some of the important scientific questions of our time, we need to have reliable sources for these meta-analyses; sources that provide verified negative, null, and positive effects alike. I'm not sure that RG is the appropriate vehicle for such sources (from a meta-analytic standpoint) unless RG null, negative, and replicated positive effects could be both a.) peer-reviewed, and b.) indexed in larger repositories- PubMed, Web of Science, Zoological Record, etc. so as to be accessed by researchers conducting meta-analyses.
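As one concrete example of how meta-analysts probe the file-drawer problem, here is a small sketch of Rosenthal's fail-safe N, computed on entirely hypothetical z-scores; it estimates how many unpublished null results would be needed to wash out a combined effect.

```python
# Sketch: Rosenthal's fail-safe N on hypothetical published z-scores.
import numpy as np

def fail_safe_n(z_values, z_crit=1.645):
    """Number of zero-effect studies needed to drop the Stouffer-combined
    result below the one-tailed .05 threshold (Rosenthal, 1979)."""
    z = np.asarray(z_values, dtype=float)
    return (z.sum() / z_crit) ** 2 - len(z)

published = [2.1, 1.8, 2.5, 1.2, 2.9, 1.6]   # hypothetical z-scores from 6 studies
print(f"Fail-safe N: {fail_safe_n(published):.0f} hidden null studies")
```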
One of the problems I have is that we don't really acknowledge that negative reports are important. So many papers I read are titled incorrectly. The title may claim that there is an effect on some brain neurons, but the paper rarely indicates that the effect is actually no effect, or the opposite effect. An exercise that I have used with grad students is to give them a paper with the discussion omitted. Their job is to write the discussion and conclusions. Many times they come up with the opposite conclusion based on the results and methods. I have also been unable to replicate someone else's paper on my own. The presentation of negative data would surely have been helpful.
I agree with John; it is an issue out there. I would prefer to report the findings with a fair conclusion or discussion, irrespective of whether the finding is negative or positive. Sometimes negative findings can save time for others by sparing them from repeating those experiments.
Rankan, thanks!! My students usually don't agree with many of the findings, often finding instead that the results indicate no effect. A major problem is that the titles are misleading, which is also a problem with many journal editors and reviewers. Several times we have noted that these papers may also provide clues as to why one has problems replicating one's own data.
There is an excellent high-impact peer-reviewed journal here:
http://www.plosone.org/home.action
...that allows publication of null results. And yes, you need to be as structured and rigorous as publishing a positive result.
Yes Derek, as pointed out, and commented on, in earlier discussions (Oct 12th) above...
Although on first view the question seems to have appeal and to point to what sometimes seems to be an issue in publishing, I believe the issue is too complex to allow a simple answer. It really depends on the framework of the research and its methodology whether it even makes sense to ask the question, i.e. what "negative results" and "rigor" would mean. As some colleagues have pointed out, the concept of "negative" may be tied to the "null" versus "alternative hypothesis" concept, which is not the only way of conducting research. Think about inductive versus deductive modes of discovery and theory building, and empirical comparison of theories. For example, deducing a hypothesis from a well-established (tested) theory and putting it to a test will mean an entirely different background for "positive" versus "negative" results than reporting a finding from a less established or less general theory, or even one in inductive exploration.
Related to this epistemological background of the research, the asymmetry between "positive" and "negative" will vary. The likelihood that a negative result is due to methodological problems (e.g. wrong or broken measurement devices) or to failure to properly capture the phenomenon that is to be tested is larger in the case of a well-established theory than in the case of a new theory, etc.
Another case of testing, the experimentum crucis, tests predictions from two or even more theories against each other. Again, the meaning of "positive" and "negative" will change with this research design, as will what would and should be considered "rigor" in evaluating the research.
That said, within a given framework adjustments can be made for the asymmetry between "positive" and "negative", for example via the power of the tests that are applied and via the number of independent replications of results.
Mr. Schapera, that being said, why not share your thinking on the topic? I agree, but I like the discussion and the thought-provoking ideas that are generated on many different topics. It serves a good purpose! This question in particular has stimulated good discussion about the pros and cons of sharing negative results. I would want someone to save me the heartache of a messy disaster if the same research had already been done and failed. I would like to read how the person reached that answer, so as not to make the same "error" (if it truly was an error) or duplicate an experiment that needed to be designed better or for a different purpose! I love reading about possible failures; it helps me become a better investigator and scientist!
I think any forum which encourages discussion and helps out students and young, mid-career or even more experienced researchers with problems cannot be irrelevant. Different undergraduate courses may not always have the same content or place emphasis on the same subjects, so a forum which allows interaction between people from different locations and professions, who may not have had that opportunity previously, is in my humble opinion a good thing.
One thing is the matter of negative results in the Popperian sense of falsification. Another thing is negative results that deserve some basic reservation owing to the way they were obtained.
Of course; if experimental SCIENCE yields either a correct answer or an error, BOTH have the same importance.
A correct answer shows the way you should follow; an error shows that the way is wrong. So publications should have the same value; the problem is to have very good data and to write a good discussion, that is all.
As an evaluator and reviewer of scientific manuscripts and projects, I always take that principle into account...
Best regards.
This question raises one of many inflammatory issues regarding the publication of research with negative results. I undertook a study of the correlation between changes in brainstem auditory evoked potentials (BAEP) in type 2 diabetes mellitus and disease duration (5-25 years of disease). By common sense it should be a positive correlation, but my finding was statistically non-significant, suggesting that the first few years of the disease have the main impact on BAEP, or we can say on the central nervous system. In my view it was a significant finding, but many did not find it important enough to publish. In the end, though, I got it published.
What I wanted to say is that negative results, such as non-significant findings, are just as important as findings of significant change, since researchers will then not waste their time, money and resources on checking those points once again.