09 September 2012

ResearchGate encourages the posting of negative results/data. The idea is laudable insofar as traditional journals are highly biased toward positive findings. As a consequence, (1) important negative findings often go unreported, and (2) not knowing that someone else has already done the same experiment, the scientific community risks spending time and money replicating failure rather than, as we too rarely do, replicating positive findings.

That said, do negative findings require as much rigor and care in reporting as positive findings? A negative finding may be important because it argues against a specific hypothesis (these can be, and sometimes are, published). And a negative finding may support the null hypothesis. But a negative finding can just as easily arise because the experiment was not done correctly, because it was poorly designed, or because it was simply a bad idea in the first place. And the interpretation of negative findings is subject to the same problems as the interpretation of positive findings: they can be over-interpreted, over-generalized, and so on.

For these reasons, it seems that for the publication of negative findings to be genuinely useful, they should be structured in the same manner as positive findings (introduction, method, results, discussion) and should be peer reviewed. If ResearchGate is going to carry the torch and champion the publication of negative findings, should they not take on the responsibility of ensuring that the reports are meaningful, valid, and actually add something to scientific knowledge? That is, should they not accept the responsibility of instituting a peer-review process? As every scientist knows, negative findings (particularly things that "just didn't work") are far more common than positive findings. Are we simply going to vomit up an endless stream of unvetted, unreviewed, unevaluated negative data? Is this really a valuable service?

ResearchGate has an opportunity to add value to the field of scientific publishing by taking on this challenge, but to do so, it seems to me, they must invest in resources that create value, i.e., a peer-review process; otherwise they risk simply creating a mountain of undigested, essentially useless data.
