Do you agree with the article below regarding the value of negative results?
Blogs.biomedcentral.com/bmcblog/2012/10/10/no-result-is-worthless-the-value-of-negative-results-in-science/
No result in science is negative
Even a negative result has worth, as it gives an idea of what not to do and thus saves time for a beginner on a similar project.
But we should not pursue publication without demand or any specific requirement, unless the project is of great importance or the experiment is truly exclusive...
If everyone were to publish negative results, the number of negative publications would exceed the positive ones.
Yes, Nestoras, I agree. Negative results don’t mean absence of information. As written in the article, publishing them may avoid redundancy of experiments and wasting time in bad directions. The tendency is to publish only positive results which can have as much impact as possible. It obviously leads scientists to oversell their results and to present them in a positive light, whatever it costs, whatever imprecision or omission it induces. For a given study, it pushes authors to mask the part of the results that would attenuate or call into question the strong message associated with the remaining results. This tendency to publish positive results is so anchored in scientific journals that even studies that contradict a previous study are seen as negative results and are not even reviewed. That’s what is happening to my team right now, as our study failed to replicate a recent one. The editors of the journals we submitted to wrote to us that they had so many submissions to examine that ours wasn’t even worth being peer-reviewed. A counter-example is provided by Human Mutation, which allowed us to contradict a very imprecise study that had been oversold in order to be published (see https://www.researchgate.net/publication/46581100_BAK1_gene_variation_and_abdominal_aortic_aneurysms--results_may_have_been_prematurely_overrated). It’s not a publication of negative results, but it shows that, hopefully, some editors are more open-minded than others. So, yes, negative results deserve to be published, if only because it prevents us from giving too much credit to doubtful publications.
An interesting question. In my opinion, a negative result is as valuable as a positive one. Imagine how much of a time saver it would be for the whole scientific community if one knew in advance that a planned experiment would yield the opposite result. There are a couple of journals intended to publish negative results; just google the term "journal of negative results".
I'm all for publishing negative results, especially in online journals, which are cheap to run and keep online. Especially results of reproduction of experiments that fail for whatever reasons ...
I think much of the positive data is plainly not what it is claimed to be. When you start looking into it, you see so much manipulation of figures etc. You receive material from a previous publication and re-do the experiment (just using it as a negative or positive control), and nothing happens; further tests show clearly that what was published cannot be reproduced at all ...
I think the "change in perspective" is urgently needed, meaning that we come clean and stop pretending we have positive results in this claimed abundance when we absolutely don't. A well-thought-out negative result is very precious, and today it is often shared only in the back channels. So if you are not part of the specific clique and its gossip, you don't know - and still the money is wasted.
Producing crap negative results could increase - but if people know they can publish what they find, instead of being pushed to publish something that looks positive, they might finally start publishing honestly ...
I totally agree with you Ulrike. The need for positive results may lead to a camouflage or a falsification of the results obtained, to give them an artefactual positivity. You can always manage to manipulate figures so that they go the way you expected. And it's easy not to report any experiment which would cast doubt on a study. Then replication of this study becomes impossible.
I agree with Sakthikumar.
"A negative result is itself a result", and it may be due to unawareness, lack of expertise, following the wrong protocol, whatever...
But I think it really helps the scientific community: a researcher who is working on the same topic can save a lot of time.
I think that one has to publish negative results only if there is some shared expectation for getting positive results. If I get results that do not seem to confirm my own unpublished hypothesis, I don't want to spend the time necessary to ascertain that the results are clearly negative. This represents a lot of work, and I think it's no help for other scientists to add to the flood of useless publications. On the contrary, if negative results contradict previous results, or a trivial hypothesis, then we need to publish for the scientific community.
Publishing negative data is rather difficult, it seems. If the negative answer itself contributes to some positive finding, you can publish the data. The negative data must also provide complete details showing that it is negative not because of technical inefficiency. Publishing negative data with the required genuine repetitions will help other researchers in the same field save their time, rather than working on the same aspect.
Another important factor: no finding has only beneficial aspects. Each and every thing has its own non-beneficial or adverse effects; it depends on how we approach it.
Publishing a good paper is not a straightforward path; it is a time- and resource-consuming process: observations, design, experiments, analysis, writing up a manuscript, surviving the revision process... But behind a good paper there is often a body of work based on the experience of negative results that will be forgotten. It might be good practice to describe and write up those experiments that failed or produced a negative result in supplementary material. On the one hand, I do not think it worthwhile to invest such a big effort preparing a paper on a negative result alone, because it might happen that the underlying hypothesis is right but the experimental design was flawed, or the person conducting the experiment did it wrong. That is why nobody can rely on negative results unless they are obtained under highly restrictive conditions and controls. On the other hand, a place to upload simple reports on negative results might help other researchers design more powerful experiments or save them time.
Hi all,
Thanks Nestoras for bringing this discussion up. My dissertation topic is in data reuse and open data, and I am a big advocate of data sharing. I am presenting the poster which inspired the blog post, co-authored with Jian Tang, at ASIST tonight (Baltimore, USA). We will have a QR-code online survey in order to gather participants' perceptions about the publication of negative results in science. I will be more than glad to share the results of this survey in this space. Best, Renata
To Tobias Pamminger
I think that negative results mean that you can actually determine a confidence interval where any difference is unlikely to have biological significance. Other situations do not correspond to negative results.
Regarding this question, look at the Journal of negative results in Biomedicine, http://www.jnrbm.com/about
I think we shouldn't even be wondering about the value of negative results. As the article states and others have already pointed out, if the science is solid, they should be published. So yes, I would publish my results for all those reasons, but most of all because we are a community and science can only go forward if we share our knowledge. It's not (and shouldn't be) about who has the highest impact factor or recognition.
I've read all the responses carefully. It seems we all agree that there is pressure to publish positive results, and that this pressure might lead some people to data manipulation. So we're saying that it might be a waste of time to publish or read negative results, and at the same time we're reading a brilliant result and wondering if the data were manipulated? We're obviously doing something very wrong with the current way the scientific community works. Maybe we should rethink how we'd like to do science. At the end of the day, we are a community, so we decide where to put or ease the pressure, how to promote honesty, whether to publish the negative results, etc.
Dear Daphne,
It is the pressure to publish that pushes some people to manipulate their data. I am sure that if publication of negative results were rewarded, they would manipulate their data to get clear-cut negative results. The problem resides in the bureaucracy of evaluation. Evaluating a paper means taking the time to read it and waiting for years to see if it stands the test of time. I prefer to consider that individual bibliometric evaluation is fraudulent and contrary to ethics. Those in charge of evaluation should accept their duty only if they have enough time to judge the work, and in the absence of conflicts of interest. In addition, papers whose conclusions have been dismissed by further data should not appear on a CV.
I agree with you Daniel. That's precisely why I said it shouldn't be a matter of impact factors or recognition. I'd like to think though we can find ways for a more fair evaluation.
What do you all think? If there is no "reward", would people bother to publish their negative results? How about if they didn't have to write a full paper? No lengthy discussions, introductions or results analysis. Just a detailed analysis of the methodology (for the obvious reasons) and reporting the results.
As I said in my first post, the need to publish should come from the fact that you think that your results are useful, whether positive or negative. Then, if you do something useful, this is the reward.
You can read Ben Goldacre's book Bad Pharma on this question. It shows how Big Pharma impedes the publication of negative results about their products, even when those results are scientific and could benefit medical care and patients! It is not just a matter of career and CV. It is a matter of ethics, integrity and care for the common good. http://www.badscience.net/2012/10/questions-in-parliament-and-a-briefing-note-on-missing-trials/#more-2714
Dear Florence,
I think that the issue of negative results in clinical trials is extremely important. If a drug is actually tested on various diseases, on different groups of patients, and in various centers, one should use Bonferroni correction for measuring statistical significance. Hiding negative results in this case allows drug companies to sell products with no beneficial effects.
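For illustration, here is a minimal sketch of how a Bonferroni correction tightens the per-test threshold when one drug is tested against many endpoints or subgroups. The p-values below are made up for the example:

```python
# Hypothetical p-values from one drug tested against five endpoints.
alpha = 0.05
p_values = [0.04, 0.01, 0.20, 0.03, 0.008]
m = len(p_values)

# Bonferroni: each individual test must clear alpha / m, so that the
# family-wise chance of at least one false positive stays below alpha.
adjusted_alpha = alpha / m
nominally_significant = [p for p in p_values if p < alpha]
significant = [p for p in p_values if p < adjusted_alpha]

print(f"Per-test threshold after correction: {adjusted_alpha:.3f}")
print(f"Nominally significant: {nominally_significant}")
print(f"Significant after Bonferroni: {significant}")
```

With five tests, four results look "significant" at the naive 0.05 threshold, but only one survives the corrected threshold of 0.01, which is exactly the kind of discrepancy that makes hiding the negative tests so misleading.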
The BMJ (British Medical Journal) has just published an editorial on this subject:
"Clinical trial data for all drugs in current use must be made available for independent scrutiny", Fiona Godlee, editor in chief
http://www.bmj.com/content/345/bmj.e7304
In many instances the results we obtain seem negative according to one theory or model, but they may be positive according to another theory or model that we fail to consider. If those results were published, another researcher might be able to explain them or form a new hypothesis.
I just found your interesting exchange. Note that there is a parallel ongoing thread entitled 'Should research publications consider more failures in research?' under the topic 'Negative Data in Science', to which some of us contributed. It would have been optimal to fuse the two. Some similar, and also complementary, thinking ...
Sometimes yes. If the results are inconclusive, or if the negative results have no value for the scientific community, then no. However, on critical issues related to clinical practice and outcome effectiveness re: evidence-based practice standards, absolutely. Negative results can be just as important as positive results. An example: when I did my dissertation, a new treatment methodology was coming into use by many clinicians. However, in reading a critique of the treatment, the issue was raised that a lot of time and money and a lot of patients were involved in this new treatment, but no one had ever determined whether the treatment did what it was supposed to do. I conducted a rigorous empirical study on treatment outcomes for persons undergoing the treatment vs. persons waiting to undergo the treatment. My results indicated that the treatment was not effective and did not do what it purported to do. After my results were disseminated, other researchers replicated my process with the same results. As a result, an increasingly popular treatment was abandoned by most people in the field.
In research every result is positive, as every result teaches or reveals to us:
* The mistakes or lacunae made during the research, which one may rectify in further studies.
* This saves the time, effort and money of the scholar who takes up the same problem for research at a later date.
Labeling something a "negative result" has a bad mental impact on the scholar's mind and creates a discouraging attitude, which is bad in the long term, particularly for younger scientists.
In my view, the results of studies should be classified as:
> Applied (may be brought into practices or products)
> Not applied (not worth bringing into practices or products)
> To be repeated with corrections to improve the results.
Of course yes... negative results help and guide future researchers, saving their time and money, and encourage them to try other methods!
Exactly: if you discovered that a reasonable approach to a valuable research question did not work, you should explain exactly that, indicate possible reasons for the failure, and maybe point to possible alternative solutions. In the end, such work might be even more valuable than a positive result, because it might save your peers' pressured time and thought! For the same reasons, the 'negative' results should be published in an outlet of your domain - and well promoted.
I believe every researcher struggles to obtain a positive result; obstacles in the research process often render the research invalid, but that does not mean the whole research process is useless. A result, whether positive or negative, is usually the end point of a research process, and publishing a negative result is always accompanied by the entire process of obtaining it. If the first author looks at the problem from one point of view, another author can pick up the same negative idea and look at it from a different point of view, applying a different theory and hypothesis, and that could help turn the negative result into a positive one as well.
Thanks for initiating this interesting discussion ! Yes I do feel that one must publish those 'so called negative' results !! I am saying this in particular with my personal experience in Pharmacological studies with certain Ayurvedic Drugs / Therapies.
What I have observed is that it also teaches us about choosing the correct protocol, and many a time gives leads to yet another area which perhaps we had never thought of! But at the same time we need to dig deep in analyzing why the results are not as per our expectations!
The other aspect of reporting negative results is in a slightly different sense, i.e. "herb XYZ is proving to be carcinogenic" or "an anti-fertility agent", etc. Here one needs to find out at what dosage the studies were undertaken and whether those dosages are in line with the therapeutic range mentioned in the original Ayurvedic texts and contemporary Ayurvedic practice. Such negative reporting should not be encouraged and should be reassessed thoroughly before publishing, since it does not give the real view; rather, it creates confusion about that herb in the future!!
I wonder about the word "negative" itself. What's a negative result? A result I did not expect? Or an experiment I messed up because of a personal imperfection or sheer bad luck? Or an experiment that went wrong because the methodological instruction was bad?
There are very good reasons to publish the unexpected but only a few if it were personal shortcomings.
I think that every scientist has a good feeling what to do IF she is considering why the result seems "negative" and that publishing would help others = advance science.
I would encourage publishing and require it in medical science.
100% agreed with Dr. Aashish's excellent opinion on dear Dr. Nestoras's query.
No doubt research is the search for truth, truth and only truth, so the real results of any research should be published; this should be considered a universal rule. At the same time, one should not forget that research is never fruitless: the one desired, specific or targeted aspect may come out negative, but other aspects teach us many new things. In view of the intellectual effort, physical effort, time, money and other factors spent, the results of any research should always be comprehensively examined.
Appreciating all the scholarly responses in this really nice discussion of a very important question, on which the survival of real science depends, I would like humbly to add my small bit, trying to put the question in context.
Popper’s falsificationism states that no theory is final and that the purpose of science is to test it and attempt to *FALSIFY* it, as against Kuhn’s somewhat *DOGMATIC* concept of *NORMAL SCIENCE*. In a thread on ResearchGate I have pointed out that it is the conventional scientific method(s) itself that is a drawback for *DISCOVERY* or *NOVELTY*, as the researcher always uses theory-laden or theory-contaminated data, whether from well-designed experiments in the field or lab or from soft or hard instruments. The researcher is always observing or recording data dictated by a theory, or by some hypothesis within the framework of a particular theory, and all the other data/information generated in the course of the experiment or other data-generating methods goes to waste or remains unnoticed/unrecognised. Yet it is this unrecognised or unnoticed information or data which has the potential either to negate the theory or to lead to some discovery. Science, in its truest sense, had developed significantly by the time of Newton. During the renaissance of the scientific tradition up to Newton, many people and scientists had seen that fruits, or anything thrown upward, came down to the Earth's surface, but it was Newton who not only noticed this but also gave us the concept of gravity (my young son, who is also reading this, asks why the whole apple tree did not fall on Newton's head, so that he could have been spared reading, understanding and proving Newton's laws). Everyone had observed that steam pushes the lid of the kettle, but it was James Watt alone who observed and recognised the force of steam. In the same vein, in the ancient period, it was only Archimedes who recognised the displacement of water by the volume of a body, while everybody had observed the phenomenon.
In this way, observing unexpected results, with or without the anticipated results, has the potential not only to lead to discovery but also to falsify a theory (as a result of negative results), a much-sought-after function of science in the Popperian tradition of the philosophy of science, which is hypocritically followed by every scientist. Therefore, I am in full agreement with the heading of the write-up in the link given in the post, “No result is worthless: the value of negative results in science”. It is negative results which guarantee the progress of real science. But the fact is that vested interests, influential scientists whose prestige is at risk, and political considerations together make a resolute attempt at “perception management”. Write something on the basis of scientific evidence and realistic rationalism against Darwinism, and your research will never be published; and if you get it published at your own expense, a whole community of government-sponsored scientists will engage in a dogfight with you, and you will have no option but to go into oblivion, irrespective of how prominent you had been.
Quite interesting. Failure to achieve a given expectation, for any reason at all, is a result (positive in its own right). So a negative result in any experiment or scientific investigation is a positive pointer to a wrong way of achieving the set objective(s). In other words, it is a successful elimination of a wrong way of doing a particular or similar thing. Negative results should therefore be published, so long as the methodology used is properly documented.
Robert: You have pointed out that the “Theory of Evolution” is a well-established theory, but the declaration attached herewith is signed by 100 scientists in different fields of scientific inquiry. Several of them belong to your field of specialisation; you may know some of them personally or through their work. Of these 100 scientists a large number are atheists, and they disagree with the “Theory of Evolution” on scientific grounds.
Further, you will agree that until recently it was assumed that an organism mutates as a whole, but you might be aware that a month or two ago, DNA analysis of a 200-year-old tree in the UK, which has an almost equally long lifespan, showed that its leaves, branches, stems, roots and several other parts have mutated in different directions. What is your reaction to this?
@Robert: I am highly thankful that you somewhat agree with my response, except for the last sentence, in which I referred to Darwinism. I would like to point out that I am not the first person to use the term “Darwinism” in lieu of “Theory of Evolution”, as you will find from the paper attached herewith, authored by a scientist perhaps specialising in your own field of research.
@ Robert:
Excuse me, because the order of the responses, and therefore of the attachments, has changed. Please read the second response first and the first one after.
Thank You,
MFK
Thanks to everyone who took the time to share their opinion on my question.
Taking into account most of your answers, I believe the major problem is finding a way to share negative results. Not necessarily publishing them, which takes much more effort, but, as Daphne Krioneriti mentioned earlier, having short reports available online about experiments that did not give "desirable" results.
Results are results, whether positive or negative; negative results, or no change in the findings, are also results. How to present and explain such a finding is research work in itself, so writing an acceptable manuscript with negative results is not easy. Therefore, I consider that an author who can publish negative findings, and the referees who accept such a paper, are doing scientific work.
Research is trying to prove a hypothesis, and both outcomes are valid for publication. Editors are sometimes guilty of accepting only the articles that will sell more journals, but in the clinical-trial world it is important to know the negative trials, which are all very expensive and require a control group on placebo, so that trials done properly are not repeated again. The patients participating in these studies who are randomly assigned a placebo deserve more than being filed away; they should contribute to the current knowledge on the treatment of difficult diseases, such as those in oncology.
Yes, if only to help others from venturing down paths that dead end.
I am not convinced that there are "negative results," if research is done with integrity. I mean, the results are the results... and they should contribute information to the body of knowledge around a certain topic. That said, if the research has a goal to "prove" something, the risk is that what is proven is not what was expected. Research is an epistemological journey... how we accumulate new knowledge and what we do with it are two different questions, actually. Perhaps the publishing of unexpected results is an epistemological question... certainly inclusive of the publication issue.
We have published a paper with the following title: "Negative effect of combining microbial transglutaminase with low methoxyl pectins on the mechanical properties and colour attributes of fish gels".
As mentioned previously by several contributors, we did not obtain negative results; we obtained a negative effect on the properties we were measuring (although we expected an increasing effect on the mechanical properties), which was unexpected for us...
Publishing "negative results" may be considered in a way an ethical duty. Time has been spent in all good faith on an experiment whose conduct any researcher using common sense would have judged logical and reasonable. It is highly important and sane for the whole community to say: "Don't go to the end of the corridor: the door is locked!" The hub of the Freie Universität Berlin is a good example of a potential "negative results database" ( http://page.mi.fu-berlin.de/prechelt/fnr/ ), which would be an excellent and positive initiative for the advancement of science.
I am confused. My understanding is that research is neutral: we ask a question and do the study to find out the answer, whatever that answer is. To approach a study "to prove" something already biases the research. I also know that if we keep our scholarly curiosity focused on wanting to learn more, the outcome is the information needed to inform other studies. What am I missing??
In my opinion, negative results, if published, can be a useful asset for young budding researchers. They will not only point them toward choosing the right problem for research but will also be of great help in selecting appropriate methods. These negative results would also help avoid repetition of similar research.
Definitely, it is best practice to publish results, negative or positive. If a result is negative, it has an effect, and this effect needs to be researched as well; therefore, not publishing it does scholarship a disservice. Thanks
Simply, yes: results are results. They should be published to keep others from repeating the work in the future and to support other work by allowing data to be compared and analysed. The problem here is finding a high-impact journal willing to accept and publish such results.
Fully agreed with Adetoun and Fathi. Research results should always be taken in a positive sense, as they yield much vital and useful information. All research results are both positive and negative in one sense or another, so research results should always be published.
On a related issue, if researchers analyze their results for a wide variety of endpoints (for example, all cancers and other chronic diseases) in carrying out epidemiology studies, should they not also be expected to publish the results for all endpoints including those that did not show a positive association? Even if a publication does include the results showing a lack of association, often the abstract or title will highlight only those endpoints showing a positive association. Such reporting can skew the results of later meta-analyses if literature searches can't pick up those studies that did not find a positive association.
The answer to this question depends on whether you want to report a failure of an original experiment or a failure to replicate. Each is considered below. But first, consider how most of us go about our business: If we think of the scientific method's operationalisation we have to accept that somewhere between 1% and 5% of all publications reporting an experiment are reporting an error. Adoption of the hypothesis testing model with some criterion (alpha) means that we are sometimes going to get a false positive.
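The false-positive arithmetic above can be illustrated with a quick simulation. Under a true null hypothesis, a well-calibrated test produces p-values that are uniform on [0, 1], so at a criterion of alpha = 0.05 about 5% of such experiments come out "positive" by chance alone (a sketch of the principle, not tied to any particular test):

```python
import random

random.seed(0)
alpha = 0.05
n_experiments = 100_000

# Under a true null, p-values are uniformly distributed on [0, 1],
# so the long-run false-positive rate equals the fraction of uniform
# draws that fall below alpha.
false_positives = sum(random.random() < alpha for _ in range(n_experiments))
rate = false_positives / n_experiments
print(f"False-positive rate: {rate:.3f}")  # close to alpha
```

This is why a literature that publishes only "significant" findings must contain some fraction of pure chance results, and why failed replications carry real information.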
In the context of replications: It follows, therefore, that if one tries repeatedly to replicate and extend work previously reported, and one fails, that is worthy of the attention of the broader research population. Of course the attempts to replicate must be as accurate and comprehensive as possible. That is, they need to be careful, just as any research must be.
At the point that one does endeavour to report a (set of) failed replications the important question becomes interpretation. Either the initial report was truly an error and that is important, or the effect was observed but is weak or finicky and that too is important information.
In the context of original work: It is more difficult to find a justification for reporting a failed original experiment than a failed replication. That said, if the logic for an hypothesis is sound and can clearly be explicated and the methods are tight, then why not report? Certainly, if as a researcher you use what is known to deduce a new hypothesis and you soundly execute a test of that hypothesis a failure is a valuable piece of information. It may be, for example, the first hint that one of the original premises for the experiment itself might be in error.
So, would I publish a negative result? Certainly, provided the result contributed to the body of knowledge in a constructive way. That is, provided the conclusions that could be drawn from the work went beyond the simple observation that the experiment was not successful.
I am with Fathi M Sherif. The results are what they are. And if they are bad, they are bad. Even more so when you try to reproduce techniques that have already been published and discover that they are false.
Not publishing negative results (to a greater degree than positive ones) is misconduct. That is, provided the methods are sound so that results are informative (this criterion goes for both pos and neg results, of course). It's misconduct by the scientist if not written up and submitted. It's misconduct by the Editor if accepted for publication less often than positive ones. Period. Science has a very simple aim: to tell the truth about nature. That truth entails all evidence based on sound methodology. Telling only the nice part of the story is misleading. Simple as that.
I agree with many of you there. Some years ago, some colleagues and I actually started an online journal called the "Journal of Negative Results", but we are facing a lack of interest from scientists which I cannot really understand. Negative results are results worth publishing as long as they are based on good experimental design and correct science. All kinds of negative results are important to publish, but the main question is not really whether you will publish them; it is whether a journal is going to publish them...
I just spotted that one long-ago response on this thread (among the popular ones, with many upvotes) seemed to suggest that whether or not negative results should be published could be settled on how they relate to hypotheses. I fundamentally disagree, and it doesn't matter whether the hypothesis is one long-established, or a new one by the researcher in question. If we only publish results when they fit with an idea, that idea will seem to have more, and more consistent, support than it really has. That's the core of the problem with not publishing negative results.
As I see it, the matter is methods. If methods and analyses are sound, the results are informative whatever direction they point (including "none", which may be the more common "negative result"). Then they should be published, to the extent the scientist controls that. Whether negative or positive.
The challenge facing us is to judge methods and analyses (including test conditions, "auxiliary hypotheses", etc) consistently and unbiased with respect to whether the results are "positive" (supporting your hypothesis) or "negative" (not). I think many - maybe most - of us are guilty of not being too concerned about methods when results are clear and fine, and more critical when things don't fit. In principle we shouldn't act like that. But it just 'comes natural', often without thinking, to many of us. There's the challenge.
And, unfortunately, there is - at least sometimes - a real asymmetry between positive and negative results in what they tell us about methods. If you have an experimental treatment that is compared to a control (or a comparison of two non-experimental groups) and you find no difference, you can't really tell if that means "no effect" or "the test/test conditions didn't work". If you find a statistically robust difference, you can in the case of the experiment say that not only was there a result but the methods worked as well. In case of the non-experimental comparison, it's slightly harder as confounding variables may have kicked in but a clear difference would still give some faith to the methods being ok (I know this is trivial, but still). So, by default, methods including test conditions receive confirmation from the result being positive - but not from a negative one (a "null-result"). A further challenge.
Which reminds me to remind that there are two kinds of "negative results": (1) results that show an effect or a difference, but not the one predicted by the hypothesis under test, and (2) "null results", where there is no statistically detectable effect or difference. These are often not distinguished in discussions of the matter. The former are straightforward, as they gain the same kind of confirmation of methods from the results as positive ones do. These should always be published. The latter are more troublesome, as there is more of a question mark over whether the set-up worked as the researchers intended. But these should be published too, with those question marks made explicit. Unless either the researchers themselves or editors/reviewers detect major methodological errors, in which case the results should not be published - not even when they seemingly and superficially support the hypothesis under test.
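The asymmetry described above can be illustrated with a small simulation. This is only a sketch under my own assumptions (the sample size of 8 per group, the 0.5-standard-deviation effect, and the use of a permutation test are all illustrative choices, not anyone's actual study): even when a real effect exists, underpowered experiments usually produce a "null result", so "no difference found" cannot by itself distinguish "no effect" from "the test didn't have the power to detect one".

```python
import random
import statistics

def permutation_p_value(a, b, n_perm=2000):
    """Two-sided permutation test for a difference in group means.

    Counts how often a random relabelling of the pooled data yields a
    mean difference at least as large as the observed one.
    """
    rng = random.Random(0)  # fixed seed so the p-value is reproducible
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed:
            count += 1
    return count / n_perm

rng = random.Random(42)
trials = 50
misses = 0
for _ in range(trials):
    # A true effect exists: treated mean is shifted by 0.5 sd,
    # but samples are tiny (n = 8 per group).
    control = [rng.gauss(0.0, 1.0) for _ in range(8)]
    treated = [rng.gauss(0.5, 1.0) for _ in range(8)]
    if permutation_p_value(control, treated) >= 0.05:
        misses += 1  # a "null result" despite the real underlying effect

print(f"null results in {misses} of {trials} underpowered experiments")
```

With these settings the majority of simulated experiments come out "null" even though the effect is real, which is exactly why a null result carries a question mark over the set-up (here, the sample size) that a clear positive result does not.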
Dear Christophe,
To your report of little response to the Journal of Negative Results: I would believe most scientists want to publish their work in a topical journal, as any result is relevant to topical questions. I wouldn't want my positive results published in the (imaginary) Journal of Positive Results and, by implication, neither would I want them in your Journal of Negative Results. I would like to have them in a relevant forum for the research question asked. I believe most scientists would think that way.
Negative results are results...
https://twitter.com/Protohedgehog/status/344863590735572992/photo/1
Yes, negative results are results: they reveal information and add to our knowledge.
When we plan or work on a project, we always expect positive results, even though we test a null hypothesis. If only positive results were appreciated, what would be the need to do a research project at all? In research there should be no prejudice, and the results should be published honestly, as they are. These negative results may contribute a lot to future research.
Nice motivational comic on the topic:
http://upmic.wordpress.com/2013/06/10/negative-data/
If I think a result has value, I want it published, but I don't know whether any journal will want to publish it.
There is no such thing as "negative" results in science... The terminology ("negative") needs to be purged or changed... A result is a result.
Badly designed studies can just as much show "positive" effects...
In fact, I'd say negative results are most important to publish because if they come from a well designed and extensive study, they are often the ones which you can learn the most from (see Ying Cao's comment) and initiate debate, and trigger further enquiry.
If all we do collectively is communicate results that confirm what we already know, then somebody help us!
Results are results; each one gives some information or knowledge that helps guide further work. A result may not be the proposed or expected outcome but may still be important in another sense, so results should always be published.
While reading an invited mini-review paper in Molecular Physics, in which over 30 years of multireference coupled cluster theory in quantum chemistry were discussed from the perspective of its author, I found an interesting contribution to our discussion about whether or not to publish negative results:
A citation from Peter Szalay, Mol. Phys. 108, 3055–3065, 2010:
"Unfortunately, we have not published our unsuccessful attempts, although we have made lots of observations which could have helped others to proceed on this road. It is interesting to see that many of these unpublished observations turned out to be essential in the work of others over the past almost 20 years."
I am always willing to do this, but most sensible publishers are not. The process of research, in a teaching context, ought to include disclosure of the U-turns and mistakes that go unpublished; sharing the research process in this way is how students learn to do research. But attempting to make economic or prestige gains from what turns out to be a lack of correspondence between an "abstract entity" and its "experimental realization" is nonsense. Scientific practice depends on developing a sense of plausibility.
Are any of the "all results" journals already indexed? I could not find any indexation for them, nor for other journals in this area. I agree about the importance of publishing so-called "negative results", but the issue of scientific rigor still has to be addressed. For now, it looks as though journals about negative results are not being taken seriously by the scientific community - but should we take them seriously? Is there a journal specialized in negative results that you consider reliable, or that is properly indexed?