I don't know if anyone has had similar ideas, but the fact is that not enough people (especially in research) take advantage of failed experiments. One example of putting a failed experiment to use is the invention of Post-it notes. It is said that a 3M researcher, whose goal was a strong glue, instead invented a glue with weak adhesion. Some time later, another researcher at the same company wanted to invent a piece of paper that could stick anywhere and then be torn off without leaving a trace. They tried many methods, all of them unsuccessful, until they became aware of the glue with weak adhesion. In this way, a glue initially regarded as a failure made a big contribution to our lives. Similarly, the phenomenon of RNA interference was not first observed by Andrew Fire; in 1993 and 1995 other researchers observed it, but they all thought their experiments had failed and did not find the correct explanation.
I think it makes sense to develop a database for storing failed experiments that never get the chance to appear in published articles. Everyone could upload experimental results they find unsatisfactory, including the experimental method, the data, and the purpose of the experiment.
Such data could provide useful experience and lessons to researchers who want to perform similar experiments; they could improve their own experiments based on the failed example. Someone might also offer a different interpretation of the puzzling data and then test that idea. Perhaps a novel theory could be developed from these failed experiments, or, like the Post-it, the unsuccessful data could be used to solve other problems.
So, how about this idea?
Hey,
I am following this discussion closely and it is great to see in which directions it goes.
We at ResearchGate will soon launch a section in ResearchGate profiles in which negative results, datasets, and raw data can be published and shared. These datasets will then be recommended to scientists on ResearchGate who might be interested in them, and they will also feed into your RG Reputation Score, which you may already have noticed in your ResearchGate profile.
What do you think?
Best
Ijad
I think this is a terrific idea! I agree with you: we underestimate the power of a failed experiment. We're so trained to think that failure = useless that we rarely keep track of them.
I feel it is a brilliant idea, Weihang. Failed and negative results should be published.
Because of my limited knowledge, I had not heard of these websites. I am surprised that someone has already made actual attempts. I hope these data can be shared more easily, the way NCBI does it, and not only in the form of a journal.
Great idea. Quite often, due to the very nature of modern science, where positive results support a specific hypothesis, increasing the likelihood of publishing and therefore the probability of getting grant funding, we see a lot more positive data being published. But science should be about the results, not the political or monetary value attached to them, and as long as the methodology and experimental work are sound, such results shouldn't be discarded as failed and buried somewhere. After all, science shouldn't be about negative or positive results, just results, and results shouldn't be thrown away simply because they don't support a given hypothesis.
1/ Pure datasets can already be published at http://www.datasets.com/journals/
2/ Publishing "negative results" is for me a stupid idea (excuse the bluntness):
- It will immediately follow the rule "garbage in, garbage out".
- Any graduate student (or senior scientist) can make an experimental error (forget a chemical in a tube) and get a strange result! How many times must a "negative result" be repeated before it is validated?
- Despite the fact that we all know the terrible problem of referees, they are still useful to screen out terrible experiments; who is going to be willing to assess negative results?
- Negative results are almost the rule in labs! How would negative results be categorized (keywords?) so that they can be found?
- Many negative results led to great discoveries, simply because a genius was there and said: if it is negative, it is because there is a problem in my thinking (look at Schrödinger's history), let's think differently.
- Being a physicist before turning to biology, I am still astonished at how biologists do experiments (even more so now with kits and fancy equipment). It seems that young scientists in biology have no idea of the technical details behind the experiment, and to my knowledge this is the first cause of negative results.
- Finally, the main differences between a scientific paper and a non-scientific paper are i) the methods and ii) the bibliography. Clearly a site of "negative methods" would be a better idea! But considering the impact factor of such papers, who is going to take hours to build the bibliography showing either that the method is new or that it has been published elsewhere but indeed does not work well?
To conclude, I will take two examples:
- An example for which I am a specialist: PCR primers. Two thirds of primers published in rank A journals are either outright wrong (copy-paste errors) or flawed (designed on a single sequence, ignoring many alternate alleles). See for example http://patho-genes.org.
- An example with the new NGS methods. Many such experiments have failed (bad design, bad enzymes, new protocol, ...); is there any example of a sequencing center explaining such a failure?
Good luck anyway.
The question is not about publishing negative results but about building a database. My comment is not "how about" but "how". You can set up your own database easily at linkdata.org. All you need is a table of data, and the site will show you how to set it up so it can be searched and accessed by software. I can help!
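For illustration only, here is a minimal Python sketch of what such a table and a simple keyword search over it might look like; the field names are hypothetical and not taken from linkdata.org or any existing standard:

# A failed-experiment record kept as a plain table (list of dicts).
# Field names are hypothetical; the point is only that a flat table
# is enough for software to search the records.
failed_experiments = [
    {
        "purpose": "test adhesive strength of polymer X",
        "method": "shear test on steel plates, room temperature",
        "outcome": "adhesion far below target; bond reversible",
        "keywords": ["adhesive", "polymer", "materials"],
    },
    {
        "purpose": "knock down gene X to obtain phenotype Y",
        "method": "shRNA transfection, 48 h, cultured cells",
        "outcome": "no change in target expression",
        "keywords": ["shRNA", "knockdown", "gene expression"],
    },
]

def search(records, term):
    """Return records whose keywords or text fields mention the term."""
    term = term.lower()
    return [
        r for r in records
        if term in " ".join(r["keywords"]).lower()
        or term in r["purpose"].lower()
        or term in r["outcome"].lower()
    ]

print(search(failed_experiments, "shRNA"))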
Ijad,
I would be very interested in knowing how the datasets will be curated. I have a lot of experience with data repositories, and have found the devil is often in the documentation details.
Thanks.
-Monika
To Ijad
First, I'm surprised that you are the father of ResearchGate (may I call you that?). I think the attempt you describe above is worth making. I hope the section can be launched as soon as possible.
But some problems remain:
(1) Which format is suitable for these negative results, datasets, and raw data? The traditional paper format, or a freer one? For them to be used efficiently, I think the most suitable format possible should be developed.
(2) How should redundant information in the datasets be handled: ignore it (as NCBI does) or remove it? And even if repeated or similar data are rejected, the selection strategy may be hard to design.
From Weihang Ding
It is an interesting notion. I am unclear about publicizing "negative data". First, how does one define negative data? We all learn by doing experiments many times, and some of them do not yield the data we expect. If this is the definition of negative data, why would you want to publicize it instead of refining the experiments? The other interpretation is that one cannot reproduce someone else's results. That is a controversy, and a third party has to verify the results. Even if someone wants to make negative data publicly available, what is the purpose? Are you seeking collaboration to refine the data, interpret them, and publish them? Is this the best forum to accomplish that? In that case, there are restrictions on publishing data that are already publicly available, including on the internet. It is worth noting all of these potential consequences before we jump into this venture. Finally, what kind of data are you talking about?
Data is a compilation of a certain group of experimental observations... and all observations are important!
There is nothing like negative or positive data; it only depends on how efficiently the data are compiled for analysis and reproducibility (yes, you may term the acceptance or rejection of some data by others as positive or negative). However, there are a few journals purely for negative results. Please refer to:
1. Journal of Pharmaceutical Negative Results – www.pnrjournal.com (instructions for authors: www.pnrjournal.com/contributors.asp). A peer-reviewed journal developed to publish original research; preliminary work with hard data can be published as a research letter.
2. Journal of Negative Results in BioMedicine – www.biomedcentral.com/content/pdf/1477-5751-1-1.pdf (Hebert RS et al., 2002). The article analyzed all original research articles with negative results published in 1997 and discusses, among other things, Type II errors in the Australian medical literature.
Of course there are many well designed and beautifully executed experiments that "fail" in the sense that they provide evidence that the null hypothesis may be true. They fail to show an association, beyond chance, between experimental conditions and some endpoint we'd like to measure. In many cases, there simply *is* no association, and the lack of an association is itself interesting. But there is an enormous difference between failing to find a non-random association, and demonstrating the *lack* of an association. For example, if I compare two groups and find, using some test, a P value of .25, what does this tell me? By convention, it tells me that I cannot reject the null, and following the idea of parsimony, I therefore accept the null as a working hypothesis. But I have specifically *not* proved the null hypothesis. I just accept it by default. As Carl Sagan said, "absence of evidence is not evidence of absence." Having a database of negative results could, if very carefully analyzed, help provide evidence of absence. Meta-analysis would be extremely tricky, however.
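To put a number on that point, here is a small, entirely hypothetical simulation in Python (using numpy and scipy): with a small sample, a real difference between two groups can easily give a non-significant P value, and the wide confidence interval shows that the null hypothesis has merely not been rejected, not demonstrated.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two groups with a real difference in means (0.4), but only n = 15 per group.
a = rng.normal(loc=0.0, scale=1.0, size=15)
b = rng.normal(loc=0.4, scale=1.0, size=15)

t, p = stats.ttest_ind(a, b)
print(f"P value: {p:.2f}")  # often well above 0.05 despite the true difference

# Rough 95% confidence interval for the difference in means:
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
print(f"difference: {diff:.2f}, 95% CI: ({diff - 1.96 * se:.2f}, {diff + 1.96 * se:.2f})")
# The interval is wide: failing to reject the null is not evidence
# that the difference is zero.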
Negative data are also data. We use this term when we don't get the data we wanted, but if an experiment consistently gives negative results, we can interpret that as a result, and from it we can build a working hypothesis.
To Richard
- It will immediately follow the rule "garbage in, garbage out".
Actually, I don't completely understand that rule. I think the brain is a wonderful machine with the potential to turn garbage into good things. Thinking, which proceeds in the brain, is so abstruse that no one yet understands its biological basis.
- Any graduate student (or senior scientist) can make an experimental error (forget a chemical in a tube) and get a strange result! How many times must a "negative result" be repeated before it is validated?
Of course I don't mean that results caused by operational mistakes should be regarded as the unsatisfactory results worth keeping. The fact is that many results are hard to explain even when the experimental procedure is correct. I think those results are worth focusing on.
In fact, on rare occasions, even results caused by wrong operation can have a positive influence. Take the discovery of Ringer's solution: his tutor had him study the frog heart in vitro, but the isolated heart could not keep beating for long. Then one day a member of the team used un-distilled water (which contained Ca2+) to prepare the medium, and the miracle happened: the heart kept beating in that medium for a long time. Of course, such miracles need both luck and wisdom.
- Despite the fact that we all know the terrible problem of referees, they are still useful to screen out terrible experiments; who is going to be willing to assess negative results?
This is also the big problem that has bothered me; I have not found a good solution since I first had the idea of establishing a database of negative results half a year ago.
- Negative results are almost the rule in labs! How would negative results be categorized (keywords?) so that they can be found?
Why wouldn't keywords work?
- Many negative results led to great discoveries, simply because a genius was there and said: if it is negative, it is because there is a problem in my thinking (look at Schrödinger's history), let's think differently.
Precisely because one person's thinking is limited, publishing the results so that others can see them is what makes the discussion worthwhile.
- Being a physicist before turning to biology, I am still astonished at how biologists do experiments (even more so now with kits and fancy equipment). It seems that young scientists in biology have no idea of the technical details behind the experiment, and to my knowledge this is the first cause of negative results.
Although some young researchers lack experimental experience or the awareness you describe above, which produces strange results no one can explain, as the saying goes, "don't give up eating for fear of choking." We shouldn't ignore the potential value because of a partial barrier.
- Finally, the main differences between a scientific paper and a non-scientific paper are i) the methods and ii) the bibliography. Clearly a site of "negative methods" would be a better idea! But considering the impact factor of such papers, who is going to take hours to build the bibliography showing either that the method is new or that it has been published elsewhere but indeed does not work well?
I don't understand this point very well. The so-called "negative methods" are not absolute; I mean, for one experiment a method may be regarded as negative, but in other experiments the same method may be the inspiration that solves the problem.
It would not be wise to publish negative data, because who can prove that it is negative, and on what basis? Once your experiment fails, try to refine it and you could be successful, or another person may succeed with the same experiment. In biology, there are many steps in performing each experiment, and many factors contribute to getting successful results, e.g. reagents, instruments, skilled people, etc. If any factor is wrong, you will get negative results. Probably you understand what I am saying about negative data.
Learning how to refine their experiments is a required course for researchers. It is essential to find the factor that caused an unreasonable result and correct it. But sometimes, even after you have eliminated every factor you think could be responsible, the result is still unsatisfactory.
I don't expect the establishment of this kind of database to help everyone; even if in the end it helps only a few people, I think the attempt is worthwhile.
I was surprised that there are journals devoted to negative results. Again, if one posts "negative data" on a public site, I am not sure whether it can then be published even in these journals, since it could be considered "plagiarism". So, going straight to these journals to publish negative results may obviate this!
Since the tag question is "How about establishing a database for failing experiments? I don't know if anyone has similar ideas," references were given to journals that have been available for a few decades.
However, data is data, neither negative nor positive (defined only by majority acceptance or rejection).
A classic example is the combination of trimethoprim + sulfamethoxazole, a successful antibacterial drug for more than two decades, with millions of prescriptions all over the world curing many millions of patients. It is now classified as a "negative combination" based on subsequently published experimental results, and the R&D experiments and observations continue.
Results published in journals devoted to negative results should help others avoid repeating identical or similar experiments and thus save time and energy.
I agree with Dr. Chakravarthy. Data is data, and it is certainly worth having a database of negative data. There is only one problem: the database will have to be huge.
A very good idea. Generally, failed experiments are never published. As a result, other scientists waste time, energy, and money repeating the same experiments. If such data were available, it could certainly help almost all active scientists. I strongly favour this.
Krishna Misra
In general, this idea is not bad, and technically I could even create and maintain such a database on my own servers. But there are several important BUTs.
First of all, I agree with Anowar that not every "unsuccessful" experiment is really unsuccessful. In other hands, or even in the same hands but with different reagents and/or equipment, it might work as you expect.
Second, this might work well in an ideal world. But! In the real scientific world there is very tough competition!
So I think many scientists will not share any "unsuccessful" experiments with anybody, because doing so would most probably give an advantage to their competitors; at a minimum, the competitors would not need to waste time (and money and resources) on that experiment.
So, to my mind, this generally good idea simply will not work in our real world...
I have to admit the point that Andriy raised: such selfless sharing may not be good for the contributor who uploads the experimental results.
"Good" data need to go through a review process to get published. What about not-so-good data? Can it be considered a regular publication? Is anyone willing to cite these "publications"? Many questions!
Many a time, data rejected by the editors of one journal are well accepted by the editors of some other journal.
What should we call that data: negative or positive?
I am sure this must have been the experience of many researchers.
Dr. Chakravarthy, I would call that a negative experience but not negative data. To test any hypothesis one needs to do experiments, and sometimes those experiments tell us that the hypothesis was wrong. Therefore, negative data are not the result of a failed experiment either. An example of negative data would be something like the sequence of an shRNA that does not interfere with the expression of the relevant target. In computational biology, we often develop predictors that need to be trained on both positive and negative data (e.g. peptides that bind to something and peptides that do not). So the availability of "negative data" is very important.
That is exactly it: a compilation of negative experiences is negative data, and it has its own relevance and impact.
Although the meta-analysis would be extremely difficult, as Thomas says, it could serve as another avenue of research, because the database would make it possible to work with different hypotheses within the same area.
The reasons why a journal views a paper unfavorably are many. First, the paper may not fit the major theme of the journal. In this case, the in-house editors of all high-impact journals reject the paper without review. If the paper passes this stage, it is sent out for review by external reviewers. If the reviewers, usually two or three and sometimes four depending on the journal, feel strongly that the authors claim too much without providing adequate supporting data, then the paper is rejected. That is not due to "negative" data. If the data contradict what others have reported earlier, then you have to provide a lot of supporting data before you can claim that the paper reports a novel finding. Some journals reject papers because they represent a "minor increment" or a slight alteration of methodology and do not represent a substantial contribution. These do not constitute "negative data". The example quoted, the failure of a certain combination of drugs, simply indicates that the combination does not provide the expected benefits to patients and is thus appropriately a "negative combination", which does not mean "negative results". In this regard, I think the examples given by Pedro Reche are appropriate.
The two examples are cited to emphasize that "data is data", and data is obtained only by compiling experiments and experiences (negative or positive).
1. The first example, on editors' views, concerns how human responses to the same data can differ.
2. The second concerns the combined effect of drugs: a combination with a proven positive effect on millions of patients as a powerful antibacterial agent, with positive results published; now the negative effects have surfaced (again through experiments compiled as DATA), and the combination has been rejected.
Negative or positive results for the combination are based on "data" published by researchers on the same combination.
I know I am raising questions that are not really scientific. But the topic is "How about establishing a database for failing experiments?", not a discussion of what is BAD and what is GOOD data/experiments. So to my mind it is more about organizational issues than general discussion.
How to organize data storage?
Who will post to this database?
etc.
At the beginning of my scientific career I also had an idealistic view of many things, but now I tend to look at everything from a much more realistic point of view, and in my experience some good projects did not work simply because this world is REAL, not IDEAL.
Coming back again to the more organizational issues. ))
While, as I have said, it is not a big problem to build the engine and interface for data storage and processing, the second question (WHO) is much more problematic.
While sharing "good", already published results is not a problem (there is a paper presented to the community), sharing any unpublished results is not possible without the agreement of the PI or group leader. If a person did this without that agreement, (s)he would be in great trouble. But most PIs I know will never allow any "unsuccessful" experiments to be made public, or even talked about, for the reasons I have already mentioned (competition, patenting, etc.).
And something else came to my mind meanwhile.
How do we distinguish a real posted "unsuccessful" experiment from a fake one? Who could validate and judge that, and how?
What do I mean?
For example, somebody could post an "unsuccessful" experiment that in reality works, purely to mislead the competitors.
I could very easily imagine such situations....
Positive results, given proper and rigorous controls, can only be interpreted in a limited number of ways, but experiments can fail or give negative results for many reasons, one of the most common being some mistake made by the experimenter. How would you make sure that experiments have been carried out to an adequate standard? For peer-reviewed negative results there is already the Journal of Negative Results in Biomedicine and other journals. To be of any use to other researchers, results should be peer reviewed. If I see an interesting observation in a database like this, how do I know it is worth risking costly follow-up work?
I fear that an uncontrolled database of failed experiments would just be a collection of badly designed or poorly carried out experiments, along with a possible over-interpretation of the results.
The examples you mention (the Post-it adhesive and RNA interference) are not really failed experiments but unexpected positive results that could have been made use of by the original researchers with some imagination and curiosity (though this would probably have required more follow-up experiments).
The role of "human factors" in making decision about the quality of the paper exists at all levels to varying degrees and cannot be refuted. Review process remains anonymous and very little the authors can do to overcome the human factors involved in making decision about publication of papers.
The example of the effect of two drugs suggests that the results are "contradictory" and not that the later study is negative. A third unrelated study may clarify this. It is not uncommon that human factors play a significant role in filtering multicenter clinical data for presentation, which add complexity to the issue.
The definition of "negative" results is really important and has a saying as to whether or not these results should be made publicly available, if at all, and how. Most labs will be hesitant to publish negative data simply because they may not be "negative" but obtained incorrectly and can be repeated and verified. Will the negative data be peer reviewed? How one will identify "fake unsuccessful data" posted just to mislead the competitors? It will be difficult to scrutinize "negative" data published on unmonitored websites. It will be extremely difficult to analyze "negative mega data".
"A third unrelated study may clarify."... itself indicates that neither of the results of the contradiction of two well established or published results on combination can be termed as "negative or positive data " but with +ve / -ve results compiled as just "data" only. and reflect how human bias may be expected while compiling results as "data" .It is more complicated if non experimental/physical results are complied as "data"
Some interesting views on negative /positive results' publications are referred by
1. Fanelli D, "Negative results are disappearing from most disciplines and countries" – mres.gmu.edu/pmwiki/uploads/Main/Fanelli2011.pdf. The paper argues that a system that disfavours negative results not only distorts the scientific literature but also discourages high-risk projects and pressures scientists to fabricate and falsify their data, and that the frequency of positive results differs significantly between disciplines.
There is no denying that human factors leading to the fabrication of data exist in every discipline. This is exemplified by the frequent retraction of papers in high-impact journals. More guidelines are needed to avoid these unfortunate incidents. The role of negative results in data fabrication is not apparent, however. Data falsification and fabrication do not appear to be due to "negative results" but to the individual's intention to present data that never existed in reality.
Since an individual's intention in presenting data cannot be predicted or made apparent, to avoid human bias we should treat any negative results simply as data (which can subsequently be verified by an interested party and confirmed or refuted as +ve or -ve results).
There is a great problem with "failing experiments". My previous research was a complete failure because, being partially funded by a company, all the results demonstrated that the devices the company sold were quite illegal in my home country and in the whole European Union. Nothing could be done with this, and I could be sued if I made these results available. I am sure that in a decade or two someone else will have such a project and face the same difficulties. Twenty years ago someone in the US had such a project; she had a good advisor who found a way to publish something from it. I was in a bad team... hopeless.
So a blacklist of research subjects and teams could also be a good idea.
Posting a "black" list of research projects and the teams involved publicly may be a bold idea but will have complicated consequences!
I have been in a team that burns out one PhD student per year, asking them to fake results, missing elementary basics of experimental design, and demanding impossible results. The research authorities are not effective at detecting this. For the sake of research, addressing it would be a great step forward.
In directed, hypothesis driven studies the failed experiments are discarded while those producing positive results are retained as "failing to falsify the hypothesis".
In high throughput "unbiased" data acquisition no such filtering is possible, all the outcomes are in the resulting dataset - i.e. it contains the failed experiments.
Exactly; in published data it depends on how the authors/editors view their observed results so that they fit the title of the published 'data', the filtrate or the filtered 'merc'. Since journals for publishing negative data have existed for a long time, perhaps a database can be considered for those results, simply as data, without any claims.
As GMP auditors, we believe that "validation of the impurity profile of a product is as important as validation of the purity profile"; the same applies to data with +ve or -ve results.
As a trash miner, I want to warn you about a few complications. There are many ways to get negative results. A careful study in which a lack of correlation can be inferred is only one; experimental error is another, and probably more common. As noise is introduced into an experiment due to incomplete control of external variables, the actual difference can be diluted. Lack of statistical power is yet another: while a treatment effect exists, the sample is too small to demonstrate it (you may even get the same average by chance).
So, if you want to head that way, you have to be careful to document the experimental procedure in minute detail, and to require a power analysis.
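As a purely illustrative sketch of that requirement (Python, using statsmodels; the numbers are arbitrary), one would want each deposited 'no effect' record to report something like the sample size needed for, or the power achieved against, the effect of interest:

from statsmodels.stats.power import TTestIndPower

# How many samples per group are needed to detect a medium effect
# (Cohen's d = 0.5) with 80% power at alpha = 0.05?
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group: {n_per_group:.0f}")

# Conversely, the power actually achieved by a reported negative result
# that used only 12 samples per group:
achieved = analysis.solve_power(effect_size=0.5, nobs1=12, alpha=0.05)
print(f"power with n = 12 per group: {achieved:.2f}")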
I am afraid that, as a scientist, I would not be willing to invest heavily in a high level of documentation for an experiment described only in the "garbage bin journal". The chances that such a journal, or database, will ever become a major resource, or at least that my contribution to it will, are simply too low.
Just to clarify: when I say I am a trash miner, I mean that I take experiments that were used to prove some other point and study their unused parts. For example, many years ago I used chimeric ESTs to characterize template-switch kinetics in RT-PCR. As the datasets I use have been successfully used for their original purpose, they are likely to be within the described parameters. Still, I find that when I concentrate on the off-focus areas, I encounter new and surprising artifacts.
As Dr. B.k Chakravarthy mentions, there are some journals that specialize in negative results (http://www.jnr-eeb.org/index.php/jnr, http://www.jnrbm.com/), and in principle any journal should be able to publish negative results provided the design and statistical power were good. PLoS ONE, for instance, makes a point of not evaluating articles for their importance or impact, only for their rigorousness. Clinical trials have to be registered publicly (www.clinicaltrials.gov) before starting, and have to post their results regardless of what they are and of whether they were published in peer review or not.
I would speculate, based on my experience, that most negative results arise not from the null hypothesis being true, but from experiments not having enough power, or from poorly designed measurements. I agree with Eitan Rubin that most of us would not want to invest a lot of time in documenting experiments or studies that were negative, since we probably set out expecting to reject the null hypothesis, and not having done so, we now have to work harder and faster to either redesign the experiment or rewrite and test our hypothesis.
I would propose that, in order to be scientifically useful and motivating, each field have a database of proposed experiments that are registered beforehand, in a similar way to clinical trials, and that have to be published irrespective of what the results were. The advantage of registering beforehand would be highly reliable results that were not selected after the fact, so that other researchers could know that they were not the result of fishing positive results out of a sea of negatives. If the proposed experiment goes through peer review, it could even have an impact factor. The researcher could ask that the project not be published until the results are out, to avoid giving away their research agenda. Current journals could have this database as a publication option, stating whether the experiment was preregistered or not. Of course there is the problem of checking that the experiment has not actually begun, but falsely reporting finished work as not started should be regarded as scientific misconduct, in the same way as falsifying data, which is also difficult to police.
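As a sketch only (the field names are invented and not taken from clinicaltrials.gov or any journal), a pre-registration record along these lines might look like this in Python:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RegisteredExperiment:
    # Hypothetical pre-registration record, in the spirit of a trial registry.
    title: str
    hypothesis: str
    design: str                     # planned protocol and sample size
    planned_analysis: str           # statistics decided before data collection
    registered_on: date
    embargoed: bool = True          # hidden from public view until results are in
    outcome: Optional[str] = None   # filled in later: "supported", "not supported", ...
    results_summary: Optional[str] = None

entry = RegisteredExperiment(
    title="Effect of compound Z on cell viability",
    hypothesis="Compound Z reduces viability of cell line A at 10 uM",
    design="3 biological replicates, dose-response, n set by power analysis",
    planned_analysis="two-sided t-test, alpha = 0.05",
    registered_on=date(2012, 9, 1),
)

Publishing irrespective of outcome would then amount to filling in the last two fields and lifting the embargo.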
Regarding concerns from PIs: how about something like ScienceLeaks, a place where people can anonymously post raw data showing failures in other papers?
It would be much better if the causes of the errors were included, i.e. why the analysis failed; there may be errors such as wrong parameters or a lack of proper curation of the data to be analysed. Mainly, I suggest it should be useful so that someone doing a similar analysis does not repeat the same mistakes.
People have to try hard to do the right experiments, obtain reproducible results, and send them for publication. An essential component of scientific publication in peer-reviewed journals is that others with similar expertise should be able to repeat the work and report whether the findings are largely reproducible or completely contradict the earlier ones. In this scenario, I do not know what is positive and what is not. The definition of positive and negative results seems to vary according to the discipline.
"Unbiased" analyses are no exception to bias in terms of analysis and validation of data. A common error in presenting data obtained by "unbiased" analysis is due to adoption of low stringent statistical significance to clean up the data. Reports also indicate errors in sequencing data such as RNAseq data. Microarray data obtained in different platforms differ and cannot be directly compared. The same set of microarray data may yield quite different outcome depending on how the data are analyzed-whether the CEL files are analyzed using the software supplied by the company, which sells the system or analyzed differently. Handling mega data of course is complex and the future will resolve these issues with concerted efforts. Therefore, caution should be exercised in calling data as positive or negative.
I am not quite sure that "PLoS ONE, for instance, makes a point of not evaluating articles for their importance or impact, only for their rigorousness". Some papers published in journals are of questionable quality and make one wonder whether all published papers are subject to the same vigorous review process.
This is a good idea and will be a welcome resource for beginners in wet-lab work like me. For example, I did a PCR experiment yesterday on my time-course samples. The gene of interest shows fairly high expression at 4 h and 6 h but no expression at all at the 5 h time point (even the housekeeping genes are expressed at very low levels). One of my colleagues told me that sometimes cells "reboot" when under stress and cannot express all their genes.
So is this the result of a failed experiment or a cell "rebooting" mechanism??
Is the 5th time point overnight? Did you check the viability of the cells after treatment? Cells die under stress or become "nonviable", which could affect RNA yield and your results. By the way, I would not characterize this as a negative result. You have to figure out what is wrong.
These time points are hourly. I stimulate primary macrophages with LPS and extract RNA every hour (one plate for each time point). The PCR results for all the time points look promising except for this one.
By the way, how do you check the viability of the cells?
Viability can be checked the old-fashioned way: using trypan blue and looking under the microscope for uptake of the blue dye by dead cells or cells with compromised plasma membrane integrity. Consult any cell biology or immunology methods book for details. Since the macrophages are adherent, you have to liberate them from the culture well. I assume they are mouse macrophages. Regardless, you can treat the cells with 0.25% trypsin + EDTA (Life Technologies). Aspirate the media and add just enough trypsin-EDTA to cover the cells. Incubate for 5 minutes at 37 °C. Dislodge the cells by gentle pipetting (never use a rubber policeman or scraper since they will disintegrate the cells). Add media containing 10% fetal bovine serum to neutralize the enzymatic activity. Pellet the cells by centrifugation, resuspend in HBSS, mix an aliquot of cells with an equal amount of trypan blue, and count the viable cells that exclude the blue dye. Do not incubate with trypsin for more than 5 minutes, as it may kill additional cells. Generally, when macrophages are cultured with optimal concentrations of LPS (1 µg/ml per 1 million cells), the cells remain viable even overnight when cultured in tissue culture media containing 10% FBS. Contact me directly if you need any more help with this part.
This thread is deviating from the original question all the time...
I think it would be worth doing this. ResearchGate would be a very good place to start it. It will not be easy, but it is very much worthwhile.
Peer review is not a good filter, that we all know. Examples abound.
Screening via the literature, when it is biased towards positive results, is not a good idea. Having access to such a resource would be of great value, especially if it is publicly annotated (commented) and the comments can be openly voted on, much in the same way that posts on RG are voted on (with people's names attached).
The power of proving "what isn't" is enormous, especially when proving "what is" is difficult (which is very frequent). The peer review system dismisses this far too often. Some messages about "negative results" do go out in papers, that is true, but the focus of most published papers is not on these messages. So the reader needs to de-focus from the "positive results" to see the "negative results", even when they make it and get published.
Doing this would be of great value. The linkage between data and published papers is still under-exploited, even for "positive results". We are in the era of "Raw Data Now" (Tim Berners-Lee, http://blog.ted.com/2009/03/13/tim_berners_lee_web/), so it should be done accordingly.
Of course, there will always be negative criticism.
There seem to be three discussions mixed up here, all interesting, but it's quite important to keep them separate:
- whether it's worth recording things that aren't working (a database of technical failures?) - I think this was the original question
- how to usefully make data that you can't publish available to people
- how to report (peer-reviewed) negative results - the article in today's Guardian on instances of fraud in psychology papers also mentions 'neophilia' - the fact that journals love to publish papers that report something new, but not papers that refute them (http://www.guardian.co.uk/science/2012/sep/13/scientific-research-fraud-bad-practice) - also a major issue in determining drug efficacy.
Another deviation from the original discussion thread. Following up on Dr. Neil Stoker's point about 'neophilia', I want to add an important but completely neglected point in today's scientific publication. It is true that high-impact journals encourage 'neophilia', primarily to keep up with the competition from other leading journals. In the process, these journals forgo strict peer review. In fact, in most instances, only the 'in-house' editors make the decision, even without external review. This leads to bias in determining what should be published. As a result, many articles turn out to be not so clean. These journals do not take any responsibility for that and simply publish retractions. It is time for high-impact journals to take some responsibility and scrutinize articles a little more carefully. This would be a service to the scientific community as well.
Wonderful idea! We should contribute all our failed ideas to this data bank. In fact, there is already a journal that publishes only negative results [The All Results Journals, www.arjournals.com].
Still on the possibility of a database, it would be nice if we could develop a Minimum Information Standard so that such results could be easily searched for relevant features, as opposed to plain text.
There are tools for doing it properly in the open-source world: http://isatab.sourceforge.net/
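As a rough sketch (this is not the actual ISA-Tab format, and the field names are invented), a minimum-information checklist could start as nothing more than a list of required fields that every submitted record must fill in before it is accepted, which is already enough for structured search:

# Hypothetical minimum-information checklist for a failed-experiment record.
REQUIRED_FIELDS = [
    "organism", "material", "protocol", "instrument",
    "variables_measured", "units", "outcome", "contact",
]

def validate(record):
    # Return the required fields that are missing or empty.
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

record = {
    "organism": "Mus musculus",
    "material": "primary macrophages",
    "protocol": "LPS stimulation, hourly RNA extraction",
    "outcome": "no expression of target or housekeeping genes at 5 h",
}
print("missing:", validate(record))  # -> ['instrument', 'variables_measured', 'units', 'contact']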
I think that the problem of unpublished data (or failed experiments) for the biological sciences may be related to the problem of long-term conservation of data in other sciences (e.g. the environmental sciences). It will be difficult to model this kind of data, because modellers don't know what kinds of data they will be collecting some years from now. They could spend all their time making a system as general as possible, but this approach will fail in the long term, because science keeps providing new kinds of data over time.
Perhaps it is not suitable to use a database system, for two reasons: 1) the software product could be discontinued, and 2) not all datasets could be integrated into the system. This choice could mask useful data needed some years later.
This problem is also related to the case of managing RAW data. A lot of data (mostly tables, spreadsheet files...) is not annotated and is finally lost, even if a document management system is used.
Take the example of a table (CSV, spreadsheet): the only annotation on this file is currently its name, possibly some keywords (if a DMS is used), and column names (not always present in tables produced by laboratories). If an IT team wants to use this file some years later, they will need information such as the type of data (float, string...) inside the columns, column names normalized between files, ownership, authors, units... The result: they cannot use this data.
Another problem is the effort needed to store this kind of data: it will be very difficult to find funds for information that will only eventually be read and used by someone.
One way could be to change the paradigm and define a truly generic (not only for the biological sciences) file format for these files. This ASCII format could ensure that the files remain usable in the long term, can be annotated in depth, and can be read by humans or computers. The format needs to embed metadata, but not in large amounts, because users do not have time for this effort in day-to-day laboratory work or over long periods. It would also be interesting if the same format could be used both on a personal computer and for a central resource, and if a focused view of the data (e.g. an RDBMS related to a particular scientific question) could be produced on demand from this RAW data space.
We have defined such a format (we call it CSVM: CSV with Metadata) for tables (tabular or tabular-like data such as key/value files). I have released three technical reports on arXiv about this kind of problem:
* One for the basic specification => Technical Report: CSVM format for scientific tabular data – http://fr.arxiv.org/abs/1207.5711.
* One for data interconversion => Technical report: CSVM dictionaries – http://fr.arxiv.org/abs/1208.1934.
* One summarizing the last ten years of CSVM's use in various scientific fields => Technical report: CSVM Ecosystem – http://arxiv.org/abs/1209.2946.
I don't know if a CSVM approach could help resolve this case, but I think that for RAW or unpublished data the main effort relates to conservation rather than exposition, as if we were operating a 'data museum'.
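To make the general idea concrete (this is only an illustration and does not follow the actual CSVM specification described in the reports above), a metadata-embedding table can simply carry commented key/value lines above an ordinary CSV body, readable by both humans and software:

import csv
import io

# Hypothetical example: '#' lines hold key/value annotations,
# the rest is an ordinary CSV body.
raw = """\
# author: J. Doe
# created: 2012-10-01
# column.time: float, hours
# column.expression: float, arbitrary units
time,expression
4,12.3
5,0.1
6,11.8
"""

def read_annotated_csv(text):
    metadata, body = {}, []
    for line in text.splitlines():
        if line.startswith("#"):
            key, _, value = line.lstrip("# ").partition(":")
            metadata[key.strip()] = value.strip()
        else:
            body.append(line)
    rows = list(csv.DictReader(io.StringIO("\n".join(body))))
    return metadata, rows

meta, rows = read_annotated_csv(raw)
print(meta["column.time"])  # 'float, hours'
print(rows[1])              # {'time': '5', 'expression': '0.1'}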
There are already journals dedicated to negative results:
Journal of Negative Results
http://www.jnr-eeb.org/index.php/jnr
Journal of Negative Results in BioMedicine
http://www.jnrbm.com/
Journal of Pharmaceutical Negative Results
http://www.pnrjournal.com/
The All Results Journals
http://www.arjournals.com/ojs/
Unfortunately, they are rarely, if ever, cited in papers...
Yes, the same answer was given to the tag question on 25 July; however, many readers are not aware of the existence of these journals, and some interesting discussion has continued.
I am sure the followers of this tag question will agree that no one gets a positive result (the one the researcher aims at) in a single-shot experiment or observation without a few negative results (I would rather call them results the particular researcher is not interested in) being generated or observed, and without rational analysis of the data.
Negative results submitted for publication need more scientific validation to be considered, and should not be the result of ignorance or faulty experimental design.
Publishing results from failing experiments, including datasets, must rely on a precise definition of the context of the experiments. As Frederic Rodriguez notes, relational databases alone are not suitable tools and must be combined with annotation mechanisms and extension capabilities. An approach based on a multi-paradigm storage system, or a dataspace with ontologies and semantic query/control tools, is more appropriate.
I'm highly skeptical that this is a good idea, mainly because I think that the notion of a ``failed experiment'' is very ill-defined: experiments can fail for far too many reasons. Apart from the obvious (and worthless) failures due to technical errors in preparation/input-file/execution etc., there is also a logical reason why this does not make sense: any experiment is meant to confirm (in the sense of not falsifying) some hypothesis, and if reproduced sufficiently often the hypothesis is assumed to be true. Unfortunately, the number of possible hypotheses one can formulate is not only virtually infinite but can also be quite nonsensical. Moreover, the number of associated manifolds of experiments designed for testing them is also virtually infinite, and can also be nonsensical. By contrast, the number of hypotheses that turn out to be true and meaningful is very finite, and the number of experiments necessary to confirm them also converges rapidly. To illustrate this absurdity, just think of the number of sensical _and_ non-sensical experiments you could design to test some non-sensical hypothesis, e.g. that there's a candle burning on the back side of the moon.
I also don't buy into the ``the more data the better'' argument. I can generate tons of redundant or randomized data with a computer; the information content will remain zero, no matter how large the file. I'd be happy to offer very special bargain deals on such files to anybody on this thread claiming that data in itself is useful.
If, however, through some magnificent procedure (i.e. something much better than peer review) we could make sure that all those experiments and hypotheses are excluded that are in conflict with existing knowledge of the truth, then and only then would such a journal be useful. This, however, is quite a different matter, and until then we will have to rely on the No. 1 litmus test of all peer reviewing to save us from floods of spam: ``Does the reported research even make sense?''
It is interesting that there are journals devoted to 'negative results'. I am wondering what criteria are used to 'accept' negative results for publication and what purpose the papers describing negative results serve. Even with peer-reviewed papers in 'regular' journals, reproducing results is not straightforward. This is very common in any discipline of biology. Typically, a paper describes 'new' results, which need to be reproduced by others. However, not all results are reproduced by others. In this sense, regular journals already contain at least some negative results. This is part of the scientific process. I am wondering whether the 'negative' results published in unconventional, exclusive journals are reproducible at all. If not, why invest time in 'negative' results? One personal note: with the exorbitant amount of current literature, I don't have enough time to read everything published in 'regular' journals in my area of interest, so where would I find time to read the negative journals?
There would be several relevant consequences: one would be able to see what has already been tested experimentally and avoid repeating experiments (except for reproducibility studies); such a database could also play a role in the formulation of hypotheses and the planning of experiments, by reducing the scope of the thought process; etc.
Sundararajan Jayaraman: given the way the question was asked, you are the negative researcher here, I guess. Negative results do not mean the work is not interesting. Imagine you are working on gene X, which is of course the gene to cure cancer. The knockdown you plan to make, with the idea of finding the Y phenotype consistent with your previous results, will take several months and many dollars to test. If somebody has already done it and published it as a regular paper, showing that the way you intended to do it is not the right way, you would be happy to save the time and money, right? Maybe you are confusing negative results with wrong results, which are mainly caused by technical issues...
In response to Florent Hube's comment: my point was that most published papers contain negative results, and it is not clear how a journal devoted to 'negative' results can scrutinize the negative results without considering the positive results for publication. Negative results are different from the conventional expectation and may have profound influence on future experiments. Sometimes, negative results are the products of bad techniques. Therefore, in my opinion, negative results cannot be viewed without the positive results. That is what the conventional journals publish. Besides, I do not currently know of a good tool to distinguish negative results from bad results due to technical issues, the method of data collection, computational problems, investigator bias, etc. In reality, no one can be a negative researcher, including myself, regardless of what Florent Hube says. This is a forum for discussion; let us keep it that way.
I wholeheartedly agree with Sundararajan. @Florent: What do you think is the likelihood that negative results originally created to test one hypothesis can be recycled to confirm another, unrelated hypothesis? This is a combinatorially scaling problem (matching all possible negative data with all possible hypotheses), and I think it's doomed to fail. Let me formulate a different question: since negative-results journals already exist, are there prominent examples where such published data were used to successfully confirm other hypotheses? I bet there aren't...
The word "negative results " is not properly understood.... and the very existence of such publications /periodicals are challenged ....some interesting recent articles are given as under ( one published in journal with "negative" tag and others with out.)
It is sure and certain any researchers working on Euphorbia milii will not repeat and waste time and energy on repeating antimicrobial activity work after reading the article in J.Pharm.Negative results (hence the article provides as good information as any other established other periodicals.)
1. Journal of Pharmaceutical Negative Results – http://www.pnrjournal.com/
"Absence of antimicrobial activity of Euphorbia milii molluscicidal latex", original article, 2012, Volume 3, Issue 1, Pages 13-15.
2. Rahuman AA, Gopalakrishnan G, Venkatesan P, et al. "Larvicidal activity of some Euphorbiaceae plant extracts against Aedes aegypti and Culex quinquefasciatus (Diptera: Culicidae)", Parasitology ..., 2008, Springer. The study reports, among other things, that Euphorbia milii molluscicidal latex and niclosamide were toxic to Anopheles albitarsis, A. aegypti, and Aedes fluviatilis.
Plenty of such examples are available for readers in other fields as well...
Indeed, if the hypothesis is a negation, ``negative'' results can be meaningful, and should be as reproducible and helpful as any experiment. This thread, however, is about ``failing'' experiments, as exemplified by the Post-it example ...
Experiments do not fail; the observers may fail to interpret the results, for many reasons. One gets either positive or negative results, and as answered on 25 July in this thread, data is a compilation of either positive or negative results. Many examples exist of a negative result which, on further work, provided positive data, as specified in the tag question itself with the glue and RNA interference examples, which were initial failures whose subsequent experiments created history.
The thread also discussed the need for publications for failed or negative-result data; thus some references to already existing periodicals for negative results were quoted.
Dr. B.k Chakravarthy, I see your point. But can you see mine? The mere fact that there are examples where this was successful is anecdotal rather than sufficient evidence. The true question would be whether there is some sort of meta-study that investigated the ratio of successful examples of recycling negative results in a different context to the total number of negative studies. I argue (and maybe I'm wrong, but I haven't seen convincing arguments against it) that the overall effort is not justified, unless the experiments are extraordinarily expensive. And I do not think that many citations qualify as success.
As to the existence of such journals, do you think that it is a coincidence that there's no such journal in other branches of science, such as the physics/chemistry communities? Can you conceive of such a journal in mathematics?
I agree with Anatole von Lilienfeld's concerns. It appears that, for some reason, some disciplines have these journals and others do not.
My concern, as expressed earlier, is how rigorously these manuscripts are considered for publication. In other words, it seems that this is a portal for papers that cannot be published in regular journals. With increasing retractions in high-impact journals due to plagiarism and wrongdoing, I am wondering whether the 'negative' results published in these specialized journals are subject to the same level of scrutiny, and how one would ensure that these 'negative' results are indeed negative and not simple mistakes. Although cutting-edge research, including genetic/epigenetic research, is expensive, it is hard to justify publishing unvalidated results as negative results solely because of prohibitive cost and to save time.
These are the issues that we need to consider before embarking on creating portals to deposit negative data and starting journals in every discipline to publish them. This would be similar to the 'new generation' of online journals of questionable authenticity that literally pop up from every corner of the world.
1. My points of view purely reflect the generalized question, for all disciplines, of "establishing a database for failing experiments". Experiments always give either positive or negative results, irrespective of the discipline of research, since all result data are meant to be analysed and are relevant to human beings, the biological species. "Accuracy and precision" are bound to play an important role, and "failing experiments" also provide some key information for researchers and cannot be brushed aside as invalid experiments.
2. Since I am in the field of "drug discovery" (which encompasses many disciplines), I can provide many examples of how positive (successful) experiments were, on further study, found to be negative (failures), and of how negative experiments, on further work, provided positive data (two examples were cited earlier in this tag discussion).
3. Regarding data publications on failing experiments (I am sure such articles will be published only after true verification of the details by the editorial board/referees, as in any other established journal), there is no harm in having such articles on negative results (failing experiments), and we should let the readers decide how they are most relevant to their own work.
Dr. Anatole von Lilienfeld:
Since you raised the question "do you think that it is a coincidence that there's no such journal in other branches of science, such as the physics/chemistry communities? Can you conceive of such a journal in mathematics?"
Yes. For your information, some examples of journals in chemistry/physics that accept and publish failed experimental results are given below:
1. The All Results Journals: Chem (www.arjournals.com/ojs/index.php?journal=Chem)
The All Results Journals: Chem (ISSN: 2172-4563) focuses on recovering and publishing those experiments that either failed or led to "unexpected" results.
2. The All Results Journals (www.arjournals.com)
The parent series of the above, publishing negative results across both Chem and Biol titles.
There are many more if you search for them.
If the credibility of such publications is in doubt, you may contact the editors for further enlightenment.
For me, all results provide some information or other that helps me plan my experiments.
" Negative results are different from the conventional expectation and may have profound influence on future experiments. Sometimes, negative results are the products of bad techniques."
This is true, but positive results can also sometimes be the product of bad or insufficiently refined techniques.
(A good example is the publication in Science of XMRV in chronic fatigue syndrome.)
What you say is also true. I am also disappointed by the many examples of 'positive results' that are products of bad techniques and are published in high-impact journals without a problem. Numerous explanations exist for this; I do not want to elaborate. However, these have at least been 'peer reviewed' and have undergone some sort of scrutiny. I wonder whether negative results would be subject to similar scrutiny. Since we already have plenty of problems in understanding and reconciling 'bad' results published in regular journals, encouraging publication of so-called negative results in exclusive journals may negatively affect the authenticity of scientific publications.
I may be wrong, but I actually think the opposite. It is much harder to convince people with negative results (especially if they go against the accepted knowledge base).
To consolidate such results you need to work much harder and it is much harder to get your studies published in a good journal.
In fact, I found the study recently published in mBio (http://mbio.asm.org/content/3/5/e00266-12) an example of excellent and well-conducted research. It was exemplary that the leader of the research group worked together with those who published the original articles to solve this dispute.
Another good example is the meta-analysis of all studies on antidepressants, showing that they are probably no better than placebo (as opposed to a meta-analysis done only on the studies that were published).
I also think that publication of only positive results creates the illusion that science will eventually have all the answers and medicine all the solutions.
It does not give a realistic picture of how many studies truly prove the working hypothesis and how many new medications truly work. I think this creates pressure on people to "provide" positive results and eventually leads to less good science and medical research.
I support the idea of posting negative/failed experiments. If nothing else, people and newcomers to the field can at least be aware of what mistakes to avoid, saving the time and money spent repeating the same things.
I think it's necessary. You can get details about positive data from the literature, but no details about negative results. Knowledge of negative results can help other people.
The Reproducibility Initiative
https://www.scienceexchange.com/reproducibility
is bound to produce verified negative results that can be used for this. Science Exchange has partnered with PLOS ONE and Figshare. Some of the experiments that they will try to reproduce will certainly fail, under known conditions. Each will result in a PLOS ONE publication, and the reasons for the failure will be there. The data will be made available in Figshare, which is actually great for this purpose.
I think it's necessary. In general, negative results are only shared with lab partners who could help you. However, sharing negative results with the scientific community could solve more experimental problems.
We do already have one :)
Journal of Negative Results in BioMedicine is an open access, peer-reviewed, online journal that promotes a discussion of unexpected, controversial, provocative and/or negative results in the context of current tenets.
Sorry for the long reply!
I think the question was more about a database for the storage of failed experiments rather than actually publishing them in journals such as the Journal of Negative Results in BioMedicine; even if you manage to publish your findings, you still need to place your supporting material somewhere online for public access. No real solution exists yet even for positive, published results, let alone for negative-result datasets. It appears that every journal is re-inventing the wheel. There is clearly a need for such an initiative, but wait!
Firstly, in reality we want lifetime free access to any such database, but there is no lifetime guarantee of funding; funding ends, and hence the project dies. Tranche and PRIME were such examples, followed by ProteomeCommons; they are all dead or dying due to lack of funding. There is another effort by BGI and GigaScience, GigaDB (http://gigadb.org), though I am not sure how long it will survive.
Secondly, setting aside the quality of the data, there is the cost of keeping it alive. A balance is needed between the cost of regenerating the data and the cost of keeping it alive, which is the approach EBI is following. As technologies become cheaper every day, regenerating the data might become cheaper than keeping it; it is becoming a chicken-and-egg problem (see the rough sketch after my third point).
Thirdly, what will be the incentive for a researcher to put their data in any such repository? This is a question partially addressed by +Ijad Madisch.
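To make the storage-versus-regeneration trade-off concrete, here is a minimal back-of-the-envelope sketch. All the numbers (storage price per GB per year, cost to redo the experiment, and the rate at which regeneration gets cheaper) are hypothetical placeholders of my own, not figures from EBI or anyone else.

```python
# Rough break-even sketch: keep a dataset online vs. regenerate it later.
# All numbers below are hypothetical placeholders, not real prices.

storage_cost_per_gb_year = 0.50   # assumed archival storage price, USD/GB/year
dataset_size_gb = 500             # assumed dataset size
regeneration_cost_today = 8000.0  # assumed cost to redo the experiment today, USD
cheapening_rate = 0.25            # assumed yearly drop in regeneration cost (25%)

for year in range(1, 11):
    cumulative_storage = storage_cost_per_gb_year * dataset_size_gb * year
    regeneration_cost = regeneration_cost_today * (1 - cheapening_rate) ** year
    cheaper = "keep" if cumulative_storage < regeneration_cost else "regenerate"
    print(f"year {year:2d}: storage so far ${cumulative_storage:8.0f}, "
          f"regenerate ${regeneration_cost:8.0f} -> cheaper to {cheaper}")
```

Under these made-up numbers, keeping the data wins for the first few years and regenerating wins later on, which is exactly the chicken-and-egg tension described above; with different assumed prices the crossover moves, but the shape of the trade-off stays the same.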
I have my own experience of performing experiments that contradicted an existing hypothesis. It is very difficult to publish research like this. Editors usually send this kind of paper to the authors who previously proved the hypothesis, and they always reject papers with controversial results.
This is a nice idea. Any result has the right to be published.
In my opinion, negative results should not be published in scholarly journals, nor in any other scientific journals. The following consequences need to be considered with regard to the publication of negative results.
1. Once a hypothesis is proved wrong experimentally by someone, it may no longer be pursued by others to verify it. Science is ever changing and ever evolving; things we were not aware of due to the lack of scientific technologies are now being identified.
2. More clearly, just because we are unable to identify the existence of a relationship (due to the lack of advanced technology), we cannot say that no relationship exists between the substance under investigation and the observed effect. Just because we see only visible light, we cannot say that there is no other light (i.e., other rays such as UV, infrared, etc.).
3. Everyone will start publishing negative results just to have some articles to their name.
Sharing knowledge of negative results is important for improving evaluation methodologies, rectifying mistakes if any, and finding an effective way of doing further research. However, it need not be published in journals; it can be stored in the public domain, where every researcher has the opportunity to comment on the posted results.
Finally, the word "research" itself says "search again".
I cordially welcome all sorts of comments on this topic and thanks to Dr. Weihang Ding for posting such a debatable question.
We know that even 'positive results' published in the biological sciences require validation and are not simply accepted. So the same rigor should be applied to evaluate whether 'negative results' are not simply flaws in the design or execution of experiments. That is the nature of science. Therefore, 'negative results' are not ready for prime time. As suggested by several people, such negative results may be deposited in a repository, but they do not deserve publication in mainstay or other kinds of journals. By the way, everyone in science has generated data in their lifetime that could not be published in peer-reviewed journals, so everybody feels the frustration. However, this does not justify publication of data that are not worth publishing as judged by your peers.
I agree that sometimes a "failed" experiment has some limited value for informing a better design of a future experiment, that is, "...we really screwed this up but now we know how to do it right."
In some cases, however, an experiment "fails" because it addressed the wrong hypothesis or tested the wrong effect. We see this all the time in statistical analyses of data wherein an effect or outcome that "should" work appears to fail - and then we discover some underlying mitigating random or fixed effect that confounded the result. In my opinion, often a "failed" experiment is instead, a "misunderstood" experiment.
Also, the above-mentioned "controversial" result is not necessarily a failure. Often there is acceptance in the literature of some result that is shown later to be wrong when better instrumentation comes along. I recall a number of papers wherein certain compounds (e.g. vinyl chloride, pentane, PFOA, etc.) were routinely misidentified by mismatches between external calibrants and unknown samples that were later corrected with better resolution or more specific detectors.
I think that carefully performed experiments with good QA/QC are always publishable and valuable regardless of the outcome. Just a thought...
This would be a good answer, though, as many feel, rather difficult to achieve. In addition, it would be very useful if those who have had their technical questions answered with really useful suggestions reported this; they might also add a comment of thanks to keep all of us motivated and sweet.
Databasing failed experiments... This is an interesting idea, and I am not surprised it brings up so much debate! But what is the issue, really? And what can we do in practice?
What is a failing experiment: obtaining a result that was not the one expected, or obtaining no (significant) result at all? In the first case, the hypothesis being tested experimentally is in question, not the results observed. In the second, the experimental protocol or data analysis is to be questioned. This is the simplistic view in the philosophy of science. But in reality, in both cases the hypothesis could be the culprit, inducing a wrong experimental set-up; and in both cases the results/data can be put in question as well.
What can we do with such a database anyway? For a database to be useful, one should anticipate a useful way to query it and get information from it. I am therefore really worried about the formatting of failing experiments in a database. The devil being in the details, as everyone knows, an experiment would have to be reported in all its details, as would the experimental set-up. It has been a terrible challenge to agree on the information to be provided, and norms for doing so, in e.g. transcriptomics. Even now, a lot of data and results are useless, not because they are wrong but because the conditions under which they were obtained are insufficiently documented (a rough sketch of the kind of record that would be needed is given at the end of this post).
So I am afraid that while the idea of putting failed experiments in a database is a nice one, doing it in practice would be, for the time being, rather useless. For the time being, indeed: significant progress in text analysis may in the future allow us to make use of experiments described in free text across the whole process, from hypothesis to result (or the reverse). But we are still far from it.
I would rather plead for a database of experiments, whatever the result: positive or negative, expected or unexpected. Indeed, as time goes on it becomes less and less possible for anyone to even try to reproduce experiments that are published, not to mention the ones that are or have been done in your own lab by a colleague!
This would build up a body of methods that work for real, and others that often fail or work only in some hands and in some places. It would allow expertise to be queried, and routine observation of failures as well, which could even be studied in their own right. One could even think of making it compulsory when submitting a manuscript or reporting to a funding agency, just like sequences in GenBank.
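As a purely illustrative example of the formatting problem raised above, here is a minimal sketch of what a single experiment record might contain. The field names and structure are my own assumptions for discussion, not an existing reporting standard like the transcriptomics norms mentioned earlier.

```python
# A minimal, hypothetical record for an experiment database (positive or negative outcome).
# Field names are illustrative assumptions, not an established metadata standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExperimentRecord:
    title: str                     # short human-readable name of the experiment
    hypothesis: str                # what the experiment set out to test
    protocol: str                  # free-text (or linked) description of the set-up
    materials: List[str]           # reagents, organisms, instruments, software versions
    outcome: str                   # e.g. "supported", "not supported", "no significant result"
    suspected_reasons: List[str] = field(default_factory=list)  # e.g. protocol flaw, wrong hypothesis
    data_location: str = ""        # URL or accession where raw data are deposited
    keywords: List[str] = field(default_factory=list)           # to make the record queryable

# Example of how such a record might be filled in (contents are invented):
record = ExperimentRecord(
    title="Knockdown of gene X under condition Y",
    hypothesis="Gene X is required for phenotype Z",
    protocol="siRNA knockdown, 48 h, triplicate; see attached protocol file",
    materials=["cell line ABC", "siRNA lot 123", "qPCR kit v2"],
    outcome="no significant result",
    suspected_reasons=["knockdown efficiency below 50%"],
    keywords=["gene X", "phenotype Z", "siRNA"],
)
print(record.title, "->", record.outcome)
```

Even this toy structure shows why the querying question matters: the free-text fields (hypothesis, protocol) are easy to write but hard to search, which is exactly the gap that better text analysis would have to fill.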
This is a very good question. My opinion is that there is a need for a prospective registry of basic-science experimental designs, such as exists for clinical studies (clinicaltrials.gov) or for systematic reviews (PROSPERO).
The purpose of these protocol registries is to show, at the time of publication, that the protocol was not violated. However, this may not apply so strictly to basic-science designs.
Such a registry may be very helpful to scientists in many ways.
A registry of clinical trials is absolutely necessary and serves an important function. However, such a registry for the basic sciences is neither practical nor needed, because of the complexity involved and the continuously evolving nature of the biological sciences. Similarly, clinical research cannot be confined within a frame: it is mostly basic research done using clinical materials. Whereas the procurement of clinical materials has to meet certain standards and is highly regulated, research using these materials is not controlled and cannot be absolutely regulated.