From what I have heard from other researchers, many good journals have a tendency to turn down papers that report negative results. Is this a form of selection bias?
I agree with Ferran. The most important reason for rejecting a paper is related to its hypotheses. We have all seen that an article that goes against strong, well-established hypotheses is not accepted easily. On the other hand, some papers have trouble expressing their new hypotheses clearly. A good presentation of brand-new results may help a paper with novel ideas get accepted.
I have actually observed reviewers rejecting papers on the basis that there are no positive results endorsing the so-called popular hypothesis, and instead requesting a redesign of the experiments until positive results are obtained. I believe the term for this is confirmation bias? (Correct me if I'm mistaken.) One reviewer commented that there is usually only a 1 in 10 chance of obtaining positive results, and therefore felt that negative results which do not support the hypothesis do not have much value. Has anyone else encountered similar experiences?
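Incidentally, that 1-in-10 figure arguably cuts the other way. Here is a minimal simulation sketch of what happens to the published record if journals accept only positive results; the 10% base rate comes from the reviewer's comment, while the power and false-positive rate are numbers I have assumed purely for illustration.

```python
# Sketch: literature composed only of "positive" results, under assumed rates.
import random

random.seed(0)

N = 100_000      # hypotheses tested (assumed)
P_TRUE = 0.10    # the "1 in 10" chance a hypothesis is true (from the comment)
POWER = 0.80     # chance a true effect yields a positive result (assumed)
ALPHA = 0.05     # chance a null effect yields a false positive (assumed)

published_true = published_false = 0
for _ in range(N):
    is_true = random.random() < P_TRUE
    positive = random.random() < (POWER if is_true else ALPHA)
    if positive:  # journals accept only positive results
        if is_true:
            published_true += 1
        else:
            published_false += 1

total = published_true + published_false
print(f"Published positive findings: {total}")
print(f"Fraction that are false positives: {published_false / total:.2f}")
```

With these assumed numbers, roughly a third of the published "positive" findings are false positives, and all of the negative evidence that could correct them is missing from the record. The rarer true positives are, the worse this gets, so a low base rate is an argument for publishing negative results, not against it.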
Selection bias is alive and kicking and is perhaps most noticeable in highly regarded disciplines such as medicine. A good example is Ben Goldacre's popular critique of medical research evidence, particularly in relation to clinical trials. Selection bias in such cases profoundly affects people's lives, especially patients' lives.
The problem is that everyone wants to aim for A-class journals, but these journals are at times dominated by popular thinking, which makes it difficult to get new ideas published.