In an ever-changing world where every detail shifts constantly in time and space, 'the world of yesterday differs from the world of today' and 'the world of today differs from the world of tomorrow'. Thus, what was written yesterday does not reflect the future and should therefore be considered false at some scale of analysis or perception.
But why, then, do people have memory, including the science-based memory reflected in publications? There must be at least some link between what happened in the past and what will happen in the future, must there not? If a bird lays more eggs in habitat A than in habitat B, does this persist over time, creating consensus across publications dealing with similar topics?
Miranda, many illustrations point to "controversial" statistical inference. I agree with the article. I have pointed this out in a number of fora on RG, but I always get down-voted. It is indeed most unlikely that an "experiment" with a very small, non-random sample, however inferentially analyzed, will be representative of the population it purports to represent. True experiments are supposed to include both control and experimental groups, whose size must be statistically determined and whose selection must be random, features which are lacking in many published "experimental results". I hope I will not be down-voted for this shared information. Ed
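[Editor's note: a minimal simulation sketch, not from the thread, may make Ed's point concrete. The population, sample size, and sampling scheme below are made-up assumptions; it simply shows that a small, non-random (convenience) sample, even when dutifully analysed with a confidence interval, routinely misses the true population value.]

```python
# Sketch under assumed numbers: biased small samples vs. a known population mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
population = rng.normal(loc=50.0, scale=10.0, size=100_000)  # hypothetical population
true_mean = population.mean()

def biased_small_sample(pop, n=10):
    """Sample only from the upper half of the population: a stand-in for
    convenience sampling (volunteers, a single clinic, one classroom, ...)."""
    upper = pop[pop > np.median(pop)]
    return rng.choice(upper, size=n, replace=False)

trials, covered = 2_000, 0
for _ in range(trials):
    sample = biased_small_sample(population, n=10)
    m, se = sample.mean(), stats.sem(sample)
    t_crit = stats.t.ppf(0.975, df=len(sample) - 1)
    lo, hi = m - t_crit * se, m + t_crit * se  # nominal 95% confidence interval
    covered += (lo <= true_mean <= hi)

print(f"True mean: {true_mean:.1f}")
print(f"Nominal 95% CIs from biased n=10 samples cover it "
      f"{100 * covered / trials:.1f}% of the time")  # far below the nominal 95%
```

With genuinely random sampling the coverage would be close to 95%; with the biased scheme above it collapses, which is exactly why inference from a small non-random sample cannot be trusted to describe the population.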
I don't want to accuse anyone of publishing false articles. Sometimes findings may change, or they may differ from others, because results vary from place to place or from time to time. Other issues may also influence your findings, but those may not be as significant. On a personal note, whatever I publish, I do so with the guarantee that it is based on solid literature reviews and valid questionnaires. Everyone who publishes should do it to contribute in some way, not just to publish something which may be false. Also, anyone who does that should not be allowed to publish ever again, as false claims should not be tolerated in any research.
@Mahfuz, our findings are true, in the conditions or circumstances of our research at that point in time.
@Ed, from what you have discussed, I agree that 'a very small non-random sample' isn't going to allow us to infer to a population. My research uses large numbers of respondents. So I use a t-test. Besides reporting t (df) and p, what else must be included?
If I did a Mann-Whitney test, what must be reported?
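[Editor's note: since this question is asked again further down the thread, here is a minimal, hedged sketch of what is commonly reported, not an answer from the thread itself. It assumes Python with numpy and scipy, and the two groups are entirely made-up data. For the t-test, group sizes, means, SDs, an effect size such as Cohen's d, and a confidence interval for the difference are usually expected alongside t(df) and p; for Mann-Whitney, the U statistic, group sizes, medians, p, and an effect size such as the rank-biserial correlation.]

```python
# Sketch with made-up data: quantities commonly reported for a t-test and a Mann-Whitney test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=5.0, scale=1.2, size=120)  # hypothetical respondents, group A
group_b = rng.normal(loc=4.6, scale=1.1, size=110)  # hypothetical respondents, group B

# Independent-samples t-test
t_stat, p_val = stats.ttest_ind(group_a, group_b, equal_var=True)
n1, n2 = len(group_a), len(group_b)
df = n1 + n2 - 2
m1, m2 = group_a.mean(), group_b.mean()
s1, s2 = group_a.std(ddof=1), group_b.std(ddof=1)

# Effect size (Cohen's d, pooled SD) and 95% CI for the mean difference
sd_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
cohens_d = (m1 - m2) / sd_pooled
se_diff = sd_pooled * np.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df)
ci_low, ci_high = (m1 - m2) - t_crit * se_diff, (m1 - m2) + t_crit * se_diff
print(f"t({df}) = {t_stat:.2f}, p = {p_val:.3f}, d = {cohens_d:.2f}, "
      f"95% CI of difference [{ci_low:.2f}, {ci_high:.2f}]")

# Mann-Whitney U test (when normality cannot be assumed)
u_stat, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
rank_biserial = 1 - 2 * u_stat / (n1 * n2)  # a simple effect-size estimate
print(f"U = {u_stat:.1f}, n1 = {n1}, n2 = {n2}, "
      f"medians = {np.median(group_a):.2f} / {np.median(group_b):.2f}, "
      f"p = {p_u:.3f}, rank-biserial r = {rank_biserial:.2f}")
```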
I don't agree with the statement. There could be some disagreement with the methods followed or the discussion presented, but in principle all published research findings should be accepted as true.
Prof N, I agree with you, Mahfuz et al. But there's someone who doesn't wish to discuss in an open environment. You can be sure it isn't me who behaved in a negative manner. Thanks.
As Mahfuz said, I also don't agree with the statement. From the known beginning, the Sun has always risen in the east and set in the west. It remains so today and will remain so tomorrow. This is today's truth. As an argument, one may say that the Sun still rises in the east, but not exactly as it did 100 years before (because the axis of the Earth has been changing). Overall, though, the Sun rose in the east yesterday, rises in the east today, and will rise there tomorrow as well.
I agree with Eddie about the paper's finding, as it is talking about "controversial" statistical inference. Such research needs to follow the guidelines; its findings will be applicable to the selected sample but very difficult to generalize to the total population. So the arguments presented in the paper may be true, but they are valid only for a narrow segment of research and not for all research domains.
As a researcher, one should always design a study better than the previous one. The new result may add some points compared to the old one, but we can't claim the old one is false. Rather, we can say the new study found some new information that was not mentioned in the previous studies. The differences in results need to be discussed in detail.
It's sad that researchers should cheat at all. Do you think their findings could be replicated by others to any degree? And wouldn't they be found out? What do you all think? False data can lead others astray and waste time and resources.
This is difficult to answer, Marcel. But science is based on measurement. Correct and true results tend to move it forward, bringing progress for our scientific community and global village. Do you agree?
Whereas if someone cheats just to publish, most of us would say it's not ethical. An analogy: if we ask for directions to a place and the information given to us is correct, the chances are that we will arrive. But if the information supplied is false, we will get lost and make no progress in our travels, wasting time and effort.
The word "false" in the article is actually a qualified one, perhaps used just to attract attention. Authors definitely will not cheat. What the article claims, which I believe the writer has data to support that claim, is the inconsistencies in the findings between similar studies, agrees, refutes, agrees, which while supposedly normal in the development of a theory, is in a "scandalous" situation to warrant such author's "hypothesis". The argument of Miranda is in order, being knowledgeable on the matter. I believe the writer has focused on articles with weak statistical analysis (such as very small, non-random samples inferentially analyzed). I myself have read a lot them in published papers. Unfortunately many researchers overlook this matter. In fact, as I said, whenever I point this out in RG, I get down voted.
Ed, don't worry about the negative ones who are anonymous. Let's get on with our discussion. BTW, please answer the questions I asked about what I should report if I use a t-test and if I were to use a Mann-Whitney test. I must thank all of you for helping me in my learning, although I'm a slow coach!
I am old now and have changed my ways, but when I was young and learning to be a scientist, I used to like to find what seemed possibly an error in a published work and then reveal the error by means of experiment. I was driven in part (not completely) by a competitive urge. After a while I lost interest in this approach because it was so easy to find minor errors. Sometimes these errors falsified a conclusion drawn in the paper, but more usually they introduced a reason for doubt. Of course, it is no surprise that there are errors in published works because perfection is virtually impossible.
I think many scientists are 'perfectionists'. We try to find an answer that is irrefutable, but reality ensures that we fall down, only to get up and try again. That's the way the system is supposed to function, and it usually works effectively.
The problem seems not that scientists make mistakes, but that non-scientists expect perfection, when perfection is impossible.
I think the question refers to the use of statistics in research. Beyond that little book "How to Lie with Statistics", I remember reading somewhere that experimental animals behave as the researcher expects! Thus I believe that many studies misuse statistics in order to validate their speculations. I think this is the root of the problem.
I think the title is meant to be more a provocation than a statement. Moreover, many of you have experience acting as referees of articles; there is no article to which it is impossible to raise objections or in which to find potential bias. However, this cannot detract from the work of colleagues. An example is meta-analysis, from which many papers are excluded because they are not eligible. And we know that in medicine, meta-analyses of studies contribute to building the data that lead to the evidence-based guidelines used in practice. And if someone is suspected of committing an error, the judge assesses whether the action was in accordance with the guidelines, not with this or that isolated paper. In conclusion, at least in medicine, there is a logical explanation for the title posed as a provocative question, although this does not call into question the honesty of researchers.
Thanks for your views and responses. I like them all :)
"I think many scientists are 'perfectionists'. We try to find an answer that is irrefutable, but reality ensures that we fall down, only to get up and try again. "
@Alexandre: both your statements are great!
From Prof Kamal: "Generalization is wrong. We all know that the "truth" is relative!!"
From Enzo: "the title is meant to be more a provocation than a statement."
Thanks dear friends: I don't think I could have come up with all these ideas that you have added, enriching this thread of ours :)
Agree with @Kamal sir, truth is relative and it depends upon individual perspectives.
BTW, I think no research finding is deliberately false; no researcher would ever wish that. It may be improper knowledge or practice that leads to a wrong conclusion, but a researcher's efforts are always sincere.
Thanks for your views, Darasingh. I agree that truth is relative to the knowledge that we have at a given moment in time. (So I have just put a dataset on RG that shows how, as a scholar, I must be able to accommodate new information into what I have learned.)
But if you read this thread slowly and thoroughly, it seems some researchers lack ethics, and that is very sad! What other conclusion(s) can we come to?
The point is *not* whether you agree or disagree with the intentionally provocative title of the article. Obviously, when you want to grab attention nowadays, you have to shout in some way: some do it literally, some do it with well-chosen rhetoric, and others do it in still more hideous but effective ways.
The merit of the article is IMHO that it puts its finger(s) on the many flaws in statistical thinking and practice in the so-called empirical sciences, and on the immense risk to scientific progress if researchers rely on one or two introductory courses in applied statistics for the rest of their (professional) lives. It is too easy then for half-truths or downright false ideas about statistical methods to get firmly anchored in one's research habits, never again questioning whether those habits are still valid and effective.
But there is an even deeper value to this article, because the many pitfalls of applied probability and statistics have been well known for decades, and you could write a thick book about them (see e.g. Kline's book on one of those topics). The deeper awareness is that *truth* is far from the only determinant of today's theorizing and research in whatever discipline or field you look at. Especially in the *natural* sciences there are a host of other interests, factors, motives, etc. that determine what will be studied, how it will be done, and if and how it will be published.
I wish I had read this article earlier in my life!
'The merit of the article is IMHO that it puts its finger(s) on the many flaws in statistical thinking and practice in the so-called empirical sciences, and on the immense risk to scientific progress if researchers rely on one or two introductory courses in applied statistics for the rest of their (professional) lives. It is too easy then for half-truths or downright false ideas about statistical methods to get firmly anchored in one's research habits, never again questioning whether those habits are still valid and effective.'
Dear All, Lijo Francis, Patrick Low, and others you know have not been able to log in to their RG accounts since yesterday. RG suspended their accounts without any reason. I don't know why, but I find this an outrageous abuse. Please help them and inform all other participants, as this is a signal that things are not going in the right direction on RG, before it happens to you!
Dear Mahfuz, I agree with your point of view. Every work is a long-term experiment; that is why statistics is the result of hard practical labour. I highly appreciate every scientific experience.
Dear Enzo, we wait. Lijo is a serious, honest researcher who is simply committed to his work and to us, his friends, encouraging us. That's why he's in our hearts.
I have just sent a message to RG about this disappearance. I think more such messages should be sent. The address is https://www.researchgate.net/contact. Do you think that we should all protest?
Yes Pardis, I agree. Lijo did say that there was a complaint that he posted the same response on various threads. I remember I saw the Yoga breathing pic more than once, but what's so wrong with that? The threads were on reducing stress, after all.
Thanks for your contribution, Prof Abdalla. I agree:
'a little variation in temperature, or pH, or time can entail something different.'
This teaches me to be simple and humble in reporting; the next researcher may get slightly different results. So far, my results have been in line with previous research.
@Prof Abdalla, different results but CORRECT results, and new learning. Bravo, please report for the progress of science that must go on, inch by inch.....
Friends, sometimes blessings come in disguise. Now Lijo has more time to work on his research without RG cares or commitments. Lijo wrote this 5 hours ago to us all:
'Just now I have received the Cyprus visa in my passport.
Now I am going to prepare the PPT slides to present my papers at the Cyprus EDS Conference, since I have two papers to present.'
Dear Valentina, Prof and friends, this is from Patrick:
'Dear Miranda
Thanks from the bottom of my heart; I will just move on. I will need to evaluate what's next. As said, it would be troublesome to re-enter all those data and article abstracts in my profile: a lot of effort and time. Better to move on, unless they reinstate my RG account.'
Dear friends, Prof Gianni, Prof K, Prof L, et al., just to keep you all updated:
Prof Lijo won an award for his presentation at the Cyprus EDS conference, 11-15 May 2014. Hooray. In relation to this thread, I believe that most researchers I know report their findings truthfully :)
I think this is the paper that received the award: Performance of different hollow fiber membranes for seawater desalination using membrane distillation, by Lijo Francis, Noreddine Ghaffour, Ahmad Al-Saadi, and Gary Amy (Saudi Arabia). Congratulations!