What kinds of malpractice and cheating are possible in carrying out a meta-analysis survey? I am starting this debate purely with the intention of educating students who may be exposed to such malpractices.
I am not sure what you mean: are you concerned about the possibility that the primary datasets are fabricated or that the researcher doing the meta-analysis might engage in malpractice (e.g. selective representation)?
Mr. Achilleas, I am in fact trying to bring both into the limelight. My assumption (hypothesis) is that a meta-analysis offers more loopholes for a researcher to falsify his or her data. Especially in a survey involving a large population, cooking the data is a real possibility, and in such a case it takes courage to keep one's mind on bias-free data collection.
Yes, that *is* a disturbing thought. But don't you agree that if a meta-analysis is properly documented (as it should be in order to get published), it will be relatively easy to verify the calculations? I wonder if the risks involved justify whatever gain there is from falsifying the data.
I think that the greatest danger is that, when conducting the meta-analysis, you assume that the primary data are accurate, i.e. you assume good faith on the part of the other researchers. But if their data were fabricated or falsified, then the meta-analysis will produce false results.
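To make that "garbage in, garbage out" risk concrete, here is a minimal sketch in Python (the study labels and numbers are entirely invented for illustration) of a fixed-effect, inverse-variance pooled estimate. It also illustrates the earlier point about verification: if each study's effect size and standard error are documented, a simple leave-one-out recalculation shows how much a single fabricated study with an exaggerated effect and an implausibly small standard error drives the pooled result.

```python
import math

# Hypothetical effect sizes (e.g. mean differences) and standard errors.
# Studies A-D are honest; study E is fabricated, with an inflated effect and
# an implausibly small standard error, which gives it an enormous weight.
studies = {
    "A": (0.20, 0.10),
    "B": (0.15, 0.12),
    "C": (0.25, 0.11),
    "D": (0.10, 0.09),
    "E": (0.90, 0.02),  # fabricated / falsified data
}

def fixed_effect_pool(data):
    """Fixed-effect (inverse-variance) pooled estimate with a 95% CI."""
    weights = {k: 1.0 / se ** 2 for k, (_, se) in data.items()}
    total = sum(weights.values())
    pooled = sum(weights[k] * eff for k, (eff, _) in data.items()) / total
    half_width = 1.96 * math.sqrt(1.0 / total)
    return pooled, (pooled - half_width, pooled + half_width)

print("all five studies:", fixed_effect_pool(studies))

# Leave-one-out check: recompute the pooled estimate without each study in turn.
for k in studies:
    rest = {s: v for s, v in studies.items() if s != k}
    print(f"without {k}:", fixed_effect_pool(rest))
```

Anyone can rerun this arithmetic from a well-documented report; the harder problem, as you say, is that the primary numbers themselves may be invented.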
Another danger with meta-analyses is related to 'salami publications'. Sometimes researchers publish data from the same study in different articles. If this is not clearly stated, there is a danger that the researcher doing a meta-analysis will count the same study more than once. Again, this is a serious threat to the validity of the meta-analysis, despite the researcher's good intentions.
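The double-counting risk can be quantified with a quick sketch (again, purely invented numbers): including the same dataset twice not only pulls the pooled effect toward that study, it also artificially narrows the confidence interval, overstating the precision of the evidence.

```python
import math

def pooled(data):
    """Fixed-effect inverse-variance pooled estimate and its 95% CI half-width."""
    w = {k: 1.0 / se ** 2 for k, (_, se) in data.items()}
    est = sum(w[k] * eff for k, (eff, _) in data.items()) / sum(w.values())
    return est, 1.96 * math.sqrt(1.0 / sum(w.values()))

# Hypothetical studies: (effect size, standard error).
unique = {"A": (0.20, 0.10), "B": (0.15, 0.12), "C": (0.25, 0.11)}

# The dataset behind "C" is also published, slightly repackaged, as "C2"
# (a 'salami publication'), and both records enter the meta-analysis.
with_duplicate = dict(unique, C2=unique["C"])

print("unique studies only:", pooled(unique))          # honest precision
print("with duplicate 'C2':", pooled(with_duplicate))  # narrower CI, biased toward C
```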
I definitely agree with your good self. Even when a meta-analysis is properly documented, there is a chance that a few lines are skipped (especially in a survey). For example, height, weight and BMI could be filled in at will in a big survey, and I hope you agree with me that such practices are still common among a few 'rookie' researchers.
Encountering a 'salami publication' is practically a certainty. Another possibility is something called 'data massage', in which a significant result is 'produced' somehow rather than actually obtained.
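'Data massage' can be illustrated with a small, purely hypothetical simulation: if a researcher keeps slicing a null dataset into arbitrary subgroups until one comparison crosses p < 0.05, a 'significant' result is very likely to be produced even though there is no real effect.

```python
import random
from math import sqrt
from statistics import NormalDist, mean, variance

random.seed(1)

def approx_p_value(a, b):
    """Rough two-sided p-value for a difference in means via a normal
    approximation. Good enough to illustrate multiple testing, not for real use."""
    z = (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Two groups drawn from the SAME distribution: there is no true effect.
group1 = [random.gauss(0, 1) for _ in range(200)]
group2 = [random.gauss(0, 1) for _ in range(200)]

# 'Massage' the data: test 20 arbitrary subgroups and keep only the best p-value.
best_p = min(approx_p_value(group1[i::20], group2[i::20]) for i in range(20))
print(f"smallest p-value over 20 subgroup tests: {best_p:.3f}")
# Across ~20 tests on pure noise, p < 0.05 shows up more often than not,
# so a 'significant' finding can be produced rather than discovered.
```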
I believe it is high time that such 'bias' was unearthed.